6. Install and Build the HPC-Stack
The HPC-Stack is already installed on Level 1 systems (e.g., Cheyenne, Hera, Orion), so installation is not necessary on those systems.
HPC-Stack installation will vary from system to system because there are many possible combinations of operating systems, compilers, MPIs, and package versions. Installation via an EPIC-provided container is recommended because it reduces this variability. However, users may choose a non-container approach to installation if they prefer.
MPI stands for Message Passing Interface. An MPI is a standardized communication system used in parallel programming. It establishes portable and efficient syntax for the exchange of messages and data between multiple processors that are used by a single computer program. An MPI is required for high-performance computing (HPC).
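Before starting an installation, it can be useful to check whether an MPI implementation and its compiler wrappers are already available on the system. A minimal check, assuming the standard wrapper names, is:
which mpicc mpif90 mpirun
mpirun --version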
6.1. Install and Build the HPC-Stack in a Singularity Container
The Earth Prediction Innovation Center (EPIC) provides several containers for installing the HPC-Stack, either on its own or combined with Unified Forecast System (UFS) applications.
6.1.1. Install Singularity
To install the HPC-Stack via Singularity container, first install the Singularity package according to the Singularity Installation Guide. This will include the installation of dependencies and the installation of the Go programming language. SingularityCE Version 3.7 or above is recommended.
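To check the version of an existing Singularity installation, run:
singularity --version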
Docker containers can only be run with root privileges, and users cannot have root privileges on HPC systems. Therefore, it is not possible to build the HPC-Stack inside a Docker container on an HPC system. A Docker image may be pulled, but it must be run with a container runtime such as Singularity. Docker can, however, be used to build the HPC-Stack on a local system.
6.1.2. Build and Run the Container
Pull and build the container.
singularity pull ubuntu20.04-gnu9.3.sif docker://noaaepic/ubuntu20.04-gnu9.3
singularity build --sandbox ubuntu20.04-gnu9.3 ubuntu20.04-gnu9.3.sif
cd ubuntu20.04-gnu9.3
Make a directory (e.g., contrib) in the container if one does not exist:
mkdir contrib
cd ..
From the local working directory, start the container and run an interactive shell within it. This command also binds the local working directory to the container so that data can be shared between them.
singularity shell -e --writable --bind /<local_dir>:/contrib ubuntu20.04-gnu9.3
Make sure to update <local_dir> with the name of your local working directory.
6.1.3. Build the HPC-Stack
Clone the HPC-Stack repository (from inside the Singularity shell initialized above).
git clone https://github.com/NOAA-EMC/hpc-stack
cd hpc-stack
Set up the build environment. Be sure to change the prefix argument in the code below to your system's install location (likely within the user's directory):
./setup_modules.sh -p <prefix> -c config/config_custom.sh
<prefix> is the directory where the software packages will be installed, with a default value of $HOME/opt. For example, if the HPC-Stack is installed in the user's directory, the prefix would be a path under $HOME.
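For example, a hedged invocation assuming the stack will be installed under the user's home directory (the path is purely illustrative):
./setup_modules.sh -p $HOME/hpc-stack -c config/config_custom.sh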
Enter YES/YES/YES when the options are presented. Then modify build_stack.sh with the following commands:
sed -i "10 a source /usr/share/lmod/6.6/init/bash" ./build_stack.sh sed -i "10 a export PATH=/usr/local/sbin:/usr/local/bin:$PATH" ./build_stack.sh sed -i "10 a export LD_LIBRARY_PATH=/usr/local/lib64:/usr/local/lib:$LD_LIBRARY_PATH" ./build_stack.sh
Build the environment. This may take several hours to complete.
./build_stack.sh -p <prefix> -c config/config_custom.sh -y stack/stack_custom.yaml -m
Load the required modules, making sure to change the <prefix> to the location of the module files.
source /usr/share/lmod/lmod/init/bash
module use <prefix>/hpc-modules/modulefiles/stack
module load hpc hpc-gnu hpc-openmpi
module avail
From here, the user can continue to install and run applications that depend on the HPC-Stack, such as the UFS Short-Range Weather (SRW) Application.
6.2. Non-Container HPC-Stack Installation and Build (General/Linux)
6.2.1. Install Prerequisites
To install the HPC-Stack locally, the following prerequisites must be installed:
Python 3: Can be obtained either from the main distributor or from Anaconda.
Compilers: Distributions of Fortran, C, and C++ compilers that work for your system.
Message Passing Interface (MPI) libraries for multi-processor and multi-core communications, configured to work with your corresponding Fortran, C, and C++ compilers.
Programs and software packages: Lmod, CMake, make, wget, curl, git, and the TIFF library.
For detailed instructions on how to build the HPC-Stack on two particular configurations of MacOS, see Section 6.3.
To determine whether these prerequisites are installed, query the environment variables (e.g., for Lmod) or the location and version of the packages (e.g., for CMake or Git). For example:
echo $LMOD_PKG
which cmake
cmake --version
Methods for determining whether
libtiff is installed vary between systems. Users can try the following approaches:
whereis libtiff
locate libtiff
ldconfig -p | grep libtiff
ls /usr/lib64/libtiff*
ls /usr/lib/libtiff*
If compilers or MPIs need to be installed, consult the HPC-Stack Prerequisites document for further guidance.
6.2.2. Configure the Build
Choose the COMPILER, MPI, and PYTHON version, and specify any other aspects of the build that you would like. For Level 1 systems, a default configuration can be found in the applicable
config/config_<platform>.sh file. For Level 2-4 systems, selections can be made by editing the
config/config_custom.sh file to reflect the appropriate compiler, MPI, and Python choices for your system. If Lmod is installed on your system, you can view package options using the
module avail command.
Some of the parameter settings available are:
HPC_COMPILER: This defines the vendor and version of the compiler you wish to use for this build. The format is the same as what you would typically use in a module load command; for example, HPC_COMPILER=intel/2020. Users can run gcc -v to determine their current compiler and version.
HPC_MPI: This is the MPI library you wish to use. The format is the same as for HPC_COMPILER; for example, HPC_MPI=impi/2020.
HPC_PYTHON: This is the Python interpreter to use for the build. The format is the same as for HPC_COMPILER (see the example configuration below). Users can run python --version to determine the current version of Python on their system.
Other variables include USE_SUDO, DOWNLOAD_ONLY, NOTE, PKGDIR, LOGDIR, OVERWRITE, NTHREADS, MAKE_CHECK, MAKE_VERBOSE, and VENVTYPE. For more information on their use, see HPC-Stack Parameters.
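As a minimal sketch, the relevant lines of config/config_custom.sh might look like the following on a GNU/OpenMPI system; the versions shown are illustrative and should match what is actually installed:
export HPC_COMPILER="gnu/9.3.0"
export HPC_MPI="openmpi/4.0.3"
export HPC_PYTHON="python/3.8"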
If you only want to install select components of the HPC-Stack, you can edit the stack/stack_custom.yaml file to omit unwanted components. This file lists the software packages to be built, along with their versions, options, compiler flags, and any other package-specific options. A full listing of components is available in the HPC-Stack Components section.
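To see at a glance which packages are currently enabled, users can search the YAML file for the build flag (this assumes the build: YES/NO convention illustrated in the MacOS examples later in this chapter):
grep -B 2 "build: YES" stack/stack_custom.yaml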
6.2.3. Set Up Compiler, MPI, Python & Module System
This step is required if you are using Lmod modules for managing the software stack. Lmod is installed across all Level 1 and Level 2 systems and in the containers provided. If Lmod is not desired or used, the user can skip ahead to Step 6.2.4.
After preparing the system configuration in
./config/config_<platform>.sh, run the following command from the top directory:
./setup_modules.sh -p <prefix> -c <configuration>
<prefix> is the directory where the software packages will be installed during the HPC-Stack build. The default value is $HOME/opt. The software installation trees will branch directly off of <prefix>, while the module files will be located in the <prefix>/modulefiles subdirectory.
<prefix> requires an absolute path; it will not work with a relative path.
<configuration> points to the configuration script that you wish to use, as described in Step 6.2.2. The default configuration file is config/config_custom.sh.
The compiler and MPI modules can be handled separately from the rest of the build in order to exploit site-specific installations that maximize performance. In this case, the compiler and MPI modules are preceded by an
hpc- label. For example, to load the Intel compiler module and the Intel MPI (IMPI) software library, enter:
module load hpc-intel/2020
module load hpc-impi/2020
hpc- modules are really meta-modules that load the compiler/MPI library and modify the MODULEPATH so that the user has access to the software packages that will be built in Step 6.2.4. On HPC systems, these meta-modules load the native modules provided by the system administrators.
In short, you may prefer not to load the compiler or MPI modules directly. Instead, loading the hpc- meta-modules as demonstrated above will provide everything needed to load software libraries.
It may be necessary to set certain source and path variables in the
build_stack.sh script. For example:
source /usr/share/lmod/6.6/init/bash
source /usr/share/lmod/lmod/init/bash
export PATH=/usr/local/sbin:/usr/local/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/lib64:/usr/local/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH
It may also be necessary to initialize Lmod when using a user-specific Lmod installation. For example:
module purge
export BASH_ENV=$HOME/<Lmod-installation-dir>/lmod/lmod/init/bash
source $BASH_ENV
export LMOD_SYSTEM_DEFAULT_MODULES=<module1>:<module2>:<module3>
module --initial_load --no_redirect restore
module use <$HOME>/<your-modulefiles-dir>
<Lmod-installation-dir> is the top directory where Lmod is installed.
<module1>, ..., <moduleN> is a colon-separated list of modules to load by default.
<$HOME>/<your-modulefiles-dir> is the directory where additional custom modules may be built with Lmod (e.g., $HOME/modulefiles).
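As a minimal sketch, assuming Lmod was installed under $HOME/apps/lmod and that a single default module named StdEnv should be restored (both assumptions are purely illustrative), the block above might be filled in as:
module purge
export BASH_ENV=$HOME/apps/lmod/lmod/init/bash
source $BASH_ENV
export LMOD_SYSTEM_DEFAULT_MODULES=StdEnv
module --initial_load --no_redirect restore
module use $HOME/modulefiles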
6.2.4. Build the HPC-Stack
Now all that remains is to build the stack:
./build_stack.sh -p <prefix> -c <configuration> -y <yaml_file> -m
Here, the -m option is only required when you need to build your own modules and Lmod is used for managing the software stack. It should be omitted otherwise.
<prefix> and <configuration> are the same as in Step 6.2.3, namely a reference to the absolute-path installation prefix and a corresponding configuration file in the config directory. As in Step 6.2.3, if the configuration argument is omitted, the default is to use config/config_custom.sh.
<yaml_file> represents a user-configurable YAML file containing a list of packages that need to be built in the stack, along with their versions and package options. The default value is stack/stack_custom.yaml.
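For example, a hedged invocation that assumes an installation prefix under the user's home directory (illustrative) and the default configuration and YAML files:
./build_stack.sh -p $HOME/hpc-stack -c config/config_custom.sh -y stack/stack_custom.yaml -m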
Steps 6.2.2, 6.2.3, and 6.2.4 need to be repeated for each compiler/MPI combination that you wish to install. The new packages will be installed alongside any packages previously built from other compiler/MPI combinations.
From here, the user can continue to install and run applications that depend on the HPC-Stack.
6.3. Install and Build HPC-Stack on MacOS
HPC-Stack can be installed and built on MacOS systems. The following two options have been tested:
Option 1: MacBookAir 2020, M1 chip (arm64, running natively), 4+4 cores, Big Sur 11.6.4, GNU compiler suite v.11.2.0_3 (gcc, gfortran, g++); no MPI pre-installed
Option 2: MacBook Pro 2015, 2.8 GHz Quad-Core Intel Core i7 (x86_64), Catalina OS X 10.15.7, GNU compiler suite v.11.2.0_3 (gcc, gfortran, g++); no MPI pre-installed
Examples throughout this chapter presume that the user is running Terminal.app with a bash shell environment. If this is not the case, users will need to adjust commands to fit their command line application and shell environment.
6.3.1. Prerequisites for Building HPC-Stack
6.3.1.1. Install Homebrew and Xcode Command-Line Tools (CLT)
Open Terminal.app and a web browser. Go to https://brew.sh, copy the command-line installation directive, and run it in a new Terminal window. Terminal will request a
sudo access password. The installation command will look similar to the following:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
This will install Homebrew, Xcode CLT, and Ruby.
An alternative way to install the Xcode command-line tools (CLT) is to run:
xcode-select --install
6.3.1.2. Install Compilers
Install the GNU compiler suite (version 11), including gfortran:
brew install gcc@11
Create symbolic links from the version-specific binaries to gcc and g++. A
sudo password may be requested. The path will likely be
/opt/homebrew/bin/gcc-11 (Option 1), or
/usr/local/bin/gcc-11 (Option 2).
which gcc-11
cd /usr/local/bin/   # or: cd /opt/homebrew/bin/
ln -s gcc-11 gcc
ln -s g++-11 g++
There is no need to create a link for gfortran if this is the first installation of this compiler. If an earlier version of gfortran exists, you may rename it (e.g., to “gfortran-old”) and create a link to the new installation:
ln -s gfortran-11 gfortran
Verify the paths for the compiler binaries:
which gcc
which g++
which gfortran
Verify that they show the correct version of GNU installed:
gcc --version
g++ --version
gfortran --version
6.3.1.3. Install CMake
Install the cmake utility via homebrew:
brew install cmake
6.3.1.4. Install/Upgrade Make
To install the make utility via homebrew:
brew install make
To upgrade the make utility via homebrew:
brew upgrade make
6.3.1.5. Install Openssl@3
To install the openssl@3 package, run:
brew install openssl@3
Note the messages at the end of the installation. Depending on what they say, users may need to add the location of the openssl@3 binaries to the environment variable $PATH. To add it to the PATH (via ~/.bashrc, for example), run:
echo 'export PATH="/opt/homebrew/opt/openssl@3/bin:$PATH"' >> ~/.bashrc
Users may also need to set certain flags so that the compilers can find the openssl@3 package:
export LDFLAGS="-L/opt/homebrew/opt/openssl@3/lib"
export CPPFLAGS="-I/opt/homebrew/opt/openssl@3/include"
6.3.1.6. Install Lmod
To install Lmod, the module management environment, run:
brew install lmod
You may need to add the Lmod environment initialization to your shell profile (e.g., to ~/.bashrc).
For the Option 1 installation, add:
export BASH_ENV="/opt/homebrew/opt/lmod/init/bash" source $BASH_ENV
For the Option 2 installation, add:
export BASH_ENV="/usr/local/opt/lmod/init/bash" source $BASH_ENV
6.3.1.7. Install libpng
The libpng library has issues when building on MacOS during the HPC-Stack bundle build. Therefore, it must be installed separately. To install the libpng library, run:
brew install libpng
6.3.1.8. Install wget
Install the Wget software package:
brew install wget
6.3.1.9. Install or Update Python3
First, verify that Python3 is installed, and check the current version:
which python3
python3 --version
The first command should return
/usr/bin/python3 and the second should return
Python 3.8.2 or similar (the exact version is unimportant).
If necessary, download an updated version of Python3 for MacOS from https://www.python.org/downloads. The version 3.9.11 64-bit universal2 installer package is recommended (i.e.,
python-3.9.11-macos11.pkg). Double-click on the installer package and accept the license terms. An administrator-level password will be requested for the installation. At the end of the installation, run Install Certificates.command by double-clicking on it in the Finder window that opens.
Start a new bash session (type bash in the existing terminal) and verify the installed version by rerunning the commands above. The output should now correspond to the Python version you installed.
6.3.1.10. Install Git
Install git and dependencies:
brew install git
6.3.2. Building HPC-Stack
6.3.2.1. Clone HPC-Stack
Download HPC-Stack code from GitHub:
git clone https://github.com/NOAA-EMC/hpc-stack.git
cd hpc-stack
The configuration files are ./config/config_<machine>.sh, where <machine> is mac_m1_gnu for Option 1 and mac_gnu for Option 2. The ./stack/stack_<machine>.yaml file lists the libraries that will be built as part of the HPC-Stack, in addition to library-specific options. These can be altered based on user preferences.
6.3.2.2. Lmod Environment
Verify the initialization of the Lmod environment, or add it to the configuration file ./config/config_<machine>.sh, as in the Lmod installation step (Step 6.3.1.6).
For Option 1:
export BASH_ENV="/opt/homebrew/opt/lmod/init/profile" source $BASH_ENV
For Option 2:
export BASH_ENV="/usr/local/opt/lmod/init/profile" source $BASH_ENV
6.3.2.3. Specify Compiler, Python, and MPI
Specify the combination of compilers, Python libraries, and MPI libraries in the configuration file ./config/config_<machine>.sh:
export HPC_COMPILER="gnu/11.2.0_3" export HPC_MPI="openmpi/4.1.2" (Option 1 only) export HPC_MPI="mpich/3.3.2" (Option 2 only) export HPC_PYTHON="python/3.10.2"
Comment out any export statements not relevant to the system, and make sure that version numbers reflect the versions installed on the system (which may differ from the versions listed here).
6.3.2.4. Set Appropriate Flags
When using gfortran version 10 or higher, verify that the following flags are set in ./config/config_<machine>.sh.
For Option 1:
export STACK_FFLAGS="-fallow-argument-mismatch -fallow-invalid-boz"
For Option 2:
export STACK_FFLAGS="-fallow-argument-mismatch -fallow-invalid-boz"
export STACK_CXXFLAGS="-march=native"
6.3.2.5. Set Environment Variables
Set the environment variables for compiler paths in ./config/config_<machine>.sh. The variable GNU below refers to the directory where the compiler binaries are located: for Option 1, GNU=/opt/homebrew/bin, and for Option 2, GNU=/usr/local/bin.
export GNU="path/to/compiler/binaries" export CC=$GNU/gcc export FC=$GNU/gfortran export CXX=$GNU/g++ export SERIAL_CC=$GNU/gcc export SERIAL_FC=$GNU/gfortran export SERIAL_CXX=$GNU/g++
6.3.2.6. Specify MPI Libraries
Specify the MPI libraries to be built within the HPC-Stack in ./stack/stack_<machine>.yaml. The openmpi/4.1.2 (Option 1) and mpich/3.3.2 (Option 2) libraries have been built successfully.
For Option 1:
mpi:
  build: YES
  flavor: openmpi
  version: 4.1.2

For Option 2:
mpi:
  build: YES
  flavor: mpich
  version: 3.3.2
Set the build option for the libpng library to NO in ./stack/stack_<machine>.yaml to avoid problems during the HPC-Stack build. Leave the defaults for the other libraries and versions in the YAML file:
libpng:
  build: NO
6.3.2.7. Set Up the Modules and Environment
Set up the modules and environment:
./setup_modules.sh -c config/config_<machine>.sh -p $HPC_INSTALL_DIR | tee setup_modules.log
Here, <machine> is mac_m1_gnu (Option 1) or mac_gnu (Option 2), and $HPC_INSTALL_DIR is the absolute path of the installation directory for the HPC-Stack. You will be asked to choose whether or not to use "native" installations of Python, the compilers, and the MPI library. "Native" means that they are already installed on your system. Thus, answer "YES" for Python, "YES" for the GNU compilers, and "NO" for MPI/mpich.
6.3.2.8. Build the HPC-Stack
Build the modules:
./build_stack.sh -c config/config_<machine>.sh -p $HPC_INSTALL_DIR -y stack/stack_<machine>.yaml -m 2>&1 | tee build_stack.log
The -p option requires an absolute (full) path to the installation directory. The -m option is needed to build separate modules for each library package.
6.4. Installation of the HPC-Stack Prerequisites
A wide variety of compiler and MPI options are available. Certain combinations are known to work well together, whereas others may not.
The following system, compiler, and MPI combinations have been tested successfully:
SUSE Linux Enterprise Server 12.4
Intel compilers 2020.0 (ifort, icc, icpc)
Intel MPI wrappers (mpif90, mpicc, mpicxx)
Linux CentOS 7
Intel compilers 2020.0 (ifort, icc, icpc)
Intel MPI (mpiifort, mpiicc, mpiicpc)
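A quick way to confirm which compiler and MPI wrappers are active on a system (assuming an Intel compiler and Intel MPI setup such as the second combination above) is to query their versions:
ifort --version
mpiifort --version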
Compilers and MPI libraries can be downloaded from the respective vendors' websites.
6.5. Build Parameters
6.5.1. Compiler & MPI
HPC_COMPILER: This defines the vendor and version of the compiler you wish to use for this build. The format is the same as what you would typically use in a module load command; for example, HPC_COMPILER=intel/2020.
For information on setting compiler flags, see Section 6.7.1 Additional Notes.
HPC_MPI: The MPI library you wish to use for this build. The format is the same as for HPC_COMPILER; for example, HPC_MPI=impi/2020. Current MPI types accepted are openmpi, mpich, impi, cray, and cray*.
For example, when using Intel-based compilers and Intel’s implementation of the MPI interface, the
config/config_custom.sh should contain the following specifications:
export SERIAL_CC=icc
export SERIAL_FC=ifort
export SERIAL_CXX=icpc
export MPI_CC=mpiicc
export MPI_FC=mpiifort
export MPI_CXX=mpiicpc
This will set the C, Fortran, and C++ compilers and their MPI wrappers.
To verify that your chosen MPI build (e.g., mpiicc) is based on the corresponding serial compiler (e.g., icc), use the -show option to query the MPI wrappers. For example, running mpiicc -show will display output like this:
$ icc -I<LONG_INCLUDE_PATH_FOR_MPI> -L<ANOTHER_MPI_LIBRARY_PATH> -L<ANOTHER_MPI_PATH> -<libraries, linkers, build options...> -X<something> --<enable/disable/with some options> -l<library> -l<another_library> -l<yet-another-library>
The key piece of information in this output is the leading icc, which confirms that your mpiicc build is based on icc. Note that querying mpicc -show on your system may show that it is based on gcc (or another compiler).
6.5.2. Other Parameters
HPC_PYTHON: The Python interpreter you wish to use for this build. The format is the same as for HPC_COMPILER (i.e., a module-load-style name such as python/<version>).
USE_SUDO: If the directory where the software packages will be installed ($PREFIX) requires root permission to write to, such as /opt/modules, then this flag should be enabled (for example, USE_SUDO=T).
DOWNLOAD_ONLY: The stack allows the option to download the source code for all the software without performing the installation. This is especially useful for installing the stack on machines that do not allow internet connectivity to websites hosting the software (e.g., GitHub). For more information, see Section 6.7.4 Additional Notes.
To enable a boolean flag, use a single letter T. To disable it, use F (both are case insensitive).
PKGDIR: The directory where tarred or zipped software files will be downloaded and compiled. Unlike $PREFIX, this is a relative path based on the root path of the repository. Individual software packages can be downloaded manually to this directory and untarred, but this is not required; the build scripts will look for downloaded packages in this directory.
LOGDIR: The directory where log files from the build will be written, relative to the root path of the repository.
OVERWRITE: If set to T, this flag will cause the build script to remove the current installation, if any exists, and replace it with the new version of each software package in question. If this variable is not set, the build will bypass software packages that are already installed.
NTHREADS: The number of threads to use for parallel builds.
MAKE_CHECK: Run make check after the build.
MAKE_VERBOSE: Print out extra information to the log files during the build.
VENVTYPE: Set the type of Python environment to build. The value depends on whether pip or conda is used: set VENVTYPE=pyvenv when using pip and VENVTYPE=condaenv when using Miniconda for creating virtual environments. The default is VENVTYPE=pyvenv.
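As a hedged sketch, these parameters might appear in a configuration file as follows; the values shown are purely illustrative, not recommendations:
export USE_SUDO=F
export DOWNLOAD_ONLY=F
export PKGDIR=pkg
export LOGDIR=log
export OVERWRITE=F
export NTHREADS=4
export MAKE_CHECK=F
export MAKE_VERBOSE=F
export VENVTYPE=pyvenv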
6.6. HPC-Stack Components
The HPC-Stack packages are built in Step 6.2.4 using the build_stack.sh script. The following software can optionally be built with the scripts under libs:
Compilers and MPI libraries
HPC Stack - Third Party Libraries
Python and Virtual Environments
6.7. HPC-Stack Additional Notes
6.7.1. Setting Compiler Flags and Other Options
Often it is necessary to specify compiler flags (e.g., gfortran-10 -fallow-argument-mismatch) for the packages via FFLAGS. There are two ways this can be achieved:
For all packages: One can define a variable, e.g., STACK_FFLAGS=-fallow-argument-mismatch, in the config file config_custom.sh. This will append STACK_FFLAGS to FFLAGS in every build script under libs.
Package-specific flags: To compile only a specific package under libs with the above compiler flag, one can define the variable FFLAGS=-fallow-argument-mismatch in the <package> section of the YAML file stack_custom.yaml. This will append it to FFLAGS in the build script for that package only.
6.7.2. Adding a New Library or Package
If you want to add a new library to the stack, you need to follow these steps:
Write a new build script in
libs, using existing scripts as a template.
Define a new section in the YAML file for that library/package (e.g., in stack/stack_custom.yaml).
If the package is a Python virtual environment, add an environment.yml file listing the Python packages required to install the package. These files should be named for the package and placed alongside the existing virtual-environment files in the repository. VENVTYPE=pyvenv will use a pip requirements file, while VENVTYPE=condaenv will use the conda environment.yml file.
Add a call to the new build script in build_stack.sh.
Create a new module template at the appropriate place in the modulefiles directory, using existing files as a template.
Update the HPC Components file to include the name of the new library or package.
6.7.3. Configuring for a new HPC
If you want to port this to a new HPC, you need to follow these steps:
Write a new config file
config/config_<hpc>.sh, using existing config files as a template. Also create a new yaml file
config/stack_<hpc>.yaml, using existing yaml files as a template.
Add/remove basic modules for that HPC.
Choose the appropriate Compiler/MPI combination.
If a template modulefile does not exist for that Compiler/MPI combination, create module templates at the appropriate place in the modulefiles directory, using existing files as a template.
If the new HPC system provides some basic modules (e.g., Git, CMake), they can be loaded in config/config_<hpc>.sh.
6.7.4. Using the DOWNLOAD_ONLY Option
If an HPC (e.g., NOAA RDHPCS Hera) does not allow access to online software via git clone, you will have to download all the packages using the DOWNLOAD_ONLY option. On a machine that does allow access to online software, enable the option in the configuration file and run build_stack.sh as you normally would; with DOWNLOAD_ONLY=YES, all the packages will be downloaded into the pkg directory. Transfer the contents of the pkg directory to the machine where you wish to install the HPC-Stack, and execute build_stack.sh there. The build_stack.sh script will detect the already-downloaded packages and use them rather than fetching them.
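As an illustrative sketch of this workflow (the prefix, host name, and destination path are hypothetical):
# On the machine with internet access, with DOWNLOAD_ONLY enabled in the configuration file:
./build_stack.sh -p $HOME/hpc-stack -c config/config_custom.sh -y stack/stack_custom.yaml
# Copy the downloaded sources to the target machine:
scp -r pkg user@hpc-host:/path/to/hpc-stack/pkg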
6.7.5. Using the HPC-Stack
If Lmod is used to manage the software stack, you will need to activate the HPC-Stack in order to use it. This is done by loading the hpc module:
module use $PREFIX/modulefiles/stack
module load hpc/1.0.0
This will put the
hpc-<compilerName> module in your
MODULEPATH, which can be loaded as:
module load hpc-<compilerName>/<compilerVersion>
If the HPC-Stack is not managed via modules, you need to add $PREFIX to your PATH and related environment variables as follows:
export PATH="$PREFIX/bin:$PATH" export LD_LIBRARY_PATH="$PREFIX/lib:$LD_LIBRARY_PATH" export CMAKE_PREFIX_PATH="$PREFIX"
6.7.6. Known Workaround for Certain Installations of Lmod
On some machines (e.g., WCOSS_DELL_P3), Lmod is built to disable loading of default modulefiles and requires the user to load a module with an explicit version (e.g., module load netcdf/4.7.4 instead of module load netcdf). The latter looks for the default module, which is either the latest version or a version that is marked as default. To circumvent this, it is necessary to place the following lines in modulefiles/stack/hpc/hpc.lua prior to executing setup_modules.sh.
setenv("LMOD_EXACT_MATCH", "no") setenv("LMOD_EXTENDED_DEFAULT", "yes")
See more on the Lmod website.
6.7.7. Known Issues
NetCDF-C++ does not build with LLVM Clang. It can be disabled by setting disable_cxx: YES in the stack file under the NetCDF section.
Json-schema-validator does not build with LLVM Clang. It can be disabled in the stack file in the json-schema-validator section.