
Commit

Merge branch 'NOAA-EMC:develop' into enablepio_cycle_uglo15km
JessicaMeixner-NOAA authored Dec 30, 2024
2 parents 8971454 + 1c37f90 commit 4097a37
Showing 36 changed files with 730 additions and 274 deletions.
1 change: 1 addition & 0 deletions .github/CODEOWNERS
@@ -211,3 +211,4 @@ ush/python/pygfs/utils/marine_da_utils.py @guillaumevernieres @AndrewEichmann-NOAA

 # Specific workflow scripts
 workflow/generate_workflows.sh @DavidHuber-NOAA
+workflow/build_compute.py @DavidHuber-NOAA @aerorahul
3 changes: 3 additions & 0 deletions .gitignore
@@ -86,6 +86,9 @@ parm/wafs

 # Ignore sorc and logs folders from externals
 #--------------------------------------------
+sorc/build.xml
+sorc/build.db
+sorc/build_lock.db
 sorc/*log
 sorc/logs
 sorc/calc_analysis.fd
4 changes: 1 addition & 3 deletions ci/Jenkinsfile
@@ -120,9 +120,7 @@ pipeline {
                 def error_logs_message = ""
                 dir("${HOMEgfs}/sorc") {
                     try {
-                        sh(script: './build_all.sh -kgu') // build the global-workflow executables for GFS variant (UFS-wx-model, WW3 pre/post executables)
-                        sh(script: './build_ww3prepost.sh -w > ./logs/build_ww3prepost_gefs.log 2>&1') // build the WW3 pre/post processing executables for GEFS variant
-                        sh(script: './build_ufs.sh -w -e gefs_model.x > ./logs/build_ufs_gefs.log 2>&1') // build the UFS-wx-model executable for GEFS variant
+                        sh(script: './build_compute.sh all') // build the global-workflow executables
                     } catch (Exception error_build) {
                         echo "Failed to build global-workflow: ${error_build.getMessage()}"
                         if ( fileExists("logs/error.logs") ) {
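
For orientation, a hedged sketch of the consolidation above: the three per-variant build commands collapse into a single dispatcher call (only the `all` argument appears in this diff; any other arguments accepted by build_compute.sh, and the log locations, are assumptions):

    cd "${HOMEgfs}/sorc"

    # Before: GFS and GEFS variants were built by separate scripts
    ./build_all.sh -kgu
    ./build_ww3prepost.sh -w > ./logs/build_ww3prepost_gefs.log 2>&1
    ./build_ufs.sh -w -e gefs_model.x > ./logs/build_ufs_gefs.log 2>&1

    # After: one entry point builds every system's executables
    ./build_compute.sh all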
56 changes: 19 additions & 37 deletions docs/source/clone.rst
@@ -18,35 +18,39 @@ Clone the `global-workflow` and `cd` into the `sorc` directory:
    git clone --recursive https://github.com/NOAA-EMC/global-workflow
    cd global-workflow/sorc

-For forecast-only (coupled or uncoupled) build of the components:
+.. _build_examples:
+
+The build_all.sh script can be used to build all required components of the global workflow. It accepts a list of systems to build, covering GFS and GEFS forecast-only experiments as well as GSI- and GDASApp-based DA for cycled GFS experiments. See `feature availability <hpc.html#feature-availability-by-hpc>`__ for which options are available on each supported system.

 ::

-   ./build_all.sh
+   ./build_all.sh [gfs] [gefs] [gsi] [gdas] [all]

-For cycled (w/ data assimilation) use the `-g` option during build:
+For example, to run GFS experiments with GSI DA, execute:

 ::

-   ./build_all.sh -g
+   ./build_all.sh gfs gsi

-For coupled cycling (include new UFSDA) use the `-gu` options during build:
+This builds the GFS, UFS-utils, GFS-utils, WW3 with PDLIB (structured wave grids), UPP, GSI, GSI-monitor, and GSI-utils executables.

-[Currently only available on Hera, Orion, and Hercules]
+For coupled cycling (including the new UFSDA) execute:

 ::

-   ./build_all.sh -gu
+   ./build_all.sh gfs gdas
+
+This builds all of the same executables, except the GDASApp is built in place of the GSI.

-For building without PDLIB (unstructured grid) for the wave model, use the `-w` options during build:
+To run GEFS (forecast-only) execute:

 ::

-   ./build_all.sh -w
+   ./build_all.sh gefs
+
+This builds the GEFS, UFS-utils, GFS-utils, WW3 *without* PDLIB (unstructured wave grids), and UPP executables.

-Build workflow components and link workflow artifacts such as executables, etc.
+Once the build is complete, link workflow artifacts such as executables, configuration files, and scripts via

 ::

@@ -107,40 +111,19 @@ Under the ``/sorc`` folder is a script to build all components called ``build_all.sh``:

 ::

-   ./build_all.sh [-a UFS_app][-g][-h][-u][-v]
+   ./build_all.sh [-a UFS_app][-k][-h][-v] [list of system(s) to build]
    -a UFS_app:
      Build a specific UFS app instead of the default
-   -g:
-     Build GSI
+   -k:
+     Kill all builds immediately if one fails
    -h:
      Print this help message and exit
    -j:
      Specify maximum number of build jobs (n)
-   -u:
-     Build UFS-DA
    -v:
      Execute all build scripts with -v option to turn on verbose where supported

-For forecast-only (coupled or uncoupled) build of the components:
-
-::
-
-   ./build_all.sh
-
-For cycled (w/ data assimilation) use the `-g` option during build:
-
-::
-
-   ./build_all.sh -g
-
-For coupled cycling (include new UFSDA) use the `-gu` options during build:
-
-[Currently only available on Hera, Orion, and Hercules]
-
-::
-
-   ./build_all.sh -gu
+Lastly, pass to build_all.sh a list of systems to build. This includes `gfs`, `gefs`, `sfs` (not fully supported), `gsi`, `gdas`, and `all`.
+
+For examples of how to use this script, see :ref:`build examples <build_examples>`.

 ^^^^^^^^^^^^^^^
 Link components
@@ -156,4 +139,3 @@ After running the checkout and build scripts run the link script:

 Where:
 ``-o``: Run in operations (NCO) mode. This creates copies instead of using symlinks and is generally only used by NCO during installation into production.
-
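
As a practical aside, a hedged set of example invocations combining the documented flags with the new positional system list (flag meanings come from the usage text above; the specific combinations and the -j value are illustrative only):

    # from global-workflow/sorc
    ./build_all.sh -k -v gfs gefs   # GFS + GEFS executables, verbose, stop on first failure
    ./build_all.sh -j 8 gfs gsi     # cycled GFS with GSI DA, at most 8 concurrent build jobs
    ./build_all.sh all              # every supported system, including the GDASApp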
68 changes: 27 additions & 41 deletions env/AWSPW.env
@@ -33,7 +33,29 @@ else
     exit 2
 fi

-if [[ "${step}" = "fcst" ]] || [[ "${step}" = "efcs" ]]; then
+if [[ "${step}" = "prep" ]] || [[ "${step}" = "prepbufr" ]]; then
+
+    export POE="NO"
+    export BACK="NO"
+    export sys_tp="AWSPW"
+    export launcher_PREP="srun"
+
+elif [[ "${step}" = "prepsnowobs" ]]; then
+
+    export APRUN_CALCFIMS="${APRUN_default}"
+
+elif [[ "${step}" = "prep_emissions" ]]; then
+
+    export APRUN="${APRUN_default}"
+
+elif [[ "${step}" = "waveinit" ]] || [[ "${step}" = "waveprep" ]] || [[ "${step}" = "wavepostsbs" ]] || [[ "${step}" = "wavepostbndpnt" ]] || [[ "${step}" = "wavepostbndpntbll" ]] || [[ "${step}" = "wavepostpnt" ]]; then
+
+    export CFP_MP="YES"
+    if [[ "${step}" = "waveprep" ]]; then export MP_PULSE=0 ; fi
+    export wavempexec=${launcher}
+    export wave_mpmd=${mpmd_opt}
+
+elif [[ "${step}" = "fcst" ]] || [[ "${step}" = "efcs" ]]; then

     export launcher="srun --mpi=pmi2 -l"

@@ -52,52 +74,16 @@ elif [[ "${step}" = "waveinit" ]] || [[ "${step}" = "waveprep" ]] || [[ "${step}

 elif [[ "${step}" = "post" ]]; then

-    export NTHREADS_NP=${NTHREADS1}
-    export APRUN_NP="${APRUN_default}"
-
-    export NTHREADS_DWN=${threads_per_task_dwn:-1}
-    [[ ${NTHREADS_DWN} -gt ${max_threads_per_task} ]] && export NTHREADS_DWN=${max_threads_per_task}
-    export APRUN_DWN="${launcher} -n ${ntasks_dwn}"
-
-elif [[ "${step}" = "atmos_products" ]]; then
-
-    export USE_CFP="YES" # Use MPMD for downstream product generation on Hera
-
-elif [[ "${step}" = "oceanice_products" ]]; then
-
-    export NTHREADS_OCNICEPOST=${NTHREADS1}
-    export APRUN_OCNICEPOST="${launcher} -n 1 --cpus-per-task=${NTHREADS_OCNICEPOST}"
-
-elif [[ "${step}" = "ecen" ]]; then
-
-    export NTHREADS_ECEN=${NTHREADSmax}
-    export APRUN_ECEN="${APRUN_default}"
-
-    export NTHREADS_CHGRES=${threads_per_task_chgres:-12}
-    [[ ${NTHREADS_CHGRES} -gt ${max_tasks_per_node} ]] && export NTHREADS_CHGRES=${max_tasks_per_node}
-    export APRUN_CHGRES="time"
-
-    export NTHREADS_CALCINC=${threads_per_task_calcinc:-1}
-    [[ ${NTHREADS_CALCINC} -gt ${max_threads_per_task} ]] && export NTHREADS_CALCINC=${max_threads_per_task}
-    export APRUN_CALCINC="${APRUN_default}"
-
-elif [[ "${step}" = "esfc" ]]; then
-
-    export NTHREADS_ESFC=${NTHREADSmax}
-    export APRUN_ESFC="${APRUN_default}"
-
-    export NTHREADS_CYCLE=${threads_per_task_cycle:-14}
-    [[ ${NTHREADS_CYCLE} -gt ${max_tasks_per_node} ]] && export NTHREADS_CYCLE=${max_tasks_per_node}
-    export APRUN_CYCLE="${APRUN_default}"
-
-elif [[ "${step}" = "epos" ]]; then
-
-    export NTHREADS_EPOS=${NTHREADSmax}
-    export APRUN_EPOS="${APRUN_default}"
-
-elif [[ "${step}" = "fit2obs" ]]; then
+    export NTHREADS_UPP=${NTHREADS1}
+    export APRUN_UPP="${APRUN_default} --cpus-per-task=${NTHREADS_UPP}"

-    export NTHREADS_FIT2OBS=${NTHREADS1}
-    export MPIRUN="${APRUN_default}"
+elif [[ "${step}" = "atmos_products" ]]; then
+
+    export USE_CFP="YES" # Use MPMD for downstream product generation on AWS

 fi
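
To make the dispatch pattern above concrete, a hedged sketch of how a job might consume a per-machine env file like this one (the step variable and the APRUN/APRUN_default names appear in the file itself; the wrapper, its paths, and the executable name are assumptions):

    #!/usr/bin/env bash
    set -e

    export step="prep_emissions"       # selects the matching elif branch above
    export APRUN_default="srun -n 1"   # assumed; the real file may derive this from scheduler settings

    source env/AWSPW.env               # sets APRUN="${APRUN_default}" for this step

    ${APRUN} ./prep_emissions.x        # hypothetical executable; unquoted so launcher args split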
17 changes: 11 additions & 6 deletions env/AZUREPW.env
@@ -15,6 +15,7 @@ export mpmd_opt="--multi-prog --output=mpmd.%j.%t.out"
 # Configure MPI environment
 export OMP_STACKSIZE=2048000
 export NTHSTACK=1024000000
+export UCX_TLS=ud,sm,self

 ulimit -s unlimited
 ulimit -a
@@ -50,6 +51,10 @@ elif [[ "${step}" = "waveinit" ]] || [[ "${step}" = "waveprep" ]] || [[ "${step}
     export wavempexec=${launcher}
     export wave_mpmd=${mpmd_opt}

+elif [[ "${step}" = "prep_emissions" ]]; then
+
+    export APRUN="${APRUN_default}"
+
 elif [[ "${step}" = "post" ]]; then

     export NTHREADS_NP=${NTHREADS1}
@@ -71,33 +76,33 @@ elif [[ "${step}" = "oceanice_products" ]]; then
 elif [[ "${step}" = "ecen" ]]; then

     export NTHREADS_ECEN=${NTHREADSmax}
-    export APRUN_ECEN="${APRUN}"
+    export APRUN_ECEN="${APRUN_default}"

     export NTHREADS_CHGRES=${threads_per_task_chgres:-12}
     [[ ${NTHREADS_CHGRES} -gt ${max_tasks_per_node} ]] && export NTHREADS_CHGRES=${max_tasks_per_node}
     export APRUN_CHGRES="time"

     export NTHREADS_CALCINC=${threads_per_task_calcinc:-1}
     [[ ${NTHREADS_CALCINC} -gt ${max_threads_per_task} ]] && export NTHREADS_CALCINC=${max_threads_per_task}
-    export APRUN_CALCINC="${APRUN}"
+    export APRUN_CALCINC="${APRUN_default}"

 elif [[ "${step}" = "esfc" ]]; then

     export NTHREADS_ESFC=${NTHREADSmax}
-    export APRUN_ESFC="${APRUN}"
+    export APRUN_ESFC="${APRUN_default}"

     export NTHREADS_CYCLE=${threads_per_task_cycle:-14}
     [[ ${NTHREADS_CYCLE} -gt ${max_tasks_per_node} ]] && export NTHREADS_CYCLE=${max_tasks_per_node}
-    export APRUN_CYCLE="${APRUN}"
+    export APRUN_CYCLE="${APRUN_default}"

 elif [[ "${step}" = "epos" ]]; then

     export NTHREADS_EPOS=${NTHREADSmax}
-    export APRUN_EPOS="${APRUN}"
+    export APRUN_EPOS="${APRUN_default}"

 elif [[ "${step}" = "fit2obs" ]]; then

     export NTHREADS_FIT2OBS=${NTHREADS1}
-    export MPIRUN="${APRUN}"
+    export MPIRUN="${APRUN_default}"

 fi
4 changes: 2 additions & 2 deletions env/GOOGLEPW.env
@@ -45,7 +45,7 @@ if [[ "${step}" = "fcst" ]] || [[ "${step}" = "efcs" ]]; then

 elif [[ "${step}" = "prep_emissions" ]]; then

-    export APRUN
+    export APRUN="${APRUN_default}"

 elif [[ "${step}" = "waveinit" ]] || [[ "${step}" = "waveprep" ]] || [[ "${step}" = "wavepostsbs" ]] || [[ "${step}" = "wavepostbndpnt" ]] || [[ "${step}" = "wavepostbndpntbll" ]] || [[ "${step}" = "wavepostpnt" ]]; then

@@ -102,6 +102,6 @@ elif [[ "${step}" = "epos" ]]; then
 elif [[ "${step}" = "fit2obs" ]]; then

     export NTHREADS_FIT2OBS=${NTHREADS1}
-    export MPIRUN="${APRUN}"
+    export MPIRUN="${APRUN_default}"

 fi
3 changes: 3 additions & 0 deletions modulefiles/module_base.noaacloud.lua
@@ -5,8 +5,11 @@ Load environment to run GFS on noaacloud
 local spack_mod_path=(os.getenv("spack_mod_path") or "None")
 prepend_path("MODULEPATH", spack_mod_path)

+load("gnu")
 load(pathJoin("stack-intel", (os.getenv("stack_intel_ver") or "None")))
 load(pathJoin("stack-intel-oneapi-mpi", (os.getenv("stack_impi_ver") or "None")))
+unload("gnu")

 load(pathJoin("python", (os.getenv("python_ver") or "None")))

 load(pathJoin("jasper", (os.getenv("jasper_ver") or "None")))
6 changes: 3 additions & 3 deletions modulefiles/module_gwci.noaacloud.lua
@@ -2,10 +2,10 @@ help([[
 Load environment to run GFS workflow setup scripts on noaacloud
 ]])

-prepend_path("MODULEPATH", "/contrib/spack-stack/spack-stack-1.6.0/envs/unified-env/install/modulefiles/Core")
+prepend_path("MODULEPATH", "/contrib/spack-stack-rocky8/spack-stack-1.6.0/envs/ue-env/install/modulefiles/Core")

-load(pathJoin("stack-intel", os.getenv("2021.3.0")))
-load(pathJoin("stack-intel-oneapi-mpi", os.getenv("2021.3.0")))
+load(pathJoin("stack-intel", os.getenv("2021.10.0")))
+load(pathJoin("stack-intel-oneapi-mpi", os.getenv("2021.10.0")))

 load(pathJoin("netcdf-c", os.getenv("4.9.2")))
 load(pathJoin("netcdf-fortran", os.getenv("4.6.1")))
13 changes: 7 additions & 6 deletions modulefiles/module_gwsetup.noaacloud.lua
@@ -4,17 +4,18 @@ Load environment to run GFS workflow setup scripts on noaacloud

 load(pathJoin("rocoto"))

-prepend_path("MODULEPATH", "/contrib/spack-stack/spack-stack-1.6.0/envs/unified-env/install/modulefiles/Core")
+prepend_path("MODULEPATH", "/contrib/spack-stack-rocky8/spack-stack-1.6.0/envs/ue-intel/install/modulefiles/Core")

-local stack_intel_ver=os.getenv("stack_intel_ver") or "2021.3.0"
-local python_ver=os.getenv("python_ver") or "3.10.3"
+load("gnu")
+local stack_intel_ver=os.getenv("stack_intel_ver") or "2021.10.0"
+local stack_mpi_ver=os.getenv("stack_mpi_ver") or "2021.10.0"

 load(pathJoin("stack-intel", stack_intel_ver))
-load(pathJoin("python", python_ver))
+load(pathJoin("stack-intel-oneapi-mpi", stack_mpi_ver))
+unload("gnu")

 load("py-jinja2")
 load("py-pyyaml")
 load("py-numpy")
 local git_ver=os.getenv("git_ver") or "1.8.3.1"
 load(pathJoin("git", git_ver))

 whatis("Description: GFS run setup environment")
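
As a side note, a hedged shell equivalent of the Lua above (paths, module names, and the load-gnu/unload-gnu bracketing are taken from the modulefile; rendering them as interactive module commands is an assumption about how Lmod evaluates it):

    module use /contrib/spack-stack-rocky8/spack-stack-1.6.0/envs/ue-intel/install/modulefiles/Core
    module load gnu                                 # needed while the Intel stack resolves
    module load stack-intel/2021.10.0
    module load stack-intel-oneapi-mpi/2021.10.0
    module unload gnu                               # drop GNU once the Intel stack is loaded
    module load py-jinja2 py-pyyaml py-numpy
    module load git/1.8.3.1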
4 changes: 4 additions & 0 deletions parm/archive/enkf.yaml.j2
@@ -3,13 +3,15 @@ enkf:
   target: "{{ ATARDIR }}/{{ cycle_YMDH }}/{{ RUN }}.tar"
   required:
     # Logs
+    {% if RUN == 'enkfgdas' %}
     {% for mem in range(1, nmem_ens + 1) %}
     - "logs/{{ cycle_YMDH }}/{{ RUN }}_fcst_mem{{ '%03d' % mem }}.log"
     {% endfor %}
     {% for fhr in range(fhmin, fhmax + 1, fhout) %}
     - "logs/{{ cycle_YMDH }}/{{ RUN }}_epos{{ '%03d' % (fhr - fhmin) }}.log"
     {% endfor %}
     - "logs/{{ cycle_YMDH }}/{{ RUN }}_echgres.log"
+    {% endif %}
     - "logs/{{ cycle_YMDH }}/{{ RUN }}_esfc.log"
     {% for grp in range(IAUFHRS | length) %}
     - "logs/{{ cycle_YMDH }}/{{ RUN }}_ecen{{ '%03d' % grp }}.log"
@@ -37,13 +39,15 @@
     {% endfor %}

     # Ensemble mean and spread
+    {% if RUN == 'enkfgdas' %}
     {% for fhr in range(3, fhmax + 1, 3) %}
     - "{{ COMIN_ATMOS_HISTORY_ENSSTAT | relpath(ROTDIR) }}/{{ head }}atmf{{ '%03d' % fhr }}.ensmean.nc"
     - "{{ COMIN_ATMOS_HISTORY_ENSSTAT | relpath(ROTDIR) }}/{{ head }}sfcf{{ '%03d' % fhr }}.ensmean.nc"
     {% if ENKF_SPREAD %}
     - "{{ COMIN_ATMOS_HISTORY_ENSSTAT | relpath(ROTDIR) }}/{{ head }}atmf{{ '%03d' % fhr }}.ensspread.nc"
     {% endif %}
     {% endfor %}
+    {% endif %}

     # Ensemble mean state
     {% if not DO_JEDIATMENS %}
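
To see the effect of the new RUN guard in isolation, a hedged spot-check with a stand-in template (uses the jinja2-cli package, `pip install jinja2-cli`; the real enkf.yaml.j2 needs many more variables to render):

    # tiny template reproducing the guard pattern added above
    echo "{% if RUN == 'enkfgdas' %}gated{% endif %} always" > /tmp/guard.j2
    jinja2 /tmp/guard.j2 -D RUN=enkfgdas   # -> "gated always"
    jinja2 /tmp/guard.j2 -D RUN=enkfgfs    # -> " always" (guarded text dropped)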
2 changes: 2 additions & 0 deletions parm/archive/enkf_grp.yaml.j2
@@ -10,12 +10,14 @@ enkf_grp:
     {% set COMIN_ATMOS_RESTART_MEM = COMIN_ATMOS_RESTART_MEM_list[imem] %}

     # Forecast data
+    {% if RUN == 'enkfgdas' %}
     {% for fhr in range(3, 10, 3) %}
     - "{{ COMIN_ATMOS_HISTORY_MEM | relpath(ROTDIR) }}/{{ head }}atmf{{ "%03d" % fhr }}.nc"
     {% endfor %}

     # Only store the 6-hour surface forecast
     - "{{ COMIN_ATMOS_HISTORY_MEM | relpath(ROTDIR) }}/{{ head }}sfcf006.nc"
+    {% endif %}

     # Store the individual member analysis data
     {% if not lobsdiag_forenkf %}