wrf-python

A collection of diagnostic and interpolation routines for use with output from the Weather Research and Forecasting (WRF-ARW) Model.

This package provides over 30 diagnostic calculations, several interpolation routines, and utilities to help with plotting via cartopy, basemap, or PyNGL. The functionality is similar to what is provided by the NCL WRF package.

When coupled with either matplotlib or PyNGL, users can create plots like this:

[Image: matthew.png, an example plot of Hurricane Matthew]
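
For instance, a minimal sea level pressure map with cartopy might look like the following sketch (the output file name is illustrative; wrf.get_cartopy() and wrf.latlon_coords() pull the map projection and coordinates from the variable's metadata):

import matplotlib.pyplot as plt
import cartopy.crs as crs
from netCDF4 import Dataset
from wrf import getvar, to_np, get_cartopy, latlon_coords

# Open a WRF output file (file name is illustrative)
ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Get sea level pressure and its latitude/longitude coordinates
slp = getvar(ncfile, "slp")
lats, lons = latlon_coords(slp)

# Plot using the cartopy projection stored in the variable's metadata
ax = plt.axes(projection=get_cartopy(slp))
ax.coastlines()
plt.contourf(to_np(lons), to_np(lats), to_np(slp), transform=crs.PlateCarree())
plt.colorbar(shrink=0.7)
plt.show()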

Documentation

What’s New

Releases

v1.3.2 (February 2019)

  • Release 1.3.2
  • Coordinate name index positions are no longer assumed and are searched instead. Some users use xarray to rewrite WRF output files, and xarray might reorder the coordinate name positions.
  • Fixed a segfault issue with CAPE when more than 150 vertical levels are used (e.g. LES runs).
  • setup.py will now bootstrap the numpy installation (thanks bbonenfant!).

v1.3.1 (January 2019)

  • Release 1.3.1
  • Added the ompgen template file to the manifest.

v1.3.0 (December 2018)

  • Release 1.3.0
  • Fixed FutureWarning issue with destag routine (thank you honnorat!)
  • Fixed computational problems with updraft_helicity, and values are no longer scaled by 1000.
  • Removed version constraints for wrapt and setuptools.
  • Fixed xarray being a hard dependency.
  • Fixed unit issues with vinterp when pressure values are extracted below ground. Also added support for height fields in km and pressure fields in hPa. The documentation has been improved.
  • Fixed the smooth2d routine so that it actually works. It never worked correctly before (nor did it work in NCL). Users can now specify the center weight of the kernel and the documentation has been updated to describe how it works.
  • Fixed the storm relative helicity algorithm so that it works in the southern hemisphere. The raw algorithm now requires latitude input if used in the southern hemisphere, otherwise the northern hemisphere is assumed.
  • Fixed an issue with the latest version of cartopy 0.17 (thanks honnorat!)
  • Fixed an issue where invalid keyword arguments weren’t throwing errors when extracting standard WRF variables.
  • Fixed minor issues related to moving nests when using line interpolation and vertical cross sections, which never worked quite right. It is still an error to request all times when using lat/lon coordinates with a moving nest, but other moving-nest cases now work correctly when all times are requested.
  • Removed the pyf file from setup.py since it is generated via the build system.
  • Added an autolevels parameter for the vertical cross section so that users can specify the number of vertical levels to use if they don’t want to specify them manually.
  • The interplevel routine has been improved. Users can now specify a single level, multiple levels, or a 2D array (e.g. PBLH) to interpolate to. Performance has been improved when interpolating a multiple product field like wspd_wdir.
  • Products that produce multiple outputs can now have the outputs requested individually. See Table of Subproduct Diagnostics for a list of what is available.
  • Much of this version of wrf-python has been back ported to NCL in the upcoming 6.6.0 release. The diagnostics should produce the same results in both packages.
  • Now released under the Apache 2.0 license.

v1.2.0 (May 2018)

  • Release 1.2.0
  • Previous versions of wrf-python promoted the strings used in xarray (e.g. name, attributes) to Unicode strings for Python 2.7. This caused problems when porting examples for PyNGL to use wrf-python in Python 3.x. All strings are now the native string type for the Python version being used. While this change should be transparent to most users, any users that worked with the xarray name or attribute values on Python 2.7 may run into string related errors, so we’ve decided to bump the major version number.

v1.1.3 (March 2018)

  • Release 1.1.3
  • Fixed/Enhanced the cloud top temperature diagnostic.
    • Optical depth was not being calculated correctly when cloud ice mixing ratio was not available.
    • Fixed an indexing bug that caused crashes on Windows (though it should have been causing crashes on all platforms).
    • Users can now specify if they want cloud free regions to use fill values, rather than the default behavior of using the surface temperature.
    • Users can now specify the optical depth required to trigger the cloud top temperature calculation. However, the default value of 1.0 should be sufficient for most users.
  • Added ‘th’ alias for the theta product.
  • Fixed a crash issue related to updraft helicity when a dictionary is used as the input.
  • Dictionary inputs now work correctly with xy_to_ll and ll_to_xy.
  • The cape_2d diagnostic can now work with a single column of data, just like cape_3d.

v1.1.2 (February 2018)

  • Release 1.1.2
  • Fix OpenMP directive issue with cloud top temperature.

v1.1.1 (February 2018)

  • Release 1.1.1
  • Added script for building on Cheyenne with maxed out Intel settings, which also required a patch for numpy.distutils.
  • Fixed a few unicode characters hiding in a docstring that were causing problems on Cheyenne, as well as when building the docs with Sphinx on Python 2.x.
  • Fix issue with np.amax not working with xarray on Cheyenne, causing an error with the mdbz product.
  • Fix cape_2d private variable bug when running with multiple CPUs.

v1.1.0 (January 2018)

  • Release 1.1.0
  • Computational routines now support multiple cores using OpenMP. See Using OpenMP for details on how to use this new feature.
  • The CAPE routines should be noticeably faster, even in the single threaded case (thank you supreethms1809!).
  • wrf.getvar() now works correctly with non-gridded NetCDF variables
  • The cloud fraction diagnostic has changed:
    • Users can now select their own cloud threshold levels, and can choose between a vertical coordinate defined as height (AGL), height (MSL), or pressure.
    • The default vertical coordinate type has been changed to be height (AGL). This ensures that clouds appear over mountainous regions. If you need the old behavior, set the vert_type argument to ‘pressure’.
    • Fixed a bug involving the cloud threshold search algorithm, where if the surface was higher than the threshold for a cloud level, the algorithm would use whatever was there before (uninitialized variable bug). This caused some interesting visualization issues when plotted. Now, whenever the surface is above a cloud level threshold, a fill value is used to indicate that data is unavailable for that location.
  • The cartopy object for LambertConformal should now work correctly in the southern hemisphere.
  • Fixed a bug with the PolarStereographic projection missing a geobounds argument (thank you hanschen!).
  • Renamed the modules containing the ‘get_product’ routines used by wrf.getvar() to avoid naming conflicts with the raw computational routine names. Users should be using wrf.getvar() instead of these routines, but for those that imported the ‘get_product’ routines directly, you will need to modify your code.
  • Fixed a uniqueness issue with the internal coordinate cache that was causing crashes when input data is changed to a different file in a jupyter notebook cell.
  • Added code to better support building wheels on Windows (thank you letmaik!)
  • Improved support for scipy.io.netcdf objects.
  • Added a new ‘zstag’ diagnostic that returns the height values for the vertically staggered grid.
  • A DOI is now available for wrf-python. Please cite wrf-python if you are using it for your research. (See Citation)
  • Fixed issue with vertcross and interpline not working correctly when a projection object is used. Users will now have to supply the lower left latitude and longitude corner point.
  • Beginning with numpy 1.14, wrf-python can be built using the MSVC compiler with gfortran. WRF-Python can now be built for Python 3.5+ on services like AppVeyor.

v1.0.5 (September 2017)

  • Release 1.0.5
  • Reduced the CI test file sizes by half.

v1.0.4 (September 2017)

  • Release 1.0.4
  • Fix warnings with CI tests which were caused by fill values being written as NaN to the NetCDF result file.
  • Added the __eq__ operator to the WrfProj projection base class.
  • Fixed array order issue when using the raw CAPE routine with 1D arrays.

v1.0.3 (June 2017)

  • Release 1.0.3
  • Fixed an issue with the cartopy Mercator subclass where the xlimits were being calculated to the same value (or very close), causing blank plots.

v1.0.2 (May 2017)

  • Release 1.0.2
  • Fixed issue with the wspd_wdir product types when sequences of files are used.

v1.0.1 (March 2017)

  • Release 1.0.1
  • Fixed issue with initialization of PolarStereographic and LatLon map projection objects.
  • Fixed issue where XTIME could be included in the coordinate list of a variable, but the actual XTIME variable could be missing. NCL allows this, so wrf-python should as well.

v1.0.0 (March 2017)

  • Release 1.0.0.
  • Fixed issue with not being able to set the thread-local coordinate cache to 0 to disable it. Also, the cache will now correctly resize itself when the size is reduced to less than its current setting.
  • Fixed an issue with the ‘0000-00-00 00:00:00’ time used in geo_em files causing crashes due to the invalid time. The time is now set to numpy.datetime64(‘NaT’).
  • Fixed issue with wrf.cape_3d not working correctly with a single column of data.

Installation

Required Dependencies

  • Python 2.7, 3.4, or 3.5+
  • numpy (1.11 or later; 1.14 required to build on Windows)
  • wrapt (1.10 or later)
  • setuptools (38.0 or later)

Plotting Packages

  • PyNGL (1.4.3 or later)
  • matplotlib (1.4.3 or later)
    • cartopy (0.13 or later)
    • basemap (1.0.8 or later)

Installing via Conda

The easiest way to install wrf-python is using Conda:

conda install -c conda-forge wrf-python

Note

If you use conda to install wrf-python on a supercomputer like Yellowstone or Cheyenne, we recommend that you do not load any python related modules via the ‘module load’ command. The packages installed by the ‘module load’ system will not play nicely with packages installed via conda.

Further, some systems will install python packages to a ~/.local directory, which will be found by the miniconda python interpreter and cause various import problems. If you have a ~/.local directory, we strongly suggest renaming it (mv ~/.local ~/.local_backup).

Installing on Yellowstone

On Yellowstone, wrf-python can also be installed using the module load system, if this is preferred over using conda.

Unfortunately, because wrf-python requires newer dependencies, it is not available using the ‘all-python-libs’ module, so many of the dependencies need to be manually installed (most are for xarray).

Also, make sure you are running in the gnu/4.8.2 compiler environment or you will get import errors for a missing libquadmath library when you go to import wrf-python.

To install:

module load gnu/4.8.2 or module swap intel gnu/4.8.2
module load python/2.7.7
module load numpy/1.11.0 wrapt/1.10.10 scipy/0.17.1 bottleneck/1.1.0 numexpr/2.6.0 pyside/1.1.2 matplotlib/1.5.1 pandas/0.18.1 netcdf4python/1.2.4 xarray/0.8.2
module load wrf-python/1.0.1

Installing via Source Code

Installation via source code requires a Fortran and C compiler in order to run f2py.

The source code is available via github:

https://github.com/NCAR/wrf-python

Or PyPI:

https://pypi.python.org/pypi/wrf-python

To install, if you do not need OpenMP support, change your working directory to the wrf-python source directory and run:

$ pip install .

Beginning with wrf-python 1.1, OpenMP is supported, but preprocessing the ompgen.F90 file is required, which also requires running f2py to build the .pyf file. To simplify this process, you can use the build scripts in the build_scripts directory as a guide, or just call the script directly.

Below is a sample from a build script for GNU compiler with OpenMP enabled:

cd ../fortran/build_help
gfortran -o sizes -fopenmp omp_sizes.f90
python sub_sizes.py

cd ..
gfortran -E ompgen.F90 -fopenmp -cpp -o omp.f90
f2py *.f90 -m _wrffortran -h wrffortran.pyf --overwrite-signature

cd ..
python setup.py clean --all
python setup.py config_fc --f90flags="-mtune=generic -fopenmp" build_ext --libraries="gomp" build
pip install .

Beginning with numpy 1.14, f2py extensions can now be built using the MSVC compiler and the mingw gfortran compiler. Numpy 1.14 is required to build wrf-python for Python 3.5+.

Note

If you are building on a supercomputer and receiving linker related errors (e.g. missing symbols, undefined references, etc), you probably need to unset the LDFLAGS environment variable. System administrators on supercomputing systems often define LDFLAGS for you so that you don’t need to worry about where libraries like NetCDF are installed. Unfortunately, this can cause problems with the numpy.distutils build system. To fix, using the build command from above:

$ unset LDFLAGS
$ python setup.py config_fc --f90flags="-mtune=generic -fopenmp" build_ext --libraries="gomp" build

Table of Available Diagnostics

Variable Name | Description | Available Units | Additional Keyword Arguments
(When a diagnostic accepts several keyword arguments, they are listed on indented lines beneath the entry.)

avo | Absolute Vorticity | 10-5 s-1
eth/theta_e | Equivalent Potential Temperature | K, degC, degF | units (str): Set to desired units. Default is ‘K’.
cape_2d | 2D CAPE (MCAPE/MCIN/LCL/LFC) | J kg-1; J kg-1; m; m | missing (float): Fill value for output only
cape_3d | 3D CAPE and CIN | J kg-1 | missing (float): Fill value for output only
ctt | Cloud Top Temperature | degC, K, degF
    fill_nocloud (boolean): Set to True to use fill values for cloud free regions rather than surface temperature. Default is False.
    missing (float): The fill value to use when fill_nocloud is True.
    opt_thresh (float): The optical depth required to trigger the cloud top temperature calculation. Default is 1.0.
    units (str): Set to desired units. Default is ‘degC’.
cloudfrac | Cloud Fraction | %
    vert_type (str): The vertical coordinate type for the cloud thresholds. Must be ‘height_agl’, ‘height_msl’, or ‘pres’. Default is ‘height_agl’.
    low_thresh (float): The low cloud threshold (meters for ‘height_agl’ and ‘height_msl’, pascals for ‘pres’). Default is 300 m (97000 Pa).
    mid_thresh (float): The mid cloud threshold (meters for ‘height_agl’ and ‘height_msl’, pascals for ‘pres’). Default is 2000 m (80000 Pa).
    high_thresh (float): The high cloud threshold (meters for ‘height_agl’ and ‘height_msl’, pascals for ‘pres’). Default is 6000 m (45000 Pa).
dbz | Reflectivity | dBZ
    do_variant (boolean): Set to True to enable variant calculation. Default is False.
    do_liqskin (boolean): Set to True to enable liquid skin calculation. Default is False.
mdbz | Maximum Reflectivity | dBZ
    do_variant (boolean): Set to True to enable variant calculation. Default is False.
    do_liqskin (boolean): Set to True to enable liquid skin calculation. Default is False.
geopt/geopotential | Geopotential for the Mass Grid | m2 s-2
geopt_stag | Geopotential for the Vertically Staggered Grid | m2 s-2
helicity | Storm Relative Helicity | m2 s-2 | top (float): The top level for the calculation in meters. Default is 3000.0.
lat | Latitude | decimal degrees
lon | Longitude | decimal degrees
omg/omega | Omega | Pa s-1
p/pres | Full Model Pressure (in specified units) | Pa, hPa, mb, torr, mmhg, atm | units (str): Set to desired units. Default is ‘Pa’.
pressure | Full Model Pressure (hPa) | hPa
pvo | Potential Vorticity | PVU
pw | Precipitable Water | kg m-2
rh | Relative Humidity | %
rh2 | 2m Relative Humidity | %
slp | Sea Level Pressure | hPa, mb, torr, mmhg, atm | units (str): Set to desired units. Default is ‘hPa’.
T2 | 2m Temperature | K
ter | Model Terrain Height | m, km, dm, ft, mi | units (str): Set to desired units. Default is ‘m’.
td2 | 2m Dew Point Temperature | degC, K, degF | units (str): Set to desired units. Default is ‘degC’.
td | Dew Point Temperature | degC, K, degF | units (str): Set to desired units. Default is ‘degC’.
tc | Temperature in Celsius | degC
th/theta | Potential Temperature | K, degC, degF | units (str): Set to desired units. Default is ‘K’.
temp | Temperature (in specified units) | K, degC, degF | units (str): Set to desired units. Default is ‘K’.
tk | Temperature in Kelvin | K
times | Times in the File or Sequence
xtimes | XTIME Coordinate (if applicable) | minutes since start of model run
tv | Virtual Temperature | K, degC, degF | units (str): Set to desired units. Default is ‘K’.
twb | Wet Bulb Temperature | K, degC, degF | units (str): Set to desired units. Default is ‘K’.
updraft_helicity | Updraft Helicity | m2 s-2
    bottom (float): The bottom level for the calculation in meters. Default is 2000.0.
    top (float): The top level for the calculation in meters. Default is 5000.0.
ua | U-component of Wind on Mass Points | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.
va | V-component of Wind on Mass Points | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.
wa | W-component of Wind on Mass Points | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.
uvmet10 | 10m U and V Components of Wind Rotated to Earth Coordinates | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.
uvmet | U and V Components of Wind Rotated to Earth Coordinates | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.
wspd_wdir | Wind Speed and Direction (wind_from_direction) in Grid Coordinates | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.
wspd_wdir10 | 10m Wind Speed and Direction (wind_from_direction) in Grid Coordinates | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.
uvmet_wspd_wdir | Wind Speed and Direction (wind_from_direction) Rotated to Earth Coordinates | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.
uvmet10_wspd_wdir | 10m Wind Speed and Direction (wind_from_direction) Rotated to Earth Coordinates | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.
z/height | Model Height for Mass Grid | m, km, dm, ft, mi
    msl (boolean): Set to False to return AGL values. True is for MSL. Default is True.
    units (str): Set to desired units. Default is ‘m’.
height_agl | Model Height for Mass Grid (AGL) | m, km, dm, ft, mi | units (str): Set to desired units. Default is ‘m’.
zstag | Model Height for Vertically Staggered Grid | m, km, dm, ft, mi
    msl (boolean): Set to False to return AGL values. True is for MSL. Default is True.
    units (str): Set to desired units. Default is ‘m’.

Table of Subproduct Diagnostics

Some diagnostics (e.g. cape_2d) include multiple products in their output. These products have been broken out into individual diagnostics to help those utilities that are unable to work with multiple outputs. These individual diagnostics can be requested like any other diagnostic using wrf.getvar() and are summarized in the table below.
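
For example, requesting just the maximum CAPE (the file name is illustrative):

from __future__ import print_function

from netCDF4 import Dataset
from wrf import getvar

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Request only the 2D maximum CAPE rather than the full cape_2d output
mcape = getvar(ncfile, "mcape")

print(mcape)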

Variable Name | Calculated From | Description | Available Units | Additional Keyword Arguments

mcape | cape_2d | 2D Max CAPE | J kg-1 | missing (float): Fill value for output only
mcin | cape_2d | 2D Max CIN | J kg-1 | missing (float): Fill value for output only
lcl | cape_2d | 2D Lifted Condensation Level | m | missing (float): Fill value for output only
lfc | cape_2d | 2D Level of Free Convection | m | missing (float): Fill value for output only
cape3d_only | cape_3d | 3D CAPE | J kg-1 | missing (float): Fill value for output only
cin3d_only | cape_3d | 3D CIN | J kg-1 | missing (float): Fill value for output only
low_cloudfrac | cloudfrac | Cloud Fraction for Low Clouds | %
    vert_type (str): The vertical coordinate type for the cloud thresholds. Must be ‘height_agl’, ‘height_msl’, or ‘pres’. Default is ‘height_agl’.
    low_thresh (float): The low cloud threshold (meters for ‘height_agl’ and ‘height_msl’, pascals for ‘pres’). Default is 300 m (97000 Pa).
    mid_thresh (float): The mid cloud threshold (meters for ‘height_agl’ and ‘height_msl’, pascals for ‘pres’). Default is 2000 m (80000 Pa).
    high_thresh (float): The high cloud threshold (meters for ‘height_agl’ and ‘height_msl’, pascals for ‘pres’). Default is 6000 m (45000 Pa).
mid_cloudfrac | cloudfrac | Cloud Fraction for Mid Clouds | %
    vert_type (str): The vertical coordinate type for the cloud thresholds. Must be ‘height_agl’, ‘height_msl’, or ‘pres’. Default is ‘height_agl’.
    low_thresh (float): The low cloud threshold (meters for ‘height_agl’ and ‘height_msl’, pascals for ‘pres’). Default is 300 m (97000 Pa).
    mid_thresh (float): The mid cloud threshold (meters for ‘height_agl’ and ‘height_msl’, pascals for ‘pres’). Default is 2000 m (80000 Pa).
    high_thresh (float): The high cloud threshold (meters for ‘height_agl’ and ‘height_msl’, pascals for ‘pres’). Default is 6000 m (45000 Pa).
high_cloudfrac | cloudfrac | Cloud Fraction for High Clouds | %
    vert_type (str): The vertical coordinate type for the cloud thresholds. Must be ‘height_agl’, ‘height_msl’, or ‘pres’. Default is ‘height_agl’.
    low_thresh (float): The low cloud threshold (meters for ‘height_agl’ and ‘height_msl’, pascals for ‘pres’). Default is 300 m (97000 Pa).
    mid_thresh (float): The mid cloud threshold (meters for ‘height_agl’ and ‘height_msl’, pascals for ‘pres’). Default is 2000 m (80000 Pa).
    high_thresh (float): The high cloud threshold (meters for ‘height_agl’ and ‘height_msl’, pascals for ‘pres’). Default is 6000 m (45000 Pa).
uvmet_wspd | uvmet_wspd_wdir | Wind Speed Rotated to Earth Coordinates | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.
uvmet_wdir | uvmet_wspd_wdir | Wind Direction (wind_from_direction) Rotated to Earth Coordinates | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.
uvmet10_wspd | uvmet10_wspd_wdir | 10m Wind Speed Rotated to Earth Coordinates | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.
uvmet10_wdir | uvmet10_wspd_wdir | 10m Wind Direction (wind_from_direction) Rotated to Earth Coordinates | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.
wspd | wspd_wdir | Wind Speed in Grid Coordinates | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.
wdir | wspd_wdir | Wind Direction (wind_from_direction) in Grid Coordinates | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.
wspd10 | wspd_wdir10 | 10m Wind Speed in Grid Coordinates | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.
wdir10 | wspd_wdir10 | 10m Wind Direction (wind_from_direction) in Grid Coordinates | m s-1, km h-1, mi h-1, kt, ft s-1 | units (str): Set to desired units. Default is ‘m s-1’.

How To Use

Introduction

The API for wrf-python can be summarized as a variable computation/extraction routine, several interpolation routines, and a few plotting helper utilities. The API is kept as simple as possible to help minimize the learning curve for new programmers, students, and scientists. In the future, we plan to extend xarray for programmers desiring a more object-oriented API, but this remains a work in progress.

The five most commonly used routines, shown together in a sketch after this list, can be summarized as:

  • wrf.getvar() - Extracts WRF-ARW NetCDF variables and computes diagnostic variables that WRF does not compute (e.g. storm relative helicity). This is the routine that you will use most often.
  • wrf.interplevel() - Interpolates a three-dimensional field to a horizontal plane at a specified level using simple (fast) linear interpolation (e.g. 850 hPa temperature).
  • wrf.vertcross() - Interpolates a three-dimensional field to a vertical plane through a user-specified horizontal line (i.e. a cross section).
  • wrf.interpline() - Interpolates a two-dimensional field to a user-specified line.
  • wrf.vinterp() - Interpolates a three-dimensional field to user-specified ‘surface’ levels (e.g. theta-e levels). This is a smarter, albeit slower, version of wrf.interplevel().
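
A minimal sketch tying these five routines together (the file name and the interpolation points are illustrative; each routine is covered in detail in the sections below):

from netCDF4 import Dataset
from wrf import getvar, interplevel, vertcross, interpline, vinterp, CoordPair

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Extract or compute variables
slp = getvar(ncfile, "slp")        # diagnostic: sea level pressure
p = getvar(ncfile, "pressure")     # full model pressure (hPa)
z = getvar(ncfile, "z")            # model height (m)
t2 = getvar(ncfile, "T2")          # regular WRF output variable

# Horizontal plane: 500 hPa heights
ht_500 = interplevel(z, p, 500.)

# Vertical plane through a horizontal line (grid points are illustrative)
p_cross = vertcross(p, z, start_point=CoordPair(x=0, y=100),
                    end_point=CoordPair(x=-1, y=100))

# 2D field along a line
t2_line = interpline(t2, start_point=CoordPair(x=100, y=0),
                     end_point=CoordPair(x=100, y=-1))

# 3D field on theta-e 'surface' levels
tk_eth = vinterp(ncfile, field=getvar(ncfile, "tk"), vert_coord="eth",
                 interp_levels=[300., 500.], extrapolate=True,
                 field_type="tk", log_p=True)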

Basic Usage

Computing Diagnostic Variables

The primary use for the wrf.getvar() function is to return diagnostic variables that require a calculation, since WRF does not produce these variables natively. These diagnostics include CAPE, storm relative helicity, omega, sea level pressure, etc. A table of all available diagnostics can be found here: Table of Available Diagnostics.

In the example below, sea level pressure is calculated and printed.

from __future__ import print_function

from netCDF4 import Dataset
from wrf import getvar

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Get the Sea Level Pressure
slp = getvar(ncfile, "slp")

print(slp)

Result:

<xarray.DataArray u'slp' (south_north: 1059, west_east: 1799)>
array([[ 1012.220337,  1012.298157,  1012.247864, ...,  1010.132019,
         1009.932312,  1010.067078],
       [ 1012.432861,  1012.444763,  1012.33667 , ...,  1010.1073  ,
         1010.108459,  1010.047607],
       [ 1012.395447,  1012.380859,  1012.417053, ...,  1010.22937 ,
         1010.055969,  1010.026794],
       ...,
       [ 1009.042358,  1009.069214,  1008.987793, ...,  1019.19281 ,
         1019.144348,  1019.110596],
       [ 1009.224854,  1009.075134,  1008.986389, ...,  1019.071899,
         1019.042664,  1019.061279],
       [ 1009.188965,  1009.107117,  1008.979797, ...,  1018.917786,
         1018.956848,  1019.047485]], dtype=float32)
Coordinates:
    XLONG    (south_north, west_east) float32 -122.72 -122.693 -122.666 ...
    XLAT     (south_north, west_east) float32 21.1381 21.1451 21.1521 ...
    Time     datetime64[ns] 2016-10-07
Dimensions without coordinates: south_north, west_east
Attributes:
    FieldType: 104
    MemoryOrder: XY
    description: sea level pressure
    units: hPa
    stagger:
    coordinates: XLONG XLAT
    projection: LambertConformal(stand_lon=-97.5, moad_cen_lat=38.5000038147,
                                 truelat1=38.5, truelat2=38.5, pole_lat=90.0,
                                 pole_lon=0.0)

Extracting WRF NetCDF Variables

In addition to computing diagnostic variables (see Computing Diagnostic Variables), the wrf.getvar() function can be used to extract regular WRF-ARW output NetCDF variables.

from __future__ import print_function

from netCDF4 import Dataset
from wrf import getvar

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

p = getvar(ncfile, "P")

print(p)

Result:

<xarray.DataArray u'P' (bottom_top: 50, south_north: 1059, west_east: 1799)>
array([[[  1.217539e+03,   1.225320e+03, ...,   9.876406e+02,   1.001117e+03],
        [  1.238773e+03,   1.240047e+03, ...,   1.005297e+03,   9.991719e+02],
        ...,
        [  9.208594e+02,   9.059141e+02, ...,   1.902922e+03,   1.904805e+03],
        [  9.172734e+02,   9.091094e+02, ...,   1.894375e+03,   1.903422e+03]],

       [[  1.219562e+03,   1.210273e+03, ...,   9.973984e+02,   9.907891e+02],
        [  1.224578e+03,   1.223508e+03, ...,   9.985547e+02,   9.921172e+02],
        ...,
        [  9.012734e+02,   9.052031e+02, ...,   1.897766e+03,   1.894500e+03],
        [  9.137500e+02,   9.071719e+02, ...,   1.893273e+03,   1.893664e+03]],

       ...,
       [[  7.233154e+00,   7.224121e+00, ...,   3.627930e+00,   3.613770e+00],
        [  7.226318e+00,   7.358154e+00, ...,   3.725098e+00,   3.634033e+00],
        ...,
        [  5.354248e+00,   5.406006e+00, ...,   1.282715e+01,   1.264844e+01],
        [  5.295410e+00,   5.177490e+00, ...,   1.256274e+01,   1.257642e+01]],

       [[  2.362061e+00,   2.376221e+00, ...,   1.151367e+00,   1.156982e+00],
        [  2.342529e+00,   2.403809e+00, ...,   1.198486e+00,   1.155273e+00],
        ...,
        [  1.732910e+00,   1.768799e+00, ...,   4.247070e+00,   4.135498e+00],
        [  1.715332e+00,   1.657227e+00, ...,   4.036377e+00,   4.047852e+00]]], dtype=float32)
Coordinates:
    XLONG    (south_north, west_east) float32 -122.72 -122.693 -122.666 ...
    XLAT     (south_north, west_east) float32 21.1381 21.1451 21.1521 ...
    Time     datetime64[ns] 2016-10-07
Dimensions without coordinates: bottom_top, south_north, west_east
Attributes:
    FieldType: 104
    MemoryOrder: XYZ
    description: perturbation pressure
    units: Pa
    stagger:
    coordinates: XLONG XLAT
    projection: LambertConformal(stand_lon=-97.5, moad_cen_lat=38.5000038147,
                                 truelat1=38.5, truelat2=38.5, pole_lat=90.0,
                                 pole_lon=0.0)

Disabling xarray and metadata

Sometimes you just want a regular numpy array and don’t care about metadata. This is often the case when you are working with compiled extensions. Metadata can be disabled in one of two ways.

  1. Disable xarray completely.
  2. Set the meta function parameter to False.

The example below illustrates both.

from __future__ import print_function

from netCDF4 import Dataset
from wrf import getvar, disable_xarray, enable_xarray

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Disable xarray completely
disable_xarray()
p_no_meta = getvar(ncfile, "P")
print(type(p_no_meta))
enable_xarray()

# Disable by using the meta parameter
p_no_meta = getvar(ncfile, "P", meta=False)
print(type(p_no_meta))

Result:

<type 'numpy.ndarray'>
<type 'numpy.ndarray'>

Extracting a Numpy Array from a DataArray

If you need to convert an xarray.DataArray to a numpy.ndarray, wrf-python provides the wrf.to_np() function for this purpose. Although an xarray.DataArray object already contains the xarray.DataArray.values attribute for extracting the Numpy array, there is a problem when working with compiled extensions. The behavior for xarray (and pandas) is to convert missing/fill values to NaN, which may cause crashes when working with compiled extensions. Also, some existing code may be designed to work with numpy.ma.MaskedArray, and numpy arrays with NaN may not work with it.

The wrf.to_np() function does the following:

  1. If no missing/fill values are used, wrf.to_np() simply returns the xarray.DataArray.values attribute.
  2. If missing/fill values are used, then wrf.to_np() replaces the NaN values with the _FillValue found in the xarray.DataArray.attrs attribute (required) and a numpy.ma.MaskedArray is returned.

from __future__ import print_function

from netCDF4 import Dataset
from wrf import getvar, to_np

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Get the 3D CAPE, which contains missing values
cape_3d = getvar(ncfile, "cape_3d")

# Since there are missing values, this should return a MaskedArray
cape_3d_ndarray = to_np(cape_3d)

print(type(cape_3d_ndarray))

Result:

<class 'numpy.ma.core.MaskedArray'>

Sequences of Files

Combining Multiple Files Using the ‘cat’ Method

The ‘cat’ (concatenate) method aggregates all files in the sequence along the ‘Time’ dimension, which will be the leftmost dimension for the output array. To include all of the times, in all of the files, in the output array, set the timeidx parameter to wrf.ALL_TIMES (an alias for None). If a single value is specified for timeidx, then the time index is assumed to be taken from the concatenation of all times for all files.

It is important to note that no sorting is performed in the wrf.getvar() routine, so all files in the sequence must be sorted prior to calling this function.
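
For example, a sorted sequence can be built from a file name pattern before calling wrf.getvar() (a minimal sketch; the wildcard pattern is illustrative):

from glob import glob

from netCDF4 import Dataset
from wrf import getvar, ALL_TIMES

# Sort the file names first, since wrf.getvar() will not sort them for you
file_names = sorted(glob("wrfout_d01_2016-10-07_*"))
wrflist = [Dataset(f) for f in file_names]

p_cat = getvar(wrflist, "P", timeidx=ALL_TIMES, method="cat")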

from __future__ import print_function

from netCDF4 import Dataset
from wrf import getvar, ALL_TIMES

# Creating a simple test list with three timesteps
wrflist = [Dataset("wrfout_d01_2016-10-07_00_00_00"),
           Dataset("wrfout_d01_2016-10-07_01_00_00"),
           Dataset("wrfout_d01_2016-10-07_02_00_00")]

# Extract the 'P' variable for all times
p_cat = getvar(wrflist, "P", timeidx=ALL_TIMES, method="cat")

print(p_cat)

Result:

<xarray.DataArray u'P' (Time: 3, bottom_top: 50, south_north: 1059, west_east: 1799)>
array([[[[  1.21753906e+03,   1.22532031e+03,   1.22030469e+03, ...,
        1.00760156e+03,   9.87640625e+02,   1.00111719e+03],
     [  1.23877344e+03,   1.24004688e+03,   1.22926562e+03, ...,
        1.00519531e+03,   1.00529688e+03,   9.99171875e+02],
     [  1.23503906e+03,   1.23367188e+03,   1.23731250e+03, ...,
        1.01739844e+03,   1.00005469e+03,   9.97093750e+02],
     ...,
     [  1.77978516e+00,   1.77050781e+00,   1.79003906e+00, ...,
        4.22949219e+00,   4.25659180e+00,   4.13647461e+00],
     [  1.73291016e+00,   1.76879883e+00,   1.77978516e+00, ...,
        4.24047852e+00,   4.24707031e+00,   4.13549805e+00],
     [  1.71533203e+00,   1.65722656e+00,   1.67480469e+00, ...,
        4.06884766e+00,   4.03637695e+00,   4.04785156e+00]]]], dtype=float32)
Coordinates:
    XLONG        (south_north, west_east) float32 -122.72 -122.693 -122.666 ...
    XLAT         (south_north, west_east) float32 21.1381 21.1451 21.1521 ...
  * Time         (Time) datetime64[ns] 2016-10-07 2016-10-07 2016-10-07
    datetime     (Time) datetime64[ns] 2016-10-07T00:00:00 ...
Dimensions without coordinates: bottom_top, south_north, west_east
Attributes:
    FieldType: 104
    MemoryOrder: XYZ
    description: perturbation pressure
    units: Pa
    stagger:
    coordinates: XLONG XLAT
    projection: LambertConformal(stand_lon=-97.5, moad_cen_lat=38.5000038147,
                                 truelat1=38.5, truelat2=38.5, pole_lat=90.0,
                                 pole_lon=0.0)

Combining Multiple Files Using the ‘join’ Method

The ‘join’ method combines a sequence of files by adding a new leftmost dimension for the file/sequence index. In situations where there are multiple files with multiple times, and the last file contains fewer times than the previous files, the remaining arrays will be filled with missing values. There are checks in place within the wrf-python algorithms to look for these missing arrays, but be careful when calling compiled routines outside of wrf-python.

In most cases, the timeidx parameter should be set to wrf.ALL_TIMES. If a timeidx value is specified, then this time index is used when extracting the variable from each file. In cases where there are multiple files with multiple time steps, this is probably nonsensical, since the nth time index for each file represents a different time.

In general, the join method is rarely needed, so the cat method should be used for most cases.

from __future__ import print_function

from netCDF4 import Dataset
from wrf import getvar, ALL_TIMES


# Creating a simple test list with three timesteps
wrflist = [Dataset("wrfout_d01_2016-10-07_00_00_00"),
           Dataset("wrfout_d01_2016-10-07_01_00_00"),
           Dataset("wrfout_d01_2016-10-07_02_00_00")]

# Extract the 'P' variable for all times
p_join = getvar(wrflist, "P", timeidx=ALL_TIMES, method="join")

print(p_join)

Result:

<xarray.DataArray u'P' (file: 3, bottom_top: 50, south_north: 1059, west_east: 1799)>
array([[[[  1.217539e+03, ...,   1.001117e+03],
         ...,
         [  9.172734e+02, ...,   1.903422e+03]],
        ...,
        [[  2.362061e+00, ...,   1.156982e+00],
         ...,
         [  1.715332e+00, ...,   4.047852e+00]]]], dtype=float32)
Coordinates:
    XLONG     (south_north, west_east) float32 -122.72 -122.693 -122.666 ...
    XLAT      (south_north, west_east) float32 21.1381 21.1451 21.1521 ...
  * file      (file) int64 0 1 2
    datetime  (file) datetime64[ns] 2016-10-07 ...
Dimensions without coordinates: bottom_top, south_north, west_east
Attributes:
    FieldType: 104
    MemoryOrder: XYZ
    description: perturbation pressure
    units: Pa
    stagger:
    coordinates: XLONG XLAT
    projection: LambertConformal(stand_lon=-97.5, moad_cen_lat=38.5000038147,
                                 truelat1=38.5, truelat2=38.5, pole_lat=90.0,
                                 pole_lon=0.0)

Note how the ‘Time’ dimension was replaced with the ‘file’ dimension, due to numpy’s automatic squeezing of the single element ‘Time’ dimension. To maintain the ‘Time’ dimension, set the squeeze parameter to False.

from __future__ import print_function

from netCDF4 import Dataset
from wrf import getvar, ALL_TIMES


# Creating a simple test list with three timesteps
wrflist = [Dataset("wrfout_d01_2016-10-07_00_00_00"),
           Dataset("wrfout_d01_2016-10-07_01_00_00"),
           Dataset("wrfout_d01_2016-10-07_02_00_00")]

# Extract the 'P' variable for all times
p_join = getvar(wrflist, "P", timeidx=ALL_TIMES, method="join", squeeze=False)

print(p_join)

Result

<xarray.DataArray u'P' (file: 3, Time: 1, bottom_top: 50, south_north: 1059, west_east: 1799)>
array([[[[[  1.217539e+03, ...,   1.001117e+03],
      ...,
      [  9.172734e+02, ...,   1.903422e+03]],

     ...,
     [[  2.362061e+00, ...,   1.156982e+00],
      ...,
      [  1.715332e+00, ...,   4.047852e+00]]]]], dtype=float32)
Coordinates:
    XLONG     (south_north, west_east) float32 -122.72 -122.693 -122.666 ...
    XLAT      (south_north, west_east) float32 21.1381 21.1451 21.1521 ...
  * file      (file) int64 0 1 2
    datetime  (file, Time) datetime64[ns] 2016-10-07 2016-10-07 2016-10-07
Dimensions without coordinates: Time, bottom_top, south_north, west_east
Attributes:
    FieldType: 104
    MemoryOrder: XYZ
    description: perturbation pressure
    units: Pa
    stagger:
    coordinates: XLONG XLAT
    projection: LambertConformal(stand_lon=-97.5, moad_cen_lat=38.5000038147,
                                 truelat1=38.5, truelat2=38.5, pole_lat=90.0,
                                 pole_lon=0.0)

Dictionaries of WRF File Sequences

Dictionaries can also be used as input to the wrf.getvar() function. This can be useful when working with ensembles. However, all WRF files in the dictionary must have the same dimensions. The result is an array where the leftmost dimension corresponds to the keys of the dictionary. Nested dictionaries are allowed.

The method argument is used to describe how each sequence in the dictionary will be combined.

from __future__ import print_function

from netCDF4 import Dataset
from wrf import getvar, ALL_TIMES

wrf_dict = {"ens1" : [Dataset("ens1/wrfout_d01_2016-10-07_00_00_00"),
                      Dataset("ens1/wrfout_d01_2016-10-07_01_00_00"),
                      Dataset("ens1/wrfout_d01_2016-10-07_02_00_00")],
            "ens2" : [Dataset("ens2/wrfout_d01_2016-10-07_00_00_00"),
                      Dataset("ens2/wrfout_d01_2016-10-07_01_00_00"),
                      Dataset("ens2/wrfout_d01_2016-10-07_02_00_00")]
            }

p = getvar(wrf_dict, "P", timeidx=ALL_TIMES)

print(p)

Result:

<xarray.DataArray 'P' (key_0: 2, Time: 3, bottom_top: 50, south_north: 1059, west_east: 1799)>
array([[[[[  1.217539e+03, ...,   1.001117e+03],
          ...,
          [  9.172734e+02, ...,   1.903422e+03]],

         ...,
         [[  2.362061e+00, ...,   1.156982e+00],
          ...,
          [  1.715332e+00, ...,   4.047852e+00]]]]], dtype=float32)
Coordinates:
    XLONG     (south_north, west_east) float32 -122.72 -122.693 -122.666 ...
    XLAT      (south_north, west_east) float32 21.1381 21.1451 21.1521 ...
  * Time      (Time) datetime64[ns] 2016-10-07 ...
    datetime  (Time) datetime64[ns] 2016-10-07 ...
  * key_0     (key_0) <U4 u'ens1' u'ens2'
Dimensions without coordinates: bottom_top, south_north, west_east
Attributes:
    FieldType: 104
    MemoryOrder: XYZ
    description: perturbation pressure
    units: Pa
    stagger:
    coordinates: XLONG XLAT
    projection: LambertConformal(stand_lon=-97.5, moad_cen_lat=38.5000038147,
                                 truelat1=38.5, truelat2=38.5, pole_lat=90.0,
                                 pole_lon=0.0)

Interpolation Routines

Interpolating to a Horizontal Level

The wrf.interplevel() function is used to interpolate a 3D field to a specific horizontal level, usually pressure or height.

from __future__ import print_function

from netCDF4 import Dataset
from wrf import getvar, interplevel

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Extract the Geopotential Height and Pressure (hPa) fields
z = getvar(ncfile, "z")
p = getvar(ncfile, "pressure")

# Compute the 500 hPa Geopotential Height
ht_500mb = interplevel(z, p, 500.)

print(ht_500mb)

Result:

<xarray.DataArray u'height_500_hPa' (south_north: 1059, west_east: 1799)>
array([[ 5882.16992188,  5881.87939453,  5881.81005859, ...,
     5890.14501953,  5890.23583984,  5890.33349609],
   [ 5882.71777344,  5882.17529297,  5882.1171875 , ...,
     5890.37695312,  5890.38525391,  5890.27978516],
   [ 5883.32177734,  5882.47119141,  5882.34130859, ...,
     5890.48339844,  5890.42871094,  5890.17724609],
   ...,
   [ 5581.45800781,  5580.46826172,  5579.32617188, ...,
     5788.93554688,  5788.70507812,  5788.64453125],
   [ 5580.32714844,  5579.51611328,  5578.34863281, ...,
     5788.15869141,  5787.87304688,  5787.65527344],
   [ 5579.64404297,  5578.30957031,  5576.98632812, ...,
     5787.19384766,  5787.10888672,  5787.06933594]], dtype=float32)
Coordinates:
    XLONG        (south_north, west_east) float32 -122.72 -122.693 -122.666 ...
    XLAT         (south_north, west_east) float32 21.1381 21.1451 21.1521 ...
    Time         datetime64[ns] 2016-10-07
Dimensions without coordinates: south_north, west_east
Attributes:
    FieldType: 104
    units: m
    stagger:
    coordinates: XLONG XLAT
    projection: LambertConformal(stand_lon=-97.5, moad_cen_lat=38.5000038147,
                                 truelat1=38.5, truelat2=38.5, pole_lat=90.0,
                                 pole_lon=0.0)
    level: 500 hPa
    missing_value: 9.96920996839e+36
    _FillValue: 9.96920996839e+36

Vertical Cross Sections

The wrf.vertcross() function is used to create vertical cross sections. To define a cross section, a start point and an end point need to be specified. Alternatively, a pivot point and an angle may be used. The start point, end point, and pivot point are specified using a wrf.CoordPair object, and coordinates can be given either in grid (x,y) coordinates or in (latitude,longitude) coordinates. When using (latitude,longitude) coordinates, a NetCDF file object or a wrf.WrfProj object must be provided.

The vertical levels can also be specified using the levels parameter. If not specified, then approximately 100 levels will be chosen in 1% increments.

Example Using Start Point and End Point

from __future__ import print_function, division

from netCDF4 import Dataset
from wrf import getvar, vertcross, CoordPair

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Get the geopotential height (m) and pressure (hPa).
z = getvar(ncfile, "z")
p = getvar(ncfile, "pressure")

# Define a start point and end point in grid coordinates
start_point = CoordPair(x=0, y=(z.shape[-2]-1)//2)
end_point = CoordPair(x=-1, y=(z.shape[-2]-1)//2)

# Calculate the vertical cross section.  By setting latlon to True, this
# also calculates the latitude and longitude coordinates along the cross
# section line and adds them to the 'xy_loc' metadata to help with plotting.
p_vert = vertcross(p, z, start_point=start_point, end_point=end_point, latlon=True)

print(p_vert)

Result:

<xarray.DataArray u'pressure_cross' (vertical: 100, idx: 1798)>
array([[          nan,           nan,           nan, ...,           nan,
              nan,           nan],
   [ 989.66168213,  989.66802979,  989.66351318, ...,  988.05737305,
     987.99151611,  987.96917725],
   [ 959.49450684,  959.50109863,  959.50030518, ...,  958.96948242,
     958.92980957,  958.89294434],
   ...,
   [  24.28092003,   24.27359581,   24.27034378, ...,   24.24800491,
      24.2486496 ,   24.24947357],
   [  23.2868309 ,   23.27933884,   23.27607918, ...,   23.25231361,
      23.2530098 ,   23.25384521],
   [          nan,           nan,           nan, ...,           nan,
              nan,           nan]], dtype=float32)
Coordinates:
    Time      datetime64[ns] 2016-10-07
    xy_loc    (idx) object CoordPair(x=0.0, y=529.0, lat=34.5279502869, lon=-127.398925781) ...
  * vertical  (vertical) float32 0.0 261.828 523.656 785.484 1047.31 1309.14 ...
Dimensions without coordinates: idx
Attributes:
    FieldType: 104
    description: pressure
    units: hPa
    stagger:
    coordinates: XLONG XLAT
    projection: LambertConformal(stand_lon=-97.5, moad_cen_lat=38.5000038147,
                                 truelat1=38.5, truelat2=38.5, pole_lat=90.0,
                                 pole_lon=0.0)
    orientation: (0.0, 529.0) to (1797.0, 529.0)
    missing_value: 9.96920996839e+36
    _FillValue: 9.96920996839e+36

Example Using Pivot Point and Angle

from __future__ import print_function, division

from netCDF4 import Dataset
from wrf import getvar, vertcross, CoordPair

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Get the geopotential height (m) and pressure (hPa).
z = getvar(ncfile, "z")
p = getvar(ncfile, "pressure")

# Define a pivot point and angle in grid coordinates, with the
# pivot point being the center of the grid.
pivot_point = CoordPair(x=(z.shape[-1]-1)//2, y=(z.shape[-2]-1)//2)
angle = 90.0

# Calculate the vertical cross section.  By setting latlon to True, this
# also calculates the latitude and longitude coordinates along the line
# and adds them to the metadata to help with plotting labels.
p_vert = vertcross(p, z, pivot_point=pivot_point, angle=angle, latlon=True)

print(p_vert)

Result:

<xarray.DataArray u'pressure_cross' (vertical: 100, idx: 1798)>
array([[          nan,           nan,           nan, ...,           nan,
              nan,           nan],
   [ 989.66168213,  989.66802979,  989.66351318, ...,  988.05737305,
     987.99151611,  987.96917725],
   [ 959.49450684,  959.50109863,  959.50030518, ...,  958.96948242,
     958.92980957,  958.89294434],
   ...,
   [  24.28092003,   24.27359581,   24.27034378, ...,   24.24800491,
      24.2486496 ,   24.24947357],
   [  23.2868309 ,   23.27933884,   23.27607918, ...,   23.25231361,
      23.2530098 ,   23.25384521],
   [          nan,           nan,           nan, ...,           nan,
              nan,           nan]], dtype=float32)
Coordinates:
    Time      datetime64[ns] 2016-10-07
    xy_loc    (idx) object CoordPair(x=0.0, y=529.0, lat=34.5279502869, lon=-127.398925781) ...
  * vertical  (vertical) float32 0.0 261.828 523.656 785.484 1047.31 1309.14 ...
Dimensions without coordinates: idx
Attributes:
    FieldType: 104
    description: pressure
    units: hPa
    stagger:
    coordinates: XLONG XLAT
    projection: LambertConformal(stand_lon=-97.5, moad_cen_lat=38.5000038147,
                                 truelat1=38.5, truelat2=38.5, pole_lat=90.0,
                                 pole_lon=0.0)
    orientation: (0.0, 529.0) to (1797.0, 529.0) ; center=CoordPair(x=899.0, y=529.0) ; angle=90.0
    missing_value: 9.96920996839e+36
    _FillValue: 9.96920996839e+36

Example Using Lat/Lon Coordinates

from __future__ import print_function, division

from netCDF4 import Dataset
from wrf import getvar, vertcross, CoordPair

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Get the geopotential height (m) and pressure (hPa).
z = getvar(ncfile, "z")
p = getvar(ncfile, "pressure")
lats = getvar(ncfile, "lat")
lons = getvar(ncfile, "lon")

# Making the same horizontal line, but with lats/lons
start_lat = lats[(lats.shape[-2]-1)//2, 0]
end_lat = lats[(lats.shape[-2]-1)//2, -1]
start_lon = lons[(lats.shape[-2]-1)//2, 0]
end_lon = lons[(lats.shape[-2]-1)//2, -1]

# Cross section line using start_point and end_point.
start_point = CoordPair(lat=start_lat, lon=start_lon)
end_point = CoordPair(lat=end_lat, lon=end_lon)

# When using lat/lon coordinates, you must supply a WRF netcdf file object,
# or a projection object with the lower left latitude and lower left
# longitude points.
p_vert = vertcross(p, z, wrfin=ncfile, start_point=start_point, end_point=end_point, latlon=True)
print(p_vert)

Result:

<xarray.DataArray u'pressure_cross' (vertical: 100, idx: 1798)>
array([[          nan,           nan,           nan, ...,           nan,
              nan,           nan],
   [ 989.66168213,  989.66802979,  989.66351318, ...,  988.05737305,
     987.99151611,  987.96917725],
   [ 959.49450684,  959.50109863,  959.50030518, ...,  958.96948242,
     958.92980957,  958.89294434],
   ...,
   [  24.28092003,   24.27359581,   24.27034378, ...,   24.24800491,
      24.2486496 ,   24.24947357],
   [  23.2868309 ,   23.27933884,   23.27607918, ...,   23.25231361,
      23.2530098 ,   23.25384521],
   [          nan,           nan,           nan, ...,           nan,
              nan,           nan]], dtype=float32)
Coordinates:
    Time      datetime64[ns] 2016-10-07
    xy_loc    (idx) object CoordPair(x=0.0, y=529.0, lat=34.5279502869, lon=-127.398925781) ...
  * vertical  (vertical) float32 0.0 261.828 523.656 785.484 1047.31 1309.14 ...
Dimensions without coordinates: idx
Attributes:
    FieldType: 104
    description: pressure
    units: hPa
    stagger:
    coordinates: XLONG XLAT
    projection: LambertConformal(stand_lon=-97.5, moad_cen_lat=38.5000038147,
                                 truelat1=38.5, truelat2=38.5, pole_lat=90.0,
                                 pole_lon=0.0)
    orientation: (0.0, 529.0) to (1797.0, 529.0)
    missing_value: 9.96920996839e+36
    _FillValue: 9.96920996839e+36

Example Using Specified Vertical Levels

from __future__ import print_function, division

from netCDF4 import Dataset
from wrf import getvar, vertcross, CoordPair

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Get the geopotential height (m) and pressure (hPa).
z = getvar(ncfile, "z")
p = getvar(ncfile, "pressure")
lats = getvar(ncfile, "lat")
lons = getvar(ncfile, "lon")

# Making the same horizontal line, but with lats/lons
start_lat = lats[(lats.shape[-2]-1)//2, 0]
end_lat = lats[(lats.shape[-2]-1)//2, -1]
start_lon = lons[(lats.shape[-2]-1)//2, 0]
end_lon = lons[(lats.shape[-2]-1)//2, -1]

# Cross section line using start_point and end_point, created from
# the lat/lon points selected above.
start_point = CoordPair(lat=start_lat, lon=start_lon)
end_point = CoordPair(lat=end_lat, lon=end_lon)

# Specify vertical levels
levels = [1000., 2000., 3000.]

# Calculate the cross section
p_vert = vertcross(p, z, wrfin=ncfile, levels=levels, start_point=start_point, end_point=end_point, latlon=True)

print(p_vert)

Result:

<xarray.DataArray u'pressure_cross' (vertical: 3, idx: 1798)>
array([[ 906.375     ,  906.38043213,  906.39367676, ...,  907.6661377 ,
         907.63006592,  907.59191895],
       [ 804.24737549,  804.26885986,  804.28076172, ...,  806.98632812,
         806.95556641,  806.92608643],
       [ 713.24578857,  713.2722168 ,  713.27886963, ...,  716.09594727,
         716.06610107,  716.03503418]], dtype=float32)
Coordinates:
    Time      datetime64[ns] 2016-10-07
    xy_loc    (idx) object CoordPair(x=0.0, y=529.0, lat=34.5279502869, lon=-127.398925781) ...
  * vertical  (vertical) float32 1000.0 2000.0 3000.0
Dimensions without coordinates: idx
Attributes:
    FieldType: 104
    description: pressure
    units: hPa
    stagger:
    coordinates: XLONG XLAT
    projection: LambertConformal(stand_lon=-97.5, moad_cen_lat=38.5000038147,
                                 truelat1=38.5, truelat2=38.5, pole_lat=90.0,
                                 pole_lon=0.0)
    orientation: (0.0, 529.0) to (1797.0, 529.0)
    missing_value: 9.96920996839e+36
    _FillValue: 9.96920996839e+36

Interpolating Two-Dimensional Fields to a Line

Two-dimensional fields can be interpolated along a line, in a manner similar to the vertical cross section (see Vertical Cross Sections), using the wrf.interpline() function. To define the line to interpolate along, a start point and an end point need to be specified. Alternatively, a pivot point and an angle may be used. The start point, end point, and pivot point are specified using a wrf.CoordPair object, and coordinates can be given either in grid (x,y) coordinates or in (latitude,longitude) coordinates. When using (latitude,longitude) coordinates, a NetCDF file object or a wrf.WrfProj object must also be provided.

Example Using Start Point and End Point

from __future__ import print_function, division

from netCDF4 import Dataset
from wrf import getvar, interpline, CoordPair

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Get the 2m temperature
t2 = getvar(ncfile, "T2")

# Create a south-north line in the center of the domain using
# start point and end point
start_point = CoordPair(x=(t2.shape[-1]-1)//2, y=0)
end_point = CoordPair(x=(t2.shape[-1]-1)//2, y=-1)

# Calculate the vertical cross section.  By setting latlon to True, this
# also calculates the latitude and longitude coordinates along the line
# and adds them to the metadata to help with plotting labels.
t2_line = interpline(t2, start_point=start_point, end_point=end_point, latlon=True)

print(t2_line, "\n")

Result:

<xarray.DataArray u'T2_line' (line_idx: 1058)>
array([ 302.07214355,  302.08505249,  302.08688354, ...,  279.18557739,
        279.1998291 ,  279.23132324], dtype=float32)
Coordinates:
    Time      datetime64[ns] 2016-10-07
    xy_loc    (line_idx) object CoordPair(x=899.0, y=0.0, lat=24.3645858765, lon=-97.5) ...
Dimensions without coordinates: line_idx
Attributes:
    FieldType: 104
    description: TEMP at 2 M
    units: K
    stagger:
    coordinates: XLONG XLAT
    projection: LambertConformal(stand_lon=-97.5, moad_cen_lat=38.5000038147,
                                 truelat1=38.5, truelat2=38.5, pole_lat=90.0,
                                 pole_lon=0.0)
    orientation: (899.0, 0.0) to (899.0, 1057.0)

Example Using Pivot Point and Angle

from __future__ import print_function, division

from netCDF4 import Dataset
from wrf import getvar, interpline, CoordPair

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Get the 2m temperature
t2 = getvar(ncfile, "T2")

# Create a south-north line using pivot point and angle
pivot_point = CoordPair((t2.shape[-1]-1)//2, (t2.shape[-2]-1)//2)
angle = 0.0

# Calculate the vertical cross section.  By setting latlon to True, this
# also calculates the latitude and longitude coordinates along the line
# and adds them to the metadata to help with plotting labels.
t2_line = interpline(t2, pivot_point=pivot_point, angle=angle, latlon=True)

print(t2_line, "\n")

Result:

<xarray.DataArray u'T2_line' (line_idx: 1058)>
array([ 302.07214355,  302.08505249,  302.08688354, ...,  279.18557739,
        279.1998291 ,  279.23132324], dtype=float32)
Coordinates:
    Time      datetime64[ns] 2016-10-07
    xy_loc    (line_idx) object CoordPair(x=899.0, y=0.0, lat=24.3645858765, lon=-97.5) ...
Dimensions without coordinates: line_idx
Attributes:
    FieldType: 104
    description: TEMP at 2 M
    units: K
    stagger:
    coordinates: XLONG XLAT
    projection: LambertConformal(stand_lon=-97.5, moad_cen_lat=38.5000038147,
                                 truelat1=38.5, truelat2=38.5, pole_lat=90.0,
                                 pole_lon=0.0)
    orientation: (899.0, 0.0) to (899.0, 1057.0) ; center=CoordPair(x=899, y=529) ; angle=0.0

Example Using Lat/Lon Coordinates

from __future__ import print_function, division

from netCDF4 import Dataset
from wrf import getvar, interpline, CoordPair

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

t2 = getvar(ncfile, "T2")
lats = getvar(ncfile, "lat")
lons = getvar(ncfile, "lon")

# Select the latitude,longitude points for a vertical line through
# the center of the domain.
start_lat = lats[0, (lats.shape[-1]-1)//2]
end_lat = lats[-1, (lats.shape[-1]-1)//2]
start_lon = lons[0, (lons.shape[-1]-1)//2]
end_lon = lons[-1, (lons.shape[-1]-1)//2]

# Create the CoordPairs
start_point = CoordPair(lat=start_lat, lon=start_lon)
end_point = CoordPair(lat=end_lat, lon=end_lon)

# Calculate the interpolated line.  To use latitude and longitude points,
# you must supply a WRF NetCDF file object, or a projection object along
# with the lower left latitude and lower left longitude points.
# Also, by setting latlon to True, this routine will
# also calculate the latitude and longitude coordinates along the line
# and adds them to the metadata to help with plotting labels.
t2_line = interpline(t2, wrfin=ncfile, start_point=start_point, end_point=end_point, latlon=True)

print(t2_line)

Result:

<xarray.DataArray u'T2_line' (line_idx: 1058)>
array([ 302.07214355,  302.08505249,  302.08688354, ...,  279.18557739,
        279.1998291 ,  279.23132324], dtype=float32)
Coordinates:
    Time      datetime64[ns] 2016-10-07
    xy_loc    (line_idx) object CoordPair(x=899.0, y=0.0, lat=24.3645858765, lon=-97.5) ...
Dimensions without coordinates: line_idx
Attributes:
    FieldType: 104
    description: TEMP at 2 M
    units: K
    stagger:
    coordinates: XLONG XLAT
    projection: LambertConformal(stand_lon=-97.5, moad_cen_lat=38.5000038147,
                                 truelat1=38.5, truelat2=38.5, pole_lat=90.0,
                                 pole_lon=0.0)
    orientation: (899.0, 0.0) to (899.0, 1057.0)

Interpolating a 3D Field to a Surface Type

The wrf.vinterp() routine is used to interpolate a field to a type of surface. The available surfaces are pressure, geopotential height, theta, and theta-e. The levels to interpolate to must also be specified.

from __future__ import print_function

from netCDF4 import Dataset
from wrf import getvar, vinterp

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

tk = getvar(ncfile, "tk")
# Interpolate tk to theta-e levels
interp_levels = [200, 300, 500, 1000]

interp_field = vinterp(ncfile,
                       field=tk,
                       vert_coord="eth",
                       interp_levels=interp_levels,
                       extrapolate=True,
                       field_type="tk",
                       log_p=True)

print(interp_field)

Result:

<xarray.DataArray u'temp' (interp_level: 4, south_north: 1059, west_east: 1799)>
array([[[ 296.12872314,  296.1166687 ,  296.08905029, ...,  301.71026611,
          301.67956543,  301.67791748],
        [ 296.11352539,  295.95581055,  295.91555786, ...,  301.63052368,
          301.62905884,  301.65887451],
        [ 296.07556152,  295.91577148,  295.88214111, ...,  301.61499023,
          301.60287476,  301.63961792],
        ...,
        [ 219.11134338,  219.08581543,  219.08602905, ...,  218.29879761,
          218.30923462,  218.3787384 ],
        [ 219.09260559,  219.07765198,  219.08340454, ...,  218.2855072 ,
          218.30444336,  218.37931824],
        [ 219.07936096,  219.08181763,  219.10089111, ...,  218.31173706,
          218.34288025,  218.3687439 ]]], dtype=float32)
Coordinates:
    XLONG         (south_north, west_east) float32 -122.72 -122.693 -122.666 ...
    XLAT          (south_north, west_east) float32 21.1381 21.1451 21.1521 ...
    Time          datetime64[ns] 2016-10-07
  * interp_level  (interp_level) int64 200 300 500 1000
Dimensions without coordinates: south_north, west_east
Attributes:
    FieldType: 104
    MemoryOrder: XYZ
    description: temperature
    units: K
    stagger:
    coordinates: XLONG XLAT
    projection: LambertConformal(stand_lon=-97.5, moad_cen_lat=38.5000038147,
                                 truelat1=38.5, truelat2=38.5, pole_lat=90.0,
                                 pole_lon=0.0)
    vert_interp_type: eth

Lat/Lon <-> XY Routines

wrf-python includes a set of routines for converting back and forth between latitude,longitude space and x,y space. The methods are wrf.xy_to_ll(), wrf.xy_to_ll_proj(), wrf.ll_to_xy(), and wrf.ll_to_xy_proj(). The latitude, longitude, x, and y parameters to these methods can be sequences if you want to convert multiple points at once.

Example With Single Coordinates

from __future__ import print_function

from netCDF4 import Dataset
from wrf import xy_to_ll, ll_to_xy

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

lat_lon = xy_to_ll(ncfile, 400, 200)

print(lat_lon)

x_y = ll_to_xy(ncfile, lat_lon[0], lat_lon[1])

print (x_y)

Result:

<xarray.DataArray u'latlon' (lat_lon: 2)>
array([  28.55816408, -112.67827617])
Coordinates:
  * lat_lon   (lat_lon) <U3 u'lat' u'lon'
    xy_coord  object CoordPair(x=400, y=200)
Dimensions without coordinates: idx


<xarray.DataArray u'xy' (x_y: 2)>
array([400, 200])
Coordinates:
    latlon_coord  object CoordPair(lat=28.5581640822, lon=-112.678276173)
  * x_y           (x_y) <U1 u'x' u'y'
Dimensions without coordinates: idx

Example With Multiple Coordinates

from __future__ import print_function

from netCDF4 import Dataset
from wrf import xy_to_ll, ll_to_xy

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

lat_lon = xy_to_ll(ncfile, [400,105], [200,205])

print(lat_lon)

x_y = ll_to_xy(ncfile, lat_lon[0,:], lat_lon[1,:])

print (x_y)

Result:

<xarray.DataArray u'latlon' (lat_lon: 2, idx: 2)>
array([[  28.55816408,   27.03835783],
       [-112.67827617, -121.36392174]])
Coordinates:
  * lat_lon   (lat_lon) <U3 u'lat' u'lon'
    xy_coord  (idx) object CoordPair(x=400, y=200) CoordPair(x=105, y=205)
Dimensions without coordinates: idx


<xarray.DataArray u'xy' (x_y: 2, idx: 2)>
array([[400, 105],
       [200, 205]])
Coordinates:
    latlon_coord  (idx) object CoordPair(lat=28.5581640822, lon=-112.678276173) ...
  * x_y           (x_y) <U1 u'x' u'y'
Dimensions without coordinates: idx
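
The _proj variants perform the same conversions without a WRF file object by taking the map projection parameters directly. Below is a minimal sketch of wrf.ll_to_xy_proj() for the Lambert conformal domain used above; the true latitudes and standard longitude come from the projection attributes shown in the earlier results, while the reference point and the 3000 m grid spacing are assumed values for illustration only.

from __future__ import print_function

from wrf import ll_to_xy_proj

# map_proj=1 denotes Lambert conformal.  The reference (known) point and
# the dx/dy grid spacing below are assumptions for illustration; in
# practice these values come from the wrfout global attributes.
x_y = ll_to_xy_proj(28.558, -112.678,
                    map_proj=1,
                    truelat1=38.5, truelat2=38.5, stand_lon=-97.5,
                    ref_lat=21.138, ref_lon=-122.72,
                    known_x=0, known_y=0,
                    dx=3000., dy=3000.)

print(x_y)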

Mapping Helper Routines

wrf-python includes several routines to assist with plotting, primarily for obtaining the mapping object used for cartopy, basemap, and PyNGL. For all three plotting systems, the mapping object can be determined directly from a variable when using xarray, or can be obtained from the WRF output file(s) if xarray is turned off.

Also included are utilities for extracting the geographic boundaries directly from xarray variables. This can be useful in situations where you only want to work with subsets (slices) of a large domain, but don’t want to define the map projection over the subset region.

Cartopy Example Using a Variable

In this example, we’re going to extract the cartopy mapping object from a diagnostic variable (slp), the lat,lon coordinates, and the geographic boundaries. Next, we’re going to take a subset of the diagnostic variable and extract the geographic boundaries. Some of the variables will be printed for demonstration.

from __future__ import print_function

from netCDF4 import Dataset
from wrf import getvar, get_cartopy, latlon_coords, geo_bounds

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Use SLP for the example variable
slp = getvar(ncfile, "slp")

# Get the cartopy mapping object
cart_proj = get_cartopy(slp)

print (cart_proj)

# Get the latitude and longitude coordinates.  These are usually needed
# for plotting.
lats, lons = latlon_coords(slp)

# Get the geobounds for the SLP variable
bounds = geo_bounds(slp)

print (bounds)

# Get the geographic boundaries for a subset of the domain
slp_subset = slp[150:250, 150:250]
slp_subset_bounds = geo_bounds(slp_subset)

print (slp_subset_bounds)

Result:

<cartopy.crs.LambertConformal object at 0x115374290>
GeoBounds(CoordPair(lat=21.1381225586, lon=-122.719528198), CoordPair(lat=47.8436355591, lon=-60.9013671875))
GeoBounds(CoordPair(lat=25.9246292114, lon=-119.675048828), CoordPair(lat=29.0761833191, lon=-117.46484375))

Cartopy Example Using WRF Output Files

In this example, the cartopy mapping object and geographic boundaries will be extracted directly from the NetCDF file.

from __future__ import print_function

from netCDF4 import Dataset
from wrf import get_cartopy, geo_bounds

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Get the cartopy mapping object from the netcdf file
cart_proj = get_cartopy(wrfin=ncfile)

print (cart_proj)

# Get the geobounds from the netcdf file (by default, uses XLAT, XLONG)
# You can supply a variable name to get the staggered boundaries
bounds = geo_bounds(wrfin=ncfile)

print (bounds)

Result:

<cartopy.crs.LambertConformal object at 0x11d3be650>
GeoBounds(CoordPair(lat=21.1381225586, lon=-122.719528198), CoordPair(lat=47.8436355591, lon=-60.9013671875))
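
As noted in the comment in the example above, a variable name can be supplied to wrf.geo_bounds() to get the boundaries for that variable's grid. A minimal sketch for the staggered U grid:

from __future__ import print_function

from netCDF4 import Dataset
from wrf import geo_bounds

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Supplying varname returns the boundaries for that variable's grid,
# which for U is staggered in the west-east direction
u_bounds = geo_bounds(wrfin=ncfile, varname="U")

print(u_bounds)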

Basemap Example Using a Variable

In this example, we’re going to extract the basemap mapping object from a diagnostic variable (slp), the lat,lon coordinates, and the geographic boundaries. Next, we’re going to take a subset of the diagnostic variable and extract the geographic boundaries. Some of the variables will be printed for demonstration.

from __future__ import print_function

from netCDF4 import Dataset
from wrf import getvar, get_basemap, latlon_coords, geo_bounds

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

slp = getvar(ncfile, "slp")

# Get the basemap mapping object
bm = get_basemap(slp)

print (bm)

# Get the latitude and longitude coordinates.  These are usually needed
# for plotting.
lats, lons = latlon_coords(slp)

# Get the geobounds for the SLP variable
bounds = geo_bounds(slp)

print(bounds)

# Get the geographic boundaries for a subset of the domain
slp_subset = slp[150:250, 150:250]
slp_subset_bounds = geo_bounds(slp_subset)

print (slp_subset_bounds)

Result:

<mpl_toolkits.basemap.Basemap object at 0x114d65650>
GeoBounds(CoordPair(lat=21.1381225586, lon=-122.719528198), CoordPair(lat=47.8436355591, lon=-60.9013671875))
GeoBounds(CoordPair(lat=25.9246292114, lon=-119.675048828), CoordPair(lat=29.0761833191, lon=-117.46484375))

Basemap Example Using WRF Output Files

In this example, the basemap mapping object and geographic boundaries will be extracted directly from the NetCDF file.

from __future__ import print_function

from netCDF4 import Dataset
from wrf import get_basemap, geo_bounds

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Get the basemap object from the netcdf file
bm = get_basemap(wrfin=ncfile)

print (bm)

# Get the geographic boundaries from the netcdf file
bounds = geo_bounds(wrfin=ncfile)

print (bounds)

Result:

<mpl_toolkits.basemap.Basemap object at 0x125bb4750>
GeoBounds(CoordPair(lat=21.1381225586, lon=-122.719528198), CoordPair(lat=47.8436355591, lon=-60.9013671875))

PyNGL Example Using a Variable

In this example, we’re going to extract the PyNGL resources from a diagnostic variable (slp), the lat,lon coordinates, and the geographic boundaries. Next, we’re going to take a subset of the diagnostic variable and extract the geographic boundaries. Some of the variables will be printed for demonstration.

from __future__ import print_function

from netCDF4 import Dataset
from wrf import getvar, get_pyngl, latlon_coords, geo_bounds

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Use SLP as the example variable
slp = getvar(ncfile, "slp")

# Get the pyngl resources from the variable
pyngl_resources = get_pyngl(slp)

print (pyngl_resources)

# Get the latitude and longitude coordinates.  These are needed for plotting.
lats, lons = latlon_coords(slp)

# Get the geobounds from the SLP variable
bounds = geo_bounds(slp)

print(bounds)

# Get the geographic boundaries for a subset of the domain
slp_subset = slp[150:250, 150:250]
slp_subset_bounds = geo_bounds(slp_subset)

print (slp_subset_bounds)

Result:

<Ngl.Resources instance at 0x114cabbd8>
GeoBounds(CoordPair(lat=21.1381225586, lon=-122.719528198), CoordPair(lat=47.8436355591, lon=-60.9013671875))
GeoBounds(CoordPair(lat=25.9246292114, lon=-119.675048828), CoordPair(lat=29.0761833191, lon=-117.46484375))

PyNGL Example Using WRF Output Files

In this example, the PyNGL resources and geographic boundaries will be extracted directly from the NetCDF file.

from __future__ import print_function

from netCDF4 import Dataset
from wrf import get_pyngl, geo_bounds

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Get the pyngl resources from the netcdf file
pyngl_resources = get_pyngl(wrfin=ncfile)

print (pyngl_resources)

# Get the geographic boundaries from the netcdf file
bounds = geo_bounds(wrfin=ncfile)

print (bounds)

Result:

<Ngl.Resources instance at 0x115391f80>
GeoBounds(CoordPair(lat=21.1381225586, lon=-122.719528198), CoordPair(lat=47.8436355591, lon=-60.9013671875))

Moving Nests

When a domain nest is moving, the domain boundaries become a function of time when combining the files using the ‘cat’ method. When using ‘join’, the domain boundaries become a function of both file and time. As a result, the methods that depend on geographic boundaries (wrf.geo_bounds(), wrf.get_basemap(), etc.) will return arrays of objects rather than a single object when multiple times and/or files are detected in the underlying coordinate variables.

An exception is wrf.get_cartopy(), which contains no geographic boundary information in the mapping object. Instead, the wrf.cartopy_xlim() and wrf.cartopy_ylim() methods can be used to get the array of matplotlib axes boundaries (returned in the axes projection coordinates).

Geographic Boundaries with Moving Nest Example

In this example, the geographic boundaries are extracted from a sequence of files that use a moving nest. The result will be an array of wrf.GeoBounds objects.

from __future__ import print_function

from glob import glob
from netCDF4 import Dataset as nc

from wrf import getvar, ALL_TIMES, geo_bounds

# Get all the domain 02 files
wrf_filenames = glob("wrf_files/wrf_vortex_multi/wrfout_d02_*")
ncfiles = [nc(x) for x in wrf_filenames]

# SLP is the example variable and includes all times
slp = getvar(ncfiles, "slp", timeidx=ALL_TIMES)

# Get the geographic boundaries
bounds = geo_bounds(slp)
print (bounds)

Result:

[ GeoBounds(CoordPair(lat=21.3020038605, lon=-90.5740585327), CoordPair(lat=29.0274410248, lon=-82.0291671753))
 GeoBounds(CoordPair(lat=21.3020038605, lon=-90.3042221069), CoordPair(lat=29.0274410248, lon=-81.7593231201))
 GeoBounds(CoordPair(lat=21.3020038605, lon=-90.8438949585), CoordPair(lat=29.0274410248, lon=-82.2990036011))
 GeoBounds(CoordPair(lat=21.3020038605, lon=-91.1137390137), CoordPair(lat=29.0274410248, lon=-82.5688400269))
 GeoBounds(CoordPair(lat=21.8039493561, lon=-91.6534042358), CoordPair(lat=29.4982528687, lon=-83.1085205078))
 GeoBounds(CoordPair(lat=22.0542640686, lon=-92.193107605), CoordPair(lat=29.7328338623, lon=-83.6481933594))
 GeoBounds(CoordPair(lat=22.5535621643, lon=-92.7327728271), CoordPair(lat=30.2003688812, lon=-84.1878738403))
 GeoBounds(CoordPair(lat=22.8025398254, lon=-93.0026092529), CoordPair(lat=30.4333114624, lon=-84.4577102661))
 GeoBounds(CoordPair(lat=23.0510597229, lon=-93.2724456787), CoordPair(lat=30.665681839, lon=-84.7275543213))]
Cartopy Mapping with Moving Nest Example

In this example, a cartopy mapping object is extracted from a variable that uses a moving nest. Since cartopy objects do not include geographic boundary information, only a single cartopy object is returned. However, if the axes x and y limits are desired, the wrf.cartopy_xlim() and wrf.cartopy_ylim() functions can be used to obtain the array of moving boundaries in the axes projected coordinate space.

from __future__ import print_function

from glob import glob
from netCDF4 import Dataset as nc

from wrf import getvar, ALL_TIMES, get_cartopy, cartopy_xlim, cartopy_ylim

# Get all of the domain 02 WRF output files
wrf_filenames = glob("wrf_files/wrf_vortex_multi/wrfout_d02_*")
ncfiles = [nc(x) for x in wrf_filenames]

# Use SLP as the example variable and include all times
slp = getvar(ncfiles, "slp", timeidx=ALL_TIMES)

# Get the cartopy mapping object
cart_proj = get_cartopy(slp)
print (cart_proj)
print ("\n")

# Get the array of axes x-limits
xlims = cartopy_xlim(slp)
print (xlims)
print ("\n")

# Get the array of axes y-limits
ylims = cartopy_ylim(slp)
print (ylims)

Result:

<wrf.projection.MercatorWithLatTS object at 0x13893c9b0>

[[-174999.8505754546, 774999.5806103835]
 [-145000.11853874932, 805000.1608638937]
 [-204999.58261215844, 744999.8485736783]
 [-235000.16286567, 715000.1165369744]
 [-294998.77872227144, 654999.804246759]
 [-355001.6356629085, 595000.34017335]
 [-415000.25151950994, 535000.0278831345]
 [-444999.98355621524, 505000.29584642925]
 [-474999.7155929191, 474999.7155929177]]

[[2424828.507236154, 3374828.14098255]
 [2424828.507236154, 3374828.14098255]
 [2424828.507236154, 3374828.14098255]
 [2424828.507236154, 3374828.14098255]
 [2484829.1182174017, 3434828.972518358]
 [2514829.1041871575, 3464828.196283651]
 [2574829.0041584675, 3524828.8880928173]
 [2604829.1786526926, 3554829.5610342724]
 [2634828.9016262344, 3584828.016406863]]
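
When plotting a single time from the moving nest, the matching pair of limits can be selected by time index. Below is a minimal sketch (the time index of 4 is an arbitrary choice):

from glob import glob
from netCDF4 import Dataset as nc
import matplotlib.pyplot as plt

from wrf import getvar, ALL_TIMES, get_cartopy, cartopy_xlim, cartopy_ylim

# Get all of the domain 02 WRF output files
wrf_filenames = glob("wrf_files/wrf_vortex_multi/wrfout_d02_*")
ncfiles = [nc(x) for x in wrf_filenames]

slp = getvar(ncfiles, "slp", timeidx=ALL_TIMES)

cart_proj = get_cartopy(slp)
xlims = cartopy_xlim(slp)
ylims = cartopy_ylim(slp)

# Pick a single time index and apply the matching pair of axes limits
# for that time
timeidx = 4
ax = plt.axes(projection=cart_proj)
ax.set_xlim(xlims[timeidx])
ax.set_ylim(ylims[timeidx])
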
Basemap Mapping with Moving Nest Example

In this example, basemap objects are extracted from a variable that uses a moving nest. An array of basemap objects is returned because the basemap object includes geographic boundary information.

from __future__ import print_function

from glob import glob
from netCDF4 import Dataset as nc

from wrf import getvar, ALL_TIMES, get_basemap

# Get all of the domain 02 WRF output files
wrf_filenames = glob("wrf_files/wrf_vortex_multi/wrfout_d02_*")
ncfiles = [nc(x) for x in wrf_filenames]

# Use SLP as the reference variable and include all times
slp = getvar(ncfiles, "slp", timeidx=ALL_TIMES)

# Get the array of basemap objects
bm = get_basemap(slp)
print (bm)
print ("\n")

Result:

[<mpl_toolkits.basemap.Basemap object at 0x1327bc510>
 <mpl_toolkits.basemap.Basemap object at 0x115a9a790>
 <mpl_toolkits.basemap.Basemap object at 0x115a9a750>
 <mpl_toolkits.basemap.Basemap object at 0x115a9a7d0>
 <mpl_toolkits.basemap.Basemap object at 0x115a9a850>
 <mpl_toolkits.basemap.Basemap object at 0x115a9a8d0>
 <mpl_toolkits.basemap.Basemap object at 0x115a9a950>
 <mpl_toolkits.basemap.Basemap object at 0x115a9a9d0>
 <mpl_toolkits.basemap.Basemap object at 0x115a9aa50>]
PyNGL Mapping with Moving Nest Example

In this example, pyngl resource objects are extracted from a variable that uses a moving nest. An array of pyngl resource objects is returned because the pyngl object includes geographic boundary information.

from __future__ import print_function

from glob import glob
from netCDF4 import Dataset as nc

from wrf import getvar, ALL_TIMES, get_pyngl

# Get the domain 02 WRF output files
wrf_filenames = glob("wrf_files/wrf_vortex_multi/wrfout_d02_*")
ncfiles = [nc(x) for x in wrf_filenames]

# Use SLP as the example variable and include all times
slp = getvar(ncfiles, "slp", timeidx=ALL_TIMES)

# Get the array of pyngl resource objects
pyngl_resources = get_pyngl(slp)
print (pyngl_resources)
print ("\n")

Result:

[<Ngl.Resources instance at 0x140cd30e0>
 <Ngl.Resources instance at 0x11d3187a0>
 <Ngl.Resources instance at 0x11d3185a8>
 <Ngl.Resources instance at 0x11d3188c0>
 <Ngl.Resources instance at 0x11d318878>
 <Ngl.Resources instance at 0x11d3183f8>
 <Ngl.Resources instance at 0x11d318950>
 <Ngl.Resources instance at 0x11d318a70>
 <Ngl.Resources instance at 0x11d318710>]

Using OpenMP

Beginning in version 1.1, the Fortran computational routines in wrf-python make use of OpenMP directives. OpenMP enables the calculations to use multiple CPU cores, which can improve performance. In order to use OpenMP features, wrf-python has to be compiled with OpenMP enabled (most pre-built binary installations will have this enabled).

The Fortran computational routines have all been built using runtime scheduling, instead of compile-time scheduling, so that the user can choose the scheduler type within their Python application. By default, the scheduling type is set to wrf.OMP_SCHED_STATIC using only 1 CPU core, so wrf-python will behave similarly to the non-OpenMP builds. For the most part, the difference between the scheduling types is minimal, the exception being the wrf.OMP_SCHED_DYNAMIC scheduler, which is much slower due to its additional overhead. For new users, the default scheduler should be sufficient.

Verifying that OpenMP is Enabled

To take advantage of the performance improvements offered by OpenMP, wrf-python needs to have been compiled with OpenMP features enabled. The example below shows how you can determine if OpenMP is enabled in your build of wrf-python.

from __future__ import print_function

from wrf import omp_enabled

print(omp_enabled())

Result:

True

Determining the Number of Available Processors

The example below shows how you can get the maximum number of processors that are available on your system.

from __future__ import print_function

from wrf import omp_get_num_procs

print(omp_get_num_procs())

Result:

8

Specifying the Number of Threads

To enable multicore support via OpenMP, the only step you need to take is to specify the maximum number of OpenMP threads (i.e., CPU cores) to use.

In the example below, wrf.omp_set_num_threads() is used to set the maximum number of threads to use, and wrf.omp_get_max_threads() is used to retrieve (and print) the maximum number of threads used.

Note

Although there is an OpenMP routine named wrf.omp_get_num_threads(), this routine will always return 1 when called from the sequential part of the program. Use wrf.omp_get_max_threads() to return the value set by wrf.omp_set_num_threads().

from __future__ import print_function

from wrf import omp_set_num_threads, omp_get_max_threads

omp_set_num_threads(4)

print (omp_get_max_threads())

Result:

4
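
A quick sketch illustrating the note above; the expected values in the comments follow from the behavior described in the note:

from __future__ import print_function

from wrf import (omp_set_num_threads, omp_get_num_threads,
                 omp_get_max_threads)

omp_set_num_threads(4)

# Called from the sequential part of the program, this always returns 1
print(omp_get_num_threads())

# This returns the value set by omp_set_num_threads above (4)
print(omp_get_max_threads())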

Setting a Different Scheduler Type

When an OpenMP directive is encountered in the Fortran code, a scheduler is used to determine how the work is divided among the threads. All of the Fortran routines are compiled to use a ‘runtime’ scheduler, which indicates that the scheduler type (from the four listed below) is to be chosen at runtime (i.e. inside a Python script).

By default, the scheduler chosen is the wrf.OMP_SCHED_STATIC scheduler, which should be sufficient for most users. However, OpenMP and wrf-python include the following options for the scheduler type:

  • wrf.OMP_SCHED_STATIC
  • wrf.OMP_SCHED_DYNAMIC
  • wrf.OMP_SCHED_GUIDED
  • wrf.OMP_SCHED_AUTO

Refer to the OpenMP specification for more information about these scheduler types. In local testing, wrf.OMP_SCHED_GUIDED produced the best results, but differences between wrf.OMP_SCHED_STATIC, wrf.OMP_SCHED_GUIDED, and wrf.OMP_SCHED_AUTO were minor. However, wrf.OMP_SCHED_DYNAMIC produced noticeably slower results due to the overhead of using a dynamic scheduler.

When setting a scheduler type, the wrf.omp_set_schedule() routine takes two arguments. The first is the scheduler type (one from the list above), and the second optional argument is a modifier, usually referred to as the chunk size. If the modifier/chunk_size is set to 0, then the OpenMP default implementation is used. For wrf.OMP_SCHED_AUTO, the modifier is ignored.

If you are new to OpenMP and all this sounds confusing, don’t worry about setting a scheduler type. The default static scheduler will be good enough.

In the example below, the scheduler type is set to wrf.OMP_SCHED_GUIDED and uses the default chunk size of 0. The scheduler type is then read back using wrf.omp_get_schedule() and printed.

from __future__ import print_function

from wrf import omp_set_schedule, omp_get_schedule, OMP_SCHED_GUIDED

omp_set_schedule(OMP_SCHED_GUIDED, 0)

sched, modifier = omp_get_schedule()

print(sched, modifier)

Result:

3 1

Notice that the printed scheduler type (the sched variable) is set to a value of 3, which is the actual integer constant value for the wrf.OMP_SCHED_GUIDED scheduler type. The modifier is returned as a value of 1, which differs from the 0 that was supplied to the wrf.omp_set_schedule() routine. This is because the 0 tells OpenMP to use its own default value for the scheduler, which is 1 for this type of scheduler.
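
For comparison, below is a minimal sketch that requests a static schedule with an explicit (nonzero) chunk size. Per the OpenMP specification, the integer constant for the static scheduler is 1, and a nonzero chunk size is returned unchanged:

from __future__ import print_function

from wrf import omp_set_schedule, omp_get_schedule, OMP_SCHED_STATIC

# Request a static schedule with an explicit chunk size of 8
omp_set_schedule(OMP_SCHED_STATIC, 8)

sched, modifier = omp_get_schedule()

# Expected: 1 8 (the OpenMP constant for the static scheduler is 1, and
# the nonzero chunk size round-trips unchanged)
print(sched, modifier)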

Performance Tips

Memory Issues with wrf.ALL_TIMES

The use of wrf.ALL_TIMES for the timeidx parameter to wrf.getvar() is convenient for computing diagnostic variables across multiple files/times, but there is something that users should be aware of. When wrf.ALL_TIMES is set as the timeidx argument, all arrays used in the computation are extracted for all times before the computation is started. This can cause serious memory issues on smaller hardware systems like laptops.

In this example, the user wants to use a data set that is 289 x 39 x 300 x 300 (times x levels x south_north x west_east) and compute z for the entire data set. The user is using a laptop with 8 GB of memory.

from netCDF4 import Dataset
from wrf import getvar, ALL_TIMES

file_list = [Dataset("/path/to/file1"), Dataset("/path/to/file2"),...]
z = getvar(file_list, "z", ALL_TIMES)

Five hours later, the computation finished. What happened?

In wrf-python, all of the computational routines use 8-byte REAL variables so that both the 4-byte and 8-byte versions of WRF output can be used. The calculation for z extracts three variables (P, PHB, and HGT) and returns a fourth array (RESULT). The RESULT array is converted to 4-byte REALs after the computation finishes, halving its size, but 8-byte REALs are needed while the computation runs.

Let’s look at the approximate amount of memory needed:

P: 289 x 39 x 300 x 300 x 8 = 8,115,120,000 bytes (~8 GB!)

PHB: 289 x 39 x 300 x 300 x 8 = 8,115,120,000 bytes (~8 GB!)

HGT: 289 x 300 x 300 x 8 = 208,080,000 bytes (~208 MB)

RESULT: 289 x 39 x 300 x 300 x 8 = 8,115,120,000 bytes (~8 GB!)

Yikes! So, in order to do this calculation using wrf.ALL_TIMES as the timeidx, roughly 24.5 GB of memory are needed. When the laptop runs out of memory, it begins using the hard drive for swap memory, which runs hundreds of times slower than real memory.
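
These totals are easy to verify; a quick check of the arithmetic above:

# Quick check of the arithmetic above (sizes in bytes; 1 GB = 10**9 bytes)
ntimes, nz, ny, nx = 289, 39, 300, 300

p_bytes = ntimes * nz * ny * nx * 8      # 8,115,120,000 (~8 GB)
phb_bytes = p_bytes                      # PHB has the same shape as P
hgt_bytes = ntimes * ny * nx * 8         # 208,080,000 (~208 MB)
result_bytes = p_bytes                   # RESULT has the same shape as P

total = p_bytes + phb_bytes + hgt_bytes + result_bytes
print(total / 1e9)                       # ~24.55 GB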

To fix this situation, it is better to allocate the output array yourself and run the calculation for each time step in a loop (“loop-and-fill”). The memory requirements then change to:

(Note: the final result only needs to be stored as 4-byte REALs)

FINAL_RESULT: 289 x 39 x 300 x 300 x 4 = 4,057,560,000 bytes (~4 GB)

(Note: the numbers below are for each loop iteration)

P: 39 x 300 x 300 x 8 = 28,080,000 bytes (~28 MB)

PHB: 39 x 300 x 300 x 8 = 28,080,000 bytes (~28 MB)

HGT: 300 x 300 x 8 = 720,000 bytes (720 KB)

RESULT: 39 x 300 x 300 x 8 = 28,080,000 bytes (~28 MB)

Since the memory for the computation is deleted after each loop iteration, the total memory usage drops to approximately 4.1 GB.

The moral of the story is that you need to make sure that your system has enough memory to extract everything it needs up front if you want to use wrf.ALL_TIMES; otherwise, it is better to “loop-and-fill” yourself.

Here is an example of the “loop-and-fill” technique:

from __future__ import print_function, division

import numpy as np
from netCDF4 import Dataset
from wrf import getvar, ALL_TIMES

filename_list = ["/path/to/file1", "/path/to/file2",...]

# Result shape (hard coded for this example)
result_shape = (289, 39, 300, 300)

# Only need 4-byte floats
z_final = np.empty(result_shape, np.float32)

# Modify this number if using more than 1 time per file
times_per_file = 1

for timeidx in range(result_shape[0]):
    # Compute the file index and the time index inside the file
    fileidx = timeidx // times_per_file
    file_timeidx = timeidx % times_per_file

    f = Dataset(filename_list[fileidx])
    z = getvar(f, "z", file_timeidx)

    z_final[timeidx,:] = z[:]
    f.close()

The cache Argument for wrf.getvar()

If you have read through the documentation, you may have noticed that the wrf.getvar() routine contains a cache argument. What is this for?

Internally, if metadata is turned on, a variable is extracted from the NetCDF file and its metadata is copied to form the result’s metadata. Often this variable is one of the computation’s function arguments, so rather than spend time extracting the variable again for the computation, it is placed in a cache (dictionary) and passed on to the computational function.

What isn’t widely known is that this cache argument can also be supplied by end users wishing to speed up their application. This can be useful in situations where numerous calculations are being performed on the same data set. For many algorithms, the time needed to extract the arrays from the NetCDF file is on par with the time needed to perform the calculation. If you are computing numerous diagnostics, extracting the variables up front allows you to pay this extraction penalty only once, rather than inside each call to wrf.getvar().

The cache is nothing more than a dictionary where each key is the variable name (e.g. “P”) and the value is the xarray.DataArray or numpy.ndarray variable. Creating the cache dictionary is easy, since the wrf.extract_vars() routine returns a dictionary for a sequence of variables.

Note

The timeidx parameter supplied to extract_vars() must be the same timeidx parameter that you plan to use for wrf.getvar(). Otherwise, it will crash with dimension mismatch errors.

Some common variables that you can use to create an effective cache are: P, PB, PH, PHB, T, QVAPOR, HGT, PSFC, U, V, W.
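
For example, a minimal sketch that builds a cache for a single time index and reuses it across two wrf.getvar() calls (using the same wrfout file as the earlier examples):

from netCDF4 import Dataset
from wrf import getvar, extract_vars

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Build the cache for time index 0.  Per the note above, the same
# timeidx must be used in the getvar calls below.
my_cache = extract_vars(ncfile, 0, ("P", "PB", "PH", "PHB", "T",
                                    "QVAPOR", "HGT", "PSFC",
                                    "U", "V", "W"))

# Both computations now skip re-extracting the cached variables
slp = getvar(ncfile, "slp", 0, cache=my_cache)
z = getvar(ncfile, "z", 0, cache=my_cache)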

Below is an example showing the same computation done with and without the cache. The execution time is printed. The hardware used is a 2.8 GHz Intel Core i7 with 4 CPU cores and 2 hyperthreads per core (8 total threads). This will be interpreted as 8 CPUs for OpenMP.

from __future__ import print_function

import time
from netCDF4 import Dataset
from wrf import getvar, ALL_TIMES, extract_vars

# The first two files contain four times, the last file contains only one.
wrf_filenames = ["/path/to/wrfout_d02_2005-08-28_00:00:00",
                 "/path/to/wrfout_d02_2005-08-28_12:00:00",
                 "/path/to/wrfout_d02_2005-08-29_00:00:00"]

wrfin = [Dataset(x) for x in wrf_filenames]

start = time.time()
my_cache = extract_vars(wrfin, ALL_TIMES, ("P", "PSFC", "PB", "PH", "PHB",
                                           "T", "QVAPOR", "HGT", "U", "V",
                                           "W"))
end = time.time()
print ("Time taken to build cache: ", (end-start), "s")

vars = ("avo", "eth", "cape_2d", "cape_3d", "ctt", "dbz", "mdbz",
        "geopt", "helicity", "lat", "lon", "omg", "p", "pressure",
        "pvo", "pw", "rh2", "rh", "slp", "ter", "td2", "td", "tc",
        "theta", "tk", "tv", "twb", "updraft_helicity", "ua", "va",
        "wa", "uvmet10", "uvmet", "z", "cfrac", "zstag", "geopt_stag")

# No cache
start = time.time()
for var in var_names:
    v = getvar(wrfin, var, ALL_TIMES)
end = time.time()
no_cache_time = (end-start)

print ("Time taken without variable cache: ", no_cache_time, "s")

# With a cache
start = time.time()
for var in var_names:
    v = getvar(wrfin, var, ALL_TIMES, cache=my_cache)
end = time.time()
cache_time = (end-start)

print ("Time taken with variable cache: ", cache_time, "s")

improvement = ((no_cache_time-cache_time)/no_cache_time) * 100
print ("The cache decreased computation time by: ", improvement, "%")

Result:

Time taken to build cache:  0.28154706955 s
Time taken without variable cache:  11.0905270576 s
Time taken with variable cache:  8.25931215286 s
The cache decreased computation time by:  25.5282268378 %

By removing the repeated extraction of common variables in the wrf.getvar() routine, for the single threaded case, the computation time has been reduced by 25.5% in this particular example.

Things get more interesting when OpenMP is turned on and set to use the maximum number of processors (in this case 8 threads are used).

from __future__ import print_function

import time
from netCDF4 import Dataset
from wrf import (getvar, ALL_TIMES, extract_vars,
                 omp_set_num_threads, omp_get_num_procs)

# The first two files contain four times, the last file contains only one.
wrf_filenames = ["/path/to/wrfout_d02_2005-08-28_00:00:00",
                 "/path/to/wrfout_d02_2005-08-28_12:00:00",
                 "/path/to/wrfout_d02_2005-08-29_00:00:00"]

wrfin = [Dataset(x) for x in wrf_filenames]

start = time.time()
my_cache = extract_vars(wrfin, ALL_TIMES, ("P", "PSFC", "PB", "PH", "PHB",
                                           "T", "QVAPOR", "HGT", "U", "V",
                                           "W"))
end = time.time()
print ("Time taken to build cache: ", (end-start), "s")

omp_set_num_threads(omp_get_num_procs())

vars = ("avo", "eth", "cape_2d", "cape_3d", "ctt", "dbz", "mdbz",
        "geopt", "helicity", "lat", "lon", "omg", "p", "pressure",
        "pvo", "pw", "rh2", "rh", "slp", "ter", "td2", "td", "tc",
        "theta", "tk", "tv", "twb", "updraft_helicity", "ua", "va",
        "wa", "uvmet10", "uvmet", "z", "cfrac", "zstag", "geopt_stag")

# No cache
start = time.time()
for var in var_names:
    v = getvar(wrfin, var, ALL_TIMES)
end = time.time()
no_cache_time = (end-start)

print ("Time taken without variable cache: ", no_cache_time, "s")

# With a cache
start = time.time()
for var in var_names:
    v = getvar(wrfin, var, ALL_TIMES, cache=my_cache)
end = time.time()
cache_time = (end-start)

print ("Time taken with variable cache: ", cache_time, "s")

improvement = ((no_cache_time-cache_time)/no_cache_time) * 100
print ("The cache decreased computation time by: ", improvement, "%")

Result:

Time taken to build cache:  0.2700548172 s
Time taken without variable cache:  6.02652812004 s
Time taken with variable cache:  3.27777099609 s
The cache decreased computation time by:  45.6109565772 %

In this example, 4 CPU cores (8 total threads) are used. When the cache is used, the computation time drops by 45%, so almost half the time was spent simply extracting variables from the NetCDF file. When compared to the 11.09 s needed to compute the single threaded case with no variable cache, the computation time drops by roughly 70% (compared to 45% with 8 threads but no cache).

In summary, if you are computing a lot of diagnostic variables, consider using the cache argument to improve performance, particularly if you want to maximize your multithreaded performance with OpenMP.

Plotting Examples

The examples below show how wrf-python can be used to make plots with matplotlib (with basemap and cartopy) and PyNGL. None of these examples make use of xarray’s builtin plotting functions, since additional work is likely needed for xarray’s plotting to work correctly with this data. This is planned for a future release.

A subset of the wrfout file used in these examples can be downloaded here.

Matplotlib With Cartopy

Cartopy is becoming the main tool for base mapping with matplotlib, but you should be aware of a few shortcomings when working with WRF data.

  • The builtin transformations of coordinates when calling the contouring functions do not work correctly with the rotated pole projection. The transform_points method needs to be called manually on the latitude and longitude arrays (see the sketch after this list).
  • The rotated pole projection requires the x and y limits to be set manually using set_xlim and set_ylim.
  • You can’t place latitude and longitude labels on the axes when using any projection other than Mercator or LatLon.
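
Below is a minimal sketch of the first two workarounds, assuming a hypothetical rotated pole wrfout file; the file name is a placeholder for illustration only.

import matplotlib.pyplot as plt
import cartopy.crs as crs
from netCDF4 import Dataset

from wrf import (getvar, to_np, get_cartopy, cartopy_xlim, cartopy_ylim,
                 latlon_coords)

# Hypothetical rotated pole domain; the file name is a placeholder
ncfile = Dataset("wrfout_rotated_pole_example")

t2 = getvar(ncfile, "T2")
lats, lons = latlon_coords(t2)
cart_proj = get_cartopy(t2)

ax = plt.axes(projection=cart_proj)

# Workaround 1: transform the lat/lon points to the axes projection
# manually rather than passing transform= to the contouring call
xform = cart_proj.transform_points(crs.PlateCarree(),
                                   to_np(lons), to_np(lats))
x = xform[..., 0]
y = xform[..., 1]

plt.contourf(x, y, to_np(t2))

# Workaround 2: set the x and y limits manually
ax.set_xlim(cartopy_xlim(t2))
ax.set_ylim(cartopy_ylim(t2))

plt.show()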

Plotting a Two-dimensional Field

_images/cartopy_slp.png
from netCDF4 import Dataset
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
import cartopy.crs as crs
from cartopy.feature import NaturalEarthFeature

from wrf import (to_np, getvar, smooth2d, get_cartopy, cartopy_xlim,
                 cartopy_ylim, latlon_coords)

# Open the NetCDF file
ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Get the sea level pressure
slp = getvar(ncfile, "slp")

# Smooth the sea level pressure since it tends to be noisy near the
# mountains
smooth_slp = smooth2d(slp, 3, cenweight=4)

# Get the latitude and longitude points
lats, lons = latlon_coords(slp)

# Get the cartopy mapping object
cart_proj = get_cartopy(slp)

# Create a figure
fig = plt.figure(figsize=(12,6))
# Set the GeoAxes to the projection used by WRF
ax = plt.axes(projection=cart_proj)

# Download and add the states and coastlines
states = NaturalEarthFeature(category="cultural", scale="50m",
                             facecolor="none",
                             name="admin_1_states_provinces_shp")
ax.add_feature(states, linewidth=.5, edgecolor="black")
ax.coastlines('50m', linewidth=0.8)

# Make the contour outlines and filled contours for the smoothed sea level
# pressure.
plt.contour(to_np(lons), to_np(lats), to_np(smooth_slp), 10, colors="black",
            transform=crs.PlateCarree())
plt.contourf(to_np(lons), to_np(lats), to_np(smooth_slp), 10,
             transform=crs.PlateCarree(),
             cmap=get_cmap("jet"))

# Add a color bar
plt.colorbar(ax=ax, shrink=.98)

# Set the map bounds
ax.set_xlim(cartopy_xlim(smooth_slp))
ax.set_ylim(cartopy_ylim(smooth_slp))

# Add the gridlines
ax.gridlines(color="black", linestyle="dotted")

plt.title("Sea Level Pressure (hPa)")

plt.show()

Horizontal Interpolation to a Pressure Level

_images/cartopy_500.png
from netCDF4 import Dataset
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
import cartopy.crs as crs
from cartopy.feature import NaturalEarthFeature

from wrf import (getvar, interplevel, to_np, latlon_coords, get_cartopy,
                 cartopy_xlim, cartopy_ylim)

# Open the NetCDF file
ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Extract the pressure, geopotential height, and wind variables
p = getvar(ncfile, "pressure")
z = getvar(ncfile, "z", units="dm")
ua = getvar(ncfile, "ua", units="kt")
va = getvar(ncfile, "va", units="kt")
wspd = getvar(ncfile, "wspd_wdir", units="kts")[0,:]

# Interpolate geopotential height, u, and v winds to 500 hPa
ht_500 = interplevel(z, p, 500)
u_500 = interplevel(ua, p, 500)
v_500 = interplevel(va, p, 500)
wspd_500 = interplevel(wspd, p, 500)

# Get the lat/lon coordinates
lats, lons = latlon_coords(ht_500)

# Get the map projection information
cart_proj = get_cartopy(ht_500)

# Create the figure
fig = plt.figure(figsize=(12,9))
ax = plt.axes(projection=cart_proj)

# Download and add the states and coastlines
states = NaturalEarthFeature(category="cultural", scale="50m",
                             facecolor="none",
                             name="admin_1_states_provinces_shp")
ax.add_feature(states, linewidth=0.5, edgecolor="black")
ax.coastlines('50m', linewidth=0.8)

# Add the 500 hPa geopotential height contours
levels = np.arange(520., 580., 6.)
contours = plt.contour(to_np(lons), to_np(lats), to_np(ht_500),
                       levels=levels, colors="black",
                       transform=crs.PlateCarree())
plt.clabel(contours, inline=1, fontsize=10, fmt="%i")

# Add the wind speed contours
levels = [25, 30, 35, 40, 50, 60, 70, 80, 90, 100, 110, 120]
wspd_contours = plt.contourf(to_np(lons), to_np(lats), to_np(wspd_500),
                             levels=levels,
                             cmap=get_cmap("rainbow"),
                             transform=crs.PlateCarree())
plt.colorbar(wspd_contours, ax=ax, orientation="horizontal", pad=.05)

# Add the 500 hPa wind barbs, only plotting every 125th data point.
plt.barbs(to_np(lons[::125,::125]), to_np(lats[::125,::125]),
          to_np(u_500[::125, ::125]), to_np(v_500[::125, ::125]),
          transform=crs.PlateCarree(), length=6)

# Set the map bounds
ax.set_xlim(cartopy_xlim(ht_500))
ax.set_ylim(cartopy_ylim(ht_500))

ax.gridlines()

plt.title("500 MB Height (dm), Wind Speed (kt), Barbs (kt)")

plt.show()

Panel Plots From Front Page

This lengthy example shows how to make the panel plots on the first page of the documentation. For a simpler example of how to make a cross section plot, see Vertical Cross Section.

_images/matthew.png
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
import cartopy.crs as crs
import cartopy.feature as cfeature
from netCDF4 import Dataset

from wrf import (getvar, to_np, vertcross, smooth2d, CoordPair, GeoBounds,
                 get_cartopy, latlon_coords, cartopy_xlim, cartopy_ylim)

# Open the NetCDF file
ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Get the WRF variables
slp = getvar(ncfile, "slp")
smooth_slp = smooth2d(slp, 3)
ctt = getvar(ncfile, "ctt")
z = getvar(ncfile, "z")
dbz = getvar(ncfile, "dbz")
Z = 10**(dbz/10.)
wspd = getvar(ncfile, "wspd_wdir", units="kt")[0,:]

# Set the start point and end point for the cross section
start_point = CoordPair(lat=26.76, lon=-80.0)
end_point = CoordPair(lat=26.76, lon=-77.8)

# Compute the vertical cross-section interpolation.  Also, include the
# lat/lon points along the cross-section in the metadata by setting latlon
# to True.
z_cross = vertcross(Z, z, wrfin=ncfile, start_point=start_point,
                    end_point=end_point, latlon=True, meta=True)
wspd_cross = vertcross(wspd, z, wrfin=ncfile, start_point=start_point,
                       end_point=end_point, latlon=True, meta=True)
dbz_cross = 10.0 * np.log10(z_cross)

# Get the lat/lon points
lats, lons = latlon_coords(slp)

# Get the cartopy projection object
cart_proj = get_cartopy(slp)

# Create a figure that will have 3 subplots
fig = plt.figure(figsize=(12,9))
ax_ctt = fig.add_subplot(1,2,1,projection=cart_proj)
ax_wspd = fig.add_subplot(2,2,2)
ax_dbz = fig.add_subplot(2,2,4)

# Download and create the states, land, and oceans using cartopy features
states = cfeature.NaturalEarthFeature(category='cultural', scale='50m',
                                      facecolor='none',
                                      name='admin_1_states_provinces_shp')
land = cfeature.NaturalEarthFeature(category='physical', name='land',
                                    scale='50m',
                                    facecolor=cfeature.COLORS['land'])
ocean = cfeature.NaturalEarthFeature(category='physical', name='ocean',
                                     scale='50m',
                                     facecolor=cfeature.COLORS['water'])

# Make the pressure contours
contour_levels = [960, 965, 970, 975, 980, 990]
c1 = ax_ctt.contour(lons, lats, to_np(smooth_slp), levels=contour_levels,
                    colors="white", transform=crs.PlateCarree(), zorder=3,
                    linewidths=1.0)

# Create the filled cloud top temperature contours
contour_levels = [-80.0, -70.0, -60, -50, -40, -30, -20, -10, 0, 10]
ctt_contours = ax_ctt.contourf(to_np(lons), to_np(lats), to_np(ctt),
                               contour_levels, cmap=get_cmap("Greys"),
                               transform=crs.PlateCarree(), zorder=2)

ax_ctt.plot([start_point.lon, end_point.lon],
            [start_point.lat, end_point.lat], color="yellow", marker="o",
            transform=crs.PlateCarree(), zorder=3)

# Create the color bar for cloud top temperature
cb_ctt = fig.colorbar(ctt_contours, ax=ax_ctt, shrink=.60)
cb_ctt.ax.tick_params(labelsize=5)

# Draw the oceans, land, and states
ax_ctt.add_feature(ocean)
ax_ctt.add_feature(land)
ax_ctt.add_feature(states, linewidth=.5, edgecolor="black")

# Crop the domain to the region around the hurricane
hur_bounds = GeoBounds(CoordPair(lat=np.amin(to_np(lats)), lon=-85.0),
                       CoordPair(lat=30.0, lon=-72.0))
ax_ctt.set_xlim(cartopy_xlim(ctt, geobounds=hur_bounds))
ax_ctt.set_ylim(cartopy_ylim(ctt, geobounds=hur_bounds))
ax_ctt.gridlines(color="white", linestyle="dotted")

# Make the contour plot for wind speed
wspd_contours = ax_wspd.contourf(to_np(wspd_cross), cmap=get_cmap("jet"))
# Add the color bar
cb_wspd = fig.colorbar(wspd_contours, ax=ax_wspd)
cb_wspd.ax.tick_params(labelsize=5)

# Make the contour plot for dbz
levels = [5 + 5*n for n in range(15)]
dbz_contours = ax_dbz.contourf(to_np(dbz_cross), levels=levels,
                               cmap=get_cmap("jet"))
cb_dbz = fig.colorbar(dbz_contours, ax=ax_dbz)
cb_dbz.ax.tick_params(labelsize=5)

# Set the x-ticks to use latitude and longitude labels
coord_pairs = to_np(dbz_cross.coords["xy_loc"])
x_ticks = np.arange(coord_pairs.shape[0])
x_labels = [pair.latlon_str() for pair in to_np(coord_pairs)]
ax_wspd.set_xticks(x_ticks[::20])
ax_wspd.set_xticklabels([], rotation=45)
ax_dbz.set_xticks(x_ticks[::20])
ax_dbz.set_xticklabels(x_labels[::20], rotation=45, fontsize=4)

# Set the y-ticks to be height
vert_vals = to_np(dbz_cross.coords["vertical"])
v_ticks = np.arange(vert_vals.shape[0])
ax_wspd.set_yticks(v_ticks[::20])
ax_wspd.set_yticklabels(vert_vals[::20], fontsize=4)
ax_dbz.set_yticks(v_ticks[::20])
ax_dbz.set_yticklabels(vert_vals[::20], fontsize=4)

# Set the x-axis and y-axis labels
ax_dbz.set_xlabel("Latitude, Longitude", fontsize=5)
ax_wspd.set_ylabel("Height (m)", fontsize=5)
ax_dbz.set_ylabel("Height (m)", fontsize=5)

# Add a title
ax_ctt.set_title("Cloud Top Temperature (degC)", {"fontsize" : 7})
ax_wspd.set_title("Cross-Section of Wind Speed (kt)", {"fontsize" : 7})
ax_dbz.set_title("Cross-Section of Reflectivity (dBZ)", {"fontsize" : 7})

plt.show()

Matplotlib with Basemap

Although basemap is in maintenance mode and becoming deprecated, it is still widely used by many programmers. Cartopy is becoming the preferred package for mapping; however, it suffers from growing pains in some areas (e.g., latitude/longitude labels cannot be used with many map projections). If you run into these issues, basemap is likely to accomplish what you need.

Plotting a Two-Dimensional Field

_images/basemap_slp.png
from netCDF4 import Dataset
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
from mpl_toolkits.basemap import Basemap

from wrf import to_np, getvar, smooth2d, get_basemap, latlon_coords

# Open the NetCDF file
ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Get the sea level pressure
slp = getvar(ncfile, "slp")

# Smooth the sea level pressure since it tends to be noisy near the
# mountains
smooth_slp = smooth2d(slp, 3, cenweight=4)

# Get the latitude and longitude points
lats, lons = latlon_coords(slp)

# Get the basemap object
bm = get_basemap(slp)

# Create a figure
fig = plt.figure(figsize=(12,9))

# Add geographic outlines
bm.drawcoastlines(linewidth=0.25)
bm.drawstates(linewidth=0.25)
bm.drawcountries(linewidth=0.25)

# Convert the lats and lons to x and y.  Make sure you convert the lats and
# lons to numpy arrays via to_np, or basemap may crash with an obscure
# RuntimeError.
x, y = bm(to_np(lons), to_np(lats))

# Draw the contours and filled contours
bm.contour(x, y, to_np(smooth_slp), 10, colors="black")
bm.contourf(x, y, to_np(smooth_slp), 10, cmap=get_cmap("jet"))

# Add a color bar
plt.colorbar(shrink=.62)

plt.title("Sea Level Pressure (hPa)")

plt.show()

Horizontal Interpolation to a Pressure Level

_images/basemap_500.png
from netCDF4 import Dataset
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap

from wrf import getvar, interplevel, to_np, get_basemap, latlon_coords

# Open the NetCDF file
ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Extract the pressure, geopotential height, and wind variables
p = getvar(ncfile, "pressure")
z = getvar(ncfile, "z", units="dm")
ua = getvar(ncfile, "ua", units="kt")
va = getvar(ncfile, "va", units="kt")
wspd = getvar(ncfile, "wspd_wdir", units="kts")[0,:]

# Interpolate geopotential height, u, and v winds to 500 hPa
ht_500 = interplevel(z, p, 500)
u_500 = interplevel(ua, p, 500)
v_500 = interplevel(va, p, 500)
wspd_500 = interplevel(wspd, p, 500)

# Get the lat/lon coordinates
lats, lons = latlon_coords(ht_500)

# Get the basemap object
bm = get_basemap(ht_500)

# Create the figure
fig = plt.figure(figsize=(12,9))
ax = plt.axes()

# Convert the lat/lon coordinates to x/y coordinates in the projection space
x, y = bm(to_np(lons), to_np(lats))

# Add the 500 hPa geopotential height contours
levels = np.arange(520., 580., 6.)
contours = bm.contour(x, y, to_np(ht_500), levels=levels, colors="black")
plt.clabel(contours, inline=1, fontsize=10, fmt="%i")

# Add the wind speed contours
levels = [25, 30, 35, 40, 50, 60, 70, 80, 90, 100, 110, 120]
wspd_contours = bm.contourf(x, y, to_np(wspd_500), levels=levels,
                            cmap=get_cmap("rainbow"))
plt.colorbar(wspd_contours, ax=ax, orientation="horizontal", pad=.05)

# Add the geographic boundaries
bm.drawcoastlines(linewidth=0.25)
bm.drawstates(linewidth=0.25)
bm.drawcountries(linewidth=0.25)

# Add the 500 hPa wind barbs, only plotting every 125th data point.
bm.barbs(x[::125,::125], y[::125,::125], to_np(u_500[::125, ::125]),
         to_np(v_500[::125, ::125]), length=6)

plt.title("500 MB Height (dm), Wind Speed (kt), Barbs (kt)")

plt.show()

Panel Plots from the Front Page

This lengthy example shows how to make the panel plots on the first page of the documentation. For a simpler example of how to make a cross section plot, see Vertical Cross Section.

_images/basemap_front.png
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
from netCDF4 import Dataset

from wrf import (getvar, to_np, vertcross, smooth2d, CoordPair,
                 get_basemap, latlon_coords)

# Open the NetCDF file
ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Get the WRF variables
slp = getvar(ncfile, "slp")
smooth_slp = smooth2d(slp, 3)
ctt = getvar(ncfile, "ctt")
z = getvar(ncfile, "z")
dbz = getvar(ncfile, "dbz")
Z = 10**(dbz/10.)
wspd = getvar(ncfile, "wspd_wdir", units="kt")[0,:]

# Set the start point and end point for the cross section
start_point = CoordPair(lat=26.76, lon=-80.0)
end_point = CoordPair(lat=26.76, lon=-77.8)

# Compute the vertical cross-section interpolation.  Also, include the
# lat/lon points along the cross-section in the metadata by setting latlon
# to True.
z_cross = vertcross(Z, z, wrfin=ncfile, start_point=start_point,
                    end_point=end_point, latlon=True, meta=True)
wspd_cross = vertcross(wspd, z, wrfin=ncfile, start_point=start_point,
                       end_point=end_point, latlon=True, meta=True)
dbz_cross = 10.0 * np.log10(z_cross)

# Get the latitude and longitude points
lats, lons = latlon_coords(slp)

# Create the figure that will have 3 subplots
fig = plt.figure(figsize=(12,9))
ax_ctt = fig.add_subplot(1,2,1)
ax_wspd = fig.add_subplot(2,2,2)
ax_dbz = fig.add_subplot(2,2,4)

# Get the basemap object
bm = get_basemap(slp)

# Convert the lat/lon points in to x/y points in the projection space
x, y = bm(to_np(lons), to_np(lats))

# Make the pressure contours
contour_levels = [960, 965, 970, 975, 980, 990]
c1 = bm.contour(x, y, to_np(smooth_slp), levels=contour_levels,
                colors="white", zorder=3, linewidths=1.0, ax=ax_ctt)

# Create the filled cloud top temperature contours
contour_levels = [-80.0, -70.0, -60, -50, -40, -30, -20, -10, 0, 10]
ctt_contours = bm.contourf(x, y, to_np(ctt), contour_levels,
                           cmap=get_cmap("Greys"), zorder=2, ax=ax_ctt)

point_x, point_y = bm([start_point.lon, end_point.lon],
                      [start_point.lat, end_point.lat])
bm.plot([point_x[0], point_x[1]], [point_y[0], point_y[1]], color="yellow",
        marker="o", zorder=3, ax=ax_ctt)

# Create the color bar for cloud top temperature
cb_ctt = fig.colorbar(ctt_contours, ax=ax_ctt, shrink=.60)
cb_ctt.ax.tick_params(labelsize=5)

# Draw the oceans, land, and states
bm.drawcoastlines(linewidth=0.25, ax=ax_ctt)
bm.drawstates(linewidth=0.25, ax=ax_ctt)
bm.drawcountries(linewidth=0.25, ax=ax_ctt)
bm.fillcontinents(color=np.array([0.9375, 0.9375, 0.859375]),
                  ax=ax_ctt,
                  lake_color=np.array([0.59375, 0.71484375, 0.8828125]))
bm.drawmapboundary(fill_color=np.array([0.59375, 0.71484375, 0.8828125]),
                   ax=ax_ctt)

# Draw Parallels
parallels = np.arange(np.amin(lats), 30., 2.5)
bm.drawparallels(parallels, ax=ax_ctt, color="white")

merids = np.arange(-85.0, -72.0, 2.5)
bm.drawmeridians(merids, ax=ax_ctt, color="white")

# Crop the image to the hurricane region
x_start, y_start = bm(-85.0, np.amin(lats))
x_end, y_end = bm(-72.0, 30.0)

ax_ctt.set_xlim([x_start, x_end])
ax_ctt.set_ylim([y_start, y_end])

# Make the contour plot for wspd
wspd_contours = ax_wspd.contourf(to_np(wspd_cross), cmap=get_cmap("jet"))
# Add the color bar
cb_wspd = fig.colorbar(wspd_contours, ax=ax_wspd)
cb_wspd.ax.tick_params(labelsize=5)

# Make the contour plot for dbz
levels = [5 + 5*n for n in range(15)]
dbz_contours = ax_dbz.contourf(to_np(dbz_cross), levels=levels,
                               cmap=get_cmap("jet"))
cb_dbz = fig.colorbar(dbz_contours, ax=ax_dbz)
cb_dbz.ax.tick_params(labelsize=5)

# Set the x-ticks to use latitude and longitude labels.
coord_pairs = to_np(dbz_cross.coords["xy_loc"])
x_ticks = np.arange(coord_pairs.shape[0])
x_labels = [pair.latlon_str() for pair in to_np(coord_pairs)]
ax_wspd.set_xticks(x_ticks[::20])
ax_wspd.set_xticklabels([], rotation=45)
ax_dbz.set_xticks(x_ticks[::20])
ax_dbz.set_xticklabels(x_labels[::20], rotation=45, fontsize=4)

# Set the y-ticks to be height.
vert_vals = to_np(dbz_cross.coords["vertical"])
v_ticks = np.arange(vert_vals.shape[0])
ax_wspd.set_yticks(v_ticks[::20])
ax_wspd.set_yticklabels(vert_vals[::20], fontsize=4)
ax_dbz.set_yticks(v_ticks[::20])
ax_dbz.set_yticklabels(vert_vals[::20], fontsize=4)

# Set the x-axis and y-axis labels
ax_dbz.set_xlabel("Latitude, Longitude", fontsize=5)
ax_wspd.set_ylabel("Height (m)", fontsize=5)
ax_dbz.set_ylabel("Height (m)", fontsize=5)

# Add titles
ax_ctt.set_title("Cloud Top Temperature (degC)", {"fontsize" : 7})
ax_wspd.set_title("Cross-Section of Wind Speed (kt)", {"fontsize" : 7})
ax_dbz.set_title("Cross-Section of Reflectivity (dBZ)", {"fontsize" : 7})

plt.show()

Vertical Cross Section

Vertical cross sections require no mapping software package and can be plotted using the standard matplotlib API.

_images/cartopy_cross.png
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.cm import get_cmap
import cartopy.crs as crs
from cartopy.feature import NaturalEarthFeature
from netCDF4 import Dataset

from wrf import to_np, getvar, CoordPair, vertcross

# Open the NetCDF file
filename = "wrfout_d01_2016-10-07_00_00_00"
ncfile = Dataset(filename)

# Extract the model height and wind speed
z = getvar(ncfile, "z")
wspd = getvar(ncfile, "uvmet_wspd_wdir", units="kt")[0,:]

# Create the start point and end point for the cross section
start_point = CoordPair(lat=26.76, lon=-80.0)
end_point = CoordPair(lat=26.76, lon=-77.8)

# Compute the vertical cross-section interpolation.  Also, include the
# lat/lon points along the cross-section.
wspd_cross = vertcross(wspd, z, wrfin=ncfile, start_point=start_point,
                       end_point=end_point, latlon=True, meta=True)

# Create the figure
fig = plt.figure(figsize=(12,6))
ax = plt.axes()

# Make the contour plot
wspd_contours = ax.contourf(to_np(wspd_cross), cmap=get_cmap("jet"))

# Add the color bar
plt.colorbar(wspd_contours, ax=ax)

# Set the x-ticks to use latitude and longitude labels.
coord_pairs = to_np(wspd_cross.coords["xy_loc"])
x_ticks = np.arange(coord_pairs.shape[0])
x_labels = [pair.latlon_str(fmt="{:.2f}, {:.2f}")
            for pair in to_np(coord_pairs)]
ax.set_xticks(x_ticks[::20])
ax.set_xticklabels(x_labels[::20], rotation=45, fontsize=8)

# Set the y-ticks to be height.
vert_vals = to_np(wspd_cross.coords["vertical"])
v_ticks = np.arange(vert_vals.shape[0])
ax.set_yticks(v_ticks[::20])
ax.set_yticklabels(vert_vals[::20], fontsize=8)

# Set the x-axis and y-axis labels
ax.set_xlabel("Latitude, Longitude", fontsize=12)
ax.set_ylabel("Height (m)", fontsize=12)

plt.title("Vertical Cross Section of Wind Speed (kt)")

plt.show()

Cross Section with Mountains

The example below shows how to make a cross section with the mountainous terrain filled.

_images/cross_mtns.png
import numpy as np
from matplotlib import pyplot
from matplotlib.cm import get_cmap
from matplotlib.colors import from_levels_and_colors
from cartopy import crs
from cartopy.feature import NaturalEarthFeature, COLORS
from netCDF4 import Dataset
from wrf import (getvar, to_np, get_cartopy, latlon_coords, vertcross,
                 cartopy_xlim, cartopy_ylim, interpline, CoordPair)

wrf_file = Dataset("wrfout_d01_2010-06-04_00:00:00")

# Define the cross section start and end points
cross_start = CoordPair(lat=43.5, lon=-116.5)
cross_end = CoordPair(lat=43.5, lon=-114)

# Get the WRF variables
ht = getvar(wrf_file, "z", timeidx=-1)
ter = getvar(wrf_file, "ter", timeidx=-1)
dbz = getvar(wrf_file, "dbz", timeidx=-1)
max_dbz = getvar(wrf_file, "mdbz", timeidx=-1)
Z = 10**(dbz/10.) # Use linear Z for interpolation

# Compute the vertical cross-section interpolation.  Also, include the
# lat/lon points along the cross-section in the metadata by setting latlon
# to True.
z_cross = vertcross(Z, ht, wrfin=wrf_file,
                    start_point=cross_start,
                    end_point=cross_end,
                    latlon=True, meta=True)

# Convert back to dBz after interpolation
dbz_cross = 10.0 * np.log10(z_cross)

# Add back the attributes that xarray dropped from the operations above
dbz_cross.attrs.update(z_cross.attrs)
dbz_cross.attrs["description"] = "radar reflectivity cross section"
dbz_cross.attrs["units"] = "dBZ"

# The contouring of gridded data, the new vertical grid spacing, and the
# model grid staggering leave a slight gap between the dbz contours and
# the terrain.  To remove this gap, fill in the lower grid cells with the
# first non-missing value for each column.

# Make a copy of the dbz cross section data. Let's use regular numpy arrays for this.
dbz_cross_filled = np.ma.copy(to_np(dbz_cross))

# For each cross section column, find the first index with non-missing
# values and copy these to the missing elements below.
for i in range(dbz_cross_filled.shape[-1]):
    column_vals = dbz_cross_filled[:,i]
    # Let's find the lowest index that isn't filled.  The nonzero function
    # returns the indices of the unmasked values for which the condition
    # is True.  Since 0 is a valid value for dBZ, use a threshold of
    # -200 dBZ instead of 0.
    first_idx = int(np.transpose((column_vals > -200).nonzero())[0])
    dbz_cross_filled[0:first_idx, i] = dbz_cross_filled[first_idx, i]

# Get the terrain heights along the cross section line
ter_line = interpline(ter, wrfin=wrf_file, start_point=cross_start,
                      end_point=cross_end)

# Get the lat/lon points
lats, lons = latlon_coords(dbz)

# Get the cartopy projection object
cart_proj = get_cartopy(dbz)

# Create the figure
fig = pyplot.figure(figsize=(8,6))
ax_cross = pyplot.axes()

dbz_levels = np.arange(5., 75., 5.)

# Create the color table found on NWS pages.
dbz_rgb = np.array([[4,233,231],
                    [1,159,244], [3,0,244],
                    [2,253,2], [1,197,1],
                    [0,142,0], [253,248,2],
                    [229,188,0], [253,149,0],
                    [253,0,0], [212,0,0],
                    [188,0,0],[248,0,253],
                    [152,84,198]], np.float32) / 255.0

dbz_map, dbz_norm = from_levels_and_colors(dbz_levels, dbz_rgb,
                                           extend="max")

# Make the cross section plot for dbz
xs = np.arange(0, dbz_cross.shape[-1], 1)
ys = to_np(dbz_cross.coords["vertical"])
dbz_contours = ax_cross.contourf(xs,
                                 ys,
                                 to_np(dbz_cross_filled),
                                 levels=dbz_levels,
                                 cmap=dbz_map,
                                 norm=dbz_norm,
                                 extend="max")
# Add the color bar
cb_dbz = fig.colorbar(dbz_contours, ax=ax_cross)
cb_dbz.ax.tick_params(labelsize=8)

# Fill in the mountain area
ht_fill = ax_cross.fill_between(xs, 0, to_np(ter_line),
                                facecolor="saddlebrown")

# Set the x-ticks to use latitude and longitude labels
coord_pairs = to_np(dbz_cross.coords["xy_loc"])
x_ticks = np.arange(coord_pairs.shape[0])
x_labels = [pair.latlon_str() for pair in coord_pairs]

# Set the desired number of x ticks below
num_ticks = 5
thin = int((len(x_ticks) / num_ticks) + .5)
ax_cross.set_xticks(x_ticks[::thin])
ax_cross.set_xticklabels(x_labels[::thin], rotation=45, fontsize=8)

# Set the x-axis and y-axis labels
ax_cross.set_xlabel("Latitude, Longitude", fontsize=12)
ax_cross.set_ylabel("Height (m)", fontsize=12)

# Add a title
ax_cross.set_title("Cross-Section of Reflectivity (dBZ)", {"fontsize" : 14})

pyplot.show()

API Reference

User API

Routines

Diagnostic Routine

The routine below is the primary routine for extracting variables from a WRF-ARW NetCDF file (or sequence of files) and performing diagnostic calculations.

wrf.getvar Returns basic diagnostics from the WRF ARW model output.
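
For example (a minimal sketch; the output file name is hypothetical):

from netCDF4 import Dataset
from wrf import getvar

ncfile = Dataset("wrfout_d01_2016-10-07_00_00_00")

# Extract the sea level pressure for the first time index
slp = getvar(ncfile, "slp", timeidx=0)
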
Interpolation Routines

The routines below are the primary routines used for performing interpolation calculations.

wrf.interplevel Return the three-dimensional field interpolated to a horizontal plane at the specified vertical level.
wrf.vertcross Return the vertical cross section for a three-dimensional field.
wrf.interpline Return the two-dimensional field interpolated along a line.
wrf.vinterp Return the field vertically interpolated to the given type of surface and a set of new levels.
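
For example, interpolating geopotential height to the 500 hPa level (a sketch that assumes ncfile is an open WRF output file, as in the wrf.getvar example above):

from wrf import getvar, interplevel

p = getvar(ncfile, "pressure")        # full pressure [hPa]
ht = getvar(ncfile, "z", units="dm")  # geopotential height [dm]
ht_500 = interplevel(ht, p, 500.0)
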
Lat-Lon to/from XY Routines

The routines below are used for converting back and forth between xy-grid space and latitude-longitude space.

wrf.ll_to_xy Return the x,y coordinates for a specified latitude and longitude.
wrf.xy_to_ll Return the latitude and longitude for specified x,y coordinates.
wrf.ll_to_xy_proj Return the x, y coordinates for a specified latitude and longitude.
wrf.xy_to_ll_proj Return the latitude and longitude for the specified x,y coordinates.
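
For example (a sketch assuming ncfile is an open WRF output file):

from wrf import ll_to_xy, xy_to_ll

x_y = ll_to_xy(ncfile, 40.0, -105.0)  # grid (x, y) for a lat/lon point
lat_lon = xy_to_ll(ncfile, 100, 200)  # (lat, lon) for grid (x, y)
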
Grid Destaggering Routine

The routine below is used to convert a variable on a staggered grid to the unstaggered grid.

wrf.destagger Return the variable on the unstaggered grid.
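
For example, destaggering the U wind component along its rightmost (staggered) dimension (a sketch assuming ncfile is an open WRF output file):

from wrf import destagger, extract_vars

u_stag = extract_vars(ncfile, 0, "U")["U"]
u_mass = destagger(u_stag, stagger_dim=-1)
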
Numpy Extraction Routine

The routine below is used to extract a numpy.ndarray from a xarray.DataArray. This routine must be used before passing the array object to a compiled extension.

wrf.to_np Return the numpy.ndarray contained in an xarray.DataArray instance.
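
For example (a sketch assuming ncfile is an open WRF output file):

import matplotlib.pyplot as plt
from wrf import getvar, to_np

slp = getvar(ncfile, "slp")

# Compiled extensions and plotting routines expect plain numpy arrays
plt.contourf(to_np(slp))
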
Variable Extraction Routines

The routines below are primarily used internally by wrf.getvar(), but some users may find them useful to manually extract variables from a WRF NetCDF file (or a sequence of NetCDF files).

wrf.extract_vars Extract variables from a NetCDF file object or a sequence of NetCDF file objects.
wrf.combine_files Combine and return an array object for the sequence of WRF output files.
wrf.extract_dim Return the dimension size for the specified dimension name.
wrf.extract_global_attrs Return the global attribute(s).
wrf.extract_times Return a sequence of time objects.
Plotting Helper Routines

The routines below are used to assist with plotting.

wrf.geo_bounds Return the geographic boundaries for the variable or file(s).
wrf.latlon_coords Return the latitude and longitude coordinates from a xarray.DataArray object.
wrf.get_cartopy Return a cartopy.crs.Projection subclass for the map projection.
wrf.get_basemap Return a matplotlib.mpl_toolkits.basemap.Basemap object for the map projection.
wrf.get_pyngl Return a Ngl.Resources object for the map projection.
wrf.cartopy_xlim Return the x-axis limits in the projected coordinates.
wrf.cartopy_ylim Return the y-axis limits in the projected coordinates.
Raw Diagnostic Routines

The routines below can be used when working with variables that are not contained in a WRF-ARW NetCDF file. They can also be used with non-WRF data. However, if you are working with WRF-ARW NetCDF files, use wrf.getvar() instead.

Keep in mind that these routines were developed for WRF-ARW, so your mileage may vary when working with non-WRF data. Also, the vast majority of these routines do not allow for missing values in any of the input arrays, so make sure they are removed before calling these routines.

wrf.xy Return the x,y points for a line within a two-dimensional grid.
wrf.interp1d Return the linear interpolation of a one-dimensional variable.
wrf.interp2dxy Return a cross section for a three-dimensional field.
wrf.interpz3d Return the field interpolated to a specified pressure or height level.
wrf.slp Return the sea level pressure.
wrf.tk Return the temperature.
wrf.td Return the dewpoint temperature.
wrf.rh Return the relative humidity.
wrf.uvmet Return the u,v components of the wind rotated to earth coordinates.
wrf.smooth2d Return the field smoothed.
wrf.cape_2d Return the two-dimensional MCAPE, MCIN, LCL, and LFC.
wrf.cape_3d Return the three-dimensional CAPE and CIN.
wrf.cloudfrac Return the cloud fraction.
wrf.ctt Return the cloud top temperature.
wrf.dbz Return the simulated radar reflectivity.
wrf.srhel Return the storm relative helicity.
wrf.udhel Return the updraft helicity.
wrf.avo Return the absolute vorticity.
wrf.pvo Return the potential vorticity.
wrf.eth Return the equivalent potential temperature.
wrf.wetbulb Return the wetbulb temperature.
wrf.tvirtual Return the virtual temperature.
wrf.omega Return omega.
wrf.pw Return the precipitable water.
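
For example, computing temperature from raw pressure and potential temperature arrays (a minimal sketch with made-up values; this assumes tk takes full pressure in [Pa] and full potential temperature in [K]):

import numpy as np
from wrf import tk

pres = np.full((30, 100, 100), 85000.0)  # full pressure [Pa]
theta = np.full((30, 100, 100), 300.0)   # full potential temperature [K]

temp = tk(pres, theta)
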
OpenMP Runtime Library Routines

The routines below are the OpenMP runtime library routines that have been wrapped for wrf-python. The entire OpenMP 3.x runtime library has been wrapped, but many of the routines are only useful inside of an OpenMP thread, so they aren’t useful from inside the Python interpreter. Also, the Fortran code in wrf-python is fairly simple in terms of threading, so features like nested threads aren’t used. The documentation below is split into the useful OpenMP functions and the less useful functions.

The documentation for each routine was taken directly from the OpenMP Specification. Read the specification for more details about these routines.

Useful OpenMP Routines

The routines below are useful when called from within a Python program. These routines handle setting the number of threads, setting up the scheduler, and timing.

It is also important to note that the OpenMP directives within the Fortran code all specify a runtime scheduler. This means that the user can control the type of scheduling to use from within their Python application by using the routines below.

wrf.omp_enabled Return True if OpenMP is enabled.
wrf.omp_set_num_threads Specify the number of threads to use.
wrf.omp_get_max_threads Return the maximum number of threads that can be used in a parallel region.
wrf.omp_get_num_procs Return the number of processors on the device.
wrf.omp_set_dynamic Enable or disable dynamic adjustment of the number of threads available for the execution of subsequent parallel regions by setting the value of the dyn-var ICV.
wrf.omp_get_dynamic Return the value of the dyn-var ICV, which determines whether dynamic adjustment of the number of threads is enabled or disabled.
wrf.omp_set_schedule Set the schedule that is applied when runtime is used as schedule kind, by setting the value of the run-sched-var ICV.
wrf.omp_get_schedule Return the schedule that is applied when the runtime schedule is used.
wrf.omp_get_thread_limit Return the maximum number of OpenMP threads available to participate in the current contention group.
wrf.omp_get_wtime Return elapsed wall clock time in seconds.
wrf.omp_get_wtick Return the precision of the timer used by wrf.omp_get_wtime().
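
For example, the snippet below sets the thread count and selects a guided runtime schedule before performing any computations (a sketch; the modifier of 0 requests the default chunk size):

from wrf import (omp_set_num_threads, omp_get_num_procs,
                 omp_set_schedule, OMP_SCHED_GUIDED)

# Use all available processors and a guided runtime schedule
omp_set_num_threads(omp_get_num_procs())
omp_set_schedule(OMP_SCHED_GUIDED, 0)
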
Less Useful OpenMP Routines

The routines below are less useful because wrf-python does not use nested parallelism and some of the routines are only applicable when called from within an OpenMP thread.

wrf.omp_get_num_threads Return the number of threads in the current team.
wrf.omp_get_thread_num Return the thread number, within the current team, of the calling thread.
wrf.omp_in_parallel Return 1 if the active-levels-var ICV is greater than zero; otherwise, return 0.
wrf.omp_set_nested Enable or disable nested parallelism, by setting the nest-var ICV.
wrf.omp_get_nested Return the value of the nest-var ICV, which determines if nested parallelism is enabled or disabled.
wrf.omp_set_max_active_levels Limit the number of nested active parallel regions on the device, by setting the max-active-levels-var ICV.
wrf.omp_get_max_active_levels Return the value of the max-active-levels-var ICV, which determines the maximum number of nested active parallel regions on the device.
wrf.omp_get_level Return the value of the levels-var ICV.
wrf.omp_get_ancestor_thread_num Return, for a given nested level of the current thread, the thread number of the ancestor of the current thread.
wrf.omp_get_team_size Return, for a given nested level of the current thread, the size of the thread team to which the ancestor or the current thread belongs.
wrf.omp_get_active_level Return the value of the active-levels-var ICV.
wrf.omp_in_final Return 1 (True) if the routine is executed in a final task region; otherwise, it returns 0 (False).
wrf.omp_init_lock Initialize a simple OpenMP lock.
wrf.omp_init_nest_lock Initialize a nestable OpenMP lock.
wrf.omp_destroy_lock Destroy a simple OpenMP lock.
wrf.omp_destroy_nest_lock Destroy a nestable OpenMP lock.
wrf.omp_set_lock Set a simple OpenMP lock.
wrf.omp_set_nest_lock Set a nestable OpenMP lock.
wrf.omp_unset_lock Unset a simple OpenMP lock.
wrf.omp_unset_nest_lock Unset a nestable OpenMP lock.
wrf.omp_test_lock Test a simple OpenMP lock.
wrf.omp_test_nest_lock Test a nestable OpenMP lock.
Configuration Routines

The routines below are used to configure wrf-python by enabling or disabling third party packages. For the most part, these settings are configured automatically based on the presence of a third party package. However, disabling xarray can be useful when you want to turn off all metadata in one place.

wrf.xarray_enabled Return True if xarray is installed and enabled.
wrf.disable_xarray Disable xarray.
wrf.enable_xarray Enable xarray if it is installed.
wrf.cartopy_enabled Return True if cartopy is installed and enabled.
wrf.disable_cartopy Disable cartopy.
wrf.enable_cartopy Enable cartopy if it is installed.
wrf.basemap_enabled Return True if basemap is installed and enabled.
wrf.disable_basemap Disable basemap.
wrf.enable_basemap Enable basemap if it is installed.
wrf.pyngl_enabled Return True if pyngl is installed and enabled.
wrf.enable_pyngl Enable pyngl if it is installed.
wrf.disable_pyngl Disable pyngl.
wrf.set_cache_size Set the maximum number of items that the threadlocal cache can retain.
wrf.get_cache_size Return the maximum number of items that the threadlocal cache can retain.
wrf.omp_enabled Return True if OpenMP is enabled.
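
For example, to turn off all xarray metadata in one place (a minimal sketch):

import wrf

# getvar will now return plain numpy arrays instead of xarray.DataArray
wrf.disable_xarray()
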
Miscellaneous Routines

The routines below are primarily used internally, but some users may find them helpful for other purposes.

wrf.is_time_coord_var Return True if the input variable name is a time coordinate.
wrf.get_coord_pairs Return a tuple for the variable names of the coordinate pair used for the 2D curvilinear coordinate variable.
wrf.is_multi_time_req Return True if the requested time index is for wrf.ALL_TIMES or None.
wrf.is_multi_file Return True if the input argument is an iterable.
wrf.has_time_coord Return True if the input file or sequence contains the time coordinate variable.
wrf.is_mapping Return True if the input file or sequence is a mapping type.
wrf.latlon_coordvars Return the first found latitude and longitude coordinate names from a NetCDF variable dictionary.
wrf.is_coordvar Return True if the variable is a coordinate variable.
wrf.get_iterable Return a resettable iterable object.
wrf.is_moving_domain Return True if the domain is a moving nest.
wrf.npbytes_to_str Return a bytes object for the raw character array.
wrf.is_standard_wrf_var Return True if the variable is a standard WRF variable and not a diagnostic.
wrf.is_staggered Return True if the variable is on a staggered grid.
wrf.get_left_indexes Return a tuple for the extra leftmost dimension sizes.
wrf.iter_left_indexes Yield the iteration tuples for a sequence of dimension sizes.
wrf.get_right_slices Return an indexing tuple where the left dimensions are held to a fixed value and the right dimensions are set to slice objects.
wrf.get_proj_params Return a tuple of latitude, longitude, and projection parameters from a WRF output file object or a sequence of WRF output file objects.
wrf.psafilepath Return the full path to the ‘psadilookup.dat’ file.
wrf.get_id Return the cache id.
wrf.getproj Return a wrf.WrfProj subclass.
wrf.cache_item Store an item in the threadlocal cache.
wrf.get_cached_item Return an item from the threadlocal cache.
wrf.ll_points Return the lower left latitude and longitude point(s).
wrf.pairs_to_latlon Return latitude and longitude arrays from a sequence of wrf.CoordPair objects.

Classes

Exceptions
wrf.DiagnosticError Raised when an error occurs in a diagnostic routine.
CoordPair Class

The class below is used for storing coordinate metadata from routines that use a single point for an (x, y) or (lat, lon) location.

wrf.CoordPair A class that stores (x, y) and/or (latitude, longitude) coordinate pairs.
CoordPair Methods
wrf.CoordPair.latlon_str Return a str for the (latitude, longitude) coordinate pair.
wrf.CoordPair.xy_str Return a str for the (x,y) coordinate pair.
GeoBounds Class

The class below is used for specifying geographic boundaries.

wrf.GeoBounds A class that stores the geographic boundaries.
Projection Classes

The classes below are used to hold the projection information in the ‘projection’ entry within a xarray.DataArray.attrs attribute.

Projection Base Class

The base class for all map projection types.

wrf.WrfProj A base class for storing map projection information from WRF data.
Projection Base Class Methods

The class methods for all projection types.

wrf.WrfProj.basemap Return a matplotlib.mpl_toolkits.basemap.Basemap object for the map projection.
wrf.WrfProj.cartopy Return a cartopy.crs.Projection subclass for the map projection.
wrf.WrfProj.cartopy_xlim Return the x extents in projected coordinates for cartopy.
wrf.WrfProj.cartopy_ylim Return the y extents in projected coordinates for cartopy.
wrf.WrfProj.pyngl Return a Ngl.Resources object for the map projection.
wrf.WrfProj.cf Return a dictionary with the NetCDF CF parameters for the projection.
wrf.WrfProj.proj4 Return the PROJ.4 string for the map projection.
Projection Subclasses

See wrf.WrfProj for methods and attributes.

wrf.NullProjection A wrf.WrfProj subclass for empty projections.
wrf.LambertConformal A wrf.WrfProj subclass for Lambert Conformal Conic projections.
wrf.Mercator A wrf.WrfProj subclass for Mercator projections.
wrf.PolarStereographic A wrf.WrfProj subclass for Polar Stereographic projections.
wrf.LatLon A wrf.WrfProj subclass for Lat Lon projections.
wrf.RotatedLatLon A wrf.WrfProj subclass for Rotated Lat Lon projections.

Internal API

Routines

Extraction and Diagnostic Routines

The routines below are called internally by wrf.getvar().

wrf.g_cape.get_2dcape Return the two-dimensional fields of MCAPE, MCIN, LCL, and LFC.
wrf.g_cape.get_3dcape Return the three-dimensional CAPE and CIN.
wrf.g_cloudfrac.get_cloudfrac Return the cloud fraction for low, mid, and high level clouds.
wrf.g_ctt.get_ctt Return the cloud top temperature.
wrf.g_dbz.get_dbz Return the simulated radar reflectivity.
wrf.g_dbz.get_max_dbz Return the maximum simulated radar reflectivity.
wrf.g_dewpoint.get_dp Return the dewpoint temperature.
wrf.g_dewpoint.get_dp_2m Return the 2m dewpoint temperature.
wrf.g_geoht.get_geopt Return the geopotential.
wrf.g_geoht.get_height Return the geopotential height.
wrf.g_geoht.get_height_agl Return the geopotential height (AGL).
wrf.g_geoht.get_stag_geopt Return the geopotential for the vertically staggered grid.
wrf.g_geoht.get_stag_height Return the geopotential height for the vertically staggered grid.
wrf.g_helicity.get_srh Return the storm relative helicity.
wrf.g_helicity.get_uh Return the updraft helicity.
wrf.g_omega.get_omega Return Omega.
wrf.g_pressure.get_pressure Return the pressure in the specified units.
wrf.g_pressure.get_pressure_hpa Return the pressure in [hPa].
wrf.g_pw.get_pw Return the precipitable water.
wrf.g_rh.get_rh Return the relative humidity.
wrf.g_rh.get_rh_2m Return the 2m relative humidity.
wrf.g_slp.get_slp Return the sea level pressure in the specified units.
wrf.g_temp.get_theta Return the potential temperature.
wrf.g_temp.get_temp Return the temperature in the specified units.
wrf.g_temp.get_eth Return the equivalent potential temperature.
wrf.g_temp.get_tv Return the virtual temperature.
wrf.g_temp.get_tw Return the wetbulb temperature.
wrf.g_temp.get_tk Return the temperature in [K].
wrf.g_temp.get_tc Return the temperature in [degC].
wrf.g_terrain.get_terrain Return the terrain height in the specified units.
wrf.g_times.get_times Return a sequence of time objects.
wrf.g_times.get_xtimes Return a sequence of time objects.
wrf.g_uvmet.get_uvmet Return the u,v wind components rotated to earth coordinates.
wrf.g_uvmet.get_uvmet10 Return the u,v components for the 10m winds rotated to earth coordinates.
wrf.g_uvmet.get_uvmet_wspd_wdir Return the wind speed and wind direction for the wind rotated to earth coordinates.
wrf.g_uvmet.get_uvmet10_wspd_wdir Return the wind speed and wind direction for the 10m wind rotated to earth coordinates.
wrf.g_vorticity.get_avo Return the absolute vorticity.
wrf.g_vorticity.get_pvo Return the potential vorticity.
wrf.g_wind.get_u_destag Return the u-component of the wind on mass points.
wrf.g_wind.get_v_destag Return the v-component of the wind on mass points.
wrf.g_wind.get_w_destag Return the w-component of the wind on mass points.
wrf.g_wind.get_destag_wspd_wdir Return the wind speed and wind direction for the wind in the projected coordinate (i.e. grid) space.
wrf.g_wind.get_destag_wspd_wdir10 Return the wind speed and wind direction for the 10m wind in the projected coordinate (i.e. grid) space.
wrf.g_wind.get_destag_wspd Return the wind speed in the projected coordinate (i.e. grid) space.
wrf.g_wind.get_destag_wdir Return the wind direction in the projected coordinate (i.e. grid) space.
wrf.g_wind.get_destag_wspd10 Return the wind speed for the 10m wind in the projected coordinate (i.e. grid) space.
wrf.g_wind.get_destag_wdir10 Return the wind direction for the 10m wind in the projected coordinate (i.e. grid) space.
wrf.g_uvmet.get_uvmet_wspd Return the wind speed for the wind rotated to earth coordinates.
wrf.g_uvmet.get_uvmet_wdir Return the wind direction for the wind rotated to earth coordinates.
wrf.g_uvmet.get_uvmet10_wspd Return the wind speed for the 10m wind rotated to earth coordinates.
wrf.g_uvmet.get_uvmet10_wdir Return the wind direction for the 10m wind rotated to earth coordinates.
wrf.g_cloudfrac.get_low_cloudfrac Return the cloud fraction for the low level clouds.
wrf.g_cloudfrac.get_mid_cloudfrac Return the cloud fraction for the mid level clouds.
wrf.g_cloudfrac.get_high_cloudfrac Return the cloud fraction for the high level clouds.
wrf.g_cape.get_cape2d_only Return the two-dimensional field of MCAPE (Max Convective Available Potential Energy).
wrf.g_cape.get_cin2d_only Return the two-dimensional field of MCIN (Max Convective Inhibition).
wrf.g_cape.get_lcl Return the two-dimensional field of LCL (Lifted Condensation Level).
wrf.g_cape.get_lfc Return the two-dimensional field of LFC (Level of Free Convection).
wrf.g_cape.get_3dcape_only Return the three-dimensional field of CAPE (Convective Available Potential Energy).
wrf.g_cape.get_3dcin_only Return the three-dimensional field of CIN (Convective Inhibition).

Decorators

Algorithm Decorators

The decorators below are used for performing common operations related to diagnostic and interpolation calculations.

wrf.decorators.convert_units A decorator that converts the units from the wrapped function’s output.
wrf.decorators.left_iteration A decorator to handle iterating over the leftmost dimensions.
wrf.decorators.cast_type A decorator to handle type casting.
wrf.decorators.extract_and_transpose A decorator to extract the data array from a xarray.DataArray.
wrf.decorators.check_args A decorator to check that the wrapped function’s arguments are valid.
wrf.specialdec.uvmet_left_iter A decorator to handle iterating over the leftmost dimensions for the uvmet diagnostic.
wrf.specialdec.cape_left_iter A decorator to handle iterating over the leftmost dimensions for the cape diagnostic.
wrf.specialdec.cloudfrac_left_iter A decorator to handle iterating over the leftmost dimensions for the cloud fraction diagnostic.
Metadata Decorators

The decorators below are used for performing common operations related to setting metadata.

wrf.metadecorators.copy_and_set_metadata A decorator that sets the metadata for a wrapped function’s output.
wrf.metadecorators.set_wind_metadata A decorator that sets the metadata for a wrapped wind function’s output.
wrf.metadecorators.set_cape_metadata A decorator that sets the metadata for a wrapped CAPE function’s output.
wrf.metadecorators.set_cloudfrac_metadata A decorator that sets the metadata for a wrapped cloud fraction function’s output.
wrf.metadecorators.set_latlon_metadata A decorator that sets the metadata for a wrapped latlon function’s output.
wrf.metadecorators.set_height_metadata A decorator that sets the metadata for a wrapped height function’s output.
wrf.metadecorators.set_interp_metadata A decorator that sets the metadata for a wrapped interpolation function.
wrf.metadecorators.set_alg_metadata A decorator that sets the metadata for a wrapped raw diagnostic function.
wrf.metadecorators.set_uvmet_alg_metadata A decorator that sets the metadata for the wrapped raw UVMET diagnostic function.
wrf.metadecorators.set_cape_alg_metadata A decorator that sets the metadata for the wrapped raw CAPE diagnostic function.
wrf.metadecorators.set_cloudfrac_alg_metadata A decorator that sets the metadata for the wrapped raw cloud fraction diagnostic function.
wrf.metadecorators.set_destag_metadata A decorator that sets the metadata for the wrapped raw destaggering function.
Decorator Utilities

The routines below are used within the decorators.

wrf.either A callable class that determines which variable is present in the file.
wrf.combine_dims A callable class that mixes dimension sizes from different function arguments.
wrf.from_var A callable class that retrieves attributes from the function argument.
wrf.from_args Return a mapping of argument name to value for the called function.
wrf.args_to_list Return all of the function arguments, including defaults, as a list.
wrf.arg_location Return the function arguments as a single list along with the index within that list for a specified argument name.

Classes

Iterable Wrapper Class

The class below is an Iterable wrapper class and provides an __iter__ function that always returns the beginning of the sequence, regardless of the Iterable type.

wrf.IterWrapper A wrapper class for generators and custom iterable classes that returns a new iterator to the start of the sequence when IterWrapper.__iter__() is called.

Frequently Asked Questions

wrf.getvar() uses more memory when xarray is enabled. Is there a memory leak?

The difference in memory usage can be observed when running the following code with and without xarray enabled (see wrf.disable_xarray()):

from netCDF4 import Dataset
import wrf

# Uncomment to disable xarray
# wrf.disable_xarray()

for i in range(150):
    f = Dataset('wrfout_d01_2005-07-17_12_00_00.nc')
    p = wrf.getvar(f, 'pressure')
    f.close()

When xarray is enabled, there is an internal thread-local cache used to hold the XLAT and XLONG coordinate variables, along with a boolean variable to remember if a particular file or sequence is from a moving nest. This is useful when working with sequences of WRF files, so that the XLAT and XLONG variables aren’t repeatedly extracted. This cache is limited to holding 20 items by default, where the key to each cache entry is the object ID for the file or sequence of files. In this particular example, a new file object is created for each iteration, which will have a new object ID, so the 20 cache spots are going to be filled up. Memory usage will be quite a bit higher for the xarray-enabled case, but the memory usage will eventually top off.

There is a function wrf.set_cache_size() that can be used to decrease the number of files/sequences in the cache. By setting the cache size to 0, the cache can be disabled completely.
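
For example (a minimal sketch):

import wrf

# Shrink the cache, or pass 0 to disable it entirely
wrf.set_cache_size(10)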

Can I use xarray.Dataset as an input to the wrf-python functions?

Currently, wrf-python is designed to use the API for PyNIO and netCDF4-python, which differs from the xarray.Dataset API. However, there is an undocumented feature of Dataset objects that can be used as a workaround. Each Dataset object has a xarray.Dataset._file_obj.ds attribute that can be used to obtain the underlying netCDF file object. Keep in mind that this is not in the public API for xarray, so it could break in a future release.

Here is an example:

import xarray
import wrf

ds = xarray.open_dataset(FILENAME)
slp = wrf.getvar(ds._file_obj.ds, "slp")

In a future release of wrf-python, direct support for Dataset objects will be added and this will no longer be necessary.

Why is wrf-python taking hours to run?

The most likely culprit is insufficient memory for the calculation you are trying to perform.

See Performance Tips for more information.

Support

Github

For crashes and other issues related to the software, you can submit an issue to the Github repository.

This should be used strictly for crashes and bugs. For general usage questions, please use the Google Group.

Submitting Files

When issues are encountered, we often need to see the file that is problematic. As long as the file is not so large that it fills up the FTP server, users can submit files via FTP using the following instructions:

ftp ftp.cgd.ucar.edu
anonymous
<use your email address for the password>
cd incoming
put <your file>
put <your file>
.
.
.
quit

Note

For security reasons, you cannot list the contents of this directory, and neither can we. You must tell us the exact names of the files that you uploaded in order for us to retrieve them.

Google Group

The best way to receive support is to use the [wrfpython-talk] group on Google. Users can ask usage questions, submit bug reports, make feature requests, etc. All users of wrf-python are welcome to join.


Contributor Guide

Introduction

Thank you for your interest in contributing to the WRF-Python project. WRF-Python is maintained by a very small number of developers, tasked with supporting more than one project, so we rely on outside contributions to help keep the project moving forward.

The guidelines below help to ensure that the developers and outside collaborators remain on the same page regarding contributions.

Source Code Location

The source code is available on GitHub:

https://github.com/NCAR/wrf-python

Git Flow

This project follows the GitFlow Workflow, which you can read about here if it is new to you:

https://leanpub.com/git-flow/read

For external contributors, the main thing to know is that when you first clone the repository, you will be on the develop branch, which is what you should use for your development.

Since you will be submitting pull requests for your contributions, you don’t need to know much else about GitFlow, other than making sure that you are not developing off of the main branch.

Ways to Contribute

Users are encouraged to contribute in various ways. This includes:

  • Submitting a bug report
  • Submitting bug fixes
  • Submitting new Fortran computational routines
  • Submitting new Python computational routines
  • Submitting fully wrapped computational routines
  • Fixing documentation errors
  • Creating new examples in the documentation (e.g. plotting examples)

Ground Rules

Please follow the Code of Conduct.

  • Please create an issue on GitHub for any pull request you wish to submit, except for documentation issues.
  • Each pull request should be for a logical collection of changes. You can submit multiple bug fixes in a single pull request if the bugs are related. Otherwise, please submit separate pull requests.
  • Do not commit changes to files that are unrelated to your bug fix (e.g. .gitignore).
  • The pull request and code review process is not immediate, so please be patient.

Submitting Bug Reports

Submitting bug reports is the easiest way to contribute. You will need to create an account on GitHub to submit a report.

  1. Go to the issues page here:

    https://github.com/NCAR/wrf-python/issues

  2. Check to see if an issue has already been created for the problem that you are having.

  3. If an issue already exists for your problem, feel free to add any additional information to the issue conversation.

  4. If there is not an issue created yet for your problem, use the “New Issue” button to start your new issue.

  5. Please provide as much information as you can for the issue. Include the version of WRF-Python you are using and the platform you are using (e.g. conda-forge build on OSX). Supply a code snippet if you are doing something more involved than simply calling wrf.getvar().

  6. If you are getting a crash (e.g. segmentation fault), we will most likely need to see your data file if we cannot reproduce the problem here. See Submitting Files.

Submitting Fortran Computational Routines

If you have Fortran computational routines that you’d like to contribute, but don’t know how to wrap them in to Python, please follow the instructions below.

  1. Only Fortran 90 code will be accepted, so please port your F77 code to F90.
  2. Follow the Fortran Style Guide.
  3. Please only submit routines relevant to WRF-Python (e.g. diagnostics, interpolation). General purpose climate/meteorology routines should go into the SkyLab project (a project providing similar functionality as NCL).
  4. If you are unsure if you should contribute your Fortran code, make an issue on GitHub and we can begin a discussion there.
  5. Place your code in the fortran/contrib directory in the WRF-Python source tree.
  6. Document your code with a text file that has the same name as your Fortran file, but ending in .rst. This file should be placed with your F90 code in the fortran/contrib directory. Your documentation can use restructured text formatting, or just plain text. This documentation will be used for the docstring when Python wrappers are made.
  7. If you are unable to provide any type of test for your routine, please ensure that your documentation describes what your computation should produce. You can submit auxiliary documentation and/or images for this purpose if needed.

Submitting Python Computational Routines

If you would like to submit a computational routine written in Python, but don’t know how to integrate it with the rest of WRF-Python’s internals (e.g. left indexing, arg checking, etc), feel free to submit the pure Python routine. Below is the guide for submitting pure Python routines.

  1. These routines should be placed in src/wrf/contrib.py. These algorithms will not be imported into WRF-Python’s default namespace.
  2. Follow the Python Style Guide.
  3. Write your computation to be as dimension unaware as possible. For example, adding pressure and perturbation pressure is simply P + PB (see the sketch following this list).
  4. If dimensionality is needed, then write for the minimum dimensionality required to make the computation for one time step (if applicable). For example, if you’re computing CAPE, then you should use three dimensions for your algorithm, and we will handle the looping over all times.
  5. Document your routine by creating a docstring that follows Google docstring format (see Sphinx Napoleon).
  6. If you are unable to provide a test for this function, please provide additional documentation (or images) to show what this function should produce.
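
Below is a minimal sketch of what a contributed routine in src/wrf/contrib.py might look like (the total_pressure function is hypothetical):

def total_pressure(p, pb):
    """Return the total pressure.

    Args:
        p (numpy.ndarray): Perturbation pressure [Pa], any dimensionality.
        pb (numpy.ndarray): Base state pressure [Pa], same shape as p.

    Returns:
        numpy.ndarray: Total pressure [Pa].

    """
    return p + pb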

Submitting Fully Wrapped Computational Routines

Submitting a fully wrapped computational routine is the fastest way to get your contribution released. However, it requires the most effort on your part.

  1. Read the WRF-Python Internals guide. This will show you how to wrap your routine.
  2. Follow the Fortran Style Guide and Python Style Guide.
  3. You should create your contribution in the WRF-Python source tree as if you were one of the core developers of it. This means:
    • Your Fortran code (if applicable) should be placed in the fortran directory.
    • Update the “ext1 = numpy.distutils.core.Extension” section of setup.py to include your new Fortran source (if applicable).
    • Update extension.py to create the Python wrapper that calls your Fortran function. This must include the appropriate function decorators for handling argument checking, leftmost dimension indexing, etc. as described in WRF-Python Internals.
    • If the current function decorators do not cover your specific needs, place your custom decorator in specialdec.py. Most of the decorators in specialdec.py are used for products that contain multiple outputs like cape_2d, but you can use it for other purposes.
    • If your function is pure python, you can create a new module for it, or place it in another module with similar functionality. For example, if your routine is a new interpolation routine, then it should go in interp.py. Remember to apply the same type of decorators as done with Fortran extensions (checking args, leftmost dimension indexing, etc).
    • Create a ‘getter’ routine which is responsible for extracting the required variables from a WRF file and calling your computational routine. This is what will be called by wrf.getvar(). This function should be placed in a new python module with the prefix ‘g_’ (i.e. g_yourdiagnostic.py).
    • Decorate your getter routine with an appropriate metadata handling decorator. If you need to make a custom decorator for the metadata, place it in metadecorators.py.
    • Update the mappings in routines.py to map your diagnostic name to your function, and to declare any keyword arguments that your function needs aside from the usual wrfin, varname, timeidx, method, squeeze, cache, and meta.
    • If you would like to make your routine available as a raw computation, you will need to place an additional thin wrapper in computation.py. This thin wrapper must be decorated with an appropriate metadata decorator found in metadecorators.py (usually set_alg_metadata). If you need to write your own custom metadata decorator, write it in metadecorators.py.
    • You must provide a docstring for every function you create using Google docstring format (see Sphinx Napoleon).
    • You must provide a test for your function. See Testing.

Fixing Documentation Errors

  1. Documentation is made with Sphinx using restructured text.
  2. Python docstrings follow Google docstring format.
  3. Documentation can be found in the doc directory, along with the docstrings contained within the Python code.
  4. For documentation fixes, you can just submit a pull request with the appropriate corrections already made.

Creating New Examples

  1. Examples are made with Sphinx using restructured text.
  2. Examples are currently found in the doc directory, mostly within the basic_usage.rst and plot.rst files. Feel free to contribute more examples to these files.
  3. Unless you are drastically changing the documentation structure, you can submit a pull request with your examples without creating a GitHub issue. If you are making a large change, or are unsure about it, then go ahead and create a GitHub issue to discuss with the developers.

Setting Up Your Development Environment

We recommend using the conda package manager for your Python environments. Our recommended setup for contributing is:

  • Install miniconda

  • Install git on your system if it is not already there (install XCode command line tools on a Mac or git bash on Windows)

  • Login to your GitHub account and make a fork of the WRF-Python repository by clicking the Fork button.

  • Clone your fork of the WRF-Python repository (in the terminal on Mac/Linux, or the git shell/GUI on Windows) in the location you’d like to keep it.

    git clone https://github.com/your-user-name/wrf-python.git
    
  • Navigate to that folder in the terminal or in Anaconda Prompt if you’re on Windows.

    cd wrf-python
    
  • Connect your repository to the NCAR WRF-Python repository.

    git remote add ncar https://github.com/ncar/wrf-python.git
    
  • To create the development environment, you’ll need to run the appropriate command below for your operating system.

    OSX:

    conda env create -f build_envs/Darwin.yml
    

    Linux:

    conda env create -f build_envs/Linux.yml
    

    Win64:

    conda env create -f build_envs/Win64.yml
    

    Note: For Win64, you will also need VS2015 installed on your system.

  • Activate your conda environment.

    conda activate wrf_python_build
    
  • CD to the build_scripts directory.

    cd build_scripts
    
  • Build and install WRF-Python.

    OSX/Linux:

    sh gnu_omp.sh
    

    Windows:

    ./win_msvc_mingw_omp.bat
    
  • The previous step will build and install WRF-Python into the ‘develop’ environment. If you make changes and want to rebuild, uninstall WRF-Python by running:

    pip uninstall wrf-python
    

    Now follow the previous step to rebuild.

Code Style

Python Style Guide

The Python code in WRF-Python follows the PEP8 coding standard. All Python code submitted must pass the PEP8 checks performed by the pycodestyle code style guide utility. The utility must pass without any errors or warnings. For a tool to help automate some of the mundane formatting corrections (e.g. whitespace characters in blank lines, etc.), try the autopep8 utility.

Fortran Style Guide

At this time, we are only accepting Fortran 90 contributions, so you must convert any F77 code to F90 before contributing.

Although there is no formal style guide for Fortran contributions, Fortran code should look similar to other Fortran code in the WRF-Python fortran directory.

A summary of style notes is below:

  • Fortran 90 only.
  • Use 4 spaces for indentation, not tabs.
  • Use all capital letters for Fortran key words (e.g. IF, DO, REAL, INTENT)
  • Use all capital letters for Fortran intrinsics (e.g. MAX, MIN, SUM)
  • Use all capital letters for any constants declared as PARAMETER (e.g. RD).
  • Use all lowercase letters for variables, with ‘_’ separating words (snake case).
  • Use all lowercase letters for functions and subroutines using ‘_’ to separate words (snake case).
  • Declare your REAL variables as REAL(KIND=8), unless you really need 4-byte REALs for a specific reason.
  • Do not allocate any memory in your Fortran routine (e.g. work arrays). We will use numpy arrays to manage all memory. Instead, declare your work array (or dynamic array) as an INTENT(INOUT) argument in your function signature.
  • Avoid submitting code that uses global variables (other than for read only constants). All Fortran contributions must be threadsafe and have no side effects.
  • Place any computational constants in the wrf_constants module found in wrf_constants.f90 and put a “USE wrf_constants, ONLY : YOUR_CONSTANT,…” declaration in your function.
  • Please do not redefine constants already declared in wrf_constants.f90 (e.g. G, RD, RV, etc). Although the WRF model itself does not adhere to this, we are trying to be consistent with the constants used throughout WRF-Python.
  • Do not put any STOP statements in your code to handle errors. STOP statements will bring the entire Python interpreter down with it. Instead, use errstat and errmsg arguments to your function signature to tell Python about the error so it can throw an exception. See WETBULBCALC in wrf_rip_phys_routines.f90 for how this is handled.
  • If you know how to use OpenMP directives, feel free to add them to your routine, but this is not required.

Pull Requests

In order to submit changes, you must use GitHub to issue a pull request. Your pull requests should be made against the develop branch, since we are following GitFlow for this project.

Following a pull request, automated continuous integration tools will be run to ensure that your code follows the PEP 8 style guide, and verifies that a basic suite of unit tests run.

If your pull request is for a bug fix to an existing computational routine, then the automated unit tests will probably fail due to the new values. This is not a problem, but be sure to indicate to the developers in your GitHub issue that the tests will need to be updated.

Tests

Once you have submitted your contribution, we need a way to test your code. Currently, most of WRF-Python’s tests are written to ensure that WRF-Python produces the same result as the NCAR Command Language (NCL), which is where the code was originally derived. However, this isn’t applicable here: there is nothing to test new contributions against, and bug fixes may change the numerical result. So, we have some recommendations below for how you can create your own tests.

Sample Data

You can download sample data for Hurricane Katrina here: <contact developers> This data includes both a moving nest and a static nest version. You should create your tests with this data set (both static and moving nests), unless you are unable to reproduce a particular problem with it.

Supplying Data

If you need to supply us data for your test (due to a bug) please provide us a link to the file or upload it using Submitting Files. Unless the data is very small, do not add it to the GitHub repository.

If you can demonstrate the problem/solution with a minimal set of hand created values, then you can use that in your test.

Guidelines

The following are guidelines for testing your contributions. The developers are aware that some issues have unique needs, so you can use the GitHub issue related to your contribution to discuss testing with the developers.

  1. New computations must work for both moving nests and static nests. Generally this is not an issue unless your data makes use of lat/lon information (e.g. cross sections with lat/lon line definitions).
  2. WRF-Python’s tests can be found in the test directory.
  3. WRF-Python’s tests were written using the standard unittest package, along with numpy’s test package for the assert functions (see the sketch after this list). One reason for this is that many of the tests are dynamically generated, and other testing frameworks can’t find the tests when generated this way. If you need to use another test framework, that’s fine, just let us know in your GitHub issue.
  4. Place your test in the test/contrib directory.
  5. For new contributions, images may be sufficient to show that your code is working. Please discuss with the developers in your GitHub issue.
  6. For bug related issues, try to create a case that demonstrates the problem, and demonstrates the fix. If your problem is a crash, then proving that your code runs without crashing should be sufficient.
  7. You might need some creativity here.
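
Below is a minimal sketch of a contributed test using unittest with numpy’s assert functions (the total_pressure routine is the hypothetical example from the pure Python contribution section above):

import unittest

import numpy as np
import numpy.testing as npt

from wrf.contrib import total_pressure  # hypothetical contributed routine


class TestTotalPressure(unittest.TestCase):
    def test_hand_created_values(self):
        # Hand-created values are sufficient for a simple routine like this
        p = np.array([100.0, 200.0])
        pb = np.array([90000.0, 85000.0])
        npt.assert_allclose(total_pressure(p, pb), p + pb)


if __name__ == "__main__":
    unittest.main()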

WRF-Python Internals

WRF-Python is a collection of diagnostic and interpolation routines for WRF-ARW data. The API is kept to a minimal set of functions, since we’ve found this to be the easiest to teach to new programmers, students, and scientists. Future plans include adopting the Pangeo xarray/dask model, along with an object oriented API, but this is not yet supported as of this release.

A typical use case for a WRF-Python user is to:

  1. Open a WRF data file (or sequence of files) using NetCDF4-python or PyNIO.
  2. Compute a WRF diagnostic using wrf.getvar().
  3. Perform any additional computations using methods outside of WRF-Python.
  4. Create a plot of the output using matplotlib (basemap or cartopy) or PyNGL.

The purpose of this guide is to explain the internals of item (2) so that users can help contribute or support the computational diagnostic routines.

Overview of a wrf.getvar() Diagnostic Computation

A diagnostic computed using the wrf.getvar() function consists of the following steps:

  1. Call the appropriate ‘getter’ function based on the specified diagnostic label. This step occurs in the wrf.getvar() routine in routines.py.
  2. Extract the required variables from the NetCDF data file (or files).
  3. Compute the diagnostic using a wrapped Fortran, C, or Python routine.
  4. Convert to the desired units (if applicable).
  5. Set the metadata (if desired) and return the result as an xarray.DataArray, or return a numpy.ndarray if no metadata is required.

In the source directory, the wrf.getvar() ‘getter’ routines have a “g_” prefix for the naming convention (the “g” is for “get”).

The unit conversion is handled by a wrapt decorator found in decorators.py, and the setting of the metadata is handled by a wrapt decorator found in metadecorators.py.

Overview of Compiled Computational Routines

Currently, the compiled computational routines are written in Fortran 90 and exposed to Python using f2py. The routines have been acquired over decades, originating from NCL’s Fortran77 codebase, the WRF model itself, or other tools like RIP (Read Interpolate Plot), and do not necessarily conform to a common programming mindset (e.g. 1D arrays, 2D arrays, etc.).

The raw Fortran routines are compiled in to the wrf._wrffortran extension module, but are not particularly useful for applications in their raw form. These routines are imported in the extension.py module, where additional functionality is added to make the routines more user friendly.

The typical behavior for a fully exported Fortran routine in extension.py is:

  1. Verify that the supplied arguments are valid in shape. Although f2py does this as well, the errors thrown by f2py are confusing to users, so this step helps provide better error messages.
  2. Allocate an output array based on the output shape of the algorithm, the number of “leftmost”[1] dimensions, and the size of the data.
  3. Iterate over the leftmost [1] dimensions and compute output for argument data slices that are of the same dimensionality as the compiled algorithm.
  4. Cast the argument arrays (or array slices) in to the dtype used in the compiled routine. For WRF data, the conversion is usually from a 4-byte float to an 8-byte double.
  5. Extract the argument arrays out of xarray into numpy arrays (if applicable) and transpose them into Fortran ordering. Note that this does not actually copy the data; it simply reorders the shape tuple for the data and sets the Fortran ordering flag. This allows data pointers from the output array slices to be passed directly to the Fortran routine, which eliminates the need to copy the result to the output array.

The steps described above are handled in wrapt decorators that can be found in decorators.py. For some routines that produce multiple outputs or have atypical behavior, the special case decorators are located in specialdec.py.

[1] If the Fortran algorithm is written for a 2-dimensional array and a user passes in a 5-dimensional array, there are 3 “leftmost” dimensions.

Example

The above overviews are better explained by an example. Although there are a few exceptions (e.g. ll_to_xy), many of the routines in WRF-Python behave this way.

For this example, let’s make a routine that adds a variable’s base state to its perturbation. This is the kind of thing that you’d normally use numpy for (e.g. Ptot = PB + P), but you could do this if you wanted concurrency for this operation via OpenMP rather than using dask (in a future release of WRF-Python, both OpenMP and dask will be available).
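
In plain numpy terms, the computation is just an array addition (a sketch assuming ncfile is an open WRF output file):

from wrf import extract_vars

data = extract_vars(ncfile, 0, ("P", "PB"))
ptot = data["P"] + data["PB"]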

Fortran Code

Below is the Fortran 90 code, which will be written to a file called example.f90.

SUBROUTINE pert_add(base, pert, total, nx, ny)

!f2py threadsafe
!f2py intent(in,out) :: total

INTEGER, INTENT(IN) :: nx, ny
REAL(KIND=8), INTENT(IN), DIMENSION(nx, ny) :: base, pert
REAL(KIND=8), INTENT(OUT), DIMENSION(nx, ny) :: total

INTEGER :: i, j

!$OMP PARALLEL DO COLLAPSE(2) SCHEDULE(runtime)
DO j=1, ny
    DO i=1, nx
        total(i, j) = base(i, j) + pert(i, j)
    END DO
END DO
!$OMP END PARALLEL DO

END SUBROUTINE pert_add

This code adds the 2D base and perturbation variables and stores the result in a 2D output array. (For this example, we’re using a 2D array to help illustrate leftmost indexing below, but it could have been written using a 1D or 3D array).

At the top, there are these two f2py directives:

!f2py threadsafe
!f2py intent(in,out) :: total

The threadsafe directive tells f2py to release Python’s Global Interpreter Lock (GIL) before calling the Fortran routine. Since the Fortran code no longer uses any Python objects, the GIL can safely be released while the computation runs. This way, Python threads will continue to run, which may be important if you are using this in a webserver or in some other threaded environment like dask’s threaded scheduler.

The intent(in,out) f2py directive is used because we will be supplying a slice of the output array directly to this routine and don’t want to have to copy the result from Fortran back in to the result array. By specifying intent(in,out), we’re telling f2py to use the pointer to our output array directly.

Finally, for the OpenMP directive, the scheduler is set to use runtime scheduling via SCHEDULE(runtime). By using runtime scheduling, users can set the scheduling type within Python, but for most users the default is sufficient.

Building the Fortran Code

To build the Fortran code, the example.f90 source code should be placed in the fortran directory of the source tree.

Next, we need to update the numpy.distutils.core.Extension section of setup.py in the root directory of the source tree.

ext1 = numpy.distutils.core.Extension(
name="wrf._wrffortran",
sources=["fortran/wrf_constants.f90",
         "fortran/wrf_testfunc.f90",
         "fortran/wrf_user.f90",
         "fortran/rip_cape.f90",
         "fortran/wrf_cloud_fracf.f90",
         "fortran/wrf_fctt.f90",
         "fortran/wrf_user_dbz.f90",
         "fortran/wrf_relhl.f90",
         "fortran/calc_uh.f90",
         "fortran/wrf_user_latlon_routines.f90",
         "fortran/wrf_pvo.f90",
         "fortran/eqthecalc.f90",
         "fortran/wrf_rip_phys_routines.f90",
         "fortran/wrf_pw.f90",
         "fortran/wrf_vinterp.f90",
         "fortran/wrf_wind.f90",
         "fortran/omp.f90",
         "fortran/example.f90 # New file added here
         ]
 )

The easiest way to build your code is to use one of the build scripts located in the build_scripts directory of the source tree. These scripts contain variants for compiling with or without OpenMP support. Unless you are debugging a problem, building with OpenMP is recommended.

For this example, we’re going to assume you have already followed the instructions in Setting Up Your Development Environment. Below are the build instructions for compiling with OpenMP enabled on GCC (Linux or Mac):

pip uninstall wrf-python
cd build_scripts
sh ./gnu_omp.sh

The above command will build and install the new routine, along with the other Fortran routines. If you receive errors, then your code failed to build successfully. Otherwise, your new routine can be called as wrf._wrffortran.pert_add().

Creating a Thin Python Wrapper

The new Fortran pert_add routine will work well for a 2D slice of data. However, if you want to extend the functionality to work with arrays of any dimensionality, you’ll need to add a thin wrapper with some extra functionality that makes use of wrapt decorators.

First, let’s start by creating a very thin wrapper in Python in extension.py.

import numpy as np

from wrf._wrffortran import pert_add

.
.
.

def _pert_add(base, pert, outview=None):
    """Wrapper for pert_add.

    Located in example.f90.

    """
    if outview is None:
        outview = np.empty(base.shape[0:2], base.dtype, order="F")

    result = pert_add(base,
                      pert,
                      outview)

    return result

Despite being only a few lines of code, there is quite a bit going on in the wrapper. The first thing to note is the arguments to the wrapper function. The only arguments that we need for the wrapper are the inputs to the function and an “outview” keyword argument. At this point in the call chain, the arguments are assumed to be Fortran-ordered, in that the Fortran ordering flag is set and the shape is transposed from a usual C-ordered numpy array (the data itself remains in the same order that it was created). By passing numpy arrays with the Fortran order flag set, f2py will pass the pointer directly through to the Fortran routine.

The outview keyword argument is used during leftmost dimension indexing to send slices of the output array to the Fortran routine to be filled. If there are no leftmost dimensions (e.g. this routine is called with 2D data), then the outview argument will be None and an outview variable will be created with the same number of dimensions as the base argument. It should be created with Fortran ordering so that the pointer is directly passed to the Fortran routine.

When the actual wrf._wrffortran.pert_add() Fortran routine is called, the nx and ny arguments are omitted because f2py will supply them for us based on the shape of the numpy arrays we supply as input arguments. F2py also likes to return an array as a result, so even though we supplied outview as an array to be filled by the Fortran routine, we will still get a result from the function call that points to the same thing as outview. (We could have chosen to ignore the result and return outview instead.)

Extract and Transpose

The arrays that are being passed to the _pert_add thin wrapper need to be numpy arrays in Fortran ordering, but they won’t come this way from users. They will come in as either numpy.ndarray or xarray.DataArray and will be C-ordered. So, we need to make sure that a Fortran-ordered numpy.ndarray is what is passed to the thin wrapper.

Since this type of operation is repeated for many diagnostic functions, a decorator has been written in decorators.py for this purpose. Let’s decorate our thin wrapper with this function.

@extract_and_transpose()
def _pert_add(base, pert, outview=None):
    """Wrapper for pert_add.

    Located in example.f90.

    """
    if outview is None:
        outview = np.empty(base.shape[0:2], base.dtype, order="F")

    result = pert_add(base,
                      pert,
                      outview)

    return result

The extract_and_transpose() decorator converts any argument to _pert_add that is of type xarray.DataArray to a numpy.ndarray, then gets the numpy.ndarray.T attribute, and passes this on to the _pert_add wrapper.

Following the computation, we want the result to be returned back as the same C-ordered array types that went in as arguments, so this decorator takes the result of the computation and returns the numpy.ndarray.T from the Fortran-ordered result. This result gets passed back up the decorator chain.

Cast to Fortran Array Types

The Fortran routine expects a specific data type for the arrays (usually REAL(KIND=8)). WRF files typically store their data as 4-byte floating point numbers to save space. The arrays being passed to the wrf.decorators.extract_and_transpose() decorator need to be converted to the type used in the Fortran routine (e.g. double), then converted back to the original type (e.g. float) after the computation is finished. This is handled by the wrf.decorators.cast_type() decorator function in decorators.py.

@cast_type(arg_idxs=(0, 1))
@extract_and_transpose()
def _pert_add(base, pert, outview=None):
    """Wrapper for pert_add.

    Located in example.f90.

    """
    if outview is None:
        outview = np.empty(base.shape[0:2], base.dtype, order="F")

    result = pert_add(base,
                      pert,
                      outview)

    return result

The wrf.decorators.cast_type() decorator function takes an arg_idxs argument to specify which positional arguments need to be cast to the Fortran algorithm type, in this case arguments 0 and 1 (base and pert).

Following the computation, the result will be cast back to the original type for the input arguments (usually float), and passed back up the decorator chain.
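
Conceptually, the casting behaves like this simplified, hypothetical sketch (the real wrf.decorators.cast_type() supports more options, such as selecting the algorithm dtype):

from functools import wraps

import numpy as np

def cast_type_sketch(arg_idxs=()):
    """Hypothetical, simplified stand-in for cast_type()."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            orig_dtype = args[arg_idxs[0]].dtype    # remember the input type
            new_args = [np.asarray(arg, dtype=np.float64) if i in arg_idxs else arg
                        for i, arg in enumerate(args)]
            result = func(*new_args, **kwargs)
            return result.astype(orig_dtype)        # cast back (e.g. to float32)
        return wrapper
    return decorator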

Leftmost Dimension Indexing

The WRF-Python algorithms written in Fortran usually operate on fixed-size arrays of 1, 2, or 3 dimensions. If your input arrays have more dimensions than the Fortran algorithm expects, then we need to do the following (a simplified sketch follows the list below):

  1. Determine how many leftmost dimensions are used.
  2. Create an output array that has a shape that contains the leftmost dimensions concatenated with the shape of the result from the Fortran algorithm.
  3. Iterate over the leftmost dimensions and send slices of the input arrays to the Fortran algorithm.
  4. Along with the input arrays above, send a slice of the output array to be filled by the Fortran algorithm.
  5. Return the fully calculated output array.
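
For our 2D pert_add algorithm, the idea can be sketched as follows (a hypothetical illustration; the real decorator is more general):

import numpy as np

def iterate_leftmost_2d(algorithm, base, pert):
    """Hypothetical sketch of leftmost iteration for a 2D algorithm."""
    left_dims = base.shape[:-2]                     # e.g. (Time, bottom_top)
    out = np.empty(left_dims + base.shape[-2:], base.dtype)
    for idx in np.ndindex(*left_dims):              # every leftmost index combination
        out[idx] = algorithm(base[idx], pert[idx])  # send 2D slices; fill a 2D slice
    return out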

The wrf.decorators.left_iteration() is a general purpose decorator in decorators.py that handles most leftmost index iteration cases. (Note: Some products, like cape_2d, return multiple products in the output and don’t fall into this generic category, so those decorators can be found in specialdec.py.)

Let’s look at how this is used below.

@left_iteration(2, 2, ref_var_idx=0)
@cast_type(arg_idxs=(0, 1))
@extract_and_transpose()
def _pert_add(base, pert, outview=None):
    """Wrapper for pert_add.

    Located in example.f90.

    """
    if outview is None:
        outview = np.empty(base.shape[0:2], base.dtype, order="F")

    result = pert_add(base,
                      pert,
                      outview)

    return result

The wrf.decorators.left_iteration() decorator handles many different use cases with its arguments, but this example is one of the more common cases. The 0th positional argument (value of 2) tells the decorator that the “reference” input variable should provide at least two dimensions. This should be set to the same number of dimensions as in the Fortran algorithm, which is two in this case. Dimensions to the left of these two dimensions are considered “leftmost” dimensions.

The next positional argument (value of 2) tells the decorator that the newly created output variable should retain the shape of the reference variable’s rightmost two dimensions. This only applies when your output has fewer dimensions than the reference variable (e.g. sea level pressure uses geopotential height for the reference but produces 2D output). Since we are not reducing the output dimensions, it should be set to the same value as the previous argument.

The final keyword argument, ref_var_idx, tells the decorator to use positional argument 0 (of the _pert_add function) as the reference variable.

The result of this decorator will be the fully computed output array, which gets passed back up the chain.

Checking Argument Shapes

Before any computations can be performed, the argument shapes are checked to verify their sizes. Although f2py will catch problems at the entry point to the Fortran routine, the error thrown is confusing to users.

The wrf.decorators.check_args() decorator is used to verify that the arguments are the correct size before proceeding.

Here is how it is used:

@check_args(0, 2, (2, 2))
@left_iteration(2, 2, ref_var_idx=0)
@cast_type(arg_idxs=(0, 1))
@extract_and_transpose()
def _pert_add(base, pert, outview=None):
    """Wrapper for pert_add.

    Located in example.f90.

    """
    if outview is None:
        outview = np.empty(base.shape[0:2], base.dtype, order="F")

    result = pert_add(base,
                      pert,
                      outview)

    return result

The 0th positional argument (value of 0) tells wrf.decorators.check_args() that the 0th positional argument of _pert_add is the reference variable.

The next positional argument (value of 2) tells wrf.decorators.check_args() that it should expect at least 2 dimensions for the reference variable. This should be set to the number of dimensions used in the Fortran algorithm, which is two in this case.

The final positional argument is a tuple with the number of dimensions that are expected for each array argument. Again, this should be set to the same number of dimensions expected in the Fortran routine for each positional argument. If an argument to your wrapped function is not an array type, you can use None in the tuple to ignore it, but that is not applicable for this example.
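
For example, calling the fully decorated _pert_add with one-dimensional inputs should now fail immediately with a readable error rather than a cryptic f2py message (a hypothetical illustration):

import numpy as np

base = np.zeros(10, dtype=np.float32)   # only one dimension; two are required
pert = np.zeros(10, dtype=np.float32)

_pert_add(base, pert)   # check_args raises an informative error before any Fortran call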

Putting It All Together

The previous sections showed how the decorator chain was built up from the _pert_add function. However, when you actually make a call to _pert_add, the decorators are called from top to bottom. This means check_args is called first, then left_iteration, then cast_type, then extract_and_transpose, and finally _pert_add. After _pert_add is finished, the result is passed back up the chain and back to the user.
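
As a quick illustration, a direct call with C-ordered, 32-bit, 3D inputs exercises the whole chain (a hypothetical sketch; the shapes and values are arbitrary):

import numpy as np

base = np.random.rand(4, 100, 100).astype(np.float32)  # e.g. (Time, south_north, west_east)
pert = np.random.rand(4, 100, 100).astype(np.float32)

# check_args -> left_iteration -> cast_type -> extract_and_transpose -> pert_add
total = _pert_add(base, pert)
print(total.shape, total.dtype)   # (4, 100, 100) float32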

Now that we have a fully wrapped compiled routine, how might we use this?

Let’s make a new wrf.getvar() diagnostic called ‘total_pressure’. A similar diagnostic already exists in WRF-Python, but this is just for illustration of how to use our newly wrapped Fortran routine.

Make a ‘getter’ Function

First, we need a ‘getter’ routine that extracts the required input variables from the WRF NetCDF file(s) to perform the computation. In this case, the variables are P and PB.

The current naming convention in WRF-Python is to prefix the ‘getter’ functions with a ‘g_’, so let’s call this file g_totalpres.py and make a function get_total_pressure inside of it.

The contents of this file will be:

# g_totalpres.py

from .extension import _pert_add
from .metadecorators import copy_and_set_metadata
from .util import extract_vars

@copy_and_set_metadata(copy_varname="P", name="total_pressure",
                       description="total pressure",
                       units="Pa")
def get_total_pressure(wrfin, timeidx=0, method="cat", squeeze=True,
                       cache=None, meta=True, _key=None):
    """Return total pressure.

     This function extracts the necessary variables from the NetCDF file
     object in order to perform the calculation.

     Args:

         wrfin (:class:`netCDF4.Dataset`, :class:`Nio.NioFile`, or an \
             iterable): WRF-ARW NetCDF
             data as a :class:`netCDF4.Dataset`, :class:`Nio.NioFile`
             or an iterable sequence of the aforementioned types.

         timeidx (:obj:`int` or :data:`wrf.ALL_TIMES`, optional): The
             desired time index. This value can be a positive integer,
             negative integer, or
             :data:`wrf.ALL_TIMES` (an alias for None) to return
             all times in the file or sequence. The default is 0.

         method (:obj:`str`, optional): The aggregation method to use for
             sequences.  Must be either 'cat' or 'join'.
             'cat' combines the data along the Time dimension.
             'join' creates a new dimension for the file index.
             The default is 'cat'.

         squeeze (:obj:`bool`, optional): Set to False to prevent dimensions
             with a size of 1 from being automatically removed from the
             shape of the output. Default is True.

         cache (:obj:`dict`, optional): A dictionary of (varname, ndarray)
             that can be used to supply pre-extracted NetCDF variables to
             the computational routines.  It is primarily used for internal
             purposes, but can also be used to improve performance by
             eliminating the need to repeatedly extract the same variables
             used in multiple diagnostics calculations, particularly when
             using large sequences of files.
             Default is None.

         meta (:obj:`bool`, optional): Set to False to disable metadata and
             return :class:`numpy.ndarray` instead of
             :class:`xarray.DataArray`.  Default is True.

         _key (:obj:`int`, optional): A caching key. This is used for
             internal purposes only.  Default is None.

     Returns:
         :class:`xarray.DataArray` or :class:`numpy.ndarray`: Total pressure.
         If xarray is
         enabled and the *meta* parameter is True, then the result will be a
         :class:`xarray.DataArray` object.  Otherwise, the result will be a
         :class:`numpy.ndarray` object with no metadata.

    """

    # Get the base and perturbation pressures
    varnames = ("PB", "P")
    ncvars = extract_vars(wrfin, timeidx, varnames, method, squeeze, cache,
                          meta=False, _key=_key)

    pb = ncvars["PB"]
    p = ncvars["P"]

    total_pres = _pert_add(pb, p)

    return total_pres

This getter function extracts the PB and P (base and perturbation pressure) variables, calls the _pert_add function, and returns the result. The arguments wrfin, timeidx, method, squeeze, cache, meta, and _key are used by every getter function; you can read what they do in the docstring.

The getter function is also decorated with the wrf.metadecorators.copy_and_set_metadata() decorator. This is a general purpose decorator that is used for copying metadata from an input variable to the result. In this case, the variable to copy is ‘P’. The name parameter specifies the xarray.DataArray.name attribute for the variable (the name that will be written to a NetCDF variable). The description parameter is a brief description of the variable that will be placed in the xarray.DataArray.attrs dictionary along with the units parameter.
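
If everything is wired up correctly, the result should carry that metadata (a hypothetical session; the file name is just an example from the tutorial data):

from netCDF4 import Dataset

ncfile = Dataset("wrfout_d01_2005-08-28_00_00_00")
total_p = get_total_pressure(ncfile)

print(total_p.name)                   # total_pressure
print(total_p.attrs["description"])   # total pressure
print(total_p.attrs["units"])         # Pa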

Make Your New Diagnostic Available in wrf.getvar()

The final step is to make the new ‘total_pressure’ diagnostic available from wrf.getvar(). To do this, modifications need to be made to routines.py.

First, import your new getter routine at the top of routines.py.

from __future__ import (absolute_import, division, print_function)

from .util import (get_iterable, is_standard_wrf_var, extract_vars,
                   viewkeys, get_id)
from .g_cape import (get_2dcape, get_3dcape, get_cape2d_only,
                     get_cin2d_only, get_lcl, get_lfc, get_3dcape_only,
                     get_3dcin_only)
.
.
.
from .g_cloudfrac import (get_cloudfrac, get_low_cloudfrac,
                          get_mid_cloudfrac, get_high_cloudfrac)
from .g_totalpres import get_total_pressure

Next, update _FUNC_MAP to map your diagnostic label (‘total_pressure’) to the getter routine (get_total_pressure).

_FUNC_MAP = {"cape2d": get_2dcape,
             "cape3d": get_3dcape,
             .
             .
             .
             "high_cloudfrac": get_high_cloudfrac,
             "total_pressure": get_total_pressure
             }

Finally, update _VALID_KARGS to inform wrf.getvar() of any additional keyword argument names that this routine might use. The wrf.getvar() routine checks keyword arguments and throws an error when it gets any that are not declared in this map.

In this case, there aren’t any additional keyword arguments, so we’ll just supply an empty list.

_VALID_KARGS = {"cape2d": ["missing"],
                "cape3d": ["missing"],
                "dbz": ["do_variant", "do_liqskin"],
                "maxdbz": ["do_variant", "do_liqskin"],
                .
                .
                .
                "high_cloudfrac": ["vert_type", "low_thresh",
                                   "mid_thresh", "high_thresh"],
                "total_pressure": []
                }

After this is complete, your new routine is now available for use from wrf.getvar().
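
For example (a hypothetical session; the file name is just an example from the tutorial data):

from netCDF4 import Dataset

from wrf import getvar

ncfile = Dataset("wrfout_d01_2005-08-28_00_00_00")
total_p = getvar(ncfile, "total_pressure")   # dispatches to get_total_pressure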

Citation

WRF-Python has a Digital Object Identifier (DOI), which is a persistent identifier for web-based resources. The wrf-python DOI, when used in URL form, https://doi.org/10.5065/D6W094P1, provides a persistent link to the wrf-python Github page. The benefit of DOIs is that they are widely accepted by academic publishers as citable locators for scholarly objects.

If you author a paper that involves data analysis with wrf-python, or visualizations created with wrf-python, we would like to ask you to please cite wrf-python. This helps us better understand the impact of the software on the scientific community, which in turn helps us maintain support for the effort.

You can cite wrf-python using the following citation:

Ladwig, W. (2017). wrf-python (Version x.x.x) [Software]. Boulder, Colorado: UCAR/NCAR. https://doi.org/10.5065/D6W094P1

Note

The version number x.x.x should be set to the version of wrf-python that you are using.

License

Copyright 2016 University Corporation for Atmospheric Research

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Tutorials

NCAR occasionally provides tutorials for wrf-python throughout the year.

Below are the links to the upcoming and past tutorials.

Upcoming Tutorials

WRF Users’ Workshop 2019

Welcome WRF-Python Tutorial attendees!

If you wish to actively participate in the tutorial, please bring your own laptop. Due to time constraints, the instructions below should be completed prior to arriving at the tutorial.

I will be executing the same cells as the student workbook, so if you prefer to sit and watch, that’s OK too. Following the tutorial, I will upload the instructor slides in to the same GitHub location as the student workbook if you want to try it out later.

Prerequisites

This tutorial assumes that you have basic knowledge of how to type commands in to a command terminal using your preferred operating system. You should know some basic directory commands like cd, mkdir, cp, mv.

Regarding Python, to understand the examples in this tutorial, you should have some experience with Python basics. Below is a list of some Python concepts that you will see in the examples, but don’t worry if you aren’t familiar with everything.

  • Opening a Python interpreter and entering commands.
  • Importing packages via the import statement.
  • Familiarity with some of the basic Python types: str, list, tuple, dict, bool, float, int, None.
  • Creating a list, tuple, or dict with “[ ]”, “( )”, “{ }” syntax (e.g. my_list = [1,2,3,4,5]).
  • Accessing dict/list/tuple items with the “x[ ]” syntax (e.g. my_list_item = my_list[0]).
  • Slicing str/list/tuple with the “:” syntax (e.g. my_slice = my_list[1:3]).
  • Using object methods and attributes with the “x.y” syntax (e.g. my_list.append(6)).
  • Calling functions (e.g. result = some_function(x, y))
  • Familiarity with numpy would be helpful, as only a very brief introduction is provided.
  • Familiarity with matplotlib would be helpful, as only a very brief introduction is provided.

If you are completely new to Python, that shouldn’t be a problem, since most of the examples consist of basic container types and function calls. It would be helpful to look at some introductory material before arriving at the tutorial. If you’ve programmed before, picking up Python isn’t too difficult.

Here are some links:

https://www.learnpython.org/

https://developers.google.com/edu/python/

Step 1: Open a Command Terminal

To begin, you will first need to know how to open a command line terminal for your operating system.

For Windows:

WINDOWS + r
type cmd in the run window

For Mac:

Finder -> Applications -> Utilities -> Terminal

For Linux:

Try one of the following:

CTRL + ALT + T
CTRL + ALT + F2

Step 2: Download Miniconda

For this tutorial, you will need to download and install Miniconda. We are going to use Python 3.7, but it will also work with Python 2.7.

Please use the appropriate link below to download Miniconda for your operating system.

Note

64-bit OS recommended

Win64

Mac

Linux

For more information, see: https://conda.io/miniconda.html

Note

What is Miniconda?

If you have used the Anaconda distribution for Python before, then you will be familiar with Miniconda. The Anaconda Python distribution includes numerous scientific packages out of the box, which can be difficult for users to build and install. More importantly, Anaconda includes the conda package manager.

The conda package manager is a utility (similar to yum or apt-get) that installs packages from a repository of pre-compiled Python packages. These repositories are called channels. Conda makes it easy for Python users to install and uninstall packages, and also can be used to create isolated Python environments (more on that later).

Miniconda is a bare bones implementation of Anaconda and only includes the conda package manager. Since we are going to use the conda-forge channel to install our scientific packages, Miniconda avoids any complications between packages provided by Anaconda and conda-forge.

Step 3: Install Miniconda

Windows:

  1. Browse to the directory where you downloaded Miniconda3-latest-Windows-x86_64.exe.
  2. Double click on Miniconda3-latest-Windows-x86_64.exe.
  3. Follow the instructions.
  4. Restart your command terminal.

Mac and Linux:

For Mac and Linux, the installer is a bash script.

  1. Using a terminal, you need to execute the bash shell script that you downloaded by doing:

    bash /path/to/Miniconda3-latest-MacOSX-x86_64.sh [Mac]
    
    bash /path/to/Miniconda3-latest-Linux-x86_64.sh [Linux]
    
  2. Follow the instructions.

  3. At the end of the installation, it will ask if you “wish the installer to initialize Miniconda3”. If you are unsure what to do, you should say “yes”. If you say “no”, we’re going to assume you know what you are doing.

    If you said “yes”, then once you restart your shell, the “base” conda environment will be activated by default and the miniconda3 Python will be found instead of the system Python when you type the “python” command. If you want to undo this later, then you can run the following “conda” command:

    conda config --set auto_activate_base false
    

    Note that this will not affect your ability to run “conda” commands; it will just prevent the miniconda3 Python from overriding any existing Python environments you may already have on your machine.

  4. Restart your command terminal.

  5. [Linux and Mac Users Only] Miniconda only works with bash. If bash is not your default shell, then you need to activate the bash shell by typing the following in to your command terminal:

    bash
    
  6. Verify that your system is using the correct Python interpreter by typing the following in to your command terminal:

    which python
    

    You should see the path to your miniconda installation. If not, see the note below.

    Note

    If you have already installed another Python distribution, like Enthought Canopy, you will need to comment out any PATH entries for that distribution in your .bashrc or .bash_profile. Otherwise, your shell environment may pick the wrong Python installation.

    If bash is not your default shell type, and the PATH variable has been set in .bash_profile by the miniconda installer, try executing “bash -l” instead of the “bash” command in step 5.

Step 4: Set Up the Conda Environment

If you are new to the conda package manager, one of the nice features of conda is that you can create isolated Python environments that prevent package incompatibilities. This is similar to the virtualenv package that some Python users may be familiar with. However, conda is not compatible with virtualenv, so only use conda environments when working with conda.

The name of our conda environment for this tutorial is: tutorial_2019.

Follow the instructions below to create the tutorial_2019 environment.

  1. Open a command terminal if you haven’t done so.

  2. [Linux and Mac Users Only] The conda package manager only works with bash, so if bash is not your current shell, type:

    bash
    
  3. Add the conda-forge channel to your conda package manager.

    Type or copy the command below in to your command terminal. You should run this command even if you have already done it in the past. This will ensure that conda-forge is set as the highest priority channel.

    conda config --add channels conda-forge
    

    Note

    Conda-forge is a community driven collection of packages that are continually tested to ensure compatibility. We highly recommend using conda-forge when working with conda. See https://conda-forge.github.io/ for more details on this excellent project.

  4. Create the conda environment for the tutorial.

    Type or copy this command in to your command terminal:

    conda create -n tutorial_2019 python=3.7 matplotlib cartopy netcdf4 jupyter git ffmpeg wrf-python
    

    Type “y” when prompted. It will take several minutes to install everything.

    This command creates an isolated Python environment named tutorial_2019, and installs the python interpreter, matplotlib, cartopy, netcdf4, jupyter, git, ffmpeg, and wrf-python packages.

Note

When the installation completes, your command terminal might post a message similar to:

If this is your first install of dbus, automatically load on login with:

mkdir -p ~/Library/LaunchAgents
cp /path/to/miniconda3/envs/tutorial_test/org.freedesktop.dbus-session.plist ~/Library/LaunchAgents/
launchctl load -w ~/Library/LaunchAgents/org.freedesktop.dbus-session.plist

This is indicating that the dbus package can be set up to automatically load on login. You can either ignore this message or type in the commands as indicated on your command terminal. The tutorial should work fine in either case.

  5. Activate the conda environment.

    To activate the tutorial_2019 Python environment, type the following in to the command terminal:

    For Linux and Mac (using bash):

    conda activate tutorial_2019
    

    For Windows:

    activate tutorial_2019
    

    You should see (tutorial_2019) on your command prompt.

    To deactivate your conda environment, type the following in to the command terminal:

    For Linux and Mac:

    conda deactivate
    

    For Windows:

    deactivate tutorial_2019
    
Step 5: Download the Student Workbook

The student workbook for the tutorial is available on GitHub. The tutorial_2019 conda environment includes the git application needed to download the repository.

These instructions download the tutorial in to your home directory. If you want to place the tutorial in to another directory, we’re going to assume you know how to do this yourself.

To download the student workbook, follow these instructions:

  1. Activate the tutorial_2019 conda environment following the instructions in the previous step (conda activate tutorial_2019 or activate tutorial_2019).

  2. Change your working directory to the home directory by typing the following command in to the command terminal:

    For Linux and Mac:

    cd ~
    

    For Windows:

    cd %HOMEPATH%
    
  3. Download the git repository for the tutorial by typing the following in to the command terminal:

    git clone --recursive https://github.com/NCAR/wrf_python_tutorial.git
    
  4. There may be additional changes to the tutorial after you have downloaded it. To pull down the latest changes, type the following in to the command terminal:

    For Linux and Mac:

    conda activate tutorial_2019
    
    cd ~/wrf_python_tutorial/wrf_workshop_2019
    
    git pull
    

    For Windows:

    activate tutorial_2019
    
    cd %HOMEPATH%\wrf_python_tutorial\wrf_workshop_2019
    
    git pull
    

    Note

    If you try the “git pull” command and it returns an error indicating that you have made changes to the workbook, this is probably because you ran the workbook and it contains the cell output. To fix this, first do a checkout of the workbook, then do the pull.

    git checkout -- .
    git pull
    
Step 6: Verify Your Environment

Verifying that your environment is correct involves importing a few packages and checking for errors (you may see some warnings for matplotlib or xarray, but you can safely ignore these).

  1. Activate the tutorial_2019 conda environment if it isn’t already active (see instructions above).

  2. Open a python terminal by typing the following in to the command terminal:

    python
    
  3. Now type the following in to the Python interpreter:

    >>> import netCDF4
    >>> import matplotlib
    >>> import xarray
    >>> import wrf
    
  4. You can exit the Python interpreter using CTRL + D

Step 7: Obtain WRF Output Files

The wrf_python_tutorial git repository linked to in Step 5 includes a directory containing several WRF-ARW data files which will be used for examples during the tutorial.

You also have the option of using your own data files for the tutorial by modifying the first Jupyter Notebook cell to point to your data set. However, there is no guarantee that every cell in your workbook will work without some modifications (e.g. cross section lines will be drawn outside of your domain).

  1. If you have recently cloned the wrf_python_tutorial git repository, then you should have a “wrf_tutorial_data” directory at the root level of the “wrf_python_tutorial” directory.

  2. If this directory does not exist, try running the following commands from within the “wrf_python_tutorial” directory to update your local copy of the git repository:

    git checkout -- .
    git pull
    git submodule init
    git submodule update
    cd wrf_tutorial_data
    git checkout -- .
    
  3. Verify that you have three WRF output files in the “wrf_tutorial_data” directory:

    $ ls wrf_tutorial_data
    wrfout_d01_2005-08-28_00_00_00
    wrfout_d01_2005-08-28_12_00_00
    wrfout_d01_2005-08-29_00_00_00
    
Getting Help

If you experience problems during this installation, please send a question to the Google Group support mailing list.

We look forward to seeing you at the tutorial!

Past Tutorials

WRF-Python and VAPOR Workshop 2018 (Boise State University)

The Department of Geosciences at Boise State University is partnering with staff from the National Center for Atmospheric Research (NCAR) to host a free, 2-day workshop in the Environmental Research Building (ERB) lab 2104 at Boise State University on September 26-27, 2018. The tutorial will be centered on the WRF-Python and VAPOR tools for analyzing and visualizing data from the Weather Research and Forecasting (WRF) regional weather and climate model.

Users must be registered to attend this tutorial (see Registration).

Location

September 26-27, 2018 9:00 AM - 4:00 PM

Boise State University, Environmental Research Building (ERB) lab #2104.

WRF-Python Overview

WRF-Python is a collection of diagnostic and interpolation routines for use with output from the Weather Research and Forecasting (WRF-ARW) Model. The package provides over 30 diagnostic calculations, several interpolation routines, and utilities to help with plotting via cartopy, basemap, or PyNGL. The functionality is similar to what is provided by the NCL WRF package.

Note

WRF-Python is NOT a tool for running the WRF-ARW model using Python.

This tutorial provides an introduction to wrf-python. The tutorial is beginner friendly for new users of wrf-python, but this is NOT an introduction to the Python programming language (see Prerequisites). Due to limited seating, if you do not have any previous experience with Python, please do not register for this tutorial.

Note

For online training that provides an introduction to the Python programming language itself, please see the Unidata Python Training Page.

Computers will be provided, but feel free to use your own laptop if you prefer. We will be covering how to install wrf-python via conda as part of the tutorial.

Students are encouraged to bring their own data sets, but data will be provided if this is not an option. Students will be provided a jupyter notebook workbook which can be modified to accommodate their data.

Topics include:

  • How to install wrf-python via conda
  • A brief introduction to jupyter notebook
  • Overview of WRF data files
  • WRF-Python basics
  • Plotting with cartopy
  • Overview of OpenMP features and other performance tips
  • Open lab for students

Registration

Please register prior to September 19, 2018. The registration form is here:

Registration Form

Registration consists of a brief survey, which will help give the instructor a brief overview of your background and will help tailor the tutorial to your expectations.

Prerequisites

This tutorial assumes that you have basic knowledge of how to type commands in to a terminal window using your preferred operating system. You should know some basic directory commands like cd, mkdir, cp, mv.

This tutorial assumes that you have prior experience programming in Python. Below is a list of some Python concepts that you will see in the examples, but don’t worry if you aren’t familiar with everything.

  • Opening a Python interpreter and entering commands.
  • Importing packages via the import statement.
  • Familiarity with some of the basic Python types: str, list, tuple, dict, bool, float, int, None.
  • Creating a list, tuple, or dict with “[ ]”, “( )”, “{ }” syntax (e.g. my_list = [1,2,3,4,5]).
  • Accessing dict/list/tuple items with the “x[ ]” syntax (e.g. my_list_item = my_list[0]).
  • Slicing str/list/tuple with the “:” syntax (e.g. my_slice = my_list[1:3]).
  • Using object methods and attributes with the “x.y” syntax (e.g. my_list.append(6)).
  • Calling functions (e.g. result = some_function(x, y))
  • Familiarity with numpy would be helpful, as only a very brief introduction is provided.
  • Familiarity with matplotlib would be helpful, as only a very brief introduction is provided.

Instructions for Computer Lab Installation
Step 1: Download Miniconda

For this tutorial, you will need to download and install Miniconda. We are going to use Python 3.6+.

Please use the appropriate link below to download Miniconda for your operating system.

Note

64-bit OS recommended

Win64

Mac

Linux

For more information, see: https://conda.io/miniconda.html

Step 2: Install Miniconda

Windows:

  1. Browse to the directory where you downloaded Miniconda3-latest-Windows-x86_64.exe.
  2. Double click on Miniconda3-latest-Windows-x86_64.exe.
  3. Follow the instructions.
  4. For Windows 10, use the Anaconda command prompt found under the Miniconda3 menu (Start Menu -> Miniconda3 -> Anaconda Prompt). Otherwise, open a regular command prompt.

Mac and Linux:

For Mac and Linux, the installer is a bash script.

  1. Using a terminal, you need to execute the bash shell script that you downloaded by doing:

    bash /path/to/Miniconda3-latest-MacOSX-x86_64.sh [Mac]
    
    bash /path/to/Miniconda3-latest-Linux-x86_64.sh [Linux]
    
  2. Follow the instructions.

  3. At the end of the installation, it will ask if you want to add the miniconda3 path to your bash environment. If you are unsure what to do, you should say “yes”. If you say “no”, we’re going to assume you know what you are doing.

    If you said “yes”, then once you restart your shell, the miniconda3 Python will be found instead of the system Python when you type the “python” command. If you want to undo this later, then you can edit either ~/.bash_profile or ~/.bashrc (depending on OS used) and comment out the line that looks similar to:

    # added by Miniconda3 x.x.x installer
    export PATH="/path/to/miniconda3/bin:$PATH"
    
  4. Restart your command terminal.

  5. [Linux and Mac Users Only] Miniconda only works with bash. If bash is not your default shell, then you need to activate the bash shell by typing the following in to your command terminal:

    bash
    
  6. Verify that your system is using the correct Python interpreter by typing the following in to your command terminal:

    which python
    

    You should see the path to your miniconda installation. If not, see the note below.

    Note

    If you have already installed another Python distribution, like Enthought Canopy, you will need to comment out any PATH entries for that distribution in your .bashrc or .bash_profile. Otherwise, your shell environment may pick the wrong Python installation.

    If bash is not your default shell type, and the PATH variable has been set in .bash_profile by the miniconda installer, try executing “bash -l” instead of the “bash” command in step 5.

Step 3: Set Up the Conda Environment

If you are new to the conda package manager, one of the nice features of conda is that you can create isolated Python environments that prevent package incompatibilities. This is similar to the virtualenv package that some Python users may be familiar with. However, conda is not compatible with virtualenv, so only use conda environments when working with conda.

The name of our conda environment for this tutorial is: tutorial_backup.

Follow the instructions below to create the tutorial_backup environment.

  1. Open a command terminal if you haven’t done so.

  2. [Linux and Mac Users Only] The conda package manager only works with bash, so if bash is not your current shell, type:

    bash
    
  3. Add the conda-forge channel to your conda package manager.

    Type or copy the command below in to your command terminal. You should run this command even if you have already done it in the past. This will ensure that conda-forge is set as the highest priority channel.

    conda config --add channels conda-forge
    

    Note

    Conda-forge is a community driven collection of packages that are continually tested to ensure compatibility. We highly recommend using conda-forge when working with conda. See https://conda-forge.github.io/ for more details on this excellent project.

  4. Create the backup conda environment for the tutorial.

    Students will create a conda environment during the tutorial, but if they run in to problems, we’re going to create a backup environment.

    Type or copy this command in to your command terminal:

    conda create -n tutorial_backup python=3.6 matplotlib cartopy netcdf4 jupyter git ffmpeg wrf-python
    

    Type “y” when prompted. It will take several minutes to install everything.

    This command creates an isolated Python environment named tutorial_backup, and installs the python interpreter, matplotlib, cartopy, netcdf4, jupyter, git, ffmpeg, and wrf-python packages.

Note

When the installation completes, your command terminal might post a message similar to:

If this is your first install of dbus, automatically load on login with:

mkdir -p ~/Library/LaunchAgents
cp /path/to/miniconda3/envs/tutorial_test/org.freedesktop.dbus-session.plist ~/Library/LaunchAgents/
launchctl load -w ~/Library/LaunchAgents/org.freedesktop.dbus-session.plist

This is indicating that the dbus package can be set up to automatically load on login. You can either ignore this message or type in the commands as indicated on your command terminal. The tutorial should work fine in either case.

  5. Activate the conda environment.

    To activate the tutorial_backup Python environment, type the following in to the command terminal:

    For Linux and Mac (using bash):

    source activate tutorial_backup
    

    For Windows:

    activate tutorial_backup
    

    You should see (tutorial_backup) on your command prompt.

    To deactivate your conda environment, type the following in to the command terminal:

    For Linux and Mac:

    source deactivate
    

    For Windows:

    deactivate tutorial_backup
    
Step 4: Download the Student Workbook

The student workbook for the tutorial is available on GitHub. The tutorial_backup conda environment includes the git application needed to download the repository.

These instructions download the tutorial in to your home directory. If you want to place the tutorial in to another directory, we’re going to assume you know how to do this yourself.

To download the student workbook, follow these instructions:

  1. Activate the tutorial_backup conda environment following the instructions in the previous step (source activate tutorial_backup or activate tutorial_backup).

  2. Change your working directory to the home directory by typing the following command in to the command terminal:

    For Linux and Mac:

    cd ~
    

    For Windows:

    cd %HOMEPATH%
    
  3. Download the git repository for the tutorial by typing the following in to the command terminal:

    git clone https://github.com/NCAR/wrf_python_tutorial.git
    
  4. There may be additional changes to the tutorial after you have downloaded it. To pull down the latest changes, type the following in to the command terminal:

    For Linux and Mac:

    source activate tutorial_backup
    
    cd ~/wrf_python_tutorial/boise_workshop_2018
    
    git pull
    

    For Windows:

    activate tutorial_backup
    
    cd %HOMEPATH%\wrf_python_tutorial\boise_workshop_2018
    
    git pull
    

    Note

    If you try the “git pull” command and it returns an error indicating that you have made changes to the workbook, this is probably because you ran the workbook and it contains the cell output. To fix this, first do a checkout of the workbook, then do the pull.

    git checkout -- .
    git pull
    
Step 5: Verify Your Environment

Verifying that your environment is correct involves importing a few packages and checking for errors (you may see some warnings for matplotlib or xarray, but you can safely ignore these).

  1. Activate the tutorial_backup conda environment if it isn’t already active (see instructions above).

  2. Open a python terminal by typing the following in to the command terminal:

    python
    
  3. Now type the following in to the Python interpreter:

    >>> import netCDF4
    >>> import matplotlib
    >>> import xarray
    >>> import wrf
    
  4. You can exit the Python interpreter using CTRL + D

Step 6: Install WRF Output Files

A link will be provided in an email prior to the tutorial for the WRF-ARW data files used for the examples.

  1. The link in the email should take you to a location on an Amazon cloud drive.
  2. If you hover your mouse over the wrf_tutorial_data.zip file, you’ll see an empty check box appear next to the file name. Click this check box.
  3. At the bottom of the screen, you’ll see a Download button next to a cloud icon. Click this button to start the download.
  4. The download was most likely placed in to your ~/Downloads folder [%HOMEPATH%\Downloads for Windows]. Using your preferred method of choice for unzipping files, unzip this file in to your home directory. Your data should now be in ~/wrf_tutorial_data [%HOMEPATH%\wrf_tutorial_data for Windows].
  5. Verify that you have three WRF output files in that directory.

WRF Users’ Workshop 2018

Welcome WRF-Python Tutorial attendees!

If you wish to actively participate in the tutorial, please bring your own laptop. Due to time constraints, the instructions below should be completed prior to arriving at the tutorial.

I will be executing the same cells as the student workbook, so if you prefer to sit and watch, that’s OK too. Following the tutorial, I will upload the instructor slides in to the same GitHub location as the student workbook if you want to try it out later.

Prerequisites

This tutorial assumes that you have basic knowledge of how to type commands in to a command terminal using your preferred operating system. You should know some basic directory commands like cd, mkdir, cp, mv.

Regarding Python, to understand the examples in this tutorial, you should have some experience with Python basics. Below is a list of some Python concepts that you will see in the examples, but don’t worry if you aren’t familiar with everything.

  • Opening a Python interpreter and entering commands.
  • Importing packages via the import statement.
  • Familiarity with some of the basic Python types: str, list, tuple, dict, bool, float, int, None.
  • Creating a list, tuple, or dict with “[ ]”, “( )”, “{ }” syntax (e.g. my_list = [1,2,3,4,5]).
  • Accessing dict/list/tuple items with the “x[ ]” syntax (e.g. my_list_item = my_list[0]).
  • Slicing str/list/tuple with the “:” syntax (e.g. my_slice = my_list[1:3]).
  • Using object methods and attributes with the “x.y” syntax (e.g. my_list.append(6)).
  • Calling functions (e.g. result = some_function(x, y))
  • Familiarity with numpy would be helpful, as only a very brief introduction is provided.
  • Familiarity with matplotlib would be helpful, as only a very brief introduction is provided.

If you are completely new to Python, that shouldn’t be a problem, since most of the examples consist of basic container types and function calls. It would be helpful to look at some introductory material before arriving at the tutorial. If you’ve programmed before, picking up Python isn’t too difficult.

Here are some links:

https://www.learnpython.org/

https://developers.google.com/edu/python/

Step 1: Open a Command Terminal

To begin, you will first need to know how to open a command line terminal for your operating system.

For Windows:

WINDOWS + r
type cmd in the run window

For Mac:

Finder -> Applications -> Utilities -> Terminal

For Linux:

Try one of the following:

CTRL + ALT + T
CTRL + ALT + F2

Step 2: Download Miniconda

For this tutorial, you will need to download and install Miniconda. We are going to use Python 3.6, but it will also work with Python 2.7.

Please use the appropriate link below to download Miniconda for your operating system.

Note

64-bit OS recommended

Win64

Mac

Linux

For more information, see: https://conda.io/miniconda.html

Note

What is Miniconda?

If you have used the Anaconda distribution for Python before, then you will be familiar with Miniconda. The Anaconda Python distribution includes numerous scientific packages out of the box, which can be difficult for users to build and install. More importantly, Anaconda includes the conda package manager.

The conda package manager is a utility (similar to yum or apt-get) that installs packages from a repository of pre-compiled Python packages. These repositories are called channels. Conda makes it easy for Python users to install and uninstall packages, and also can be used to create isolated Python environments (more on that later).

Miniconda is a bare bones implementation of Anaconda and only includes the conda package manager. Since we are going to use the conda-forge channel to install our scientific packages, Miniconda avoids any complications between packages provided by Anaconda and conda-forge.

Step 3: Install Miniconda

Windows:

  1. Browse to the directory where you downloaded Miniconda3-latest-Windows-x86_64.exe.
  2. Double click on Miniconda3-latest-Windows-x86_64.exe.
  3. Follow the instructions.
  4. Restart your command terminal.

Mac and Linux:

For Mac and Linux, the installer is a bash script.

  1. Using a terminal, you need to execute the bash shell script that you downloaded by doing:

    bash /path/to/Miniconda3-latest-MacOSX-x86_64.sh [Mac]
    
    bash /path/to/Miniconda3-latest-Linux-x86_64.sh [Linux]
    
  2. Follow the instructions.

  3. At the end of the installation, it will ask if you want to add the miniconda3 path to your bash environment. If you are unsure what to do, you should say “yes”. If you say “no”, we’re going to assume you know what you are doing.

    If you said “yes”, then once you restart your shell, the miniconda3 Python will be found instead of the system Python when you type the “python” command. If you want to undo this later, then you can edit either ~/.bash_profile or ~/.bashrc (depending on OS used) and comment out the line that looks similar to:

    # added by Miniconda3 x.x.x installer
    export PATH="/path/to/miniconda3/bin:$PATH"
    
  4. Restart your command terminal.

  5. [Linux and Mac Users Only] Miniconda only works with bash. If bash is not your default shell, then you need to activate the bash shell by typing the following in to your command terminal:

    bash
    
  6. Verify that your system is using the correct Python interpreter by typing the following in to your command terminal:

    which python
    

    You should see the path to your miniconda installation. If not, see the note below.

    Note

    If you have already installed another Python distribution, like Enthought Canopy, you will need to comment out any PATH entries for that distribution in your .bashrc or .bash_profile. Otherwise, your shell environment may pick the wrong Python installation.

    If bash is not your default shell type, and the PATH variable has been set in .bash_profile by the miniconda installer, try executing “bash -l” instead of the “bash” command in step 5.

Step 4: Set Up the Conda Environment

If you are new to the conda package manager, one of the nice features of conda is that you can create isolated Python environments that prevent package incompatibilities. This is similar to the virtualenv package that some Python users may be familiar with. However, conda is not compatible with virtualenv, so only use conda environments when working with conda.

The name of our conda environment for this tutorial is: tutorial_2018.

Follow the instructions below to create the tutorial_2018 environment.

  1. Open a command terminal if you haven’t done so.

  2. [Linux and Mac Users Only] The conda package manager only works with bash, so if bash is not your current shell, type:

    bash
    
  3. Add the conda-forge channel to your conda package manager.

    Type or copy the command below in to your command terminal. You should run this command even if you have already done it in the past. This will ensure that conda-forge is set as the highest priority channel.

    conda config --add channels conda-forge
    

    Note

    Conda-forge is a community driven collection of packages that are continually tested to ensure compatibility. We highly recommend using conda-forge when working with conda. See https://conda-forge.github.io/ for more details on this excellent project.

  4. Create the conda environment for the tutorial.

    Type or copy this command in to your command terminal:

    conda create -n tutorial_2018 python=3.6 matplotlib cartopy netcdf4 jupyter git ffmpeg wrf-python
    

    Type “y” when prompted. It will take several minutes to install everything.

    This command creates an isolated Python environment named tutorial_2018, and installs the python interpreter, matplotlib, cartopy, netcdf4, jupyter, git, ffmpeg, and wrf-python packages.

Note

When the installation completes, your command terminal might post a message similar to:

If this is your first install of dbus, automatically load on login with:

mkdir -p ~/Library/LaunchAgents
cp /path/to/miniconda3/envs/tutorial_test/org.freedesktop.dbus-session.plist ~/Library/LaunchAgents/
launchctl load -w ~/Library/LaunchAgents/org.freedesktop.dbus-session.plist

This is indicating that the dbus package can be set up to automatically load on login. You can either ignore this message or type in the commands as indicated on your command terminal. The tutorial should work fine in either case.

  5. Activate the conda environment.

    To activate the tutorial_2018 Python environment, type the following in to the command terminal:

    For Linux and Mac (using bash):

    source activate tutorial_2018
    

    For Windows:

    activate tutorial_2018
    

    You should see (tutorial_2018) on your command prompt.

    To deactivate your conda environment, type the following in to the command terminal:

    For Linux and Mac:

    source deactivate
    

    For Windows:

    deactivate tutorial_2018
    
Step 5: Download the Student Workbook

The student workbook for the tutorial is available on GitHub. The tutorial_2018 conda environment includes the git application needed to download the repository.

These instructions download the tutorial in to your home directory. If you want to place the tutorial in to another directory, we’re going to assume you know how to do this yourself.

To download the student workbook, follow these instructions:

  1. Activate the tutorial_2018 conda environment following the instructions in the previous step (source activate tutorial_2018 or activate tutorial_2018).

  2. Change your working directory to the home directory by typing the following command in to the command terminal:

    For Linux and Mac:

    cd ~
    

    For Windows:

    cd %HOMEPATH%
    
  3. Download the git repository for the tutorial by typing the following in to the command terminal:

    git clone https://github.com/NCAR/wrf_python_tutorial.git
    
  4. There may be additional changes to the tutorial after you have downloaded it. To pull down the latest changes, type the following in to the command terminal:

    For Linux and Mac:

    source activate tutorial_2018
    
    cd ~/wrf_python_tutorial/wrf_workshop_2018
    
    git pull
    

    For Windows:

    activate tutorial_2018
    
    cd %HOMEPATH%\wrf_python_tutorial\wrf_workshop_2018
    
    git pull
    

    Note

    If you try the “git pull” command and it returns an error indicating that you have made changes to the workbook, this is probably because you ran the workbook and it contains the cell output. To fix this, first do a checkout of the workbook, then do the pull.

    git checkout -- .
    git pull
    
Step 6: Verify Your Environment

Verifying that your environment is correct involves importing a few packages and checking for errors (you may see some warnings for matplotlib or xarray, but you can safely ignore these).

  1. Activate the tutorial_2018 conda environment if it isn’t already active (see instructions above).

  2. Open a python terminal by typing the following in to the command terminal:

    python
    
  3. Now type the following in to the Python interpreter:

    >>> import netCDF4
    >>> import matplotlib
    >>> import xarray
    >>> import wrf
    
  4. You can exit the Python interpreter using CTRL + D

Step 7: Obtain WRF Output Files

A link will be provided in an email prior to the tutorial for the WRF-ARW data files used for the examples. If you did not receive this email, the link will also be provided at the tutorial itself.

You also have the option of using your own data files for the tutorial by modifying the first Jupyter Notebook cell to point to your data set. However, there is no guarantee that every cell in your workbook will work without some modifications (e.g. cross section lines will be drawn outside of your domain).

  1. The link in the email should take you to a location on an Amazon cloud drive.
  2. If you hover your mouse over the wrf_tutorial_data.zip file, you’ll see an empty check box appear next to the file name. Click this check box.
  3. At the bottom of the screen, you’ll see a Download button next to a cloud icon. Click this button to start the download.
  4. The download was most likely placed in to your ~/Downloads folder [%HOMEPATH%\Downloads for Windows]. Using your preferred method of choice for unzipping files, unzip this file in to your home directory. Your data should now be in ~/wrf_tutorial_data [%HOMEPATH%\wrf_tutorial_data for Windows].
  5. Verify that you have three WRF output files in that directory.

Getting Help

If you experience problems during this installation, please send a question to the Google Group support mailing list.

We look forward to seeing you at the tutorial!

WRF-Python Tutorial 2018

NCAR will be providing a four hour tutorial for wrf-python on Wednesday, March 7, 2018 from 8 AM to 12 PM. The tutorial is free, but seating is limited to only 16 students, so registration is required.

The tutorial will take place at NCAR’s corporate training center in Boulder, Colorado.

Corporate Technical Training Center, 3085 Center Green Drive, Building CG-2, Room #3024, Boulder, Colorado

Overview

WRF-Python is a collection of diagnostic and interpolation routines for use with output from the Weather Research and Forecasting (WRF-ARW) Model. The package provides over 30 diagnostic calculations, several interpolation routines, and utilities to help with plotting via cartopy, basemap, or PyNGL. The functionality is similar to what is provided by the NCL WRF package.

Note

WRF-Python is NOT a tool for running the WRF-ARW model using Python.

This tutorial provides an introduction to wrf-python. The tutorial is beginner friendly for new users of wrf-python, but this is NOT an introduction to the Python programming language (see Prerequisites). Due to limited seating, if you do not have any previous experience with Python, please do not register for this tutorial.

Note

For online training that provides an introduction to the Python programming language itself, please see the Unidata Python Training Page.

Computers will be provided, but feel free to use your own laptop if you prefer. We will be covering how to install wrf-python via conda as part of the tutorial.

Students are encouraged to bring their own data sets, but data will be provided if this is not an option. Students will be provided a jupyter notebook workbook which can be modified to accommodate their data.

Topics include:

  • How to install wrf-python via conda
  • A brief introduction to jupyter notebook
  • Overview of WRF data files
  • WRF-Python basics
  • Plotting with cartopy
  • Overview of OpenMP features and other performance tips
  • Open lab for students

Registration

Please register prior to February 21, 2018. The registration form is here:

Registration Form

Registration consists of a brief survey, which will help give the instructor a brief overview of your background and will help tailor the tutorial to your expectations.

Prerequisites

This tutorial assumes that you have basic knowledge of how to type commands in to a terminal window using your preferred operating system. You should know some basic directory commands like cd, mkdir, cp, mv.

This tutorial assumes that you have prior experience programming in Python. Below is a list of some Python concepts that you will see in the examples, but don’t worry if you aren’t familiar with everything.

  • Opening a Python interpreter and entering commands.
  • Importing packages via the import statement.
  • Familiarity with some of the basic Python types: str, list, tuple, dict, bool, float, int, None.
  • Creating a list, tuple, or dict with “[ ]”, “( )”, “{ }” syntax (e.g. my_list = [1,2,3,4,5]).
  • Accessing dict/list/tuple items with the “x[ ]” syntax (e.g. my_list_item = my_list[0]).
  • Slicing str/list/tuple with the “:” syntax (e.g. my_slice = my_list[1:3]).
  • Using object methods and attributes with the “x.y” syntax (e.g. my_list.append(6)).
  • Calling functions (e.g. result = some_function(x, y))
  • Familiarity with numpy would be helpful, as only a very brief introduction is provided.
  • Familiarity with matplotlib would be helpful, as only a very brief introduction is provided.

WRF Users’ Workshop 2017

Welcome wrf-python tutorial attendees!

The instructions below should be completed prior to arriving at the tutorial. There will not be enough time to do this during the tutorial.

Prerequisites

This tutorial assumes that you have basic knowledge of how to type commands in to a command terminal using your preferred operating system. You should know some basic directory commands like cd, mkdir, cp, mv.

Regarding Python, to understand the examples in this tutorial, you should have some experience with Python basics. Below is a list of some Python concepts that you will see in the examples, but don’t worry if you aren’t familiar with everything.

  • Opening a Python interpreter and entering commands.
  • Importing packages via the import statement.
  • Familiarity with some of the basic Python types: str, list, tuple, dict, bool, float, int, None.
  • Creating a list, tuple, or dict with “[ ]”, “( )”, “{ }” syntax (e.g. my_list = [1,2,3,4,5]).
  • Accessing dict/list/tuple items with the “x[ ]” syntax (e.g. my_list_item = my_list[0]).
  • Slicing str/list/tuple with the “:” syntax (e.g. my_slice = my_list[1:3]).
  • Using object methods and attributes with the “x.y” syntax (e.g. my_list.append(6)).
  • Calling functions (e.g. result = some_function(x, y)).
  • Familiarity with numpy would be helpful, as only a very brief introduction is provided.
  • Familiarity with matplotlib would be helpful, as only a very brief introduction is provided.

If you are completely new to Python, that shouldn’t be a problem, since most of the examples consist of basic container types and function calls. It would be helpful to look at some introductory material before arriving at the tutorial. If you’ve programmed before, picking up Python isn’t too difficult.

Here are some links:

https://www.learnpython.org/

https://developers.google.com/edu/python/

Step 1: Open a Command Terminal

To begin, you will first need to know how to open a command line terminal for your operating system.

For Windows:

WINDOWS + r
type cmd in the run window

For Mac:

Finder -> Applications -> Utilities -> Terminal

For Linux:

Try one of the following:

CTRL + ALT + T
CTRL + ALT + F2

Step 2: Download Miniconda

For this tutorial, you will need to download and install Miniconda. We are going to use Python 2.7, but it should also work with Python 3.5+. However, due to limitations with open source compilers on conda-forge, only Python 2.7 is available for Windows.

Please use the appropriate link below to download Miniconda for your operating system.

Note

64-bit OS recommended

Win64

Mac

Linux

For more information, see: https://conda.io/miniconda.html

Note

What is Miniconda?

If you have used the Anaconda distribution for Python before, then you may already be familiar with Miniconda. The Anaconda Python distribution includes numerous scientific packages out of the box, which can be difficult for users to build and install on their own. More importantly, Anaconda includes the conda package manager.

The conda package manager is a utility (similar to yum or apt-get) that installs packages from a repository of pre-compiled Python packages. These repositories are called channels. Conda makes it easy for Python users to install and uninstall packages, and it can also be used to create isolated Python environments (more on that later).

Miniconda is a bare-bones implementation of Anaconda that includes only the conda package manager. Since we are going to use the conda-forge channel to install our scientific packages, Miniconda avoids any complications between packages provided by Anaconda and conda-forge.

Step 3: Install Miniconda

Windows:

  1. Browse to the directory where you downloaded Miniconda2-latest-Windows-x86_64.exe.
  2. Double click on Miniconda2-latest-Windows-x86_64.exe.
  3. Follow the instructions.
  4. Restart your command terminal.

Mac and Linux:

For Mac and Linux, the installer is a bash script.

  1. Using a terminal, execute the bash shell script that you downloaded:

    bash /path/to/Miniconda2-latest-MacOSX-x86_64.sh [Mac]
    
    bash /path/to/Miniconda2-latest-Linux-x86_64.sh [Linux]
    
  2. Follow the instructions.

  3. At the end of the installation, it will ask if you want to add the miniconda2 path to your bash environment. If you are unsure what to do, you should say “yes”. If you say “no”, we’re going to assume you know what you are doing.

    If you said “yes”, then once you restart your shell, the miniconda2 Python will be found instead of the system Python when you type the “python” command. If you want to undo this later, then you can edit either ~/.bash_profile or ~/.bashrc (depending on OS used) and comment out the line that looks similar to:

    # added by Miniconda2 4.1.11 installer
    export PATH="/path/to/miniconda2/bin:$PATH"
    
  4. Restart your command terminal.

  5. [Linux and Mac Users Only] Miniconda only works with bash. If bash is not your default shell, then you need to activate the bash shell by typing the following into your command terminal:

    bash
    
  6. Verify that your system is using the correct Python interpreter by typing the following into your command terminal:

    which python
    

    You should see the path to your miniconda installation. If not, see the note below.

    Note

    If you have already installed another Python distribution, like Enthought Canopy, you will need to comment out any PATH entries for that distribution in your .bashrc or .bash_profile. Otherwise, your shell environment may pick the wrong Python installation.

    If bash is not your default shell type, and the PATH variable has been set in .bash_profile by the miniconda installer, try executing “bash -l” instead of the “bash” command in step 5.

Step 4: Set Up the Conda Environment

If you are new to the conda package manager, one of the nice features of conda is that you can create isolated Python environments that prevent package incompatibilities. This is similar to the virtualenv package that some Python users may be familiar with. However, conda is not compatible with virtualenv, so only use conda environments when working with conda.

The name of our conda environment for this tutorial is: tutorial_2017.

Follow the instructions below to create the tutorial_2017 environment.

  1. Open a command terminal if you haven’t done so.

  2. [Linux and Mac Users Only] The conda package manager only works with bash, so if bash is not your current shell, type:

    bash
    
  3. Add the conda-forge channel to your conda package manager.

    Type or copy the command below into your command terminal. You should run this command even if you have already done it in the past. This will ensure that conda-forge is set as the highest priority channel.

    conda config --add channels conda-forge
    

    Note

    Conda-forge is a community-driven collection of packages that are continually tested to ensure compatibility. We highly recommend using conda-forge when working with conda. See https://conda-forge.github.io/ for more details on this excellent project.

  4. Create the conda environment for the tutorial.

    Type or copy this command into your command terminal:

    conda create -n tutorial_2017 python=2.7 matplotlib=1.5.3 cartopy netcdf4 jupyter git ffmpeg wrf-python
    

    Type “y” when prompted. It will take several minutes to install everything.

    This command creates an isolated Python environment named tutorial_2017 and installs the Python interpreter, along with the matplotlib, cartopy, netcdf4, jupyter, git, ffmpeg, and wrf-python packages.

Note

When the installation completes, your command terminal might post a message similar to:

If this is your first install of dbus, automatically load on login with:

mkdir -p ~/Library/LaunchAgents
cp /path/to/miniconda2/envs/tutorial_test/org.freedesktop.dbus-session.plist ~/Library/LaunchAgents/
launchctl load -w ~/Library/LaunchAgents/org.freedesktop.dbus-session.plist

This indicates that the dbus package can be set up to load automatically on login. You can either ignore this message or type in the commands as shown in your command terminal. The tutorial should work fine in either case.

Note

In this tutorial, we need to use matplotlib v1.5.3 due to some issues with cartopy, which should be fixed in a later cartopy release. Be sure to supply the version number as indicated in the command above.

  5. Activate the conda environment.

    To activate the tutorial_2017 Python environment, type the following into the command terminal:

    For Linux and Mac (using bash):

    source activate tutorial_2017
    

    For Windows:

    activate tutorial_2017
    

    You should see (tutorial_2017) on your command prompt.

    To deactivate your conda environment, type the following into the command terminal:

    For Linux and Mac:

    source deactivate
    

    For Windows:

    deactivate tutorial_2017
    
Step 5: Download the Student Workbook

The student workbook for the tutorial is available on GitHub. The tutorial_2017 conda environment includes the git application needed to download the repository.

These instructions download the tutorial into your home directory. If you want to place the tutorial in another directory, we’re going to assume you know how to do this yourself.

To download the student workbook, follow these instructions:

  1. Activate the tutorial_2017 conda environment following the instructions in the previous step (source activate tutorial_2017 or activate tutorial_2017).

  2. Change your working directory to the home directory by typing the following command into the command terminal:

    For Linux and Mac:

    cd ~
    

    For Windows:

    cd %HOMEPATH%
    
  3. Download the git repository for the tutorial by typing the following into the command terminal:

    git clone https://github.com/NCAR/wrf_python_tutorial.git
    
  4. There may be additional changes to the tutorial after you have downloaded it. To pull down the latest changes, type the following into the command terminal:

    For Linux and Mac:

    source activate tutorial_2017
    
    cd ~/wrf_python_tutorial
    
    git pull
    

    For Windows:

    activate tutorial_2017
    
    cd %HOMEPATH%\wrf_python_tutorial
    
    git pull
    

    Note

    If you try the “git pull” command and it returns an error indicating that you have made changes to the workbook, this is probably because you ran the workbook and it now contains the cell output. To fix this, first check out a clean copy of the workbook (this discards your local changes), then do the pull:

    git checkout -- wrf_workshop_2017.ipynb
    git pull
    
Step 6: Verify Your Environment

Verifying that your environment is correct involves importing a few packages and checking for errors (you may see some warnings for matplotlib or xarray, but you can safely ignore these).

  1. Activate the tutorial_2017 conda environment if it isn’t already active (see instructions above).

  2. Open a Python interpreter by typing the following into the command terminal:

    python
    
  3. Now type the following into the Python interpreter:

    >>> import netCDF4
    >>> import matplotlib
    >>> import xarray
    >>> import wrf
    
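    As an optional extra check, you can also print the installed wrf-python version (this assumes the package exposes a __version__ attribute, which recent releases do):

    >>> print(wrf.__version__)
    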
  4. You can exit the Python interpreter using CTRL + D.

Step 7: Obtain WRF Output Files

For this tutorial, we strongly recommend that you use your own WRF output files. The tutorial includes an easy way to point to your own data files. The WRF output files should all be from the same WRF run and use the same domain. If your files are located on another system (e.g. yellowstone), then copy 2 or 3 of these files to your local computer prior to the tutorial.

If you do not have any of your own WRF output files, then you can download the instructor data files from a link that should have been provided to you in an email prior to the tutorial.

If you are using the link provided in the email for your data, you can follow the instructions below to place your data in the default location for your workbook.

  1. The link in the email should take you to a location on an Amazon cloud drive.
  2. If you hover your mouse over the wrf_tutorial_data.zip file, you’ll see an empty check box appear next to the file name. Click this check box.
  3. At the bottom of the screen, you’ll see a Download button next to a cloud icon. Click this button to start the download.
  4. The download was most likely placed into your ~/Downloads folder [%HOMEPATH%\Downloads for Windows]. Using your preferred method for unzipping files, unzip this file into your home directory. Your data should now be in ~/wrf_tutorial_data [%HOMEPATH%\wrf_tutorial_data for Windows].
  5. Verify that you have three WRF output files in that directory (a quick Python check is sketched below).
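
If you would like to double-check from Python that the files are readable, here is a minimal sketch (the wrf_tutorial_data directory comes from the step above, but the wrfout filename pattern is an assumption about how the files are named):

    import glob
    import os

    from netCDF4 import Dataset

    # Default data location from the unzip step above.
    data_dir = os.path.expanduser("~/wrf_tutorial_data")

    # WRF output files typically start with 'wrfout' (pattern is an assumption).
    paths = sorted(glob.glob(os.path.join(data_dir, "wrfout*")))
    print(len(paths), "WRF output files found")

    # Opening each file will raise an error if a download was corrupted.
    for path in paths:
        with Dataset(path) as ncfile:
            print(path, "->", len(ncfile.variables), "variables")
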
Getting Help

If you experience problems during this installation, please send a question to the Google Group support mailing list.

We look forward to seeing you at the tutorial!

The National Center for Atmospheric Research is sponsored by the National Science Foundation. Any opinions, findings and conclusions or recommendations expressed in this material do not necessarily reflect the views of the National Science Foundation.