<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="es">
	<id>http://wiki.cima.fcen.uba.ar/index.php?action=history&amp;feed=atom&amp;title=WRF4Lold</id>
	<title>WRF4Lold - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://wiki.cima.fcen.uba.ar/index.php?action=history&amp;feed=atom&amp;title=WRF4Lold"/>
	<link rel="alternate" type="text/html" href="http://wiki.cima.fcen.uba.ar/index.php?title=WRF4Lold&amp;action=history"/>
	<updated>2026-05-11T07:47:23Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.41.1</generator>
	<entry>
		<id>http://wiki.cima.fcen.uba.ar/index.php?title=WRF4Lold&amp;diff=3687&amp;oldid=prev</id>
		<title>Lluis.fita: Page created with «There is a far more powerful tool to manage work-flow of [http://www.meteo.unican.es/software/wrf4g/ WRF4G].   Lluís developed a less powerful one which is here desc...»</title>
		<link rel="alternate" type="text/html" href="http://wiki.cima.fcen.uba.ar/index.php?title=WRF4Lold&amp;diff=3687&amp;oldid=prev"/>
		<updated>2025-07-31T08:53:47Z</updated>

		<summary type="html">&lt;p&gt;Page created with «There is a far more powerful tool to manage work-flow of [http://www.meteo.unican.es/software/wrf4g/ WRF4G].   Lluís developed a less powerful one which is here desc...»&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;There is a far more powerful tool to manage the WRF work-flow: [http://www.meteo.unican.es/software/wrf4g/ WRF4G]. &lt;br /&gt;
&lt;br /&gt;
Lluís developed a less powerful one, which is described here&lt;br /&gt;
&lt;br /&gt;
WRF work-flow management is done via the following scripts (these are the specifics for hydra, the CIMA cluster):&lt;br /&gt;
* &amp;lt;code&amp;gt;EXPERIMENTparameters.txt&amp;lt;/code&amp;gt;: General ASCII file which configures the experiment and the chain of simulations (chunks). This is the only file that needs to be modified&lt;br /&gt;
* &amp;lt;code&amp;gt;wrf4l_experiment.pbs&amp;lt;/code&amp;gt;: PBS-queue job which prepares the environment of the experiment&lt;br /&gt;
* &amp;lt;code&amp;gt;wrf4l_WPS.pbs&amp;lt;/code&amp;gt;: PBS-queue job which launches the WPS section of the model: &amp;lt;code&amp;gt;ungrib.exe&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;metgrid.exe&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;real.exe&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;wrf4l_WRF.pbs&amp;lt;/code&amp;gt;: PBS-queue job which launches &amp;lt;code&amp;gt;wrf.exe&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;!---* &amp;lt;code&amp;gt;launch_pbs.bash&amp;lt;/code&amp;gt;: Necessary shell script to launch jobs which use more than one node in CIMA&amp;#039;s &amp;lt;code&amp;gt;hydra&amp;lt;/code&amp;gt; cluster--&amp;gt;&lt;br /&gt;
* There is a folder called &amp;lt;code&amp;gt;components&amp;lt;/code&amp;gt; with shell and python scripts necessary for the work-flow management&lt;br /&gt;
&lt;br /&gt;
An experiment covering a period of simulation is divided into &amp;#039;&amp;#039;&amp;#039;chunks&amp;#039;&amp;#039;&amp;#039;: small pieces of time which are manageable by the model. The work-flow follows these steps using &amp;lt;code&amp;gt;wrf4l_experiment.pbs&amp;lt;/code&amp;gt;:&lt;br /&gt;
# Copy and link all the required files for a given &amp;#039;&amp;#039;&amp;#039;chunk&amp;#039;&amp;#039;&amp;#039; of the whole period of simulation following the content of &amp;lt;code&amp;gt;EXPERIMENTparameters.txt&amp;lt;/code&amp;gt;&lt;br /&gt;
# Launches &amp;lt;code&amp;gt;wrf4l_WPS.pbs&amp;lt;/code&amp;gt; which will produce the necessary files for the period of the given &amp;#039;&amp;#039;&amp;#039;chunk&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
# Launches &amp;lt;code&amp;gt;wrf4l_WRF.pbs&amp;lt;/code&amp;gt; which will simulate the period of the given &amp;#039;&amp;#039;&amp;#039;chunk&amp;#039;&amp;#039;&amp;#039; (it waits until the end of &amp;lt;code&amp;gt;wrf4l_WPS.pbs&amp;lt;/code&amp;gt;)&lt;br /&gt;
# Launches the next &amp;lt;code&amp;gt;wrf4l_experiment.pbs&amp;lt;/code&amp;gt; (which waits until the end of &amp;lt;code&amp;gt;wrf4l_WRF.pbs&amp;lt;/code&amp;gt;), as sketched below&lt;br /&gt;
&lt;br /&gt;
[[File:WRF4L_resize.png]]&lt;br /&gt;
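&lt;br /&gt;
The jobs wait for each other through the PBS queue. As an orientation only (a minimal sketch assuming plain &amp;lt;code&amp;gt;qsub&amp;lt;/code&amp;gt; dependencies, not the literal content of the scripts), the chain for one &amp;#039;&amp;#039;&amp;#039;chunk&amp;#039;&amp;#039;&amp;#039; could be expressed as:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Sketch (illustrative only) of the chaining with PBS job dependencies&lt;br /&gt;
WPSID=$(qsub wrf4l_WPS.pbs)                            # WPS part of the chunk&lt;br /&gt;
WRFID=$(qsub -W depend=afterok:${WPSID} wrf4l_WRF.pbs) # wrf.exe, waits for the WPS job&lt;br /&gt;
qsub -W depend=afterok:${WRFID} wrf4l_experiment.pbs   # next chunk, waits for the WRF job&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;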
&lt;br /&gt;
All the scripts are located in &amp;lt;code&amp;gt;hydra&amp;lt;/code&amp;gt; at:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/share/tools/workflows/WRF4L/hydra&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== How to simulate ==&lt;br /&gt;
# Assuming that we have already defined the domains of the simulation and that we have suitable &amp;lt;CODE&amp;gt;namelist.wps&amp;lt;/CODE&amp;gt; and &amp;lt;CODE&amp;gt;namelist.input&amp;lt;/CODE&amp;gt; WRF namelist files in a folder called &amp;lt;CODE&amp;gt;$GEOGRID&amp;lt;/CODE&amp;gt;, we gather all this information in the storage folder for the experiment (let us assume it is called $STORAGEdir) and create there a folder for the information of the domains.&lt;br /&gt;
&amp;lt;PRE style=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
mkdir ${STORAGEdir}/domains&lt;br /&gt;
cp ${GEOGRID}/geo_em.d* ${STORAGEdir}/domains&lt;br /&gt;
cp ${GEOGRID}/namelist.wps ${STORAGEdir}/domains&lt;br /&gt;
cp ${GEOGRID}/namelist.input ${STORAGEdir}/domains&lt;br /&gt;
&amp;lt;/PRE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# We then move to &amp;lt;CODE&amp;gt;${STORAGEdir}/domains&amp;lt;/CODE&amp;gt; and create a template file for the &amp;lt;CODE&amp;gt;namelist.wps&amp;lt;/CODE&amp;gt; (the date and frequency place-holders of the template are filled automatically for each chunk, as sketched after the diff below)&lt;br /&gt;
&amp;lt;PRE style=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
cd ${STORAGEdir}/domains&lt;br /&gt;
cp namelist.wps namelist.wps_template&lt;br /&gt;
diff namelist.wps namelist.wps_template &lt;br /&gt;
4,6c4,6&lt;br /&gt;
&amp;lt;  start_date = &amp;#039;2010-06-01_00:00:00&amp;#039;, &amp;#039;2010-06-01_00:00:00&amp;#039;&lt;br /&gt;
&amp;lt;  end_date   = &amp;#039;2010-06-16_00:00:00&amp;#039;, &amp;#039;2010-06-16_00:00:00&amp;#039;&lt;br /&gt;
&amp;lt;  interval_seconds = 10800,&lt;br /&gt;
---&lt;br /&gt;
&amp;gt;  start_date = &amp;#039;iyr-imo-ida_iho:imi:ise&amp;#039;, &amp;#039;iyr-imo-ida_iho:imi:ise&amp;#039;,&lt;br /&gt;
&amp;gt;  end_date   = &amp;#039;eyr-emo-eda_eho:emi:ese&amp;#039;, &amp;#039;eyr-emo-eda_eho:emi:ese&amp;#039;, &lt;br /&gt;
&amp;gt;  interval_seconds = infreq&lt;br /&gt;
&amp;lt;/PRE&amp;gt; &lt;br /&gt;
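The place-holders in the template (&amp;lt;code&amp;gt;iyr&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;imo&amp;lt;/code&amp;gt;, ..., &amp;lt;code&amp;gt;infreq&amp;lt;/code&amp;gt;) are filled automatically for each &amp;#039;&amp;#039;&amp;#039;chunk&amp;#039;&amp;#039;&amp;#039;. Just as an illustration (the real substitution is done inside the &amp;lt;code&amp;gt;components&amp;lt;/code&amp;gt; scripts), the expansion for one chunk would be equivalent to:&lt;br /&gt;
&amp;lt;PRE style=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
# Illustrative only: expand the template for a chunk 2010-06-01 to 2010-06-16, 3-hourly forcing&lt;br /&gt;
sed -e &amp;quot;s/iyr-imo-ida_iho:imi:ise/2010-06-01_00:00:00/g&amp;quot; \&lt;br /&gt;
    -e &amp;quot;s/eyr-emo-eda_eho:emi:ese/2010-06-16_00:00:00/g&amp;quot; \&lt;br /&gt;
    -e &amp;quot;s/infreq/10800/g&amp;quot; namelist.wps_template &amp;gt; namelist.wps&lt;br /&gt;
&amp;lt;/PRE&amp;gt;&lt;br /&gt;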
# In case we are going to use &amp;lt;CODE&amp;gt;nc2wps&amp;lt;/CODE&amp;gt; to generate the intermediate files (&amp;lt;CODE&amp;gt;ungrib.exe&amp;lt;/CODE&amp;gt; output generated directly from netCDF files), we need to generate a template for the &amp;lt;CODE&amp;gt;namelist.nc2wps&amp;lt;/CODE&amp;gt;. There are different options available in &amp;lt;CODE&amp;gt;/share/tools/workflows/WRF4L/hydra&amp;lt;/CODE&amp;gt;. In this example we use the one prepared for ERA5 data as it is stored on Papa-Deimos:&lt;br /&gt;
&amp;lt;PRE style=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
cp /share/tools/workflows/WRF4L/hydra/namelist.nc2wps_template_ERA5 ${STORAGEdir}/domains/namelist.nc2wps_template&lt;br /&gt;
&amp;lt;/PRE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Create a new folder from which to launch the experiment [ExperimentName] (e.g. somewhere in the &amp;lt;CODE&amp;gt;salidas/&amp;lt;/CODE&amp;gt; folder)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir salidas/[ExperimentName]&lt;br /&gt;
$ cd salidas/[ExperimentName]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
# Copy the basic WRF4L files to this folder&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cp /share/tools/workflows/WRF4L/hydra_old/wrf4l_experiment.pbs ./&lt;br /&gt;
cp /share/tools/workflows/WRF4L/hydra_old/wrf4l_WPS.pbs ./&lt;br /&gt;
cp /share/tools/workflows/WRF4L/hydra_old/wrf4l_WRF.pbs ./&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
# Edit the configuration/set-up of the experiment (e.g. period of simulation, versions of WPS and WRF to use, additional &amp;lt;CODE&amp;gt;namelist.input&amp;lt;/CODE&amp;gt; parameters, the different folders: &amp;lt;CODE&amp;gt;storageHOME&amp;lt;/CODE&amp;gt;, &amp;lt;CODE&amp;gt;runHOME&amp;lt;/CODE&amp;gt;, the MPI configuration, ...)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ vim EXPERIMENTparameters.txt&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
# Change the name of the user in the &amp;lt;CODE&amp;gt;wrf4l_experiment.pbs&amp;lt;/CODE&amp;gt;&lt;br /&gt;
&amp;lt;PRE style=&amp;quot;shell&amp;quot;&amp;gt;&lt;br /&gt;
diff wrf4l_experiment.pbs /share/tools/workflows/WRF4L/hydra_old/wrf4l_experiment.pbs&lt;br /&gt;
11c11&lt;br /&gt;
&amp;lt; #PBS -M lluis.fita@cima.fcen.uba.ar&lt;br /&gt;
---&lt;br /&gt;
&amp;gt; #PBS -M [user]@cima.fcen.uba.ar&lt;br /&gt;
&amp;lt;/PRE&amp;gt;&lt;br /&gt;
&lt;br /&gt;
# Launch the experiment&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ qsub wrf4l_experiment.pbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When it is running one would see something like the following (the WPS job &amp;lt;code&amp;gt;wps_[SimName]&amp;lt;/code&amp;gt; running, `R&amp;#039;, while &amp;lt;code&amp;gt;wrf_[SimName]&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;exp_[SimName]&amp;lt;/code&amp;gt; are held, `H&amp;#039;):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ qstat -u $USER&lt;br /&gt;
&lt;br /&gt;
hydra: &lt;br /&gt;
                                                                         Req&amp;#039;d  Req&amp;#039;d   Elap&lt;br /&gt;
Job ID               Username Queue    Jobname          SessID NDS   TSK Memory Time  S Time&lt;br /&gt;
-------------------- -------- -------- ---------------- ------ ----- --- ------ ----- - -----&lt;br /&gt;
397.hydra            lluis.fi larga    wps_              27567     1  16   20gb 168:0 R   -- &lt;br /&gt;
398.hydra            lluis.fi larga    wrf_                --      1   1   20gb 168:0 H   -- &lt;br /&gt;
399.hydra            lluis.fi larga    exp_                --      1   1    2gb 168:0 H   -- &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If the simulation crashes, after fixing the issue go to &amp;lt;code&amp;gt;[runHOME]/[ExpName]/[SimName]&amp;lt;/code&amp;gt; and re-launch the experiment (after the first run the &amp;lt;code&amp;gt;scratch&amp;lt;/code&amp;gt; is switched automatically to `false&amp;#039;)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ qsub wrf4l_experiment.pbs&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Checking the experiment ==&lt;br /&gt;
Once the experiment runs, one needs to look at the following locations (using the variable names from &amp;lt;code&amp;gt;EXPERIMENTparameters.txt&amp;lt;/code&amp;gt;); some quick checking commands are sketched after this list:&lt;br /&gt;
* &amp;lt;code&amp;gt;[runHOME]/[ExpName]/[SimName]&amp;lt;/code&amp;gt;: Will contain the copies of the templates &amp;lt;code&amp;gt;namelist.wps&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;namelist.input&amp;lt;/code&amp;gt;, and a file &amp;lt;code&amp;gt;chunk_attemps.inf&amp;lt;/code&amp;gt; which counts how many times a &amp;#039;&amp;#039;&amp;#039;chunk&amp;#039;&amp;#039;&amp;#039; has been attempted (if it reaches 4 attempts, &amp;lt;code&amp;gt;WRF4L&amp;lt;/code&amp;gt; is stopped)&lt;br /&gt;
* &amp;lt;code&amp;gt;[runHOME]/[ExpName]/[SimName]/run&amp;lt;/code&amp;gt;: the actual folder where the computing nodes run the model. Inside a folder called &amp;lt;code&amp;gt;outwrf&amp;lt;/code&amp;gt; there is one folder per &amp;#039;&amp;#039;&amp;#039;chunk&amp;#039;&amp;#039;&amp;#039; with the standard output of the model&lt;br /&gt;
* &amp;lt;code&amp;gt;[runHOME]/[ExpName]/[SimName]/run/outwrf/[YYYYi][MMi][DDi][HHi][MIi][SSi]-[YYYYf][MMf][DDf][HHf][MIf][SSf]&amp;lt;/code&amp;gt;: folder with the standard output and all the required files to run a given &amp;#039;&amp;#039;&amp;#039;chunk&amp;#039;&amp;#039;&amp;#039;. The content of this folder is compressed and kept in &amp;lt;code&amp;gt;[storageHOME]/[ExpName]/[SimName]/config_[YYYYi][MMi][DDi][HHi][MIi][SSi]-[YYYYf][MMf][DDf][HHf][MIf][SSf].tar.gz&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;lt;code&amp;gt;[storageHOME]/[ExpName]/[SimName]&amp;lt;/code&amp;gt; (in [storageHOST]): output of the already ran &amp;#039;&amp;#039;&amp;#039;chunks&amp;#039;&amp;#039;&amp;#039; as &amp;lt;code&amp;gt;[YYYYi][MMi][DDi][HHi][MIi][SSi]-[YYYYf][MMf][DDf][HHf][MIf][SSf]&amp;lt;/code&amp;gt; for a chunk from &amp;lt;code&amp;gt;[YYYYi]/[MMi]/[DDi] [HHi]:[MIi]:[SSi]&amp;lt;/code&amp;gt; to &amp;lt;code&amp;gt;[YYYYf]/[MMf]/[DDf] [HHf]:[MIf]:[SSf]&amp;lt;/code&amp;gt;&lt;br /&gt;
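A quick way of checking how far the simulation has gone is to list the already archived &amp;#039;&amp;#039;&amp;#039;chunks&amp;#039;&amp;#039;&amp;#039; and the attempts counter (generic commands; replace [USER], [storageHOST], ... by the configured values):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# chunks already archived in the storage space&lt;br /&gt;
ssh [USER]@[storageHOST] ls [storageHOME]/[ExpName]/[SimName]&lt;br /&gt;
# attempts counter of the current chunk&lt;br /&gt;
cat [runHOME]/[ExpName]/[SimName]/chunk_attemps.inf&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;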
&lt;br /&gt;
=== When something goes wrong ===&lt;br /&gt;
If there has been any problem, check the last chunk (in outwrf/[PERIODchunk]) to try to understand what happened and where the problem comes from:&lt;br /&gt;
* &amp;lt;code&amp;gt;rsl.[error/out].[nnnn]&amp;lt;/code&amp;gt;: These files contain the standard output of the model while it runs, one file per process. If the problem was related to model execution and the model has a prepared error message for it, a meaningful message will appear (look first at the largest files):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ ls -lrS rsl.error.*&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
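A generic &amp;lt;code&amp;gt;grep&amp;lt;/code&amp;gt; (not part of WRF4L) is usually enough to spot the relevant message in those files:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ grep -i -E &amp;quot;error|fatal|cfl&amp;quot; rsl.error.* | tail&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;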
* &amp;lt;code&amp;gt;run_wrf.log&amp;lt;/code&amp;gt;: This file contains the standard output of the model run. Search for `segmentation faults&amp;#039; in a form like the following (it might differ):&lt;br /&gt;
&amp;lt;pre&amp;gt;forrtl: error (63): output conversion error, unit -5, file Internal Formatted Write&lt;br /&gt;
Image              PC                Routine            Line        Source&lt;br /&gt;
wrf.exe            00000000032B736A  Unknown               Unknown  Unknown&lt;br /&gt;
wrf.exe            00000000032B5EE5  Unknown               Unknown  Unknown&lt;br /&gt;
wrf.exe            0000000003265966  Unknown               Unknown  Unknown&lt;br /&gt;
wrf.exe            0000000003226EB5  Unknown               Unknown  Unknown&lt;br /&gt;
wrf.exe            0000000003226671  Unknown               Unknown  Unknown&lt;br /&gt;
wrf.exe            000000000324BC3C  Unknown               Unknown  Unknown&lt;br /&gt;
wrf.exe            0000000003249C94  Unknown               Unknown  Unknown&lt;br /&gt;
wrf.exe            00000000004184DC  Unknown               Unknown  Unknown&lt;br /&gt;
libc.so.6          000000319021ECDD  Unknown               Unknown  Unknown&lt;br /&gt;
wrf.exe            00000000004183D9  Unknown               Unknown  Unknown&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* In &amp;lt;code&amp;gt;[runHOME]/[ExpName]/[SimName]&amp;lt;/code&amp;gt;, check the output of the PBS jobs, which are called:&lt;br /&gt;
** &amp;lt;code&amp;gt;exp-[SimName].o[nnnn]&amp;lt;/code&amp;gt;: output of the &amp;lt;code&amp;gt;wrf4l_experiment.pbs&amp;lt;/code&amp;gt;&lt;br /&gt;
** &amp;lt;code&amp;gt;wps-[SimName].o[nnnn]&amp;lt;/code&amp;gt;: output of the &amp;lt;code&amp;gt;wrf4l_WPS.pbs&amp;lt;/code&amp;gt;&lt;br /&gt;
** &amp;lt;code&amp;gt;wrf-[SimName].o[nnnn]&amp;lt;/code&amp;gt;: output of the &amp;lt;code&amp;gt;wrf4l_WRF.pbs&amp;lt;/code&amp;gt;&lt;br /&gt;
* Check &amp;lt;code&amp;gt;[runHOME]/[ExpName]/[SimName]/run/namelist.output&amp;lt;/code&amp;gt; which holds all the parameters (even the default ones) used in the simulation&lt;br /&gt;
&lt;br /&gt;
== EXPERIMENTparameters.txt ==&lt;br /&gt;
This ASCII file configures the whole simulation. It assumes that:&lt;br /&gt;
* The required files, forcings, storage and compiled version of the code might be on different machines.&lt;br /&gt;
* There is a folder with a given template version of the &amp;lt;code&amp;gt;namelist.input&amp;lt;/code&amp;gt; which will be used and changed according to the requirements of the experiment&lt;br /&gt;
&lt;br /&gt;
Location of the WRF4L main folder (example for &amp;lt;code&amp;gt;hydra&amp;lt;/code&amp;gt;)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Home of the WRF4L&lt;br /&gt;
wrf4lHOME=/share/workflows/WRF4L&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Name of the machine where the experiment is running (&amp;#039;&amp;#039;&amp;#039;NOTE:&amp;#039;&amp;#039;&amp;#039; a folder with specific work-flow must exist as &amp;lt;code&amp;gt;$wrf4lHOME/${HPC}&amp;lt;/code&amp;gt;)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Machine specific work-flow files for $HPC (a folder with specific work-flow must exist as $wrf4lHOME/${HPC})&lt;br /&gt;
HPC=hydra&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Name of the compiler used to compile the model (&amp;#039;&amp;#039;&amp;#039;NOTE:&amp;#039;&amp;#039;&amp;#039; a file called &amp;lt;code&amp;gt;$wrf4lHOME/arch/${HPC}_${compiler}.env&amp;lt;/code&amp;gt; must exist)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Compilation (a file called $wrf4lHOME/arch/${HPC}_${compiler}.env must exist)&lt;br /&gt;
compiler=intel&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Name of the experiment&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Experiment name&lt;br /&gt;
ExpName = WRFsensSFC&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Name of the simulation. It is understood that a given experiment could have the model configured with different set-ups (each identified by a different simulation name)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Simulation name&lt;br /&gt;
SimName = control&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Which &amp;lt;code&amp;gt;python&amp;lt;/code&amp;gt; 2.x binary should be used&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# python binary&lt;br /&gt;
pyBIN=/home/lluis.fita/bin/anaconda2/bin/python2.7&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whether this simulation should be run from the beginning or not. If it is set to `true&amp;#039;, it will remove all the pre-existing content of the folder [ExpName]/[SimName] in both the running and the storage spaces. &amp;#039;&amp;#039;&amp;#039;Be careful&amp;#039;&amp;#039;&amp;#039;. If it is `false&amp;#039;, the simulation will continue from the last successfully run &amp;#039;&amp;#039;&amp;#039;chunk&amp;#039;&amp;#039;&amp;#039; (checking the restart files).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Start from the beginning (keeping folder structure)&lt;br /&gt;
scratch = false&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Period of the simulation (in this example from 1979 Jan 1st to 2015 Jan 1st)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Experiment starting date&lt;br /&gt;
exp_start_date = 19790101000000&lt;br /&gt;
# Experiment ending date&lt;br /&gt;
exp_end_date = 20150101000000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Length of the chunks (do not make chunks longer than 1 month!)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Chunk Length [N]@[unit]&lt;br /&gt;
#  [unit]=[year, month, week, day, hour, minute, second]&lt;br /&gt;
chunk_length = 1@month&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
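&lt;br /&gt;
As an illustration of the &amp;lt;code&amp;gt;[N]@[unit]&amp;lt;/code&amp;gt; notation (the work-flow computes the chunk dates internally; this is just GNU &amp;lt;code&amp;gt;date&amp;lt;/code&amp;gt; arithmetic):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative only: end date of the first 1@month chunk of the experiment above&lt;br /&gt;
chunk_start=19790101000000&lt;br /&gt;
chunk_end=$(date -u -d &amp;quot;${chunk_start:0:8} ${chunk_start:8:2}:${chunk_start:10:2}:${chunk_start:12:2} +1 month&amp;quot; +%Y%m%d%H%M%S)&lt;br /&gt;
echo ${chunk_start} ${chunk_end}   # 19790101000000 19790201000000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;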
&lt;br /&gt;
Selection of the machines, and of the user on each machine, where the different required files are located and where the output should be placed.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;NOTE:&amp;#039;&amp;#039;&amp;#039; this will only work if the &amp;lt;code&amp;gt;.ssh&amp;lt;/code&amp;gt; public/private keys have been set up for each involved USER/HOST.&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;NOTE 2:&amp;#039;&amp;#039;&amp;#039; All the forcings, compiled code, ... are already on &amp;lt;code&amp;gt;hydra&amp;lt;/code&amp;gt; in the common space called &amp;lt;code&amp;gt;share&amp;lt;/code&amp;gt;&lt;br /&gt;
* &amp;#039;&amp;#039;&amp;#039;NOTE 3:&amp;#039;&amp;#039;&amp;#039; From the computing nodes one cannot access the &amp;lt;code&amp;gt;/share&amp;lt;/code&amp;gt; folder or any of CIMA&amp;#039;s storage machines: skogul, freyja, ... For that reason one needs to use this system of &amp;lt;code&amp;gt;[USER]@[HOST]&amp;lt;/code&amp;gt; accounts. The &amp;lt;code&amp;gt;*.pbs&amp;lt;/code&amp;gt; scripts use a series of wrappers of the standard commands &amp;lt;code&amp;gt;cp, ln, ls, mv, ...&amp;lt;/code&amp;gt; which manage them `from&amp;#039; and `to&amp;#039; different pairs of &amp;lt;code&amp;gt;[USER]@[HOST]&amp;lt;/code&amp;gt;. &amp;#039;&amp;#039;&amp;#039;NOTE:&amp;#039;&amp;#039;&amp;#039; This will only work if the public/private ssh key pairs have been set up (see more details at [[llaves_ssh]] and the recipe sketched below)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Hosts&lt;br /&gt;
#   list of different hosts and specific user&lt;br /&gt;
#     [USER]@[HOST]&lt;br /&gt;
#   NOTE: this will only work if public keys have been set-up&lt;br /&gt;
##&lt;br /&gt;
# Host with compiled code, namelist templates&lt;br /&gt;
codeHOST=lluis.fita@hydra&lt;br /&gt;
# forcing Host with forcings (atmospherics and morphologicals)&lt;br /&gt;
forcingHOST=lluis.fita@hydra&lt;br /&gt;
# output Host with storage of output (including restarts)&lt;br /&gt;
outHOST=lluis.fita@hydra&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
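&lt;br /&gt;
If the keys are not set up yet, the generic recipe is the following (see [[llaves_ssh]] for the CIMA-specific details); it has to be repeated for every &amp;lt;code&amp;gt;[USER]@[HOST]&amp;lt;/code&amp;gt; used above:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# generate a key pair once (accepting the defaults) and copy the public key to the remote account&lt;br /&gt;
ssh-keygen -t rsa&lt;br /&gt;
ssh-copy-id lluis.fita@hydra&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;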
&lt;br /&gt;
Templates of the configuration of WRF: the &amp;lt;code&amp;gt;namelist.wps&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;namelist.input&amp;lt;/code&amp;gt; files (located in the &amp;lt;code&amp;gt;[codeHOST]&amp;lt;/code&amp;gt;). &amp;#039;&amp;#039;&amp;#039;NOTE:&amp;#039;&amp;#039;&amp;#039; they will be changed according to the content of &amp;lt;code&amp;gt;EXPERIMENTparameters.txt&amp;lt;/code&amp;gt;, e.g. the period of the &amp;#039;&amp;#039;&amp;#039;chunk&amp;#039;&amp;#039;&amp;#039;, the atmospheric forcing, differences of the set-up, ...&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Folder with the `namelist.wps&amp;#039;, `namelist.input&amp;#039; and `geo_em.d[nn].nc&amp;#039; of the experiment&lt;br /&gt;
domainHOME = /home/lluis.fita/salidas/estudios/dominmios/SA50k&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Folder where the WRF model will run on the computing nodes (below it two more folders, [ExpName]/[SimName], will be created). WRF will run in the folder [ExpName]/[SimName]/run&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Running folder&lt;br /&gt;
runHOME = /home/lluis.fita/estudios/WRFsensSFC/sims&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Folder with the compiled version of the WPS (located at &amp;lt;code&amp;gt;[codeHOST]&amp;lt;/code&amp;gt;)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Folder with the compiled source of WPS&lt;br /&gt;
wpsHOME = /share/WRF/WRFV3.9.1/ifort/dmpar/WPS&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Folder with the compiled version of the WRF (located at &amp;lt;code&amp;gt;[codeHOST]&amp;lt;/code&amp;gt;)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Folder with the compiled source of WRF&lt;br /&gt;
wrfHOME = /share/WRF/WRFV3.9.1/ifort/dmpar/WRFV3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Folder to store all the output of the model (history files, restarts and a compressed file with the configuration and the standard output of the given run). The content of the folder will be organized by chunks (located at &amp;lt;code&amp;gt;[storageHOST]&amp;lt;/code&amp;gt;)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Storage folder of the output&lt;br /&gt;
storageHOME = /home/lluis.fita/salidas/estudios/WRFsensSFC/sims/output&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Whether modules should be loaded (not used on &amp;lt;code&amp;gt;hydra&amp;lt;/code&amp;gt;)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Modules to load (&amp;#039;None&amp;#039; for none)&lt;br /&gt;
modulesLOAD = None&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Names of the files used to check that the &amp;#039;&amp;#039;&amp;#039;chunk&amp;#039;&amp;#039;&amp;#039; has run properly&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Model reference output names (to be used as checking file names)&lt;br /&gt;
nameLISTfile = namelist.input # namelist&lt;br /&gt;
nameRSTfile = wrfrst_d01_ # restart file&lt;br /&gt;
nameOUTfile = wrfout_d01_ # output file&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Extensions of the files with the configuration of WRF (to be retrieved from &amp;lt;code&amp;gt;codeHOST&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;domainHOME&amp;lt;/code&amp;gt;)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Extensions of the files with the configuration of the model&lt;br /&gt;
configEXTS = wps:input&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To continue from a previous &amp;#039;&amp;#039;&amp;#039;chunk&amp;#039;&amp;#039;&amp;#039; one needs to use the `restart&amp;#039; files. But they need to be renamed, because otherwise they would be over-written. Here one specifies the original name of the file &amp;lt;code&amp;gt;[origFile]&amp;lt;/code&amp;gt; and the name to be used to avoid the over-writing &amp;lt;code&amp;gt;[destFile]&amp;lt;/code&amp;gt;. It uses a complex bash script which can even deal with the change of dates according to the period of the &amp;#039;&amp;#039;&amp;#039;chunk&amp;#039;&amp;#039;&amp;#039; (&amp;#039;:&amp;#039; list of &amp;lt;code&amp;gt;[origFile]@[destFile]&amp;lt;/code&amp;gt;). They will be located at the &amp;lt;code&amp;gt;[storageHOST]&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# restart file names&lt;br /&gt;
# &amp;#039;:&amp;#039; list of [tmplrstfilen|[NNNNN1]?[val1]#[...[NNNNNn]?[valn]]@[tmpllinkname]|[NNNNN1]?[val1]#[...[NNNNNn]?[valn]]&lt;br /&gt;
#    [tmplrstfilen]: template name of the restart file (if necessary with [NNNNN] variables to be substituted)&lt;br /&gt;
#      [NNNNN]: section of the file name to be automatically substituted&lt;br /&gt;
#        `[YYYY]&amp;#039;: year in 4 digits&lt;br /&gt;
#        `[YY]&amp;#039;: year in 2 digits&lt;br /&gt;
#        `[MM]&amp;#039;: month in 2 digits&lt;br /&gt;
#        `[DD]&amp;#039;: day in 2 digits&lt;br /&gt;
#        `[HH]&amp;#039;: hour in 2 digits&lt;br /&gt;
#        `[MI]&amp;#039;: minute in 2 digits&lt;br /&gt;
#        `[SS]&amp;#039;: second in 2 digits&lt;br /&gt;
#        `[JJJ]&amp;#039;: julian day in 3 digits&lt;br /&gt;
#      [val]: value to use (which is systematically defined in `run_OR.pbs&amp;#039;)&lt;br /&gt;
#        `%Y%&amp;#039;: year in 4 digits&lt;br /&gt;
#        `%y%&amp;#039;: year in 2 digits&lt;br /&gt;
#        `%m%&amp;#039;: month in 2 digits&lt;br /&gt;
#        `%d%&amp;#039;: day in 2 digits&lt;br /&gt;
#        `%h%&amp;#039;: hour in 2 digits&lt;br /&gt;
#        `%s%&amp;#039;: second in 2 digits&lt;br /&gt;
#        `%j%&amp;#039;: julian day in 3 digits&lt;br /&gt;
#    [tmpllinkname]: template name of the link of the restart file (if necessary with [NNNNN] variables to be substituted)&lt;br /&gt;
rstFILES=wrfrst_d01_[YYYY]-[MM]-[DD]_[HH]:[MI]:[SS]|YYYY?%Y#MM?%m#DD?%d#HH?%H#MI?%M#SS?%S@wrfrst_d01_[YYYY]-[MM]-[DD]_[HH]:[MI]:[SS]|YYYY?%Y#MM?%m#DD?%d#HH?%H#MI?%M#SS?%S&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
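&lt;br /&gt;
As a worked example of the syntax above: for a &amp;#039;&amp;#039;&amp;#039;chunk&amp;#039;&amp;#039;&amp;#039; starting on 1979-02-01 00:00:00, both the &amp;lt;code&amp;gt;[tmplrstfilen]&amp;lt;/code&amp;gt; and the &amp;lt;code&amp;gt;[tmpllinkname]&amp;lt;/code&amp;gt; patterns of the line above resolve to the same date-stamped name:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# YYYY?%Y, MM?%m, ... substituted with the chunk start date (illustrative expansion)&lt;br /&gt;
wrfrst_d01_1979-02-01_00:00:00&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;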
&lt;br /&gt;
Folder with the input data (located at &amp;lt;code&amp;gt;[forcingHOST]&amp;lt;/code&amp;gt;). &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Folder with the input morphological forcing data&lt;br /&gt;
indataHOME = /share/DATA/re-analysis/ERA-Interim&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Format of the input data and name of files&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Data format (grib, nc)&lt;br /&gt;
indataFMT= grib&lt;br /&gt;
# For `grib&amp;#039; format&lt;br /&gt;
#   Head and tail of indata files names.&lt;br /&gt;
#     Assuming ${indataFheader}*[YYYY][MM]*${indataFtail}.[grib/nc]&lt;br /&gt;
indataFheader=ERAI_&lt;br /&gt;
indataFtail=&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In case of netCDF input data, there is a bash script which transforms the data to grib, to be used later by &amp;lt;code&amp;gt;ungrib&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Variable table to use in &amp;lt;code&amp;gt;ungrib&amp;lt;/code&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#   Type of Vtable for ungrib as Vtable.[VtableType]&lt;br /&gt;
VtableType=ERA-interim.pl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Folder with the atmospheric forcing data (located at &amp;lt;code&amp;gt;[forcingHOST]&amp;lt;/code&amp;gt;). &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# For `nc&amp;#039; format&lt;br /&gt;
#   Folder which contents the atmospheric data to generate the initial state&lt;br /&gt;
iniatmosHOME = ./&lt;br /&gt;
#   Type of atmospheric data to generate the initial state&lt;br /&gt;
#     `ECMWFstd&amp;#039;: ECMWF &amp;#039;standard&amp;#039; way ERAI_[pl/sfc][YYYY][MM]_[var1]-[var2].grib&lt;br /&gt;
#     `ERAI-IPSL&amp;#039;: ECMWF ERA-INTERIM stored in the common IPSL way (.../4xdaily/[AN_PL/AN_SF])&lt;br /&gt;
iniatmosTYPE = &amp;#039;ECMWFstd&amp;#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Here one can change values in the template &amp;lt;code&amp;gt;namelist.input&amp;lt;/code&amp;gt;. It will replace the values of the provided parameters with the new value. If a given parameter is not in the template of the &amp;lt;code&amp;gt;namelist.input&amp;lt;/code&amp;gt;, it will be automatically added.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
## Namelist changes&lt;br /&gt;
nlparameters = ra_sw_physics;4,ra_lw_physics;4,time_step;180&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
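&lt;br /&gt;
The list is &amp;#039;,&amp;#039; separated and each entry is a &amp;lt;code&amp;gt;[parameter];[value]&amp;lt;/code&amp;gt; pair. A minimal shell sketch of how such a list can be walked through (the real editing of the namelist is done by the &amp;lt;code&amp;gt;components&amp;lt;/code&amp;gt; scripts):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
nlparameters=&amp;quot;ra_sw_physics;4,ra_lw_physics;4,time_step;180&amp;quot;&lt;br /&gt;
for pair in ${nlparameters//,/ }; do&lt;br /&gt;
  param=${pair%;*}; value=${pair#*;}&lt;br /&gt;
  echo &amp;quot;${param} = ${value}&amp;quot;   # e.g. ra_sw_physics = 4&lt;br /&gt;
done&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;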
&lt;br /&gt;
Name of WRF&amp;#039;s executable (to be located in the &amp;lt;code&amp;gt;[orHOME]&amp;lt;/code&amp;gt; folder on &amp;lt;code&amp;gt;[codeHOST]&amp;lt;/code&amp;gt;)&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Name of the executable&lt;br /&gt;
nameEXEC=wrf.exe&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;:&amp;#039; separated list of netCDF file names from WRF&amp;#039;s output which do not need to be kept&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# netCDF Files which will not be kept anywhere&lt;br /&gt;
NokeptfileNAMES=&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;:&amp;#039; separated list of headers of netCDF file names from WRF&amp;#039;s output which need to be kept&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Headers of netCDF files need to be kept&lt;br /&gt;
HkeptfileNAMES=wrfout_d:wrfxtrm_d:wrfpress_d:wrfcdx_d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;:&amp;#039; separated list of headers of restarts netCDF file names from WRF&amp;#039;s output which need to be kept&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Headers of netCDF restart files need to be kept&lt;br /&gt;
HrstfileNAMES=wrfrst_d&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Parallel configuration of the run.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# WRF parallel run configuration&lt;br /&gt;
## Number of nodes&lt;br /&gt;
Nnodes = 1&lt;br /&gt;
## Number of mpi procs&lt;br /&gt;
Nmpiprocs = 16&lt;br /&gt;
## Number of shared memory threads (&amp;#039;None&amp;#039; for no openMP threads)&lt;br /&gt;
Nopenthreads = None&lt;br /&gt;
## Memory size of shared memory threads&lt;br /&gt;
SIZEopenthreads = 200M&lt;br /&gt;
## Memory for PBS jobs&lt;br /&gt;
MEMjobs = 30gb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
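&lt;br /&gt;
These values end up as PBS resource requests in the &amp;lt;code&amp;gt;wrf4l_*.pbs&amp;lt;/code&amp;gt; jobs. Roughly (the exact directives are written by the scripts themselves), the example above corresponds to something like:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#PBS -l nodes=1:ppn=16&lt;br /&gt;
#PBS -l mem=30gb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;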
&lt;br /&gt;
Generic definitions&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
## Generic&lt;br /&gt;
errormsg=ERROR -- error -- ERROR -- error&lt;br /&gt;
warnmsg=WARNING -- warning -- WARNING -- warning&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Lluis.fita</name></author>
	</entry>
</feed>