Difference between the pages «papa-deimos/instalacion» and «CROCO»

From Wikicima
= On papa =
<pre style="shell">
$ mkdir -p ~/sandbox/copy/
$ mkdir -p ~/sandbox/get/
$ sudo mkdir -p /datos/MOD/re-analysis/ECMWF/ERA5/monmean
$ sudo mv ~/sandbox/copy/ERA5_monmean_* /datos/MOD/re-analysis/ECMWF/ERA5/monmean/
</pre>

Web page of the installation of the ocean model [https://www.croco-ocean.org/ CROCO] on CIMA's HPC 'hydra'.


= Installation =
CIMA's HPC has a common folder to which all users have access. This is where the CROCO model will be installed.

Installation to download data from the ECMWF via the [https://confluence.ecmwf.int/display/WEBAPI/Access+ECMWF+Public+Datasets API]:
<pre style="shell">
$ sudo apt-get install python3-pip
$ sudo pip3 install ecmwf-api-client
</pre>
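For the API client to work it needs credentials; a sketch of the <code>~/.ecmwfapirc</code> file it reads (the key and email values are placeholders, to be taken from the user's ECMWF account page):

```shell
# Sketch of the credentials file read by ecmwf-api-client; the key and
# email below are placeholders, not real values.
cat > ecmwfapirc.example << 'EOF'
{
    "url"   : "https://api.ecmwf.int/v1",
    "key"   : "[your-api-key]",
    "email" : "[your-email]"
}
EOF
# once filled in, copy it into place:
# cp ecmwfapirc.example ~/.ecmwfapirc
```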


Installation to download data from [https://www.copernicus.eu/es Copernicus] via the [https://pypi.org/project/cdstoolbox-remote/ cdstoolbox]:
<pre style="shell">
$ sudo pip3 install cdstoolbox-remote
</pre>

The installation goes into the folder <CODE>INSTALLDIR=/share/CROCO</CODE>.


'''NOTE:''' The cdstoolbox uses a system of keys for the API to work. In this case, the key of Lluís Fita's Copernicus user is used to download the data.

Downloading the code (there is a GitLab repository, but for now we install tagged releases):
<PRE style="shell">
$ wget https://gitlab.inria.fr/croco-ocean/croco/-/archive/v2.1.0/croco-v2.1.0.tar.gz
</PRE>


== Restricted access to data ==
Data in the <code>/datos</code> directory are accessible to all users. However, some datasets have restricted access for different reasons (confidentiality, ethics, ...). In those cases, the directory containing the data is only accessible to a specific group of users created for that purpose.

For example, take the data <code>datosABC</code> of a project called <code>proy123</code> with the users <code>usuarieX,usuarieY,usuarieZ</code>.

1. Remove read permission for all users
<pre style="shell">
# ls -l
d-wxr-xr-x 2 root root 4096 Jul 16 11:14 datosABC
# chmod a-r datosABC
# ls -l
d-wxr-x--x 2 root root 4096 Jul 16 11:14 datosABC
</pre>
2. Create a new group of users
<pre style="shell">
# groupadd proy123
</pre>
3. Assign the users to the new group
<pre style="shell">
# usermod -aG proy123 usuarieX
# usermod -aG proy123 usuarieY
# usermod -aG proy123 usuarieZ
</pre>
4. Group information (description of the output of [https://askubuntu.com/questions/811513/output-of-getent-group getent], structure <code>[GroupName]:[PWDInf]:[GroupID]:[User1],...,[UserN]</code>)
<pre style="shell">
# getent group proy123
proy123:x:1234356:usuarieX,usuarieY,usuarieZ
</pre>
5. Give the group ownership of the directory
<pre style="shell">
# chgrp -R proy123 datosABC
# ls -l
d-wxr-x--x 2 root proy123 4096 Jul 16 11:14 datosABC
</pre>
For the deimos user to have read access to these data, the group has to exist on Deimos (see the corresponding section) with the same name and, above all, the same <code>[idGROUP]</code>.

It remains to be seen whether the 'Deimos' user needs to be created on 'Papa' (with the same id as on Deimos):
<pre style="shell">
# adduser --no-create-home --uid [uid_Deimos] --force-badname jupyter-usuarieX
# usermod -aG proy123 jupyter-usuarieX
</pre>

Creating the folder for the compilation (using the intel compiler and shared memory ''dmpar'') and where to deploy the code:
<PRE style="shell">
$ mkdir $INSTALLDIR/v210
$ cd $INSTALLDIR/v210
$ mkdir -p intel/dmpar
$ git clone --branch v2.1.0 https://gitlab.inria.fr/croco-ocean/croco.git croco-v2.1.0
Cloning into 'croco'...
remote: Enumerating objects: 52646, done.
remote: Counting objects: 100% (448/448), done.
remote: Compressing objects: 100% (263/263), done.
remote: Total 52646 (delta 268), reused 317 (delta 185), pack-reused 52198 (from 1)
Receiving objects: 100% (52646/52646), 300.67 MiB | 4.59 MiB/s, done.
Resolving deltas: 100% (40426/40426), done.
Note: switching to '291e8cd11b2329ee171c7a02c52c9c481dda981e'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false
$ ls croco
GRIF          create_config.bash  OBSTRUCTION  README.md         TEST_CASES
BENCH         MPI_NOLAND          OCEAN        requirements.txt  XIOS
CHANGELOG.md  MUSTANG             PISCES       SCRIPTS
</PRE>

= On deimos =
JupyterHub has its own installation of python and therefore of its packages. See the explanation [https://jakevdp.github.io/blog/2017/12/05/installing-python-packages-from-jupyter/ here].


<pre style="shell">
$ sudo su
# apt-get install git subversion
# apt-get install python3 python3-scipy python3-numpy cython3 cython3-dbg
# apt-get install netcdf-bin libnetcdf-dev netcdf-doc libnetcdff-dev libnetcdff-doc libhdf5-dev ncview cdo nco
# apt-get install dvipng python3-netcdf4
# apt-get install python3-matplotlib python3-matplotlib-dbg
# apt-get install python3-cartopy python-cartopy-data python3-mpltoolkits.basemap
# apt-get install firefox-esr firefox-esr-l10n-all
# apt-get install gfortran
# apt-get install imagemagick
# apt-get install gcc-multilib
</pre>

The jupyterHub python of the user (?) is this one, <code>/opt/tljh/user/bin/python3</code>; therefore packages have to be installed using the <code>pip</code> from that directory:
<pre style="shell">
# /opt/tljh/user/bin/pip3 install numpy
# /opt/tljh/user/bin/pip3 install netcdf4
# /opt/tljh/user/bin/pip3 install matplotlib
# /opt/tljh/user/bin/conda install gcc
# apt-get install libgeos-dev
# /opt/tljh/user/bin/pip3 install cartopy
</pre>

It remains to be seen whether other users will see these packages, or how to do it from an environment.

To compile, first we set up the environment for intel compilation:
<PRE style="shell">
$ source /opt/load-libs.sh 1

The following libraries, compiled with Intel 2021.4.0 compilers, were loaded:
* MPICH 3.4.2
* NetCDF 4
* HDF5 1.10.5
* JASPER 2.0.33

To change it please logout and login again.

To load this libraries from within a script add this line to script:
source /opt/load-libs.sh 1
$ cd croco
</PRE>

== JupyterHub ([https://jupyter.org/hub https://jupyter.org/hub]) ==


=== Installing server ([https://jupyter.org/hub jupyter-server]) ===

We create an ASCII file in /share/CROCO/v210/intel with the specifics of the compilation with intel on hydra, named <CODE>CROCO_hydra_intel.env</CODE> (following the instructions from [https://croco-ocean.gitlabpages.inria.fr/croco_doc/tutos/tutos.00.env.html tutorial 00]):
<PRE>
#set environment variables for croco in HYDRA to be compiled with intel
source /opt/load-libs.sh 1
export CC=icc
export FC=ifort
export F90=ifort
export F77=ifort
export NETCDF=/opt/netcdf/netcdf-4/intel/2021.4.0
export PATH=$NETCDF/bin:${PATH}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${NETCDF}/lib
</PRE>

Loading the compilation environment:
<PRE style="shell">
$ source CROCO_hydra_intel.env
</PRE>

== Kernels ==
The jupyter system is based on notebooks / kernels for the different programming languages.


=== Installing a bash kernel ===
This section describes the steps followed to install a bash kernel: [[papa-deimos/BashKernelIns]].

=== Installing an R kernel ===
Installing [https://www.r-project.org/ R] on deimos is described on this page: [[papa-deimos/instalacion/RKernelIns]].
<pre style="shell">
sudo apt-get install r-base r-base-core r-base-core-dbg r-base-dev
</pre>

== RUNNING CROCO TEST CASES ==

== BASIN ==
The simplest installation is the 'basin' configuration. We follow these instructions: [https://croco-ocean.gitlabpages.inria.fr/croco_doc/tutos/tutos.04.test_cases.basin.html basin]. Assuming that the source code resides in <CODE>CROCOsrc=/share/CROCO/v210/intel/serie/croco</CODE>:
<PRE style="shell">
$ CROCOsrc=/share/CROCO/v210/intel/serie/croco
$ cd ${CROCOsrc}
$ mkdir -p CONFIGS/BASIN
$ cp OCEAN/cppdefs.h CONFIGS/BASIN
$ cp OCEAN/param.h CONFIGS/BASIN
$ cp OCEAN/jobcomp CONFIGS/BASIN
$ cd CONFIGS/BASIN
</PRE>


Creating the irkernel in JupyterHub following these [https://irkernel.github.io/installation/ instructions]:
<pre style="shell">
</pre>

We need to edit the configuration files. Editing the file with the configuration:
<PRE style="shell">
$ cp cppdefs.h cppdefs.h_orig
$ diff cppdefs.h cppdefs.h_orig
18c18
< #define  BASIN          /* Basin Example */
---
> #undef  BASIN          /* Basin Example */
56c56
< #undef REGIONAL        /* REGIONAL Applications */
---
> #define REGIONAL        /* REGIONAL Applications */
</PRE>
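The same two edits can be applied non-interactively; a sketch with <code>sed</code> (the patterns are the exact lines shown in the diff above; GNU sed is assumed):

```shell
# Apply the BASIN edits to cppdefs.h without opening an editor; the
# patterns match the exact lines shown in the diff above.
if [ -f cppdefs.h ]; then
    sed -i -e 's|#undef  BASIN|#define  BASIN|' \
           -e 's|#define REGIONAL|#undef REGIONAL|' cppdefs.h
fi
```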


== Adding users ==
Everybody with an account on CIMA's computing resources only has to ask Lluís to open an account on the jupyterHub. Everybody from the DCAO has to ask Lluís to open an account on the system and on the jupyterHub.

On deimos:
<pre style="shell">
# useradd -MN [NombreUsuario]
# passwd [NombreUsuario]
</pre>

Editing the file with the size of the domain and the configuration of the parallel environment:
<PRE style="shell">
$ cp param.h param.h_orig
$ diff param.h param.h_orig
</PRE>

Editing the file with the compilation:
<PRE style="shell">
$ cp jobcomp jobcomp_orig
$ diff jobcomp jobcomp_orig
34,35c34
< CROCOsrc=/share/CROCO/v210/intel/serie/croco
< SOURCE1=${CROCOsrc}/OCEAN
---
> SOURCE1=../croco/OCEAN
45c45
< FC=ifort
---
> FC=gfortran
</PRE>


<!-- For andamos
# groupadd -g 900001 andamos
# useradd -MN -g andamos andex.andamos
# passwd [pswd] andex.andamos

# groupadd -g 90002 guests
# useradd -MN -g guests clementine.junquas
# passwd [pwd] clementine.junquas
-->

And compile:
<PRE style="shell">
$ ./jobcomp > jobcomp.log
</PRE>

If everything went fine, one should have:
<PRE style="shell">
$ tail -n 15 jobcomp.log
mv a.out croco

    .,:looddddddd:            ,;'
  .cdddo;;oo'....            .ldd;
,ddddd,                    ;odoc.
.oddddo:              .'';odoc
'dddddd;':,  .;;  .;;. :ccdo;
.oddddddddooooddoooddooodl.
.oddddddddddddddddddddddl;;
  .,ldddddddddddddddddddoc:lolo'..
    ;ddc;::::::::::codo,    .';:,
  .ccc            ,cc,

CROCO is OK
$ ls -l croco
-rwxrwxr-x 1 lluis.fita cima 1012160 Jul 15 15:11 croco
</PRE>
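The success check can be scripted; a sketch that relies on the 'CROCO is OK' line shown in the log above:

```shell
# Report success only when the compilation log carries the marker line
# shown above; does nothing if the log does not exist.
if [ -f jobcomp.log ] && grep -q 'CROCO is OK' jobcomp.log; then
    echo "CROCO build OK"
fi
```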


From the jupyterHUB, activate the account of the new user.

{| class="wikitable" style="margin:auto"
|-
! IMPORTANT NOTES
|-
| 1. do not run the model in the same place where you compiled the source
|-
| 2. Always use a queue-system job to run simulations, never run without it
|-
| 3. Always run in your '''salidas/''' folder, never in <CODE>$HOME</CODE>
|}

== UPWELLING ==
Compilation of the upwelling test case in distributed memory (dmpar) with intel.

== Restricted access to data ==
Restricted access to data is implemented by assigning specific groups to the data; only the users belonging to that group can access them. For that:
# <code>Papa</code> and <code>Deimos</code> have to have the same group (name and ID)
# The directory holding the data has to have at least execute permission for everybody
<pre style="shell">
# ls -l [Directorio]
drwxr-x--x 2 root [grupo] 4096 Jun 23 12:56 [Directorio]
</pre>
# While the files will be readable only for the [grupo]
<pre style="shell">
# ls -l [Directorio]/[Archivo]
-rwxr-x--- 1 root [grupo] 105817 Jun 23 11:48 [Directorio]/[Archivo]
</pre>
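The directory scheme above can also be expressed as a numeric mode; a minimal demonstration (the directory name is just an example, and GNU <code>stat</code> is assumed):

```shell
# The directory permissions above correspond to numeric mode 751:
# rwx for root, r-x for the group, --x (traverse only) for everybody else.
mkdir -p datosABC_demo
chmod 751 datosABC_demo
stat -c '%A' datosABC_demo    # prints drwxr-x--x
```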


After the creation of a restricted-access data group on Papa, for a user to have access to the files:
# The same group must exist on Deimos with the same group name and id
# The user <code>jupyter-[NombreUsuarie]</code> must be associated to the group

Otherwise, from a jupyter <code>shell</code> window one obtains:
<pre style="shell">
ls DatosABC
ls: cannot open directory 'DatosABC': Permission denied
</pre>

<!--
SHARED MEMORY
export OMP_SCHEDULE=static
export OMP_NUM_THREADS=40
export OMP_DYNAMIC=false
export OMP_NESTED=false
-->

<PRE>
$ source /opt/load-libs.sh 1

The following libraries, compiled with Intel 2021.4.0 compilers, were loaded:
* MPICH 3.4.2
* NetCDF 4
* HDF5 1.10.5
* JASPER 2.0.33
$ mkdir /share/CROCO/v210/intel/dmpar/
$ git clone --branch v2.1.0 https://gitlab.inria.fr/croco-ocean/croco.git croco
$ cd croco
$ CROCOsrc=/share/CROCO/v210/intel/dmpar/croco
</PRE>


We take the same example group created on Papa, <code>proy123</code>, group id: 123456.

Copying the configuration files:
<PRE style="shell">
$ mkdir -p CONFIGS/Upwelling
$ cp OCEAN/cppdefs.h CONFIGS/Upwelling
$ cp OCEAN/param.h CONFIGS/Upwelling
$ cp OCEAN/jobcomp CONFIGS/Upwelling
$ cd CONFIGS/Upwelling
</PRE>


1. Check pre-existing groups
<pre style="shell">
# getent group | grep proy123
# getent group | grep 123456
</pre>
'''NOTE:''' If the group id already exists on 'Deimos', the group id on 'Papa' will have to be changed (updating all the associated files) with these instructions (following [https://unix.stackexchange.com/questions/33844/change-gid-of-a-specific-group Stack Overflow]):
<pre style="shell">
# groupmod -g [nuevoIDgrupo] proy123
# find / -gid [viejoIDgrupo] ! -type l -exec chgrp [nuevoIDgrupo] {} \;
</pre>

Configuring the case:
<PRE style="shell">
$ cp cppdefs.h cppdefs.h_orig
$ diff cppdefs.h cppdefs.h_orig
29c29
< #define  UPWELLING      /* Upwelling Example */
---
> #undef  UPWELLING      /* Upwelling Example */
56c56
< #undef REGIONAL        /* REGIONAL Applications */
---
> #define REGIONAL        /* REGIONAL Applications */
1037c1037
< # define MPI
---
> # undef  MPI

$ cp param.h param.h_orig
$ diff param.h param.h_orig

$ cp jobcomp jobcomp_orig
$ diff jobcomp jobcomp_orig
34,35c34
< CROCOsrc=/share/CROCO/v210/intel/dmpar/croco
< SOURCE1=${CROCOsrc}/OCEAN
---
> SOURCE1=../croco/OCEAN
46c45
< FC=ifort
---
> FC=gfortran
52,53c51,52
< MPILIB="-L/opt/mpich/mpich-3.4.2/intel/2021.4.0/lib"
< MPIINC="-I/opt/mpich/mpich-3.4.2/intel/2021.4.0/include"
---
> MPILIB=""
> MPIINC=""
</PRE>


Compiling in parallel:
<PRE style="shell">
$ ./jobcomp >& jobcomp.log
$ tail -n 15 jobcomp.log
mv a.out croco

    .,:looddddddd:            ,;'
  .cdddo;;oo'....            .ldd;
,ddddd,                    ;odoc.
.oddddo:              .'';odoc
'dddddd;':,  .;;  .;;. :ccdo;
.oddddddddooooddoooddooodl.
.oddddddddddddddddddddddl;;
  .,ldddddddddddddddddddoc:lolo'..
    ;ddc;::::::::::codo,    .';:,
  .ccc            ,cc,

CROCO is OK
$ ls -l croco
-rwxrwxr-x 1 lluis.fita cima 1184600 Jul 15 16:34 croco
</PRE>

2. Create the group
<pre style="shell">
# groupadd -g 123456 proy123
</pre>
3. Assign the users
<pre style="shell">
# usermod -aG proy123 jupyter-usuarieX
# usermod -aG proy123 jupyter-usuarieY
# usermod -aG proy123 jupyter-usuarieZ
</pre>


4. Verify
<pre style="shell">
# getent group proy123
proy123:x:123456:jupyter-usuarieX,jupyter-usuarieY,jupyter-usuarieZ
</pre>
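Membership can also be checked per user with <code>id</code>; a small helper sketch (the user and group names are the example ones from above):

```shell
# Check whether a user already belongs to a group; 'id -nG' prints the
# user's group names, turned into one name per line for an exact match.
in_group() {   # usage: in_group <user> <group>
    id -nG "$1" | tr ' ' '\n' | grep -qx "$2"
}

in_group jupyter-usuarieX proy123 || echo "jupyter-usuarieX is not in proy123 yet"
```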


== Note on the 'flavor' of netCDF files ==
netCDF files can come in different flavours; these are: NETCDF3_CLASSIC, NETCDF3_64BIT_OFFSET, NETCDF3_64BIT_DATA, NETCDF4_CLASSIC and NETCDF4.

{| class="wikitable" style="margin:auto"
|-
! IMPORTANT NOTES
|-
| 1. do not run the model in the same place where you compiled the source
|-
| 2. Always use a queue-system job to run simulations, never run without it
|-
| 3. Always run in your '''salidas/''' folder, never in <CODE>$HOME</CODE>
|}


Although the direct tools of the library (e.g.: ncdump, nccopy, nccreate, ...) have no problem opening the files, we can run into problems with the python <CODE>netCDF4</CODE> library.

When executing a script from the notebook, the following error message can appear:
<PRE>
(...)
OSError                                  Traceback (most recent call last)
Cell In [1], line 246
    244 ifile = 0
    245 dtprev = 0
--> 246 oplnc = NetCDFFile(expplfiles[0], 'r')
    247 oua = oplnc.variables['U_PL']
    248 ova = oplnc.variables['V_PL']

File src/netCDF4/_netCDF4.pyx:2470, in netCDF4._netCDF4.Dataset.__init__()

File src/netCDF4/_netCDF4.pyx:2107, in netCDF4._netCDF4._ensure_nc_success()

OSError: [Errno -101] NetCDF: HDF error: '/datos/MOD/EXPS/Inundaciones_RioGrandeSoul/sims/control/wrfpress_d01_2024-04-15_00:00:00'
</PRE>

= Example of use =

== serie: BASIN ==
How to run a basic example:
<PRE style="shell">
$ cd $HOME/salidas
$ mkdir CROCO
$ cd CROCO
$ mkdir tests
$ cd tests/
$ cp /share/tools/workflows/direct/CROCO/hydra/launch_ideal_test_serie.pbs ./
</PRE>

Link the necessary files, editing them if needed:
<PRE style="shell">
$ ln -s $CROCOsrc/CONFIGS/BASIN/croco ./
$ cp $CROCOsrc/TEST_CASES/croco.in.Basin ./
</PRE>


But the file exists and has the format:
<PRE style="shell">
$ file /datos/MOD/EXPS/Inundaciones_RioGrandeSoul/sims/control/wrfpress_d01_2024-04-15_00:00:00
/datos/MOD/EXPS/Inundaciones_RioGrandeSoul/sims/control/wrfpress_d01_2024-04-15_00:00:00: Hierarchical Data Format (version 5) data
</PRE>

And execute:
<PRE style="shell">
$ qsub launch_ideal_test_serie.pbs
70619.hydra
$ qstat
Job ID                    Name            User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
(...)
70619.hydra                CROCO4L_ideal    lluis.fita            0 R larga
</PRE>
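The contents of <CODE>launch_ideal_test_serie.pbs</CODE> are not shown here; a hypothetical minimal job script of that kind (the real one on hydra will differ in job name, resources and paths):

```shell
# Hypothetical minimal PBS job script; the real launch_ideal_test_serie.pbs
# on hydra will differ (job name, resources, paths).
cat > launch_demo.pbs << 'EOF'
#PBS -N CROCO_demo
#PBS -q larga
#PBS -l nodes=1:ppn=1
cd $PBS_O_WORKDIR
source /opt/load-libs.sh 1
./croco croco.in > croco.out
EOF
```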


Looking for the current step:
<PRE style="shell">
$ tail croco.out
    2698  299.77778 8.942712777E-05-3.3148734E+00-3.3147840E+00 5.0400000E+16  0
    2699  299.88889 8.942810068E-05-3.3148728E+00-3.3147833E+00 5.0400000E+16  0
    2700  300.00000 8.942902900E-05-3.3148721E+00-3.3147827E+00 5.0400000E+16  0
      WRT_HIS -- wrote history fields into time record =  31 /   31
    2701  300.11111 8.942991352E-05-3.3148715E+00-3.3147820E+00 5.0400000E+16  0
    2702  300.22222 8.943075502E-05-3.3148708E+00-3.3147814E+00 5.0400000E+16  0
    2703  300.33333 8.943155431E-05-3.3148701E+00-3.3147807E+00 5.0400000E+16  0
    2704  300.44444 8.943231222E-05-3.3148695E+00-3.3147800E+00 5.0400000E+16  0
    2705  300.55556 8.943302959E-05-3.3148688E+00-3.3147794E+00 5.0400000E+16  0
    2706  300.66667 8.943370728E-05-3.3148682E+00-3.3147787E+00 5.0400000E+16  0
</PRE>

While a file that <CODE>netCDF4</CODE> has no problem opening has the format:
<PRE style="shell">
$ file /datos/MOD/analysis/ECMWF/ECMWF-AN_pl20240418-30.nc
/datos/MOD/analysis/ECMWF/ECMWF-AN_pl20240418-30.nc: NetCDF Data Format data (64-bit offset)
</PRE>
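The flavour can also be queried directly; a small helper sketch around <CODE>ncdump -k</CODE> (part of the netCDF binary programs, like the tools mentioned above):

```shell
# Print the flavour ('kind') of a netCDF file, e.g. 'classic',
# '64-bit offset' or 'netCDF-4'; requires the netCDF binaries (ncdump).
nc_flavour() {
    ncdump -k "$1"
}
```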


In order to change the 'flavour' of the problematic file, the <CODE>nccopy</CODE> tool can be used, which comes as part of the binary programs of the netCDF libraries (like <CODE>ncdump</CODE>):
<PRE style="shell">
$ nccopy -k '64-bit offset' ./wrfout_d01_2024-04-15_00:00:00 ./wrfout_d01_2024-04-15_00:00:00.nc64
</PRE>
The extension is not needed; it is only used here to create a new file.

Once finished:
<PRE style="shell">
$ qstat
Job ID                    Name            User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
(...)
70619.hydra                CROCO4L_ideal    lluis.fita      00:01:13 C larga

$ tail croco.out
    3239  359.88889 9.041893005E-05-3.3144938E+00-3.3144033E+00 5.0400000E+16  0
    3240  360.00000 9.041879739E-05-3.3144930E+00-3.3144026E+00 5.0400000E+16  0
      WRT_HIS -- wrote history fields into time record =  37 /   37

MAIN - number of records written into history  file(s):  37
        number of records written into restart  file(s):    0

MAIN: DONE

$ ls -lrt
total 15576
-rw-rw-r-- 1 lluis.fita cima     1412 Jul 15 15:40 launch_ideal_test_serie.pbs
lrwxrwxrwx 1 lluis.fita cima       55 Jul 15 15:44 croco -> /share/CROCO/v210/intel/serie/croco/CONFIGS/BASIN/croco
-rw-rw-r-- 1 lluis.fita cima     3361 Jul 15 15:46 croco.in.Basin
-rw------- 1 lluis.fita cima     3361 Jul 15 15:47 croco.in
-rw------- 1 lluis.fita cima      824 Jul 15 15:48 CROCO4L_ideal.o70619
-rw------- 1 lluis.fita cima 15653792 Jul 15 15:48 basin_his.nc
-rw------- 1 lluis.fita cima   274775 Jul 15 15:48 croco.out
</PRE>

== Adding specific libraries/software ==
This section details the steps followed to install specific libraries and/or software. These libraries have to be installed for each user from their jupyter session.

=== PyNCplot3 ===
Generic libraries of Lluís Fita Borrell, CIMA: [https://git.cima.fcen.uba.ar/lluis.fita/pyncplot/-/wikis/home PyNCplot]

From a <CODE>bash</CODE> session of the jupyterHUB:
<PRE style="shell">
git clone -b numpy20 https://git.cima.fcen.uba.ar/lluis.fita/pyncplot.git PyNCplot3
cd PyNCplot3
ln -s Makefile.deimos-jupyterHub ./Makefile
make all >& run_make.log
ls *so
module_ForDef.cpython-39-x86_64-linux-gnu.so
module_ForDiag.cpython-39-x86_64-linux-gnu.so
module_ForDistriCorrect.cpython-39-x86_64-linux-gnu.so
module_ForGen.cpython-39-x86_64-linux-gnu.so
module_ForInt.cpython-39-x86_64-linux-gnu.so
module_ForSci.cpython-39-x86_64-linux-gnu.so
</PRE>
Then, from a working directory of the specific study (e.g. <CODE>EstudioXY12</CODE>) stored within the <CODE>estudios</CODE> directory of the user's jupyterHUB home, and from the same <CODE>bash</CODE> notebook:
<PRE style="shell">
cd
mkdir -p estudios/EstudioXY12
cd estudios/EstudioXY12
/home/jupyter-[usuario_JupyterHUB]/PyNCplot3/link_essentials_PWD.bash
</PRE>
Now a python notebook can be opened in the <CODE>estudios/EstudioXY12</CODE> directory (it will appear in the user's jupyterHUB home) and the PyNCplot scripts can be used without problems.

== dmpar: Upwelling ==
How to run a basic example:
<PRE style="shell">
$ cd $HOME/salidas
$ mkdir CROCO
$ cd CROCO
$ mkdir mpi
$ cd mpi/
$ cp /share/tools/workflows/direct/CROCO/hydra/launch_ideal_test_mpi.pbs ./
</PRE>


=== Digital Earths - Global Hackathon ===
During the week of the 13th to the 16th of May the [https://www.wcrp-esmo.org/activities/wcrp-global-km-scale-hackathon-2025 Digital Earths - Global Hackathon] took place, in which CIMA/IFAECI participated by organizing a node in [https://git.cima.fcen.uba.ar/gvieytes/global-hackathon/-/wikis/home CABA]. For that activity the installation of a python environment was needed, which was made accessible to all Deimos users. These are the steps followed:

First download the Hackathon's repository as root in <CODE>/opt/conda/mamba/DE-global_hackathon</CODE>:
<PRE style="shell">
# git clone https://github.com/digital-earths-global-hackathon/tools.git
</PRE>

Link the necessary files, editing them if needed:
<PRE style="shell">
$ ln -s $CROCOsrc/CONFIGS/Upwelling/croco ./
$ cp $CROCOsrc/TEST_CASES/croco.in.Upwelling ./
</PRE>

And execute:
<PRE style="shell">
$ qsub launch_ideal_test_mpi.pbs
70649.hydra
$ qstat
Job ID                    Name            User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
(...)
70645.hydra              CROCO4L_ideal_  lluis.fita            0 R larga
</PRE>


Go to the directory with the installation, make a copy for <CODE>deimos</CODE> and edit the prefix:
<PRE style="shell">
# cd tools/python_envs
# cp install_python install_python_deimos
# diff install_python install_python_deimos
4c4,5
< prefix=${prefix:-$HOME/python_envs}
---
> #prefix=${prefix:-$HOME/python_envs}
> prefix=/opt/conda/mamba/DE-global_hackathon
# ./install_python_deimos
</PRE>
We got the right installation:
<PRE style="shell">
# ls /opt/conda/mamba/DE-global_hackathon/
miniconda3  tools
</PRE>

Once finished:
<PRE style="shell">
$ tail -3 croco.out

MAIN: DONE

$ ls -lrt
total 6956
lrwxrwxrwx 1 lluis.fita cima      59 Jul 15 16:42 croco -> /share/CROCO/v210/intel/dmpar/croco/CONFIGS/Upwelling/croco
-rw-rw-r-- 1 lluis.fita cima    1500 Jul 15 16:42 croco.in.Upwelling
-rw------- 1 lluis.fita cima    1500 Jul 15 16:43 croco.in
-rwxrwxr-x 1 lluis.fita cima    9328 Jul 15 16:43 a.out
-rw-rw-r-- 1 lluis.fita cima    1448 Jul 15 16:50 launch_ideal_test_mpi.pbs
-rw------- 1 lluis.fita cima  238630 Jul 15 16:50 CROCO4L_ideal_mpi.o70634
-rw------- 1 lluis.fita cima  717220 Jul 15 16:50 upwelling_rst.nc
-rw------- 1 lluis.fita cima 3316688 Jul 15 16:50 upwelling_his.nc
-rw------- 1 lluis.fita cima 2771248 Jul 15 16:50 upwelling_avg.nc
-rw------- 1 lluis.fita cima   42450 Jul 15 16:50 croco.out
</PRE>


Registering the new kernel ([https://jupyterhub.readthedocs.io/en/stable/howto/configuration/config-user-env.html jupyterhub-doc] and [https://discourse.jupyter.org/t/add-multiple-python-kernels-like-3-6-x-and-3-7-x-to-jupyterhub/5458/3 multiple-kernels]):
<PRE style="shell">
# /opt/tljh/user/bin/jupyter kernelspec list
Available kernels:
  python3            /opt/tljh/user/share/jupyter/kernels/python3
  bash               /usr/local/share/jupyter/kernels/bash
  ir                 /usr/local/share/jupyter/kernels/ir
  python_conda_su    /usr/local/share/jupyter/kernels/python_conda_su
# /opt/tljh/user/bin/jupyter-kernelspec remove python_conda_su
Kernel specs to remove:
  python_conda_su    /usr/local/share/jupyter/kernels/python_conda_su
Remove 1 kernel specs [y/N]: y
# /opt/tljh/user/bin/jupyter-kernelspec list
Available kernels:
  python3    /opt/tljh/user/share/jupyter/kernels/python3
  bash       /usr/local/share/jupyter/kernels/bash
  ir         /usr/local/share/jupyter/kernels/ir
# /opt/conda/mamba/DE-global_hackathon/miniconda3/bin/mamba install ipykernel
# /opt/conda/mamba/DE-global_hackathon/miniconda3/envs/easy/bin/python3 -m ipykernel install
Installed kernelspec python3 in /usr/local/share/jupyter/kernels/python3
# /opt/tljh/user/bin/jupyter-kernelspec install /usr/local/share/jupyter/kernels/python3 --name=python_DigEarth-Hack
[InstallKernelSpec] Installed kernelspec python_digearth-hack in /usr/local/share/jupyter/kernels/python_digearth-hack
# /opt/tljh/user/bin/jupyter-kernelspec list
Available kernels:
  python3                 /opt/tljh/user/share/jupyter/kernels/python3
  bash                    /usr/local/share/jupyter/kernels/bash
  ir                      /usr/local/share/jupyter/kernels/ir
  python_digearth-hack    /usr/local/share/jupyter/kernels/python_digearth-hack
</PRE>

== RUNNING A REAL SIMULATION ==

=== v2.1: climatological simulation ===
Example of use of CROCO in a real set-up to simulate a short period.

For that we need:
* Atmospheric forcings:
* Bathymetry:
*

=== v2.1: multi-year simulation ===
Example of use of CROCO in a real set-up to simulate multiple years recycling the atmospheric forcing.

For that we need:
* Atmospheric forcings:
* Bathymetry:
*

== v1.1 - Nicolás Aubone modified ==
Here we run a real simulation with croco version 1.1.

It is a simulation of one climatological year in the San Matías Gulf, Argentina.

Be careful: this model was modified from its original code (we added a bi-dimensional logarithmic bottom friction).

Creating the folder where to deploy the code (using the intel compiler and shared memory ''dmpar''):
<PRE style="shell">
$ INSTALLDIR=/share/CROCO
$ mkdir $INSTALLDIR/v11
$ cd $INSTALLDIR/v11
$ mkdir -p intel/dmpar
$ cd intel/dmpar
$ tar xvfz [CROCOv1.1_NicolasAubone].tar.gz
$ ls croco
AGRIF            OCEAN                      run_simulation_openmp.bash
create_run.bash  PISCES                     SCRIPTS
CVTK             readme_version_croco.txt   TEST_CASES
DOC_SPHINX       run_multi_simulation.bash  XIOS
MPP_PREP         run_simulation.bash
</PRE>


Create the "CONFIGS" folder to store our different simulations there. Create the "GSM_CLIMATOLOGY" folder to store our simulation config files and the model compilation. Copy into the simulation folder "GSM_CLIMATOLOGY" our configuration files to compile and run the model:
<PRE style="shell">
$ CROCOsrc=$INSTALLDIR/v110/intel/dmpar/croco
$ cd ${CROCOsrc}
$ mkdir -p CONFIGS/GSM_CLIMATOLOGY
$ cd CONFIGS/GSM_CLIMATOLOGY
$ cp [file_location]/cppdefs.h ./
$ cp [file_location]/param.h ./
$ cp [file_location]/jobcomp ./
</PRE>
------

Editing the kernel name to be recognizable in the jupyterHUB api and setting up the environment (see [https://stackoverflow.com/questions/67914989/creating-virtual-environments-for-jupyterhub stackoverflow]):
<PRE style="shell">
# vim /usr/local/share/jupyter/kernels/python_digearth-hack/kernel.json
"display_name": "Python 3 (D.E. Hackathon)",
"env": {
  "MAMBA_PREFIX":"/opt/conda/mamba/DE-global_hackathon/miniconda3/envs/easy",
  "MAMBA_DEFAULT_ENV":"easy",
  "PATH":"/opt/conda/mamba/DE-global_hackathon/miniconda3/envs/easy/bin:/opt/conda/mamba/DE-global_hackathon/miniconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
},
</PRE>
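Since <code>kernel.json</code> is edited by hand, it is worth checking afterwards that it is still valid JSON; a sketch (the path is the kernel installed above):

```shell
# Validate the hand-edited kernel spec; does nothing if the file is absent.
spec=/usr/local/share/jupyter/kernels/python_digearth-hack/kernel.json
if [ -f "$spec" ]; then
    python3 -m json.tool "$spec" > /dev/null && echo "kernel.json: valid JSON"
fi
```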


==== Adding new packages/libraries to an existing environment ====

To add new libraries, using mamba (the preferred way):
<PRE style="shell">
# /opt/conda/mamba/DE-global_hackathon/miniconda3/bin/mamba install -n easy pyflextrkr
</PRE>

Edit the cppdefs.h file to define an mpi run:
<PRE>
define  MPI
</PRE>
------
Edit the param.h file to set the mpi parallelization settings; in this case we use 60 nodes:
<PRE>
#ifdef MPI
integer NP_XI, NP_ETA, NNODES
parameter (NP_XI=10, NP_ETA=6, NNODES=NP_XI*NP_ETA)
</PRE>
------
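The decomposition set in param.h above works out as follows (just the arithmetic behind the NNODES parameter):

```shell
# NP_XI x NP_ETA tiles, one MPI process per tile.
NP_XI=10; NP_ETA=6
NNODES=$((NP_XI * NP_ETA))
echo "NNODES=$NNODES"    # NNODES=60
```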


Via pip:
<PRE style="shell">
# cd /opt/conda/mamba/DE-global_hackathon/miniconda3/envs/easy
# bin/pip3 install healpy
</PRE>

Edit the compilation file "jobcomp" with hydra parameters:
<PRE style="shell">
ROOT_DIR=/share/CROCO/v110/intel/dmpar/croco
SOURCE=${ROOT_DIR}'/OCEAN'
FC=ifort
MPIF90="mpif90"
MPILIB="-L/opt/mpich/mpich-3.4.2/intel/2021.4.0/lib"
MPIINC="-I/opt/mpich/mpich-3.4.2/intel/2021.4.0/include"
</PRE>
------


=== ATRACKCS ===
[https://github.com/ATRACKCS/ATRACKCS ATRACKCS] is a tool designed to track atmospheric systems. These are the steps followed for its installation on Deimos.

Create an ASCII file in /share/CROCO/v110/intel with the specifics of the compilation with intel on hydra, named <CODE>CROCO_hydra_intel.env</CODE> (following the instructions from [https://croco-ocean.gitlabpages.inria.fr/croco_doc/tutos/tutos.00.env.html tutorial 00]):
<PRE>
#set environment variables for croco in HYDRA to be compiled with intel
source /opt/load-libs.sh 1
export CC=icc
export FC=ifort
export F90=ifort
export F77=ifort
export NETCDF=/opt/netcdf/netcdf-4/intel/2021.4.0
export PATH=$NETCDF/bin:${PATH}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${NETCDF}/lib
</PRE>


Creating the location for the package:
<pre style="shell">
# cd /opt/conda/mamba/
# mkdir ATRACKCS
# cd ATRACKCS
</pre>

Loading the compilation environment:
<PRE style="shell">
$ source /share/CROCO/v110/intel/CROCO_hydra_intel.env
</PRE>
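After sourcing the environment file, a quick sanity check that the variables it exports are set (a sketch; it only warns, it does not abort):

```shell
# Warn about any compilation variable from CROCO_hydra_intel.env that is
# not set in the current shell.
for v in CC FC F90 F77 NETCDF; do
    eval "val=\${$v:-}"
    [ -n "$val" ] || echo "WARNING: $v is not set"
done
```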


Grabbing the software and preparing the installation via the creation of a new python environment:
<PRE style="shell">
# git clone https://github.com/alramirezca/ATRACKCS
# cd ATRACKCS
# /opt/conda/mamba/DE-global_hackathon/miniconda3/bin/mamba env create -f env_py3.yml
</PRE>

Run the compilation (jobcomp file):
<PRE style="shell">
$ ./jobcomp >& jobcomp.log
</PRE>
If the compilation is successful, you should have a croco executable in your directory. You will also find a Compile directory containing the model source files.


Activating the environment and finishing the installation:
<PRE style="shell">
# eval "$(/opt/conda/mamba/DE-global_hackathon/miniconda3/bin/mamba shell hook --shell bash)"
(base) # mamba activate atrackcs_py3
(atrackcs_py3) # pip install -e .
</PRE>

== Run the model ==
Copy the configuration file croco.in inside your simulation directory:
<PRE style="shell">
$ scp -p [croco.in] $CROCOsrc/CONFIGS/GSM_CLIMATOLOGY/
</PRE>


Registrando el nuevo kernel para deimos:
create a CROCO_FILES folder inside your simulation directory and copy the grid, forcing and ini files to it (croco_grd.nc,croco_clm.nc,croco_frc.nc and croco_ini.nc)
<PRE style="shell">
<PRE style="shell">
(atrackcs_py3) # pip3 install ipykernel
$ mkdir -p CROCO_FILES
(atrackcs_py3) # mamba deactivate
$scp -p [grid,ini and forcing files] $CROCOsrc/CONFIGS/GSM_CLIMATOLOGY/CROCO_FILES
(base) # mamba deactivate
# /opt/tljh/user/bin/jupyter kernelspec list
Available kernels:
  python3                /opt/tljh/user/share/jupyter/kernels/python3
  bash                    /usr/local/share/jupyter/kernels/bash
  ir                      /usr/local/share/jupyter/kernels/ir
  python_digearth-hack    /usr/local/share/jupyter/kernels/python_digearth-hack
# /opt/conda/mamba/DE-global_hackathon/miniconda3/envs/atrackcs_py3/bin/python3 -m ipykernel install
# /opt/tljh/user/bin/jupyter-kernelspec install /usr/local/share/jupyter/kernels/python3 --name=python_atrackcs
# /opt/tljh/user/bin/jupyter-kernelspec list
Available kernels:
  python3                /opt/tljh/user/share/jupyter/kernels/python3
  bash                    /usr/local/share/jupyter/kernels/bash
  ir                      /usr/local/share/jupyter/kernels/ir
  python_atrackcs        /usr/local/share/jupyter/kernels/python_atrackcs
  python_digearth-hack    /usr/local/share/jupyter/kernels/python_digearth-hack
</PRE>
</PRE>
 
Copy a job script in my "salidas" folder and edit it:
Editando el nombre del kernel para identificarlo desde deimos:
<PRE style="shell">
<PRE style="shell">
# vim /usr/local/share/jupyter/kernels/python_atrackcs/kernel.json
cd $HOME/salidas/croco/mpi
"display_name": "Python 3 (atrackcs)",
cp /share/tools/workflows/direct/CROCO/hydra/launch_ideal_test_mpi.pbs ./launch_gsm_clim_mpi.pbs
"env": {
  "MAMBA_PREFIX":"/opt/conda/mamba/DE-global_hackathon/miniconda3/envs/atrackcs_py3",
  "MAMBA_DEFAULT_ENV":"atrackcs_py3",
  "PATH":"/opt/conda/mamba/DE-global_hackathon/miniconda3/envs/atrackcs_py3/bin:/opt/conda/mamba/DE-global_hackathon/miniconda3/condabin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
},
</PRE>
</PRE>
<PRE style="shell">
# Amount of processors
Ntotprocs=60


=== ESMValTool ===
# Name of the file with the input of the ideal case
Herramientas para el análisis de datos CMIP. [https://esmvaltool.org/ ESMValTool] es una herramienta en python, abierta y desarrollada por la comunidad.
idealn=croco.in
CROCOsrc=/share/CROCO/v110/intel/dmpar/croco


Los pasos para su instalación se detallan acá [[/ESMValToolInst]].
</PRE>
copy the executable "croco" file and the config file "croco.in" to "salidas/croco/mpi":
<PRE style="shell">
$ cp /share/CROCO/v110/intel/dmpar/croco/CONFIGS/GSM_CLIMATOLOGY/croco ./
$ cp /share/CROCO/v110/intel/dmpar/croco/CONFIGS/GSM_CLIMATOLOGY/croco.in ./


<I>El sistema papa-deimos, se pudo constituir en parte, gracias a fondos del [https://www.insu.cnrs.fr/fr INSU] - [https://programmes.insu.cnrs.fr/lefe/ LEFE]</I>
</PRE>
 
And execute
<PRE style="shell">
$ qsub launch_gsm_clim_mpi.pbs

Current revision as of 18:35, 23 Oct 2025

Web page of the installation of the ocean model CROCO in CIMA's HPC 'hydra'

Installation

CIMA's HPC has a common folder to which everyone has access. This is where the CROCO model will be installed.

The installation goes in the INSTALLDIR=/share/CROCO folder.

Downloading the code (there is a GitLab repository, but for now we install tagged releases)

$ wget https://gitlab.inria.fr/croco-ocean/croco/-/archive/v2.1.0/croco-v2.1.0.tar.gz

Creating the compilation folder (using the intel compiler and distributed memory dmpar) and where to deploy the code

$ mkdir $INSTALLDIR/v210
$ cd $INSTALLDIR/v210
$ mkdir -p intel/dmpar
$ git clone --branch v2.1.0 https://gitlab.inria.fr/croco-ocean/croco.git croco
Cloning into 'croco'...
remote: Enumerating objects: 52646, done.
remote: Counting objects: 100% (448/448), done.
remote: Compressing objects: 100% (263/263), done.
remote: Total 52646 (delta 268), reused 317 (delta 185), pack-reused 52198 (from 1)
Receiving objects: 100% (52646/52646), 300.67 MiB | 4.59 MiB/s, done.
Resolving deltas: 100% (40426/40426), done.
Note: switching to '291e8cd11b2329ee171c7a02c52c9c481dda981e'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false
$ ls croco
AGRIF         create_config.bash  OBSTRUCTION  README.md         TEST_CASES
BENCH         MPI_NOLAND          OCEAN        requirements.txt  XIOS
CHANGELOG.md  MUSTANG             PISCES       SCRIPTS
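The detached-HEAD note above is expected when cloning a tag, and it is fine for building a fixed release. A toy repository reproduces the behaviour (a sketch; `demo` and `demo-clone` are hypothetical local names, not part of the CROCO install):

```shell
# Build a tiny local repository carrying a v2.1.0 tag
git init -q demo
git -C demo -c user.email=a@b -c user.name=a commit -q --allow-empty -m init
git -C demo tag v2.1.0

# Cloning a tag lands on a detached HEAD pinned to that release
git clone -q --branch v2.1.0 ./demo demo-clone
git -C demo-clone describe --tags    # prints: v2.1.0
```

If you ever need to commit on top of the release, `git switch -c <new-branch-name>` inside the clone turns the detached HEAD into a branch, as the advice text says.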

Time to compile: first we set up the environment for intel compilation

$ source /opt/load-libs.sh 1
The following libraries, compiled with Intel 2021.4.0 compilers, were loaded:
* MPICH 3.4.2
* NetCDF 4
* HDF5 1.10.5
* JASPER 2.0.33

To change it please logout and login again.

To load this libraries from within a script add this line to script:
source /opt/load-libs.sh 1
$ cd croco

We create an ASCII file in /share/CROCO/v210/intel with the specificities of the compilation with intel in hydra, named CROCO_hydra_intel.env (following the instructions from tutorial 00)

#set environment variables for croco in HYDRA to be compiled with intel
source /opt/load-libs.sh 1
export CC=icc
export FC=ifort 
export F90=ifort 
export F77=ifort 
export NETCDF=/opt/netcdf/netcdf-4/intel/2021.4.0 
export PATH=$NETCDF/bin:${PATH}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${NETCDF}/lib

Loading the compilation environment

$ source CROCO_hydra_intel.env
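The effect of sourcing such an env file can be exercised anywhere with a throwaway copy (a minimal sketch; `demo.env` mirrors the shape of the hydra file above, but its paths are only illustrative):

```shell
# Minimal stand-in for CROCO_hydra_intel.env (paths illustrative only)
cat > demo.env <<'EOF'
export FC=ifort
export NETCDF=/opt/netcdf/netcdf-4/intel/2021.4.0
export PATH=$NETCDF/bin:${PATH}
EOF

# Sourcing (not executing) makes the exports visible to the current shell,
# and therefore to jobcomp when launched from that same shell
. ./demo.env
echo "FC=$FC"
echo "NETCDF=$NETCDF"
```

This is why the file must be `source`d rather than run as a script: a script would export the variables only inside its own subshell.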

RUNNING CROCO TEST CASES

BASIN

The simplest installation is the 'basin' configuration. We follow the basin instructions, assuming that the source code resides in CROCOsrc=/share/CROCO/v210/intel/serie/croco

$ CROCOsrc=/share/CROCO/v210/intel/serie/croco
$ cd ${CROCOsrc}
$ mkdir -p CONFIGS/BASIN
cp OCEAN/cppdefs.h CONFIGS/BASIN
cp OCEAN/param.h CONFIGS/BASIN
cp OCEAN/jobcomp CONFIGS/BASIN
cd CONFIGS/BASIN

We need to edit the configuration files.

Editing the file with the configuration

$ cp cppdefs.h cppdefs.h_orig
$ diff cppdefs.h cppdefs.h_orig 
18c18
< #define  BASIN           /* Basin Example */
---
> #undef  BASIN           /* Basin Example */
56c56
< #undef REGIONAL        /* REGIONAL Applications */
---
> #define REGIONAL        /* REGIONAL Applications */
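The same two toggles can be applied non-interactively; a sketch with GNU sed against a hypothetical two-line cppdefs.h (the real file carries many more switches):

```shell
# Hypothetical miniature cppdefs.h holding just the two lines the diff touches
cat > cppdefs.h <<'EOF'
#undef  BASIN           /* Basin Example */
#define REGIONAL        /* REGIONAL Applications */
EOF

# Activate BASIN, deactivate REGIONAL (GNU sed in-place edit)
sed -i -e 's/#undef  BASIN/#define  BASIN/' \
       -e 's/#define REGIONAL/#undef REGIONAL/' cppdefs.h
grep '#define  BASIN' cppdefs.h
```

Only one test case should be `#define`d at a time; REGIONAL must be undefined for the idealized cases.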

Editing the file with the size of the domain and configuration of the parallel environment

$ cp param.h param.h_orig
$ diff param.h param.h_orig

Editing the file with the compilation

$ cp jobcomp jobcomp_orig
$ diff jobcomp jobcomp_orig 
34,35c34
< CROCOsrc=/share/CROCO/v210/intel/serie/croco
< SOURCE1=${CROCOsrc}/OCEAN
---
> SOURCE1=../croco/OCEAN
45c45
< FC=ifort
---
> FC=gfortran

And compile

$ ./jobcomp > jobcomp.log

If everything went fine, one should have:

$ tail -n 15 jobcomp.log 
mv a.out croco
                                   
                                   
    .,:looddddddd:            ,;'  
  .cdddo;;oo'....            .ldd; 
 ,ddddd,                    ;odoc. 
.oddddo:               .'';odoc    
'dddddd;':,  .;;  .;;. :ccdo;      
.oddddddddooooddoooddooodl.        
 .oddddddddddddddddddddddl;;       
  .,ldddddddddddddddddddoc:lolo'.. 
    ;ddc;::::::::::codo,     .';:, 
   .ccc             ,cc,           
                                   
CROCO is OK
$ ls -l croco
-rwxrwxr-x 1 lluis.fita cima 1012160 Jul 15 15:11 croco
IMPORTANT NOTES
1. Do not run the model in the same place where you compiled the source
2. Always use a queue-system job to run simulations, never run without it
3. Always run in your salidas/ folder never in $HOME

UPWELLING

Compilation of the upwelling test case in distributed memory (dmpar) with intel.


$ source /opt/load-libs.sh 1

The following libraries, compiled with Intel 2021.4.0 compilers, were loaded:
* MPICH 3.4.2
* NetCDF 4
* HDF5 1.10.5
* JASPER 2.0.33
$ mkdir /share/CROCO/v210/intel/dmpar/
$ git clone --branch v2.1.0 https://gitlab.inria.fr/croco-ocean/croco.git croco
$ cd croco
$ CROCOsrc=/share/CROCO/v210/intel/dmpar/croco

Copying configuration files

$ mkdir -p CONFIGS/Upwelling
cp OCEAN/cppdefs.h CONFIGS/Upwelling
cp OCEAN/param.h CONFIGS/Upwelling
cp OCEAN/jobcomp CONFIGS/Upwelling
cd CONFIGS/Upwelling

Configuring the case

$ cp cppdefs.h cppdefs.h_orig
$ diff cppdefs.h cppdefs.h_orig 
29c29
< #define  UPWELLING       /* Upwelling Example */
---
> #undef  UPWELLING       /* Upwelling Example */
56c56
< #undef REGIONAL        /* REGIONAL Applications */
---
> #define REGIONAL        /* REGIONAL Applications */
1037c1037
< # define MPI
---
> # undef  MPI

$ cp param.h param.h_orig
$ diff param.h param.h_orig 

$ cp jobcomp jobcomp_orig
$ diff jobcomp jobcomp_orig 
34,35c34
< CROCOsrc=/share/CROCO/v210/intel/dmpar/croco
< SOURCE1=${CROCOsrc}/OCEAN
---
> SOURCE1=../croco/OCEAN
46c45
< FC=ifort
---
> FC=gfortran
52,53c51,52
< MPILIB="-L/opt/mpich/mpich-3.4.2/intel/2021.4.0/lib"
< MPIINC="-I/opt/mpich/mpich-3.4.2/intel/2021.4.0/include"
---
> MPILIB=""
> MPIINC=""

Compiling in parallel

$ ./jobcomp >& jobcomp.log
$ tail -n 15 jobcomp.log
 mv a.out croco
                                   
                                   
    .,:looddddddd:            ,;'  
  .cdddo;;oo'....            .ldd; 
 ,ddddd,                    ;odoc. 
.oddddo:               .'';odoc    
'dddddd;':,  .;;  .;;. :ccdo;      
.oddddddddooooddoooddooodl.        
 .oddddddddddddddddddddddl;;       
  .,ldddddddddddddddddddoc:lolo'.. 
    ;ddc;::::::::::codo,     .';:, 
   .ccc             ,cc,           
                                   
CROCO is OK

$ ls -l croco
-rwxrwxr-x 1 lluis.fita cima 1184600 Jul 15 16:34 croco


IMPORTANT NOTES
1. Do not run the model in the same place where you compiled the source
2. Always use a queue-system job to run simulations, never run without it
3. Always run in your salidas/ folder never in $HOME

Example of use

serie: BASIN

How to run a basic example

$ cd $HOME/salidas
$ mkdir CROCO
$ cd CROCO
$ mkdir tests
$ cd tests/
$ cp /share/tools/workflows/direct/CROCO/hydra/launch_ideal_test_serie.pbs ./

Link the necessary files, editing them if needed:

$ ln -s $CROCOsrc/CONFIGS/BASIN/croco ./
$ cp $CROCOsrc/TEST_CASES/croco.in.Basin ./

And execute

$ qsub launch_ideal_test_serie.pbs
70619.hydra
$ qstat
Job ID                    Name             User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
(...)
70619.hydra                CROCO4L_ideal    lluis.fita             0 R larga 

Looking for the current step:

$ tail croco.out 
    2698   299.77778 8.942712777E-05-3.3148734E+00-3.3147840E+00 5.0400000E+16  0
    2699   299.88889 8.942810068E-05-3.3148728E+00-3.3147833E+00 5.0400000E+16  0
    2700   300.00000 8.942902900E-05-3.3148721E+00-3.3147827E+00 5.0400000E+16  0
      WRT_HIS -- wrote history fields into time record =   31 /   31
    2701   300.11111 8.942991352E-05-3.3148715E+00-3.3147820E+00 5.0400000E+16  0
    2702   300.22222 8.943075502E-05-3.3148708E+00-3.3147814E+00 5.0400000E+16  0
    2703   300.33333 8.943155431E-05-3.3148701E+00-3.3147807E+00 5.0400000E+16  0
    2704   300.44444 8.943231222E-05-3.3148695E+00-3.3147800E+00 5.0400000E+16  0
    2705   300.55556 8.943302959E-05-3.3148688E+00-3.3147794E+00 5.0400000E+16  0
    2706   300.66667 8.943370728E-05-3.3148682E+00-3.3147787E+00 5.0400000E+16  0
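Beyond `tail`, the progress of the run can be gauged by counting how many history records have been written; a sketch using a stand-in `croco.out` (the real file is produced by the run above):

```shell
# Stand-in croco.out with one WRT_HIS line, as emitted by the model
printf 'WRT_HIS -- wrote history fields into time record =   31 /   31\n' > croco.out

# Each WRT_HIS line marks one history record written to basin_his.nc
grep -c 'WRT_HIS' croco.out    # prints: 1
```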

Once finished:

$ qstat
Job ID                    Name             User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
(...)
70619.hydra                CROCO4L_ideal    lluis.fita      00:01:13 C larga          

$ tail croco.out 
    3239   359.88889 9.041893005E-05-3.3144938E+00-3.3144033E+00 5.0400000E+16  0
    3240   360.00000 9.041879739E-05-3.3144930E+00-3.3144026E+00 5.0400000E+16  0
      WRT_HIS -- wrote history fields into time record =   37 /   37

 MAIN - number of records written into history  file(s):   37
        number of records written into restart  file(s):    0


 MAIN: DONE

$ ls -lrt
total 15576
-rw-rw-r-- 1 lluis.fita cima     1412 Jul 15 15:40 launch_ideal_test_serie.pbs
lrwxrwxrwx 1 lluis.fita cima       55 Jul 15 15:44 croco -> /share/CROCO/v210/intel/serie/croco/CONFIGS/BASIN/croco
-rw-rw-r-- 1 lluis.fita cima     3361 Jul 15 15:46 croco.in.Basin
-rw------- 1 lluis.fita cima     3361 Jul 15 15:47 croco.in
-rw------- 1 lluis.fita cima      824 Jul 15 15:48 CROCO4L_ideal.o70619
-rw------- 1 lluis.fita cima 15653792 Jul 15 15:48 basin_his.nc
-rw------- 1 lluis.fita cima   274775 Jul 15 15:48 croco.out

dmpar: Upwelling

How to run a basic example

$ cd $HOME/salidas
$ mkdir CROCO
$ cd CROCO
$ mkdir mpi
$ cd mpi/
$ cp /share/tools/workflows/direct/CROCO/hydra/launch_ideal_test_mpi.pbs ./

Link the necessary files, editing them if needed:

$ ln -s $CROCOsrc/CONFIGS/Upwelling/croco ./
$ cp $CROCOsrc/TEST_CASES/croco.in.Upwelling ./

And execute

$ qsub launch_ideal_test_mpi.pbs
70649.hydra
$ qstat
Job ID                    Name             User            Time Use S Queue
------------------------- ---------------- --------------- -------- - -----
(...)
70645.hydra               CROCO4L_ideal_   lluis.fita             0 R larga 

Once finished

$ tail -3 croco.out 

 MAIN: DONE

$ ls -lrt
total 6956
lrwxrwxrwx 1 lluis.fita cima      59 Jul 15 16:42 croco -> /share/CROCO/v210/intel/dmpar/croco/CONFIGS/Upwelling/croco
-rw-rw-r-- 1 lluis.fita cima    1500 Jul 15 16:42 croco.in.Upwelling
-rw------- 1 lluis.fita cima    1500 Jul 15 16:43 croco.in
-rwxrwxr-x 1 lluis.fita cima    9328 Jul 15 16:43 a.out
-rw-rw-r-- 1 lluis.fita cima    1448 Jul 15 16:50 launch_ideal_test_mpi.pbs
-rw------- 1 lluis.fita cima  238630 Jul 15 16:50 CROCO4L_ideal_mpi.o70634
-rw------- 1 lluis.fita cima  717220 Jul 15 16:50 upwelling_rst.nc
-rw------- 1 lluis.fita cima 3316688 Jul 15 16:50 upwelling_his.nc
-rw------- 1 lluis.fita cima 2771248 Jul 15 16:50 upwelling_avg.nc
-rw------- 1 lluis.fita cima   42450 Jul 15 16:50 croco.out

RUNNING A REAL SIMULATION

v2.1: climatological simulation

Example of using CROCO in a real setup to simulate a short period.

For that we need:

  • Atmospheric forcings:
  • Bathymetry:

v2.1: multi-year simulation

Example of using CROCO in a real setup to simulate multiple years, recycling the atmospheric forcing.

For that we need:

  • Atmospheric forcings:
  • Bathymetry:

v1.1 - Nicolás Aubone modified

Here we run a real simulation with croco version 1.1

It's a simulation of one climatological year in the San Matías Gulf, Argentina.

Be careful: this model was modified from its original code (a bi-dimensional logarithmic bottom friction was added).

Creating the folder where to deploy the code (using the intel compiler and distributed memory dmpar):

$ INSTALLDIR=/share/CROCO
$ mkdir $INSTALLDIR/v110
$ cd $INSTALLDIR/v110
$ mkdir -p intel/dmpar
$ cd intel/dmpar
$ tar xvfz [CROCOv1.1_NicolasAubone].tar.gz 
$ ls croco
AGRIF            OCEAN                      run_simulation_openmp.bash
create_run.bash  PISCES                     SCRIPTS
CVTK             readme_version_croco.txt   TEST_CASES
DOC_SPHINX       run_multi_simulation.bash  XIOS
MPP_PREP         run_simulation.bash

Create the "CONFIGS" folder to store our different simulations there.

Create the "GSM_CLIMATOLOGY" folder to store our simulation config files and the model compilation.

Copy our configuration files to compile and run the model into the simulation folder "GSM_CLIMATOLOGY":

$ CROCOsrc=$INSTALLDIR/v110/intel/dmpar/croco
$ cd ${CROCOsrc}
$ mkdir -p CONFIGS/GSM_CLIMATOLOGY
$ cd CONFIGS/GSM_CLIMATOLOGY
cp [file_location]/cppdefs.h ./
cp [file_location]/param.h ./
cp [file_location]/jobcomp ./

Edit the cppdefs.h file to define an mpi run:

#define MPI


Edit the param.h file to set the mpi parallelization settings; in this case we use 60 MPI processes (10 × 6):

#ifdef MPI
      integer NP_XI, NP_ETA, NNODES
      parameter (NP_XI=10, NP_ETA=6, NNODES=NP_XI*NP_ETA)
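The tiling declared in param.h must multiply out to the number of processors requested in the job script; a quick shell check of the arithmetic (a sketch, variable names taken from param.h and the PBS script):

```shell
# Tiling from param.h
NP_XI=10
NP_ETA=6
NNODES=$((NP_XI * NP_ETA))

# Processors requested in the job script (Ntotprocs variable)
Ntotprocs=60

# The MPI launch will fail or waste cores if these disagree
if [ "$NNODES" -eq "$Ntotprocs" ]; then
  echo "tiling OK: ${NP_XI}x${NP_ETA}=${NNODES}"
else
  echo "mismatch: param.h gives ${NNODES}, job requests ${Ntotprocs}" >&2
fi
```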

Edit the compilation file "jobcomp" with hydra parameters:

ROOT_DIR=/share/CROCO/v110/intel/dmpar/croco

SOURCE=${ROOT_DIR}'/OCEAN'

FC=ifort

MPIF90="mpif90"

MPILIB="-L/opt/mpich/mpich-3.4.2/intel/2021.4.0/lib"

MPIINC="-I/opt/mpich/mpich-3.4.2/intel/2021.4.0/include"

Create an ASCII file in /share/CROCO/v110/intel with the specificities of the compilation with intel in hydra, named CROCO_hydra_intel.env (following the instructions from tutorial 00)

#set environment variables for croco in HYDRA to be compiled with intel
source /opt/load-libs.sh 1
export CC=icc
export FC=ifort 
export F90=ifort 
export F77=ifort 
export NETCDF=/opt/netcdf/netcdf-4/intel/2021.4.0 
export PATH=$NETCDF/bin:${PATH}
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${NETCDF}/lib

Loading the compilation environment

$ source /share/CROCO/v110/intel/CROCO_hydra_intel.env

Run the compilation (jobcomp file)

$ ./jobcomp >& jobcomp.log

If compilation is successful, you should have a croco executable in your directory.

You will also find a Compile directory containing the model source files.
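This success check can be scripted against the log, as with the test cases (a sketch with a stand-in jobcomp.log; a real successful log ends with "CROCO is OK"):

```shell
# Stand-in for the tail of a successful jobcomp.log
printf 'mv a.out croco\nCROCO is OK\n' > jobcomp.log

# Grep the success marker before submitting any run
if grep -q 'CROCO is OK' jobcomp.log; then
  echo "compiled"
fi
```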

Run the model

Copy the configuration file croco.in inside your simulation directory

$ scp -p [croco.in] $CROCOsrc/CONFIGS/GSM_CLIMATOLOGY/

Create a CROCO_FILES folder inside your simulation directory and copy the grid, forcing and ini files to it (croco_grd.nc, croco_clm.nc, croco_frc.nc and croco_ini.nc)

$ mkdir -p CROCO_FILES
$ scp -p [grid, ini and forcing files] $CROCOsrc/CONFIGS/GSM_CLIMATOLOGY/CROCO_FILES

Copy a job script into your "salidas" folder and edit it:

cd $HOME/salidas/croco/mpi
cp /share/tools/workflows/direct/CROCO/hydra/launch_ideal_test_mpi.pbs ./launch_gsm_clim_mpi.pbs
# Amount of processors
Ntotprocs=60

# Name of the file with the input of the ideal case
idealn=croco.in 
CROCOsrc=/share/CROCO/v110/intel/dmpar/croco

Copy the executable "croco" file and the config file "croco.in" to "salidas/croco/mpi":

$ cp /share/CROCO/v110/intel/dmpar/croco/CONFIGS/GSM_CLIMATOLOGY/croco ./
$ cp /share/CROCO/v110/intel/dmpar/croco/CONFIGS/GSM_CLIMATOLOGY/croco.in ./

And execute

$ qsub launch_gsm_clim_mpi.pbs