
Los Lobos Reference Card

Quick-reference instructions for common tasks on Los Lobos, including logging in, transferring files, compiling, and running jobs.


For assistance, please contact the Help Desk at help@hpc.unm.edu or (505) 277-8348 (Monday through Friday, 9 a.m. to 5 p.m. Mountain Time).


Access

Login using secure-shell (ssh):

    ssh loslobos.alliance.unm.edu

Transfer files to the Linux cluster from your home machine using secure copy (scp):

    scp <file> <user>@loslobos.alliance.unm.edu:<dest>


Your Account

Change your password on the Linux cluster:

    yppasswd

Change your login shell on the Linux cluster:

    ypchsh


Mail

Please forward your mail to your local system. To forward mail, create a .forward file in your Linux cluster home directory and put your email address in it:

    <username>@<host.domain>
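For example, the following creates such a file in your home directory (user@example.edu is a placeholder; substitute your own address):

```shell
# Write the forwarding address into ~/.forward.
# "user@example.edu" is a placeholder -- use your real address.
echo "user@example.edu" > "$HOME/.forward"
```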


Compilers

Compilers from GNU and PGI are installed.

GNU C

gcc -o exec [options] filename(s).c

GNU C++

g++ -o exec [options] filename(s).C

GNU Fortran77

g77 -o exec [options] filename(s).f

PGI Fortran77

pgf77 -o exec [options] filename(s).f

PGI Fortran90/95

pgf90 -o exec [options] filename(s).f90


Programming Tools

Source code debuggers:

GNU

gdb

PGI

pgdbg

 

Performance profiler:

PGI

pgprof


Selecting an MPI Library

There are several builds of the MPICH MPI library on each Linux cluster. Some use the special GM interface to the Myrinet high-speed network hardware; others use TCP/IP. Select an MPICH build by setting the MPIHOME environment variable in your .cshrc file (if your login shell is tcsh): remove the comment sentinel (#) from the beginning of the line that specifies the MPICH build you want, and make sure every other line setting MPIHOME still begins with #. Refer to your .cshrc file for examples. Log out and log back in to load the new environment, then run "which mpirun" to verify the MPICH build you have chosen.
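A .cshrc selecting the MPICH-GM/PGI build might contain lines like the following sketch. The exact paths and surrounding lines in your own .cshrc may differ; the mpich-gm.pgi path matches the "which mpirun" example below, while the commented p4 path and the path-update line are assumptions for illustration:

```csh
# Uncomment exactly one MPIHOME line; keep the others commented.
# setenv MPIHOME /usr/parallel/mpich-p4.pgi
setenv MPIHOME /usr/parallel/mpich-gm.pgi

# Put the chosen build's commands (mpirun, mpicc, ...) on your path.
set path = ($MPIHOME/bin $path)
```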

For example:

    which mpirun

    /usr/parallel/mpich-gm.pgi/bin/mpirun

shows that you are using MPICH with the GM interface, and the corresponding MPICH compiling scripts mpif77 and mpif90 will use the PGI Fortran compilers.


Compiling with  MPI

C

mpicc -o exec [options] filename(s).c

 

C++

mpiCC -o exec [options] filename(s).C

 

Fortran77

mpif77 -o exec [options] filename(s).f

 

Fortran90

mpif90 -o exec [options] filename(s).f90


Linking Serial Libraries

ATLAS BLAS

-L/usr/local/lib -lcblas -lf77blas -latlas

LAPACK

-L/usr/local/lib -llapack -lcblas -lf77blas -latlas

FFTW release 2, complex input

-L/usr/local/lib -lfftw

FFTW release 2, real input

-L/usr/local/lib -lrfftw

FFTW release 3, real or complex input

-L/usr/local/lib -lfftw3
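These flags go on the link line of a compile command or in a makefile. As an illustration, a Makefile fragment linking the LAPACK stack listed above might look like this (the target name, source file, and variable names are placeholders):

```make
# Hypothetical Makefile fragment; the library list matches the
# LAPACK entry above, the program name is a placeholder.
LDLIBS = -L/usr/local/lib -llapack -lcblas -lf77blas -latlas

myprog: myprog.o
	$(CC) -o myprog myprog.o $(LDLIBS)
```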


Linking Parallel Libraries


MPICH P4 Compatible Libraries:

FFTW release 2, complex input

-L/usr/parallel/lib/p4 -lfftw_mpi -lfftw

FFTW release 2, real input

-L/usr/parallel/lib/p4 -lrfftw_mpi -lrfftw

Scalapack

-L/usr/parallel/lib/p4 -lscalapack -lblacsf77init -lblacs -lblacsf77init -L/usr/local/lib -lf77blas -latlas


MPICH GM Compatible Libraries:

FFTW release 2, complex input

-L/usr/parallel/lib/gm -lfftw_mpi -lfftw

FFTW release 2, real input

-L/usr/parallel/lib/gm -lrfftw_mpi -lrfftw

Scalapack

-L/usr/parallel/lib/gm -lscalapack -lblacsf77init -lblacs -lblacsf77init -L/usr/local/lib -lf77blas -latlas


Submitting PBS Jobs

PBS supports two modes for running jobs: batch and interactive. In batch mode, a user submits a job, and the job is queued, scheduled, and run without any further interaction. In interactive mode, a user requests an interactive session on a set of compute nodes assigned by the PBS job manager.

The following examples show simple scripts for batch mode jobs.  

Portable Batch System (PBS)

PBS Batch Mode Sample Scripts:

Request 16 nodes and start one MPI process on each node (ppn=2 requests both processors on each node):

#PBS -l nodes=16:ppn=2
mpirun -machinefile $PBS_NODEFILE -np 16 myprog.exe


Request 8 nodes and start 2 MPI processes on each node:

#PBS -l nodes=8:ppn=2
mpirun -machinefile $PBS_NODEFILE -np 16 myprog.exe


Standard output and error files will be written to your working directory.  To submit the script to PBS, use the following command:

    qsub <pbs_script_file>
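Putting the pieces together, a complete batch script might look like the following sketch. The job name, walltime, and program name are illustrative, and available walltime limits depend on your queue:

```shell
#!/bin/csh
#PBS -N myjob
#PBS -l nodes=8:ppn=2
#PBS -l walltime=01:00:00

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR

# Two MPI processes per node: 8 nodes x 2 = 16 processes.
mpirun -machinefile $PBS_NODEFILE -np 16 myprog.exe
```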

 

PBS Interactive Mode Sample Session:
Request an interactive session with 8 nodes:

qsub -I -l nodes=8

Run your program as follows:

Start one MPI process on each node:

% mpirun -machinefile $PBS_NODEFILE -np 8 myprog.exe

Start two MPI processes on each node:

% mpirun -machinefile $PBS_NODEFILE -np 16 myprog.exe

Release your nodes when finished by typing:

    % exit

Display contents and status of queue:

    % qstat -a

Remove job from queue:

    % qdel <job_id>

