
LDAN Pipeline

For the simulations, I will be doing post-processing on the Lou data analysis nodes (LDANs). In this post, I am documenting my workflow and job scripts.

Overall Plan

You can only request one LDAN at a time, so I think the easiest approach is to run multiple serial processes within a single job, with each process analyzing separate snapshots. For AHF, this requires an input file for each snapshot.

AHF input files

I have a script on my computer that generates an AHF input file for each snapshot. These can then be copied over to Pleiades.


#!/bin/bash

snappath=/u/ndrakos/wfirst128/   #where the snapshots will be located
snapmin=0
snapmax=500
AHFinput_stem=AHF.input          #base AHF input file; missing ic_filename and outfile_prefix lines

for (( i=snapmin; i<=snapmax; i++ ))
do
    #current snapshot
    mysnap=snapshot_$(printf '%03d' "$i")

    #lines to add to the input file
    ic_filename=$snappath$mysnap
    outfile_prefix=$snappath$mysnap

    #create input file from the base file
    filename=$mysnap.input
    cp "$AHFinput_stem" "$filename"

    #insert ic_filename after line 1 and outfile_prefix after line 3
    (echo 1a; echo "ic_filename=$ic_filename"; echo .; echo w) | ed - "$filename"
    (echo 3a; echo "outfile_prefix=$outfile_prefix"; echo .; echo w) | ed - "$filename"

done
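
For reference, after running this the top of each generated input file should look roughly like the block below. This assumes the base AHF.input starts with the [AHF] section header on line 1, followed by the ic_filetype line (the value 60 here is just a placeholder for whatever file type the snapshots are written in):


[AHF]
ic_filename=/u/ndrakos/wfirst128/snapshot_000
ic_filetype=60
outfile_prefix=/u/ndrakos/wfirst128/snapshot_000
...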


Submitting Multiple Serial Jobs

The NAS HECC documentation has information on how to submit multiple serial jobs in one job script.
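
The basic idea is that mpiexec starts N copies of an ordinary serial script, MPI_SHEPHERD tells MPT not to complain that the executable never calls MPI, and each copy reads its rank from the MPT_MPI_RANK environment variable to decide which pieces of work it is responsible for. A minimal sketch of the pattern (the echo is just a stand-in for the real work):


#!/bin/zsh -f
#started 16 times by "mpiexec -np 16 ./wrapper.zsh";
#each copy sees a different MPT_MPI_RANK (0 through 15)
echo "rank $MPT_MPI_RANK starting on $(hostname)"
#...do the work assigned to this rank here...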

Halo Finder

Here is my job script:


#PBS -S /bin/csh
#PBS -j oe
#PBS -l select=1:ncpus=16:mpiprocs=16:mem=1GB
#PBS -l walltime=24:00:00
#PBS -q ldan

module load mpi-sgi/mpt
module load comp-intel/2018.3.222

#allow mpiexec to launch non-MPI (serial) executables
setenv MPI_SHEPHERD true

#run from the directory the job was submitted from (where the wrapper lives)
cd $PBS_O_WORKDIR

#the script is csh, so use setenv rather than export
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/nasa/pkgsrc/sles12/2016Q4/lib:/pleiades/u/ndrakos/install_to_here/gsl_in/lib

mpiexec -np 16 ./runAHF_wfirst128.zsh

and the wrapper, runAHF_wfirst128.zsh:



#!/bin/zsh -f

cd /pleiades/u/ndrakos/AHF/bin

minsnap=0
maxsnap=500

#each of the 16 copies launched by mpiexec gets a different rank (0-15);
#rank i handles snapshots i, i+16, i+32, ...
i=$MPT_MPI_RANK
i=$((i+minsnap))

while ((i<=maxsnap))
do
    mysnap=snapshot_$(printf '%03d' $i)
    ./AHF /u/ndrakos/wfirst128/${mysnap}.input
    i=$((i+16))
done
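
With the wrapper made executable, submitting is then just a matter of qsub (here I am assuming the job script above is saved as runAHF_wfirst128.pbs; the name is arbitrary):


chmod +x runAHF_wfirst128.zsh
qsub runAHF_wfirst128.pbs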

Merger Trees

Similarly, here is the job script for the merger trees:



#PBS -S /bin/csh
#PBS -j oe
#PBS -l select=1:ncpus=16:mpiprocs=16:mem=1GB
#PBS -l walltime=2:00:00
#PBS -q ldan

module load mpi-sgi/mpt
module load comp-intel/2018.3.222

#allow mpiexec to launch non-MPI (serial) executables
setenv MPI_SHEPHERD true

#run from the directory the job was submitted from
cd $PBS_O_WORKDIR

#the script is csh, so use setenv rather than export
setenv LD_LIBRARY_PATH ${LD_LIBRARY_PATH}:/nasa/pkgsrc/sles12/2016Q4/lib:/pleiades/u/ndrakos/install_to_here/gsl_in/lib

mpiexec -np 16 ./runTree_wfirst128.zsh

and the wrapper:



#!/bin/zsh -f

cd /pleiades/u/ndrakos/AHF/bin

minsnap=0
maxsnap=500

#rank i links snapshot pairs (i+1,i), (i+17,i+16), ...
i=$MPT_MPI_RANK
i=$((i+minsnap))

while ((i<maxsnap))
do
    #name of file 1 (later snapshot)
    for FILENAME in /u/ndrakos/wfirst128/snapshot_$(printf '%03d' $((i+1)))*.AHF_particles; do
        file1=${FILENAME}
    done

    #name of file 2 (earlier snapshot)
    for FILENAME in /u/ndrakos/wfirst128/snapshot_$(printf '%03d' $i)*.AHF_particles; do
        file2=${FILENAME}
    done

    #name of output file: prefix before .AHF_particles
    output=${file1%.*}

    #run MergerTree, feeding it the answers to its prompts
    (echo 2 && echo $file1 && echo $file2 && echo $output.AHF) | ./MergerTree

    i=$((i+16))
done
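
For reference, what ends up being piped into MergerTree for, say, i=0 looks something like the lines below: the number of catalogues (2), the later and earlier _particles files, and the prefix for the output files. The redshift labels in the file names are made up here; they depend on the snapshot times.


2
/u/ndrakos/wfirst128/snapshot_001.z9.500.AHF_particles
/u/ndrakos/wfirst128/snapshot_000.z10.000.AHF_particles
/u/ndrakos/wfirst128/snapshot_001.z9.500.AHF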


