Nicole Drakos

Research Blog

Welcome to my Research Blog.

This is mostly meant to document what I am working on for myself, and to communicate with my colleagues. It is likely filled with errors!

This project is maintained by ndrakos

Pleiades File Management

As discussed in the previous post, your home directory has a default quota of 8 GB; for long-term storage you can use the Lou Mass Storage System, which has no disk quota limits.

Lustre File System

You can store up to 1 TB of data on the Lustre nobackup filesystem (this is a soft quota; the hard limit is 2 TB).

To check the quota:

lfs quota -h -u username /nobackup/username

Transferring Files

For transferring files, it is recommended to use the shiftc command rather than scp. See this link for more information; it works much the same as scp, e.g.:

shiftc -r mydir lfe:/u/username/

Lou Data Analysis Nodes

You can do post-processing on the Lou data analysis nodes (LDANs).

To use the LDANs, submit your jobs to the ldan queue. Each job can use only one LDAN for up to three days, and each user can have a maximum of two jobs running simultaneously.

You can submit interactive PBS jobs to the LDANs from either the LFEs or the PFEs. You can submit PBS job scripts from either your Lustre home filesystem (/nobackup/username) or your Lou home filesystem (/u/username).
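As a concrete sketch, an interactive LDAN session and a minimal batch script might look like the following. The walltime values, job name, and the `postprocess.py` script are placeholders of mine, not from the NAS docs; check the current HECC documentation for the exact resource flags.

```shell
# Interactive PBS session on an LDAN (walltime value is just an example)
qsub -I -q ldan -l walltime=2:00:00
```

Or, as a batch job script submitted with `qsub`:

```shell
#PBS -q ldan
#PBS -l walltime=24:00:00
#PBS -N postprocess

# Run from the directory the job was submitted from
cd $PBS_O_WORKDIR

# postprocess.py is a hypothetical analysis script (e.g. a halo finder run)
python postprocess.py
```

Remember the limits above: one LDAN per job, up to three days, and at most two running jobs per user.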

Work Flow

1) Write output from simulations to the Lustre nobackup directory; I should be able to store at least 200 snapshots with \(512^3\) particles, and at least 5 snapshots with \(2048^3\) particles.

2) Transfer the snapshots to the Lou mass storage system

3) Do post-processing (halo finders and merger trees) on LDANs
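As a rough sanity check on the snapshot counts in step 1, assume a dark-matter-only snapshot stores about 32 bytes per particle (3 single-precision positions, 3 velocities, and an 8-byte ID). That per-particle figure is my assumption, not from the post; the true size depends on the output format and which fields are written.

```python
# Rough snapshot-size estimate under an assumed 32 bytes per particle
# (positions + velocities as 4-byte floats, plus an 8-byte particle ID).
# The real size depends on the simulation code's output format.

BYTES_PER_PARTICLE = 32  # assumption, not from the original post

def snapshot_gib(n_per_side):
    """Approximate size in GiB of one snapshot with n_per_side**3 particles."""
    return n_per_side**3 * BYTES_PER_PARTICLE / 2**30

print(snapshot_gib(512))               # 4.0 GiB per 512^3 snapshot
print(200 * snapshot_gib(512) / 1024)  # ~0.78 TiB for 200 snapshots
print(snapshot_gib(2048))              # 256.0 GiB per 2048^3 snapshot
print(5 * snapshot_gib(2048) / 1024)   # 1.25 TiB for 5 snapshots
```

Under this assumption, 200 snapshots at \(512^3\) fit comfortably inside the 1 TB soft quota, while 5 snapshots at \(2048^3\) exceed the soft quota but stay under the 2 TB hard limit.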

I am going to try this on the \(512^3\) simulations, and see if I run into any problems.

