Research Blog
We will base the simulations on what we need for the WFIRST ultra deep field mock catalogue. Here is a summary of the simulation parameters.
I will use the Planck 2018 cosmological parameters (last column of Table 2).
\[\Omega_b = 0.02242\, h^{-2} = 0.04897\]
\[\Omega_0 = 0.3111\]
\[\Omega_\Lambda = 0.6889\]
\[H_0 = 67.66\,{\rm km/s/Mpc}\]
\[\sigma_8 = 0.8102\]
\[n_s = 0.9665\]

As discussed in previous posts, we will use a box size of \(115\, h^{-1}\,{\rm Mpc}\) with \(2048^3\) particles.
I will start at redshift \(z=100\)—I don’t actually have a good reason for choosing this.
With the specified cosmology, the mass resolution for \(2048^3\) particles is \(1.53 \times 10^7 M_{\rm sun}/h\).
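As a sanity check, here is a short Python sketch of that number. The particle mass is \(m_p = \Omega_0\, \rho_{\rm crit}\, V/N\); I am plugging in the standard critical density \(\rho_{\rm crit} \simeq 2.775\times 10^{11}\, h^2\, M_{\rm sun}/{\rm Mpc}^3\) and the parameters above.

```python
# Particle mass for a uniform-resolution N-body run:
# m_p = Omega_m * rho_crit * V / N  (box in Mpc/h gives mass in Msun/h)

OMEGA_M  = 0.3111      # total matter density (Planck 2018)
RHO_CRIT = 2.775e11    # critical density [h^2 Msun / Mpc^3]
BOX      = 115.0       # box side [Mpc/h]
N_SIDE   = 2048        # particles per dimension

m_p = OMEGA_M * RHO_CRIT * BOX**3 / N_SIDE**3
print(f"particle mass = {m_p:.3e} Msun/h")   # -> ~1.53e7 Msun/h
```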
I think 500 time outputs is a reasonable number for good time resolution, but I am not sure whether there is a more careful way to calculate this (I should also check how much storage this will take).
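On the storage question, a back-of-the-envelope sketch (hedged: this assumes plain snapshots storing single-precision positions and velocities plus 64-bit IDs, with no extra output blocks):

```python
# Rough snapshot storage for 2048^3 particles, 500 outputs.
N_PART      = 2048**3
BYTES_PER_P = 3 * 4 + 3 * 4 + 8   # float32 pos + float32 vel + 64-bit ID
N_SNAP      = 500

per_snap_TB = N_PART * BYTES_PER_P / 1024**4
print(f"{per_snap_TB:.2f} TB per snapshot, "
      f"{N_SNAP * per_snap_TB:.0f} TB total")   # -> 0.25 TB each, 125 TB total
```

So 500 full snapshots is on the order of 100 TB, which is definitely worth checking against what I can store on Pleiades.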
If I understand the TimeBetSnapshot parameter in the Gadget parameter file correctly (which I am not sure I do), then I want to set it to \(\exp\!\left(\frac{1}{N_{\rm snap}}\ln(a_f/a_i)\right) = (a_f/a_i)^{1/N_{\rm snap}}\), where \(N_{\rm snap}\) is the number of snapshots and \(a_i\) and \(a_f\) are the initial and final scale factors, with \(a = 1/(1+z)\).
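A quick numerical check of that formula (under my reading of the parameter: with ComovingIntegrationOn, Gadget interprets times as scale factors, so TimeBetSnapshot is a multiplicative spacing):

```python
import numpy as np

z_init, n_snap = 100.0, 500
a_i, a_f = 1.0 / (1.0 + z_init), 1.0   # initial and final scale factors

# TimeBetSnapshot = exp(ln(a_f/a_i) / N_snap) = (a_f/a_i)**(1/N_snap)
time_bet_snapshot = np.exp(np.log(a_f / a_i) / n_snap)
print(f"TimeBetSnapshot = {time_bet_snapshot:.6f}")   # -> 1.009273

# The implied (log-spaced) output scale factors land exactly on a_f:
a_out = a_i * time_bet_snapshot ** np.arange(1, n_snap + 1)
assert np.isclose(a_out[-1], a_f)
```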
Criteria for choosing numerical parameters are commonly taken from Power et al. 2003, but there is also an updated paper by Zhang et al. 2019 that suggests a smaller softening length than the former.
I am going to use a softening length of \(1/50\) of the mean particle separation (this seems reasonable given the discussion in Zhang et al. 2019). Noting that the mean particle separation is \((V/N)^{1/3} = {\rm box\ size}/N^{1/3}\), this gives a softening length of \(\epsilon = 1.12\, h^{-1}\,{\rm kpc}\) for \(2048^3\) particles. This is pretty similar to the softening Bruno decided on for his simulation here.
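The corresponding arithmetic, for the record (a sketch using the numbers above):

```python
BOX_KPC  = 115.0e3      # box side [kpc/h]
N_SIDE   = 2048
FRACTION = 1.0 / 50.0   # fraction of the mean separation (after Zhang et al. 2019)

mean_sep  = BOX_KPC / N_SIDE      # mean interparticle separation [kpc/h]
softening = FRACTION * mean_sep   # [kpc/h]
print(f"mean sep = {mean_sep:.2f} kpc/h, softening = {softening:.2f} kpc/h")
# -> mean sep = 56.15 kpc/h, softening = 1.12 kpc/h
```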
(I have left the input/output directories blank because I still need to set these up on Pleiades.)
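For reference, here is roughly what the relevant lines of the Gadget parameter file look like with these choices (a sketch, not the full file; it assumes the usual internal length unit of \(h^{-1}\,{\rm kpc}\), and TimeOfFirstSnapshot is just \(a_i \times\) TimeBetSnapshot):

```
% Cosmology (Planck 2018)
Omega0                 0.3111
OmegaLambda            0.6889
OmegaBaryon            0.04897
HubbleParam            0.6766

% Box and integration range (lengths in kpc/h)
BoxSize                115000.0
TimeBegin              0.009901     % a_i = 1/(1+100)
TimeMax                1.0

% 500 log-spaced snapshots
TimeOfFirstSnapshot    0.009993
TimeBetSnapshot        1.009273

% Softening (kpc/h)
SofteningHalo          1.12

% Left blank until set up on Pleiades
InitCondFile
OutputDir
```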
To do:
1) Try a sample run with \(256^3\) particles and check that everything looks good
2) Get the sample run working on Pleiades
3) Check how many nodes I can request at once on Pleiades (without being queued too long), and if I will run into problems with storing snapshots
4) Double check with Brant whether \(\sim 500\) snapshots starting from \(z=100\) sounds reasonable