hyperdrive is currently still in heavy development, but it is able to perform direction-independent calibration on the CPU.
More documentation: https://github.com/MWATelescope/mwa_hyperdrive/wiki
Project homepage: https://github.com/MWATelescope/mwa_hyperdrive
...
On garrawarla, hyperdrive modules are available from the MWA module tree:

```
---------------------------------- /pawsey/mwa/software/python3/modulefiles ----------------------------------
   hyperdrive/chj    hyperdrive/v0.2.0-alpha1    hyperdrive/v0.2.0-alpha11 (L,D)
```
Load a hyperdrive module:
```
module load hyperdrive  # this will load the default version
```
hyperdrive prefers to use the FEE beam when it's applicable. The associated beam code (hyperbeam) requires that the MWA FEE beam file be available at runtime; this is done either manually with a command-line argument to hyperdrive, or with the MWA_BEAM_FILE environment variable. garrawarla users typically don't need to worry about this, because the hyperdrive modules automatically set MWA_BEAM_FILE.
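Outside of the garrawarla modules you can provide the beam file yourself; a minimal sketch, where the path and the data/source-list file names are placeholders:

```
# Placeholder path; point this at the MWA FEE beam HDF5 file on your system
# (typically named mwa_full_embedded_element_pattern.h5).
export MWA_BEAM_FILE=/path/to/mwa_full_embedded_element_pattern.h5

# Or pass it per run, e.g. for simulate-vis (obs.metafits and srclist.yaml are placeholders):
hyperdrive simulate-vis --beam-file /path/to/mwa_full_embedded_element_pattern.h5 -m obs.metafits -s srclist.yaml
```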
How do I get started?
Have a look at the help text! The following is current as of 4 November 2021:
```
hyperdrive -h                # -h could also be --help
module load hyperdrive/chj   # load CHJ's development version
```
```
hyperdrive 0.2.0-alpha4
https://github.com/MWATelescope/mwa_hyperdrive
Calibration software for the Murchison Widefield Array (MWA) radio telescope
USAGE:
hyperdrive <SUBCOMMAND>
FLAGS:
-h, --help Prints help information
-V, --version Prints version information
SUBCOMMANDS:
di-calibrate Perform direction-independent calibration on the input MWA data
simulate-vis Simulate visibilities of a sky-model source list
srclist-by-beam Reduce a sky-model source list to the top N brightest sources, given pointing information
srclist-convert Convert a sky-model source list from one format to another
srclist-shift Shift the sources in a source list. Useful to correct for the ionosphere. The shifts must be
detailed in a .json file, with source names as keys associated with an "ra" and "dec" in
degrees. Only the sources specified in the .json are written to the output source list
srclist-verify Verify that sky-model source lists can be read by hyperdrive
dipole-gains Print information on the dipole gains listed by a metafits file
help Prints this message or the help of the given subcommand(s)
```
hyperdrive is broken up into many subcommands. Each of these has its own help; e.g.
```
hyperdrive-simulate-vis 0.2.0-alpha4
Simulate visibilities of a sky-model source list
USAGE:
hyperdrive simulate-vis [FLAGS] [OPTIONS] --metafits <metafits> --source-list <source-list>
FLAGS:
--no-beam Should we use a beam? Default is to use the FEE beam
--unity-dipole-gains Pretend that all MWA dipoles are alive and well, ignoring whatever is in the metafits
file
--dont-convert-lists Don't attempt to convert "list" flux densities to power law flux densities. See for more
info: https://github.com/MWATelescope/mwa_hyperdrive/wiki/Source-lists
--filter-points Don't include point components from the input sky model
--filter-gaussians Don't include Gaussian components from the input sky model
--filter-shapelets Don't include shapelet components from the input sky model
-v, --verbosity The verbosity of the program. The default is to print high-level information
--dry-run Don't actually do any work; just verify that the input arguments were correctly ingested
and print out high-level information
--cpu Use the CPU for visibility generation. This is deliberately made non-default because
using a GPU is much faster
-h, --help Prints help information
-V, --version Prints version information
OPTIONS:
-s, --source-list <source-list> Path to the sky-model source list used for simulation
-m, --metafits <metafits> Path to the metafits file
-o, --output-model-file <output-model-file> Path to the output visibilities file [default: model.uvfits]
-r, --ra <ra>
The phase centre right ascension [degrees]. If this is not specified, then the metafits phase/pointing
centre is used
-d, --dec <dec>
The phase centre declination [degrees]. If this is not specified, then the metafits phase/pointing centre is
used
-c, --num-fine-channels <num-fine-channels>
The total number of fine channels in the observation [default: 384]
-f, --freq-res <freq-res> The fine-channel resolution [kHz] [default: 80]
--middle-freq <middle-freq>
The middle frequency of the simulation [MHz]. If this is not specified, then the middle frequency specified
in the metafits is used
-t, --num-timesteps <num-timesteps>
The number of time steps used from the metafits epoch [default: 14]
--time-res <time-res> The time resolution [seconds] [default: 8]
--beam-file <beam-file>
The path to the HDF5 MWA FEE beam file. If not specified, this must be provided by the MWA_BEAM_FILE
environment variable
--dipole-delays <dipole-delays>...
Specify the MWA dipoles delays, ignoring whatever is in the metafits file
```
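For example, a minimal simulate-vis invocation might look like the following (the metafits and source-list file names are placeholders):

```
# Placeholders: obs.metafits and srclist.yaml stand in for your own files.
# Without --cpu, a GPU is used for visibility generation.
hyperdrive simulate-vis -m obs.metafits -s srclist.yaml -o model.uvfits
```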
DI calibration
Available with hyperdrive di-calibrate
Two main things are required to calibrate visibilities:
- A data container (e.g. measurement set); and
- A sky-model source list.
Discussion of source lists and the applicable formats can be found on the wiki: https://github.com/MWATelescope/mwa_hyperdrive/wiki/Source-lists
By default, hyperdrive will attempt to use all sources in the source-list file. If there are more than 1,000 sources in the file, calibration may take a long time if you're not using a GPU. To keep the number of sources used low, one can use the -n/--num-sources and/or --veto-threshold flags, or use a source list with fewer sources in the first place (see hyperdrive srclist-by-beam).
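For example, a minimal sketch restricting calibration to the brightest 1000 sources (the data and source-list file names here are placeholders):

```
# Placeholders: obs.ms / obs.metafits are your input data, big_srclist.txt is a
# large sky-model source list. -n/--num-sources keeps only the brightest sources.
hyperdrive di-calibrate -d obs.ms obs.metafits -s big_srclist.txt -n 1000
```

Pre-reducing the source list with hyperdrive srclist-by-beam is demonstrated in the Slurm script below.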
Example slurm script for garrawarla:
```
#!/bin/bash -l
#SBATCH --job-name=hyp-$1
#SBATCH --output=hyperdrive.out
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40
#SBATCH --time=01:00:00
#SBATCH --clusters=garrawarla
#SBATCH --partition=gpuq
#SBATCH --account=mwaeor
#SBATCH --export=NONE
#SBATCH --gres=gpu:1,tmp:50g
module use /pawsey/mwa/software/python3/modulefiles
module load hyperdrive
set -eux
which hyperdrive
cd /astro/mwaeor/MWA/data/1090008640
if [[ ! -r srclist_1000.yaml ]]; then
hyperdrive srclist-by-beam -n 1000 -m *.metafits /pawsey/mwa/software/python3/srclists/master/srclist_pumav3_EoR0aegean_fixedEoR1pietro+ForA_phase1+2.txt srclist_1000.yaml
fi
hyperdrive di-calibrate -s srclist_1000.yaml -d *.ms *.metafits
```
As hyperdrive is still in heavy development, not all features are currently available. An indication of what is available is below.
- Reads raw MWA data
- Reads a single uvfits file as input
- Reads multiple uvfits files as input
- Reads a single measurement set file as input
- Reads multiple measurement set files as input
- Calibrates on the CPU
- Calibrates on a GPU
- Writes calibration solutions to the "André Offringa calibrate format"
- Writes calibration solutions in the "RTS format"
- Writes calibrated visibilities directly to uvfits output
- Writes calibrated visibilities directly to measurement set output
Another example Slurm script, which also applies the calibration solutions:

```
#!/bin/bash -l
#SBATCH --job-name=hyp-$1
#SBATCH --output=hyperdrive.out
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40
#SBATCH --time=01:00:00
#SBATCH --clusters=garrawarla
#SBATCH --partition=gpuq
#SBATCH --account=mwaeor
#SBATCH --export=NONE
#SBATCH --gres=gpu:1,tmp:50g
module use /pawsey/mwa/software/python3/modulefiles
module load hyperdrive
set -eux
command -v hyperdrive
cd /astro/mwaeor/MWA/data/1090008640
# Get calibration solutions. Use the top 1000 sources.
hyperdrive di-calibrate \
-s /pawsey/mwa/software/python3/srclists/master/srclist_pumav3_EoR0aegean_fixedEoR1pietro+ForA_phase1+2.txt \
-n 1000 \
-d *gpubox*.fits *.metafits *.mwaf \
-o hyp_sols.fits
# Apply the solutions and write out a measurement set.
# Write it to /nvmetmp as that's much faster than /astro.
hyperdrive solutions-apply \
-d *gpubox*.fits *.metafits *.mwaf \
-s hyp_sols.fits \
-o /nvmetmp/hyp_calibrated.ms \
--time-average 8s \
--freq-average 80kHz
# Move the measurement set to /astro.
mv /nvmetmp/hyp_calibrated.ms .
```
This example script reserves 50 GB of space for node-local storage (/nvmetmp). If your output visibilities are bigger than this, then the write will fail; you should adjust the #SBATCH --gres=gpu:1,tmp:50g line to account for this, e.g. #SBATCH --gres=gpu:1,tmp:200g.