VCS pulsar data processing
...
- We now have access to the Garrawarla cluster, which we should use for the bulk of our processing. Note that Garrawarla does not have access to the /group file system, so make sure all of your data is on /astro. The Galaxy (and Magnus) cluster(s) should be thought of as backup clusters or somewhere to do testing. In order to access any of the tools, you will need to source the profile /pawsey/mwa/software/psrBash.profile, or on Galaxy /group/mwavcs/PULSAR/psrBash.profile. One way to do this is to add the following to your .bashrc:
```
if [[ $HOSTNAME == garrawarla* ]]; then
    alias sp3='source /pawsey/mwa/software/profiles/mwavcs_pulsar.profile'
elif [[ $HOSTNAME == galaxy* ]]; then
    alias sp3='source /group/mwavcs/PULSAR/psrBash.profile'
fi
```
Then just run the command sp3 to load all of our software.
There is a master branch and a development branch of the main codebase. By default, psrBash.profile will load the master branch, which is meant to represent the most stable version. However, there may be bugs that were later discovered and fixed, but which have not yet been merged back into the master branch. If you wish to try the more cutting-edge development branch, then after sourcing /group/mwavcs/PULSAR/psrBash.profile, run:
```
module load vcstools/devel
```
The Nextflow scripts (.nf) are in the mwa_search module, so to switch to the development version run:
```
module load mwa_search/devel
```
You will also have to load the nextflow module if you are using any Nextflow scripts:
```
module load nextflow
```
- There is a strict directory structure for VCS processing, given the large volumes of data involved. Raw/recombined data and all other products (incoherent sum products, coherent sum products, calibration solutions, etc.) are stored on the /astro file system. We have endeavoured to make sure this is done without requiring the user's attention; however, please double-check working directories and defaults for scripts.
...
In this example, 2.4% of the files are not yet archived, so the user should wait until 100% of the files are archived. If the files have still not all been archived after 5 days, you should contact someone on the MWA operations team to ask what may have gone wrong.
To download an entire observation (observations are referred to by their observation IDs, which are GPS seconds, as labelled on the MWA data archive), use:
```
vcs_download.nf --obsid <obs ID> --all
```
otherwise, if you only want some interval [begin, end] of the full observation, then use:
...
VCS data, you must use the ASVO. Once you have created an account and logged in, you can use the ASVO job dashboard to submit a download job. Click on the "Voltage Download Job" tab and input the observation ID, the offset (seconds from the beginning of the observation) and the duration (in seconds). For example, here is how you would download the first 600 seconds of an observation.
You should then be able to see your download jobs on the ASVO job monitor. Once the job is complete, you can continue to the next step.
The data will be downloaded to /astro/mwavcs/asvo/<ASVO job ID>. You can look at the first and last files in this directory to confirm your observation's first and last GPS second. Recombining raw data, or untarring combined data and putting it in the correct directory, can be done with:
```
vcs_download.nf --obsid <obs ID> --begin <starting GPS second> --end <end GPS second> --download_dir /astro/mwavcs/asvo/<ASVO job ID>
```
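For example, a quick way to read off the first and last GPS seconds mentioned above (the file names start with the obs ID and GPS second) is:
```
# The first and last files in the ASVO download directory give the first and last GPS seconds
ls /astro/mwavcs/asvo/<ASVO job ID> | head -n 1
ls /astro/mwavcs/asvo/<ASVO job ID> | tail -n 1
```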
A note on beginning and end times: the observation ID does not correspond to the starting GPS second of the data. In order to determine the beginning/end times of the observation data, use:
...
and this will help you determine the correct GPS times to pass to the -b/-e options. This applies to all jobs that have optional beginning and end times.
...
Recombine
...
As noted above in Downloading Data, the vcs_download.nf script includes the recombine step, so manually recombining will only be necessary in situations where the Nextflow option is either unavailable or undesirable. (Note that with the recent switch to using ASVO as the data download server, vcs_download.nf is now a bit of a misnomer: it can still be used to do the recombining, even if the data are already downloaded.) Other methods of recombining are detailed on the Recombine page.
Incoherent sums
After the download has completed, you already have all that is necessary to create the first kind of beamformed data: an incoherent sum. The data used to create this kind of beamformed output are labelled as <obsID>_<GPS second>_ics.dat in the downloaded data directory (/group/mwavcs/vcs/<obs ID>/combined by default), hence we refer to the incoherent sum files as ICS files. The incoherent sum is, basically, the sum of the tile powers, producing the full tile field of view with a √N improvement in sensitivity over a single tile, where N is the number of tiles combined. Unfortunately, this kind of data is also very susceptible to RFI corruption, so you will need to be quite stringent with your RFI mitigation techniques (in time and frequency).
...
- dedicated calibrator: usually taken directly before or after the target observation, pointed at a bright calibrator source. These are stored on the MWA data archive as visibilities (they are run through the online correlator like normal MWA observations). You can find the observation ID for the calibrator by searching on the MWA data archive, or by using the following command, which will list compatible calibrator IDs sorted by how close they are in time to the observation:
```
mwa_metadb_utils.py -c <obs ID>
```
If this command generates an error, it may be due to the lack of calibration observations with the same frequency channels. If so, try the Calibration Combining Method.
- in-beam calibrator: using data from the target observation itself, correlating a section offline, and using those visibilities. See Offline correlation.
In order to download the calibration observation, set your MWA ASVO API key as an environment variable. Below are some steps to do so:
...
- If there are any tiles with max > 3 on multiple channels, they are worth flagging.
- If there are any tiles with max = 0, they are worth flagging (even if only one polarisation has max = 0!) as they contribute no signal and can cause errors in beamforming.
- It is best to only flag up to ~3 tiles at a time, as bad tiles can affect other, potentially good, tiles.
- Sensitivity scales as ~sqrt(128-N), where N is the number of tiles flagged, so if you start flagging more than 50 tiles your sensitivity will noticeably drop and the observation may be worth abandoning (see the quick calculation after this list).
- Make sure you put the number to the right of the key (between "flag" and "?") into the flagged_tiles.txt file.
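As a rough illustration of that scaling (a quick shell calculation, not an official tool), the relative sensitivity after flagging N tiles can be estimated as sqrt(128-N)/sqrt(128):
```
# Relative sensitivity after flagging N of the 128 tiles, per the scaling above
N=10
echo "scale=3; sqrt(128 - $N) / sqrt(128)" | bc -l
```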
In the "attempt_number_N" subdirectory are a chan_x_output.txt and phase_x_output.txt file that contains all of the recommended flags that the calibration plotting script creates. These can be useful when deciding which tile(s) to flag next. The following bash command will output the worst tile with the greatest gain value for each channel:
```
for i in $(ls chan*txt); do grep $(cat $i | cut -d '=' -f 3 | cut -d ' ' -f 1 | sort -n | tail -n 1 | head -n 1) $i; done
```
...
So for this example, you should flag tile 122.
It is also interesting to look for obviously too-low powers, which can be done by running the following command; it will list, for each channel, the tile with the lowest gain value. Tiles with very small (near-zero) values should be flagged.
```
for i in $(ls chan*.txt); do grep $(cat $i | tail -n+3 | cut -d '=' -f 3 | cut -d ' ' -f 1 | sort -n | head -1) $i; done
```
If there is a single channel that is affecting your solution, you can flag it using the following method.
Make sure you have used the -X option when you ssh to enable an interactive terminal, then run:
```
plot_BPcal_128T.py -f BandpassCalibration_node<coarse channel number>.dat -c
```
This should create an interactive plot that looks like this:
...
- The RTS basically does an "ls" command to find all of the gpubox files, which on a parallel file system can sometimes hang. This will be obvious when the RTS job for a channel takes much longer than the other channels. In this case, it is a good idea to look in the batch output file (named something like RTS_[obs ID]*.out) and the log files (which are written to the output directory specified when calling calibrate_vcs.py) to help determine which jobs are hanging. These jobs should be cancelled and resubmitted individually to the queue (see the sketch after this list).
- You may find that some coarse channel solutions simply don't converge. In this case, look at those channels that did converge and flag any obviously bad tiles or fine frequency channels (by adding the relevant entry to either flagged_tiles.txt or flagged_channels.txt) and re-run the calibration (often you will need to iterate a few times anyway). Hopefully, by eliminating really bad tiles/fine channels, the other coarse channels will converge.
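As a rough sketch of cancelling and resubmitting a hung coarse channel job (the batch script name below is an assumption based on the RTS_[obs ID]*.out naming above; check the batch directory for the actual file names):
```
# Cancel the hung RTS job for one coarse channel (find the job ID with squeue)
scancel <job ID>
# Resubmit the corresponding batch script (name is illustrative; check the batch directory)
sbatch RTS_<obs ID>_<channel>.batch
```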
...
Cleaning up after using the RTS
Once you have a calibration solution you're happy with, please remember to delete the raw visibilities, which are no longer needed.
It is a good idea to make a backup copy of the solutions themselves. We suggest making a compressed tarball that includes the following files (see the example command after this list):
- the DI_Jones files,
- the Bandpass files,
- flagged_channels.txt and flagged_tiles.txt, and
- the rts_[OBSID].txt file.
Compression will take the size down from ~55 MB to ~20 MB. The final tarball can then be easily transported to a secure backup location of your choice.
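For example, such a tarball could be made along the following lines (run from the directory containing the RTS solutions; the file globs and output name are illustrative, so adjust them to your actual file names):
```
# Bundle the calibration solution files into a compressed tarball for backup
tar -czvf rts_solutions_<obs ID>.tar.gz DI_Jones* Bandpass* flagged_channels.txt flagged_tiles.txt rts_<obs ID>.txt
```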
Calibrating with the calibrate software from the mwa-reduce package (by Andre Offringa)
This is an initial introduction to using the calibrate software from the mwa-reduce package by Andre Offringa (AO). It is available on Garrawarla after loading the module:
- module load mwa-reduce/master
- Execute calibrate to see the available options.
It requires a CASA measurement set, which can be downloaded using the MWA ASVO interface (see also the RTS section above). To use mwa_client on Garrawarla, load:
...
There are several options. Here, -applybeam specifies whether you want to use the MWA 2016 (or another) beam model (the beam model files are in /pawsey/mwa/) to calculate apparent flux densities of the calibrator sources in the sky_model.txt file. Alternatively, some people do this themselves and put "apparent sky" flux values (already corrected for the beam) into sky_model_BeamCorr.txt sky model files. The minimum and maximum baselines to be used are specified with -minuv and -maxuv respectively.
The resulting calibration solutions will be saved to a binary file, 1103645160_calsol.bin. This file can be easily read into Python. I've attached two (untested) example scripts which should work on Garrawarla.
...
Calibration solutions in the AO-format .bin file can be plotted, for example, using the check_solutions.sh script from the https://github.com/MWATelescope/manta-ray-calibration/ package, by running:
- check_solutions.sh 1103645160_calsol.bin
Offline correlation
...
The huge field of view of the MWA's tile beam means that there can be hundreds of pulsars in an observation. To list all the pulsars in the beam, run the following command:
```
module load vcstools/devel
find_pulsar_in_obs.py -o <obs ID>
```
...
```
beamform.nf --obsid <obs ID> --calid <cal ID> --all (or --begin <begin GPS time> --end <end GPS time>) --pointings <"RA string">_<"DEC string">,<"RA string">_<"DEC string"> [--ipfb] [--summed]
```
where ["RA string"]
is formatted as "hh:mm:ss.ss" and ["DEC string"]
is is formatted as "dd:mm:ss.ss" (including the sign: "+" or "-") you can also include multiple pointings by separating them by a spacecomma (,). The beamformer will output full polarisation (Stokes I, Q, U & V) PSRFITS files unless you used the --summed
flag. The summed option only outputs Stokes I which means you can't create polarisation profiles but uses a quarter of the storage and for that reasons is useful for large scale searches.
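For example, a single pointing at RA 05:34:31.97 and Dec +22:00:52.06, producing summed (Stokes I only) output, would look like this (the obs ID and cal ID are placeholders):
```
beamform.nf --obsid <obs ID> --calid <cal ID> --all --pointings 05:34:31.97_+22:00:52.06 --summed
```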
...
Pulsar Processing on Garrawarla
PRESTO, DSPSR, and other pulsar software packages are difficult to install at the best of times, so the common packages are not currently installed natively on Garrawarla.
...
Instead, they are provided via containerisation. There are two generic Singularity containers available to users, focusing on two different aspects of pulsar science/processing.
Pulsar searching: the psr-search container includes the most common pulsar searching tools, namely PRESTO and riptide (an FFA implementation). It can be accessed as shown below.
```
/pawsey/mwa/singularity/psr-search/psr-search.sif <command_here>
```
Pulsar follow-up analysis: the psr-analysis container includes the common pulsar analysis tools, such as PSRCHIVE, DSPSR, and various pulsar timing packages. It can be accessed as shown below.
...
```
/pawsey/mwa/singularity/psr-analysis/psr-analysis.sif <command>
```
Both of these images have been built to enable interactivity if required. To use this, one must modify how they run the container as follows:
```
singularity run -B ~/.Xauthority <container> <command>
```
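For example (a minimal sketch: pazi is PSRCHIVE's interactive zapping tool, and the archive name is a placeholder), an interactive session from the psr-analysis container could look like:
```
singularity run -B ~/.Xauthority /pawsey/mwa/singularity/psr-analysis/psr-analysis.sif pazi <archive file>
```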
--------------------------
There are other containers also available, but from January 2023 they will likely not be maintained or updated. They are "use at your own risk".
For PRESTO commands use:
```
/pawsey/mwa/singularity/presto/presto.sif <command_here>
```
...
For example:
```
/pawsey/mwa/singularity/presto/presto.sif prepfold -o <output_name> -psr <pulsar_jname> *fits
```
For simple DSPSR and PSRCHIVE commands use:
```
/pawsey/mwa/singularity/dspsr/dspsr.sif <command_here>
```
If you require an interactive or pop-up window (such as with tempo2), first make sure you use the -Y option when you ssh into Garrawarla, then use:
...
```
singularity run -B ~/.Xauthority /pawsey/mwa/singularity/pulseportraiture/pulseportraiture.sif <command_here>
```
--------------------------
The Observation Processing Pipeline
...
Image accurate as of commit e6215f42c1d7c0b5a255721bc46840335170e579 to mwa_search repo
- Input Data: The OPP requires the calibrated and beamformed products of the VCS. These data can be acquired using the method described here.
- Pulsar Search: Given an observation ID, each pulsar within the field is identified and handed to the Pulsar Processing Pipeline (PPP).
- Initial Fold(s): Performs a PRESTO fold on the data. For slow pulsars, this will probably be 100 bins. Fast pulsars will be 50 bins.
- Classification: The resulting folds are classified as either a detection or a non-detection.
- Best Pointing: For the MWA's extended array configuration, there may be multiple pointings for a single pulsar. Should this be the case, we want to find the brightest detection to use for the rest of the pipeline. The "best" detection will be decided on and its pointing will be the only one used going forward.
- Post Folds: A series of high-bin folds will be done. This is in order to find the highest time resolution fold we can do while still getting a detection.
- Upload products to database: Uploads the initial fold and best fold to the pulsar database.
- IQUV Folding: Uses DSPSR to fold on Stokes IQUV, making a time-scrunched archive. This archive is immediately converted back to PSRFITS format for use with PSRSALSA.
- RM Synthesis: Runs RM synthesis on the archive. If successful, will apply this RM correction.
- RVM Fitting: Attempts to fit the Rotating Vector Model to the profile. If successful, will upload products to the database.
...
```
data_processing_pipeline.nf --obsid <OBSID> --calid <CALID> --begin <OBS BEGIN> --end <OBS END> --publish_fits_scratch
```
...
```
data_processing_pipeline.nf --obsid <OBSID> --calid <CALID> --begin <OBS BEGIN> --end <OBS END> --only_cand_search
```
...
```
nswainston@garrawarla-1:~> ssh garrawarla-2
```
Resuming Nextflow Pipelines
One large benefit of Nextflow pipelines is that you can resume them. Once you have fixed the bug that caused the pipeline to crash, simply relaunch the pipeline with the -resume option added (see the example below). For the resume option to work, you must run the command from the same directory, and the work directory must not have been deleted.
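For example, if the beamforming pipeline from above crashed, you would rerun the same command from the same directory with -resume appended (arguments are placeholders):
```
beamform.nf --obsid <obs ID> --calid <cal ID> --all --pointings <"RA string">_<"DEC string"> -resume
```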
Cleaning up the work directories
Once the pipeline is done and you are confident you don't need to resume it or keep the intermediate files, it is a good idea to remove the Nextflow work directories to save space. By default, the work directories are stored in /astro/mwavcs/$USER/<obsid>_work.
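For example, assuming the default location above (double-check the path before deleting anything):
```
# Remove the Nextflow work directory for a finished pipeline
rm -rf /astro/mwavcs/$USER/<obsid>_work
```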
Calibration Combining Method
...
The name format for calibrator observations is the name of the calibrator source, an underscore, and the centre frequency channel ID. Try to find a pair of calibration observations with the same calibrator source that, together, cover the entire frequency range of the target observation. For the above example, this was 1195317056 and 1195316936. If you can't find any suitable calibration observations, you can keep increasing the search time window up to 48 hours.
Now that you know which calibration observations you need, download and calibrate them as you normally would, as explained in the calibration section. It is best to use the same values in flagged_tiles.txt and flagged_channels.txt for all calibration observations to ensure your calibration solutions are consistent. Once the calibration is complete, you can combine the two calibrations into one using the script
...
This will output the combined calibration solution to /astro/mwavcs/vcs/[obs ID]/cal/<first calibration ID>_<second calibration ID>/rts, and you can use <first calibration ID>_<second calibration ID> as the calibrator ID in other scripts.
Deprecated Methods
These are old methods that are not maintained, but they may be useful if you need to do something specific or the new scripts have failed.
Download (old python method)
...
To download an entire observation (observations are referred to by their observation IDs, which are GPS seconds, as labelled on the MWA data archive), use:
```
process_vcs.py -m download -o <obs ID> -a
```
otherwise, if you only want some interval [begin, end] of the full observation, then use:
```
process_vcs.py -m download -o <obs ID> -b <starting GPS second> -e <end GPS second>
```
Additionally, if the data have been processed on the NGAS server, then you have the option to ONLY download the incoherently summed data files (*_ics.dat) with the mode option -m download_ics.
To check progress, use:
```
squeue -p copyq -M zeus -u $USER
```
and you should see something like this:
Each user can have a maximum of 8 concurrent download jobs running.
Checking downloaded data
By default, data are written to /astro/mwavcs/vcs/[obs ID]. Once the observation appears to have downloaded correctly, it is good practice to check. The easiest way is to check the output from the process_vcs.py download command. In /group/mwavcs/vcs/[obs ID] there is a subdirectory labelled "batch", in which all of the submitted SLURM scripts and their output are stored. If there has been a problem in downloading data, this will appear as up to 10 files named "check_volt*.out", i.e.
```
check_volt_[GPS time]_0.out
check_volt_[GPS time]_1.out
check_volt_[GPS time]_2.out
...
```
After 10 attempts to correctly download the data, we stop trying. In this case, the user will need to investigate the SLURM output to see what was going wrong.
If the raw data was downloaded, and you manually want to check them, then use:
```
checks.py -m download -d raw -o <obs ID> -a (or -b/-e)
```
otherwise, if the recombined data (i.e. with the "Processed" flag on the MWA data archive) were downloaded, and you manually want to check them, then use:
```
checks.py -m recombine -o <obs ID> -a (or -b/-e)
```
The total data volume downloaded will vary, but for maximum-duration VCS observations this can easily be ~40 TB of just raw data. It is therefore important to keep in mind the amount of data you are processing and the disk space you are consuming. If only the raw voltages have been downloaded then you will need to recombine the data yourself, which doubles the amount of data (see next section).
...
Note that this step should be performed automatically by the vcs_download.nf script.
Recombine takes data spread over 32 files per second (each file contains 4 fine channels from one quarter of the array) and recombines them to 24+1 files per second (24 files with 128 fine channels from the entire array and one incoherent sum file); this is done on the GPU cluster ("gpuq") on Galaxy. When downloading the data, if you retrieved the "Processed" (i.e. recombined) data, then ignore this step as it has already been done on the NGAS server.
To recombine all of the data, use:
```
process_vcs.py -m recombine -o <obs ID> -a
```
or, for only a subset of data, use:
```
process_vcs.py -m recombine -o <obs ID> -b <starting GPS second> -e <end GPS second>
```
If you want to see the progress, then use:
```
squeue -p gpuq -u $USER
```
Generally, this processing should not take too long, typically ~few hours.
Checking the recombined data
As before, it is a good idea to check at this stage to make sure that all of the data were recombined properly. To do this, use:
```
checks.py -m recombine -o <obs ID>
```
...
A quick look at the data themselves can be made by creating a zero-DM time series with PRESTO's prepdata:
```
prepdata -o recombine_test -nobary -dm 0 <fits files>
```
The resulting time series (.dat file) can then be inspected interactively with:
```
exploredat <.dat file>
```
...
Beamforming (old python method)
...