VCS pulsar data processing
...
- dedicated calibrator: usually taken directly before or after the target observation, pointed at a bright calibrator source. These are stored on the MWA data archive as visibilities (they are run through the online correlator like normal MWA observations). You can find the observation ID for the calibrator by searching the MWA data archive or by using the following command, which will list compatible calibrator IDs sorted by how close they are in time to the observation:
Code Block
mwa_metadb_utils.py -c <obs ID>
If this command generates an error, it may be due to a lack of calibration observations with the same frequency channels. If so, try the Calibration Combining Method.
- in-beam calibrator: using data from the target observation itself, correlating a section offline, and using those visibilities. See Offline correlation.
In order to download the calibration observation, set your MWA ASVO API key as an environment variable. Below are some steps to do so:
...
- If there are any tiles with max>3 on multiple channels, they are worth flagging.
- If there are any tiles with max=0, they are worth flagging (even if only one polarisation has max=0!), as they contribute no signal and can cause errors in beamforming.
- It is best to only flag up to ~3 tiles at a time, as bad tiles can affect other potentially good tiles.
- Sensitivity scales as ~sqrt(128-N), where N is the number of flagged tiles, so flagging more than ~50 tiles will start to noticeably lower your sensitivity and the observation may be worth abandoning (see the short example after this list).
- Make sure you put the number on the right of the key (between "flag" and "?") into the flagged_tiles.txt file.
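As a rough illustration of that sensitivity scaling (a sketch only; the real impact depends on which tiles are flagged), the relative sensitivity for a given number of flagged tiles can be computed as follows:
Code Block
# Relative sensitivity after flagging N tiles, using the ~sqrt(128-N) scaling quoted above
N=10
python3 -c "import math; print(round(math.sqrt((128 - $N) / 128.0), 3))"
# prints 0.96, i.e. roughly a 4% sensitivity loss for 10 flagged tiles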
In the "attempt_number_N" subdirectory are a chan_x_output.txt and phase_x_output.txt file that contains all of the recommended flags that the calibration plotting script creates. These can be useful when deciding which tile(s) to flag next. The following bash command will output the worst tile with the greatest gain value for each channel:
Code Block
for i in $(ls chan*txt); do grep $(cat $i | cut -d '=' -f 3 | cut -d ' ' -f 1 | sort -n | tail -n 1 | head -n 1) $i; done
...
So for this example, you should flag tile 122.
It is also interesting to look for obviously too low powers, which can be achieved by running the following command, which will list the tile with the lowest gain value for each channel. Tiles with very small (near-zero) values should be flagged.
...
Code Block
for i in $(ls chan*.txt); do grep $(cat $i | tail -n+3 | cut -d '=' -f 3 | cut -d ' ' -f 1 | sort -n | head -1) $i; done
If there is a single channel that is affecting your solution, you can flag it using the following method.
Make sure you have used the -X option when you ssh to enable an interactive terminal, then run:
Code Block
plot_BPcal_128T.py -f BandpassCalibration_node<coarse channel number>.dat -c
This should create an interactive plot that looks like this
...
where [obs ID], [pointing] and [channel] are defined as above.
Pulsar Processing on Garrawarla
Pulsar software is difficult to install at the best of times, so the common packages are not currently natively installed on Garrawarla but are provided via containerisation. There are two generic Singularity containers available to users, each focusing on a different aspect of pulsar science/processing.
Pulsar searching: the psr-search container includes the most common pulsar searching tools, namely PRESTO and riptide (FFA implementation). It can be accessed as shown below.
Code Block
/pawsey/mwa/singularity/psr-search/psr-search.sif <command>
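For example, a PRESTO tool could be run through the container like this (an illustrative sketch only; the options and file names are placeholders, and any of the included search tools can be substituted for prepfold):
Code Block
# Run a PRESTO command through the psr-search container (placeholders only)
/pawsey/mwa/singularity/psr-search/psr-search.sif prepfold <options> <PSRFITS files>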
Pulsar follow-up analysis: the psr-analysis container includes the common pulsar analysis tools, such as PSRCHIVE, DSPSR, and various pulsar timing packages. It can be accessed as shown below.
Code Block
/pawsey/mwa/singularity/psr-analysis/psr-analysis.sif <command>
Both of these images have been built to enable interactivity if required. To use this, you must modify how you run the container as follows:
Code Block
singularity run -B ~/.Xauthority <container> <command>
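For instance, an interactive PSRCHIVE tool such as pazi could be run through the psr-analysis container like this (a sketch only; the archive file name is a placeholder):
Code Block
# Interactive RFI zapping with PSRCHIVE's pazi (archive name is a placeholder)
singularity run -B ~/.Xauthority /pawsey/mwa/singularity/psr-analysis/psr-analysis.sif pazi <archive file>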
--------------------------
There are other containers available as well, but from January 2023 onwards they will likely not be maintained or updated; they are "use at your own risk".
For PRESTO commands use:
...
Code Block
singularity run -B ~/.Xauthority /pawsey/mwa/singularity/pulseportraiture/pulseportraiture.sif <command_here>
--------------------------
The Observation Processing Pipeline
...
Image accurate as of commit e6215f42c1d7c0b5a255721bc46840335170e579 to mwa_search repo
- Input Data: The OPP requires the calibrated and beamformed products of the VCS. These data can be acquired using the method described here.
- Pulsar Search: Given an observation ID, each pulsar within the field is identified and handed to the Pulsar Processing Pipeline (PPP).
- Initial Fold(s): Performs a PRESTO fold on the data. For slow pulsars, this will probably use 100 bins; fast pulsars will use 50 bins (see the illustrative command after this list).
- Classification: The resulting folds are classified as either a detection or a non-detection.
- Best Pointing: For the MWA's extended array configuration, there may be multiple pointings for a single pulsar. Should this be the case, we want to find the brightest detection to use for the rest of the pipeline. The "best" detection will be decided on and its pointing will be the only one used going forward.
- Post Folds: A series of high-bin folds is performed in order to find the highest time resolution fold we can achieve while still getting a detection.
- Upload products to database: Uploads the initial fold and best fold to the pulsar database.
- IQUV Folding: Uses DSPSR to fold on Stokes IQUV, making a time-scrunched archive. This archive is immediately converted back to PSRFITS format for use with PSRSALSA.
- RM Synthesis: Runs RM synthesis on the archive. If successful, the resulting RM correction is applied.
- RVM Fitting: Attempts to fit the Rotating Vector Model to the profile. If successful, the products are uploaded to the database.
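To give a concrete feel for the initial fold step, the command below sketches the kind of PRESTO fold described above (the pulsar name, file names, and exact options are placeholders and may differ from what the pipeline actually runs):
Code Block
# Illustrative PRESTO fold with 100 profile bins (names and options are placeholders)
prepfold -psr <pulsar name> -n 100 -noxwin <PSRFITS files>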
...
Code Block
nswainston@garrawarla-1:~> ssh garrawarla-2
Resuming Nextflow Pipelines
One large benefit of Nextflow pipelines is that you can resume them. Once you have fixed the bug that caused the pipeline to crash, simply relaunch the pipeline with the -resume option added. For the resume option to work, you must run the command from the same directory, and the work directory must not have been deleted.
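For example, if the pipeline was originally launched with a command like the one below (the script name and options are placeholders), it can be relaunched from the same directory with -resume appended:
Code Block
# Relaunch the same pipeline from the same directory, resuming from cached results
# (script name and options are placeholders)
nextflow run <pipeline>.nf <original options> -resume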
Cleaning up the work directories
Once the pipeline is done and you are confident you will not need to resume it or use the intermediate files, it is a good idea to remove the Nextflow work directories to save space. By default, the work directories are stored in /astro/mwavcs/$USER/<obsid>_work.
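For example, the work directory for a single observation can be removed like this (double-check the path before deleting anything):
Code Block
# Remove the Nextflow work directory for a finished pipeline (check the path first!)
rm -rf /astro/mwavcs/$USER/<obsid>_work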
Calibration Combining Method
...
The name formatting for calibrator observations is the name of the calibrator source, an underscore, and the centre frequency channel ID. Try to find a pair of calibration observations with the same calibrator source that, together, cover the entire frequency range of the target observation. For the above example, this was 1195317056 and 1195316936. If you can't find any suitable calibration observations, you can keep increasing the time search window up to 48 hours.
Now that you know which calibration observations you need, download and calibrate them as you normally would, as explained in the calibration section. It is best to use the same values in flagged_tiles.txt and flagged_channels.txt for all calibration observations to ensure your calibration solutions are consistent. Once the calibration is complete, you can combine the two calibrations into one using the script
...
This will output the combined calibration solution to /astro/mwavcs/vcs/[obs ID]/cal/<first calibration ID>_<second calibration ID>/rts, and you can treat the calibrator ID as <first calibration ID>_<second calibration ID> when it is used in other scripts.
Deprecated Methods
These are old methods that are no longer maintained but may be useful if you need to do something specific or if the new scripts have failed.
Download (old python method)
...