...

Refer to the legacy page for instructions.

Offline PFB

*** WARNING: Must use VCSBeam >= v2.18 and mwalib >= v0.11.0 ***

This step is currently necessary for processing MWAX data, although it is intended to be subsumed into the beamforming step so that any intermediate channelisation required as part of the beamforming process becomes invisible to the user. This would also obviate the need to write the .dat files to disk: the pre-beamformed data are sufficiently voluminous that even our generous allotment of disk space on Pawsey's systems would be quickly exhausted.

The Offline PFB implements the weighted overlap-add algorithm described in McSweeney et al. (2020). It uses GPUs for the fine PFB operation (NVIDIA/CUDA), including cuFFT for the Fourier transform step, and operates on one second of data at a time. The GPUs must have at least 3.5 GB of device memory available. Operating on smaller chunks of data is not (yet) implemented.

A single call to Offline PFB operates on a single coarse channel and an arbitrary number of timesteps, and produces output files in the legacy .dat format. The example SBATCH script below shows it being applied to 600 seconds of data (starting at GPS second 1313388760) for 5 coarse channels. The output files are written to the current working directory.

By default, the Offline PFB uses the same polyphase filter that was used in the legacy system (see McSweeney et al. 2020), but alternative filters will be made available in the future. The filters are always applied on second boundaries, and the tap size is determined from the length of the filter and the number of desired output channels. No attempt is made to apply any time or phase adjustments to the voltages, either before or after the PFB is applied.
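The relationship between filter length, channel count, and tap size can be sketched with a little shell arithmetic. The specific numbers below are assumptions for illustration (the legacy MWA fine PFB is commonly described as a 12-tap, 128-channel filter); the Offline PFB derives the tap count the same way, from whatever filter is loaded.

```shell
#!/bin/bash
# Illustrative only: assumed coefficient count for the legacy-style filter
FILTER_LENGTH=1536   # total number of filter coefficients (assumption)
NCHANNELS=128        # desired number of fine channels per coarse channel

# Tap size = filter length divided by the number of output channels
NTAPS=$(( FILTER_LENGTH / NCHANNELS ))
echo "taps per channel: ${NTAPS}"
```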

Example of use on Garrawarla

(This example is intended to show how to pack multiple jobs onto the same compute node; however, in my tests the script did not achieve the parallelisation I was hoping for. At the very least, the user can amend this example to request multiple compute nodes instead.)

...

languagebash
titlefine_pfb_example.sh
collapsetrue

...

Documentation for the Offline PFB is found here.

Offline Correlator

For the standard use case of creating correlated visibilities from legacy data with a time integration of at least 100 ms, use the version of the offline correlator accessible via the instructions on the legacy page. The version that ships with VCSBeam is intended for fast imaging, capable of producing visibilities with integrations as short as 2 ms. This minimum was set partly by the constraints of the underlying (third-party) correlator engine, xGPU, which requires the number of time steps per integration to be a multiple of 4 that also divides the number of time steps in a second evenly. Since legacy data contain 10000 time steps per second, the smallest number of time steps satisfying both criteria is 4 (= 0.4 ms), but 20 time steps (= 2 ms) was chosen as a "rounder" number, considered adequate for science cases involving rapid transients, such as searching for FRBs.
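The two xGPU constraints above can be enumerated directly. This sketch lists every candidate up to 40 time steps per integration that is both a multiple of 4 and an even divisor of the 10000 legacy time steps per second; the smallest is indeed 4 (0.4 ms), and 20 (2 ms) is among the valid choices.

```shell
#!/bin/bash
NTIMESTEPS=10000   # time steps per second in legacy data
VALID=""

for (( N = 4; N <= 40; N += 4 )); do       # multiples of 4 only
    if (( NTIMESTEPS % N == 0 )); then     # must divide 10000 evenly
        VALID="${VALID} ${N}"
    fi
done

# Each valid N corresponds to an integration time of N/10 ms
echo "valid time steps per integration:${VALID}"
```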

...

Currently, each call to offline_correlator processes only a single legacy .dat file (i.e. a single second of a single coarse channel). Processing multiple seconds and coarse channels therefore requires running offline_correlator in batches.
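One way to batch the calls is a simple nested loop over seconds and channels, issuing one invocation per .dat file. This is a hedged sketch only: the filename pattern, paths, and channel list are assumptions to adapt to your data layout, and for safety the loop merely echoes each command (replace `echo` with the real `srun ... offline_correlator` invocation on Garrawarla). Note that the `-c` argument expects the gpubox channel number corresponding to each coarse channel, and as noted below that mapping is not trivial.

```shell
#!/bin/bash
# Hypothetical layout (assumptions for illustration)
DATA_DIR=/path/to/recombined/data
OBS_ID=1313388760
START=1313388762
NSECONDS=10
CHANNELS="109 110 111"   # coarse channel numbers to process

COUNT=0
for (( S = START; S < START + NSECONDS; S++ )); do
    for CH in ${CHANNELS}; do
        DAT="${DATA_DIR}/${OBS_ID}_${S}_ch${CH}.dat"
        GPUBOX_CH=${CH}   # placeholder: map coarse channel -> gpubox number here
        # Dry run: replace 'echo' with the real srun invocation
        echo offline_correlator -d "${DAT}" -s "${S}" -r 20 -n 4 \
             -c "${GPUBOX_CH}" -o "${OBS_ID}"
        COUNT=$(( COUNT + 1 ))
    done
done
```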

Example of use on Garrawarla

Code Block
languagebash
titleoffline_correlator_example.sh
collapsetrue
#!/bin/bash -l

#SBATCH --nodes=1
#SBATCH --mem=370gb
#SBATCH --partition=gpuq
#SBATCH --gres=gpu:1
#SBATCH --time=00:10:00
#SBATCH --account=mwavcs
#SBATCH --export=NONE

module use /pawsey/mwa/software/python3/modulefiles
module load vcsbeam/correlator

INPUT_DATA_FILE=/path/to/recombined/data/file/1313388760_1313388762_ch144.dat
START_SECOND=1313388762
DUMPS_PER_SECOND=20 # This sets the output time resolution
                    # (e.g. 20 --> 1/20 = 0.05s = 50 ms)
                    # Minimum allowed resolution is 2 ms
CHANS_TO_AVERAGE=4 # This sets the output frequency resolution
                   # (e.g. 4 --> 4x10 kHz = 40 kHz)
GPUBOX_CHANNEL_NUMBER=20 # This should be chosen to "match" the input channel
                         # This is not easy! (mwalib handles this, but at the
                         # moment, offline_correlator is not using mwalib)
OUTPUT_PREFIX=1313388760 # Output files begin with this

srun -N 1 -n 1 offline_correlator \
    -d ${INPUT_DATA_FILE} \
    -s ${START_SECOND} \
    -r ${DUMPS_PER_SECOND} \
    -n ${CHANS_TO_AVERAGE} \
    -c ${GPUBOX_CHANNEL_NUMBER} \
    -o ${OUTPUT_PREFIX}

...

  1. make_mwa_incoh_beam
  2. make_mwa_tied_array_beam

Incoherent beam

Example of use on Garrawarla

Tied-array beam

Example of use on Garrawarla
Code Block
languagebash
titleExample of use on Garrawarla
collapsetrue
#!/bin/bash -l

#SBATCH --nodes=24
#SBATCH --mem=370gb
#SBATCH --partition=gpuq
#SBATCH --gres=gpu:1
#SBATCH --time=01:00:00
#SBATCH --account=mwavcs
#SBATCH --export=NONE

module use /pawsey/mwa/software/python3/modulefiles
module load vcsbeam

srun -N 24 -n 24 make_mwa_tied_array_beam \
        -m PATH/TO/1240826896_metafits_ppds.fits \
        -b 1240826897 \
        -T 295 \
        -f 133 \
        -d PATH/TO/VCS/DATA \
        -P pointings.txt \
        -c PATH/TO/CAL/1240827912.metafits \
        -C PATH/TO/RTS/SOLUTION \
        -p

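The `-P` option above reads a pointings file. A minimal sketch of the assumed format (one pointing per line, with RA and Dec in sexagesimal, space separated; the coordinates here are arbitrary placeholders, not real targets):

```shell
#!/bin/bash
# Create a two-pointing file; each line is "HH:MM:SS.SS ±DD:MM:SS.SS"
cat > pointings.txt <<'EOF'
00:34:08.87 -07:21:53.40
05:34:31.97 +22:00:52.06
EOF

wc -l pointings.txt
```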
...