
Top-Level Architecture

MWAX comprises 26 new servers and repurposes 10 of the existing servers on site at the MRO, occupying three racks.  It makes use of the existing 100 Gbps fibre-optic link from site to the ten existing storage servers located in B206 on Curtin Campus. These ten servers will buffer MWA data pending transfer via a 100 Gbps fibre-optic link from B206 to the Pawsey Supercomputing Centre.

This configuration is capable of simultaneous fine channelization, cross-correlation, frequency/time averaging, and buffer storage for 256 tiles.  In addition to supporting twice the inputs and four times the output visibilities of the legacy system, the MWAX correlator offers a number of further improvements over it.


The MWAX correlator design employs a multicast architecture, allowing each receiver to send its stream of high time resolution data to any number of multicast consumers with no additional load on the sender.  Initially there will be 24 multicast consumers (one for each coarse channel, for correlation and voltage capture); however, the multicast architecture allows additional consumers to utilize the same high time resolution data for other purposes.  For example, RFI monitoring, transient detection, and external instruments such as Breakthrough Listen could commensally consume some or all of the high time resolution data without impacting the operation of the telescope.


The MWAX correlator employs the “FX” architecture where the input time samples for each signal path (antenna and polarization) are fine-channelized prior to cross-correlation, reducing the correlation process to complex multiplications in the frequency domain.
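
To illustrate the FX approach in the abstract (not the MWAX implementation itself), the short NumPy sketch below channelizes two hypothetical signal paths with an FFT and forms a visibility spectrum by frequency-domain cross-multiplication; all sizes are invented for clarity.

```python
import numpy as np

# Toy FX correlation of two signal paths (sizes are arbitrary, chosen for clarity).
n_fft = 64          # fine channels
n_spectra = 100     # FFT frames to integrate

rng = np.random.default_rng(0)
x = rng.normal(size=n_fft * n_spectra) + 1j * rng.normal(size=n_fft * n_spectra)
y = np.roll(x, 3) + 0.1 * (rng.normal(size=x.size) + 1j * rng.normal(size=x.size))

# F stage: fine-channelize each signal path, one FFT per frame.
X = np.fft.fft(x.reshape(n_spectra, n_fft), axis=1)
Y = np.fft.fft(y.reshape(n_spectra, n_fft), axis=1)

# X stage: cross-multiply in the frequency domain and average over time,
# yielding one visibility spectrum for this baseline.
visibility = (X * np.conj(Y)).mean(axis=0)
print(visibility.shape)   # (64,)
```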


Whereas the legacy correlator utilizes a polyphase filterbank for its F-engine, the MWAX correlator employs the FFT, following the rationale presented in Appendix C.  For the X-engine, development time/cost has been minimized by utilizing the existing open-source GPU correlator library “xGPU” (the same library at the heart of the legacy correlator).  The standard xGPU library is used essentially unchanged for MWAX, with the exception of a small but crucial change to improve speed and reduce bus/memory traffic, which was found to be essential to making the design scalable to 256T and beyond.  This modification is described in Appendix C.


The 24 coarse channels are processed by 24 GPU-accelerated compute nodes referred to as “MWAX Servers”, plus two ‘hot spares’.  Each MWAX Server implements the functions shown in the figure below.  The real-time data flows on each MWAX Server are managed through input and output ring buffers that decouple its computational workflow from the input source and output destinations.  These ring buffers are established and accessed using the open-source library “PSRDADA”.
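
PSRDADA provides its own shared-memory ring buffer API; purely to illustrate how a ring buffer decouples capture from processing, the following in-process sketch uses a bounded queue with invented names and sizes (it is not the PSRDADA API).

```python
import queue
import threading

# Toy ring buffer: capture and correlation run independently, coupled only
# through a bounded buffer (PSRDADA does this with shared-memory segments).
ring = queue.Queue(maxsize=4)   # at most 4 blocks "in flight"

def capture(n_blocks):
    for i in range(n_blocks):
        ring.put(f"sub-observation block {i}")   # stand-in for 8 s of voltages
    ring.put(None)                               # end-of-stream marker

def correlate():
    while (block := ring.get()) is not None:
        print("correlating", block)              # stand-in for the FX engine

producer = threading.Thread(target=capture, args=(8,))
producer.start()
correlate()
producer.join()
```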


The execution speed of MWAX depends on various parameter/configuration choices, most significantly the number of tiles to correlate and the GPU hardware employed.  For example, sustained real-time 256T correlation has been benchmarked successfully on an NVIDIA RTX 2080 Ti GPU card.


MWAX has been designed such that the exact same code used for real-time operation on the dedicated MWAX Servers at the MRO can also be installed on compute nodes with lower-specification GPUs (e.g. at Pawsey) to provide an offline mode, operating below real-time speed.  Note: the modes available to the offline correlator depend heavily on the server and GPU hardware on which it is executed.



The table below shows (in green) which output visibility modes will be able to operate in real time with the proposed hardware configuration, assuming 256 tiles and 30.72 MHz of instantaneous bandwidth.  The figures overlaid on each mode entry are the visibility data rates generated in that mode, in gigabits per second (Gbps).  Modes in red may still be possible for short periods, subject to final hardware specifications and limitations.  Note that the number of modes available to astronomers is significantly increased compared with the legacy correlator.

See: MWAX Correlator Modes (128T)

  


1.2  MWAX Correlator Signal Path Data Flow

In this section we describe the flow of signals from the MWA receivers to the correlator and then into long term storage at the MWA Archive at the Pawsey Supercomputing Centre, illustrated in the diagram below.

Media Converter

48 fibre-optic cables from the existing 16 MWA receivers in the field enter the control building.  This proposal calls for them to be directly connected to existing “EDT” cards located in 8 servers (+ 2 hot spares) selected and re-tasked from the existing pool of 16 VCS servers currently used in the legacy correlator.  These will become media converter servers (“Medconv Servers”), responsible for taking the bespoke protocol from the receivers and converting it to the same data format we receive directly from NI FlexRIO and SNAP based receivers (both of which are potential future MWA receiver candidates).


The re-use of existing servers for this media conversion task is not a strict requirement; however, we have used this make and model of server successfully for years, both as an integral part of the legacy correlator and for MWAX testing and development, and we understand this hardware well.  The need for only 8+2 units frees up the remainder of the identically configured servers for use as ‘whole-of-life’ spares.  These Medconv Servers would be decommissioned as the legacy receivers they support age and are eventually retired or replaced with new-generation receivers.


Within the Medconv Servers, 2048 consecutive packets of bespoke-format 5-bit receiver voltage data are received on an EDT FPGA card, copied to server memory, merged, padded into 8-bit format and then sorted by input signal chain and destination coarse channel number.  They are then sent as 128 IP multicast packets out of a conventional 10 Gb Ethernet port onto the local multicast voltage network.  Each coarse channel forms a separate multicast stream.  All voltage data packets are sent via aggregation switches up to the existing Cisco 9504 switch, which acts as the core switch for the correlator, i.e. all voltage data for all coarse channels is available on the fabric of this core switch.
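
The exact receiver bit-packing is not reproduced here, but as a hedged sketch of the 5-bit to 8-bit promotion step, the fragment below sign-extends already-unpacked 5-bit two's-complement sample codes into int8 values.

```python
import numpy as np

def promote_5bit_to_8bit(codes: np.ndarray) -> np.ndarray:
    """Sign-extend 5-bit two's-complement sample codes (one per byte) to int8.

    Assumes the capture code has already unpacked each 5-bit sample into the
    low bits of a byte; the real receiver packing is more involved.
    """
    vals = codes.astype(np.int16) & 0x1F            # keep the 5 data bits
    vals = np.where(vals >= 16, vals - 32, vals)    # sign-extend bit 4
    return vals.astype(np.int8)

# Example: codes 0b01111 (+15) and 0b10000 (-16)
print(promote_5bit_to_8bit(np.array([0x0F, 0x10], dtype=np.uint8)))   # [ 15 -16]
```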

The voltage data output from the Medconv Servers is compatible with the direct output from NI FlexRIO and SNAP based receivers in timing, packet format and coarse channel bandwidth.  No further resampling or conversion is needed to cross-correlate a heterogeneous array of these three receiver types.


MWAX UDP Capture + Voltage Capture To Disk

As per standard IP multicast, any device on the voltage network can “join” the multicast for one or more coarse channels and a copy of the relevant stream will be delivered to it.
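
As an illustration of such a join (the multicast group address and port below are placeholders, not the MWA's actual addressing plan), a consumer can subscribe to one coarse-channel stream using only the Python standard library:

```python
import socket
import struct

GROUP = "239.255.90.1"   # hypothetical coarse-channel multicast group
PORT = 59001             # hypothetical UDP port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group prompts the host (via IGMP) to have the switch deliver the stream.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

packet, sender = sock.recvfrom(9000)   # jumbo-frame sized receive buffer
print(f"received {len(packet)} bytes from {sender}")
```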

For MWAX, a new array of 24 (+2 spare) identically configured MWAX Servers will be connected to the core Cisco 9504 switch via 40 Gb Ethernet.  Each MWAX Server is responsible for one coarse channel (1.28 MHz) of bandwidth.  For a 256T array, the data rate is approximately 11 Gbps per coarse channel.

Packets from the multicast stream are assembled in shared memory (RAM) into 8 second blocks of high time resolution voltage data (known as “sub-observations”), based on their time and source.  At the completion of each 8 second block, the RAM file is closed and made available to another process.  Depending on the current observing mode, the block may be:


  • Retained in RAM for a period to satisfy triggered ‘buffer dump’ commands
  • Written immediately to disk for voltage capture mode
  • Passed to the FX Engine for cross-correlation via a PSRDADA ring buffer


A 256T sub-observation buffer for one coarse channel (covering the full 8 seconds) is approximately 10 GB in size.  The proposed hardware can buffer approximately 2 minutes of data in its available memory.
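
These figures follow directly from the array parameters; the short calculation below assumes 8-bit real + 8-bit imaginary (2 bytes) per voltage sample after media conversion, as described above.

```python
# Back-of-envelope check of the per-coarse-channel figures quoted above.
tiles = 256
signal_paths = tiles * 2              # two polarizations per tile
sample_rate = 1_280_000               # complex samples per second per coarse channel
bytes_per_sample = 2                  # 8-bit real + 8-bit imaginary
subobs_seconds = 8

rate_gbps = signal_paths * sample_rate * bytes_per_sample * 8 / 1e9
subobs_gb = signal_paths * sample_rate * bytes_per_sample * subobs_seconds / 1e9

print(f"{rate_gbps:.1f} Gbps per coarse channel")     # ~10.5 Gbps (≈11 Gbps with packet overheads)
print(f"{subobs_gb:.1f} GB per 8 s sub-observation")  # ~10.5 GB, consistent with the ~10 GB above
```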


MWAX Metadata and Fringe Tracking ‘Delay Engine’

When an observation is added to the schedule, the desired target coordinate on the sky (the 'phase centre') is stored as a Right Ascension and Declination in the ICRS coordinate system.  The delays for each tile’s analog beamformer for that observation are chosen by finding the nearest 'sweet spot' position on the sky, where sweet spots are pointing directions for which the ideal delays for each dipole exactly match the quantized delays realizable by the analog beamformer delay board hardware.  This remains the same as in the existing legacy system.

Over the course of an observation, the target coordinates (RA/Dec phase centre) will move across the sky due to sidereal motion, while the primary beam will stay at a fixed Azimuth/Elevation within a few degrees of the phase centre RA/Dec. At the start of the next observation, the primary beam will shift (if necessary) to another sweet spot.

A few seconds before the start of each observation, the correlator configuration daemon takes all the metadata about that observation and constructs a FITS file describing that observation. One of the HDUs (header/data units) in that FITS file contains a table of all the tiles used in that observation, along with their physical locations and cable lengths. Another HDU contains a table of target (phase centre RA/Dec) positions, converted to a current Topocentric Alt/Az coordinate every four seconds over the course of that observation.

The coordinate conversion is carried out using the astropy.coordinates library (http://docs.astropy.org/en/stable/coordinates), and transforms from the ICRS reference frame (within a few milliarcseconds of the J2000 equatorial FK5 reference frame) to the current topocentric azimuth (degrees East of true local North) and altitude (degrees above the local horizon).  The astropy library uses Earth rotation data (UT1-UTC offset and/or polar motion) from the International Earth Rotation and Reference Systems Service (IERS), updated automatically as required.
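
A minimal sketch of an equivalent astropy transformation is shown below; the MRO coordinates, epoch and phase centre are illustrative values only, not those used by the M&C software.

```python
import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, SkyCoord
from astropy.time import Time

# Approximate MRO location (illustrative values only).
mro = EarthLocation(lat=-26.703 * u.deg, lon=116.671 * u.deg, height=377 * u.m)

# Hypothetical ICRS phase centre and a 4 s time grid spanning 8 s.
phase_centre = SkyCoord(ra=120.0 * u.deg, dec=-26.0 * u.deg, frame="icrs")
times = Time("2021-01-01T12:00:00", scale="utc") + [0, 4, 8] * u.s

altaz = phase_centre.transform_to(AltAz(obstime=times, location=mro))
for t, coord in zip(times, altaz):
    print(t.isot, f"az = {coord.az.deg:.4f} deg", f"alt = {coord.alt.deg:.4f} deg")
```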

The FITS file describing the new observation is written to a shared directory, accessible by all of the MWAX Servers.  Each server uses its contents to set up the correlator in the defined mode (frequency/time averaging, etc.), and to calculate delays for each correlator input using the Alt/Az table, the tile location and the recorded cable delays for that tile.  These delays are converted into units of time samples (1/1,280,000 seconds).


The midpoint integer delay for each 8 second sub-observation is calculated per signal path in units of time samples.  All data for that signal path is shifted backwards or forwards in time for this sub-observation by that integer number of time samples.

The remaining residual delay for each signal path is interpolated for the start and end of each 50 ms block within the 8 second sub-observation (for a total of 161 residual values per signal path).  These residuals are passed to the GPU, where they are applied in the form of phase rotations, with the values updated 20 times per second.
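
To make the two-stage delay application concrete, the sketch below splits a hypothetical total delay into a whole-sample shift and a residual (in sample units), the latter being what is later applied as phase rotations; all numerical values are invented.

```python
import numpy as np

SAMPLE_RATE = 1_280_000        # time samples per second per coarse channel
N_BLOCKS = 160                 # 50 ms blocks per 8 s sub-observation

def split_delay(midpoint_delay_s: float):
    """Split a sub-observation midpoint delay into whole samples + residual samples."""
    delay_samples = midpoint_delay_s * SAMPLE_RATE
    whole = int(round(delay_samples))      # applied as a time-domain sample shift
    residual = delay_samples - whole       # applied later as phase rotations
    return whole, residual

whole, residual = split_delay(3.21e-6)     # e.g. a 3.21 microsecond geometric + cable delay
print(whole, residual)                     # 4 whole samples, ~0.109 samples residual

# Residual delays are evaluated at the 161 block boundaries (start/end of each
# 50 ms block); here a made-up smooth drift stands in for the real values.
boundary_residuals = residual + np.linspace(0.0, 0.05, N_BLOCKS + 1)
```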

Currently only RA/Dec tracking is supported.  For objects that move in RA/Dec, such as the Moon, the RA/Dec at the start time of the observation is applied and that RA/Dec is tracked for the duration of the observation.  A follow-up observation a few minutes later would be phased up to a (very) slightly different RA/Dec.

Test data shows that the quadratic delay fit across the 8 second sub-observation is an extremely close approximation while being computationally inexpensive.  The dominant error in the delay tracking performance is radio source (sky) motion during the 50 ms update rate of the phase rotations.

The calculation from Alt/Az time series to delay model is itself agnostic to the source of the Alt/Az series in the metadata FITS file.  Modest improvements to the M&C system would therefore allow tracking of objects such as the ISS; however, the current 50 ms phase rotation update rate and 8 second whole-sample delay update rate would remain, limiting performance when tracking objects with fast apparent motion.


MWAX Correlator FX Engine

The MWAX correlator FX Engine is implemented as a PSRDADA client: a single process that reads from the input ring buffer and writes to the output ring buffer, while working in a closely coupled manner with a single GPU device, which can be any standard NVIDIA/CUDA-based GPU card.


The figure below shows the processing stages and data flows within the MWAX correlator FX Engine process.

The FX Engine treats individual 8 second sub-observations as independent work units.  Most of its mode settings can change on the fly from one sub-observation to the next.  It operates on 50 ms units of input data, i.e. a total of 160 blocks over each 8 second sub-observation.  An additional block of metadata (of the same size as a 50 ms data block) is prepended to the data blocks, making a total of 161 blocks per sub-observation.  At the start of each new sub-observation, the metadata block is parsed to configure the operating parameters for the following 160 data blocks.


Each 50 ms data block consists of 64,000 time samples for each signal path presented at the input (number of tiles x 2 polarizations).  The 256T correlator configuration supports up to 512 signal paths.  Input data is transferred to GPU memory and promoted from 8-bit integers to 32-bit floats.  The 64,000 time samples of each path are partitioned into 10 blocks of 6,400 samples (5 ms), each of which is FFT’d on the GPU using “cuFFT”, resulting in 10 time samples on each of 6,400 ultrafine channels of resolution 200 Hz.
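
A hedged NumPy analogue of this partitioning and FFT step (the real implementation runs on the GPU via cuFFT) makes the shapes concrete:

```python
import numpy as np

N_PATHS = 4          # 512 (256 tiles x 2 pols) in the full system; reduced to keep the toy small
N_SAMPLES = 64_000   # complex samples per signal path per 50 ms block
N_FFT = 6_400        # ultrafine channels per coarse channel

# Stand-in for one 50 ms input block after promotion to 32-bit float complex.
block = np.zeros((N_PATHS, N_SAMPLES), dtype=np.complex64)

# Partition each path into 10 x 5 ms segments and FFT each segment, giving
# 10 time samples on each of 6,400 ultrafine channels per signal path.
spectra = np.fft.fft(block.reshape(N_PATHS, 10, N_FFT), axis=2)
print(spectra.shape)                                   # (4, 10, 6400)
print(1_280_000 / N_FFT, "Hz per ultrafine channel")   # 200.0
```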


Fractional delays are then applied to each signal path to point to a specified correlation pointing centre.  The required delay values are generated within the Delay Engine and passed to the FX Engine via the prepended metadata block written to the input ring buffer.  Delays for the start and end of every 50 ms data block are provided, from which the FX Engine linearly interpolates the delay to be applied to each 5 ms sub-block.  Delays are applied by multiplying the frequency-domain samples of each sub-block by a phase gradient, whose complex gain values are taken from a pre-computed look-up table to increase speed.
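
The residual-delay correction is equivalent to multiplying each 5 ms spectrum by a linear phase gradient across frequency; a simplified NumPy sketch (computing the gradient directly rather than via the pre-computed look-up table used for speed) is shown below, with invented delay values.

```python
import numpy as np

N_FFT = 6_400                 # ultrafine channels
CHAN_WIDTH_HZ = 200.0

def apply_fractional_delay(spectrum: np.ndarray, delay_s: float) -> np.ndarray:
    """Apply a residual delay to one 5 ms spectrum as a frequency-domain phase gradient."""
    freqs = np.fft.fftfreq(N_FFT, d=1.0 / (N_FFT * CHAN_WIDTH_HZ))   # Hz, FFT ordering
    return spectrum * np.exp(-2j * np.pi * freqs * delay_s)

# Delays supplied for the start and end of a 50 ms block are linearly interpolated
# to each of its ten 5 ms sub-blocks (invented example values, in seconds).
delay_start, delay_end = 1.0e-9, 1.4e-9
sub_block_delays = np.interp((np.arange(10) + 0.5) / 10, [0.0, 1.0], [delay_start, delay_end])

spectrum = np.ones(N_FFT, dtype=np.complex64)
corrected = apply_fractional_delay(spectrum, sub_block_delays[0])
```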


The FFT’d data is then transposed to place it in the order that xGPU requires (slowest-to-fastest changing): [time][channel][tile][polarization].  As the data is re-ordered, it is written directly into xGPU’s input holding buffer in GPU memory.  The data from five 50 ms blocks is aggregated in this buffer, corresponding to an xGPU “gulp size” of 250 ms.  The minimum integration time is one gulp, i.e. 250 ms.  The integration time can be any multiple of 250 ms for which there are an integer number of gulps over the full 8 second sub-observation.
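
A toy NumPy re-ordering into the [time][channel][tile][polarization] layout is shown below; the array sizes are reduced from the real (10, 6400, 256, 2) case, and the F-stage output ordering is assumed for illustration.

```python
import numpy as np

# Reduced sizes for the toy; the real case is n_time=10, n_chan=6400, n_tile=256, n_pol=2.
n_time, n_chan, n_tile, n_pol = 10, 64, 8, 2

# F-stage output, assumed here to be ordered [tile][pol][time][channel].
f_output = np.zeros((n_tile, n_pol, n_time, n_chan), dtype=np.complex64)

# Re-order to the slowest-to-fastest layout xGPU expects: [time][channel][tile][pol],
# writing into a contiguous buffer as the real code does when filling xGPU's input buffer.
xgpu_input = np.ascontiguousarray(f_output.transpose(2, 3, 0, 1))
print(xgpu_input.shape)    # (10, 64, 8, 2)
```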


xGPU places the computed visibilities for each baseline, with 200 Hz resolution (6,400 channels), in GPU memory.  A GPU function then performs channel averaging according to the “fscrunch” factor specified in the metadata block, reducing the number of output channels to (6400/fscrunch), each of width (200*fscrunch) Hz.  During this averaging process, each visibility can have a multiplicative weight applied, based on a data occupancy metric that takes account of any input data blocks that were missing due to lost UDP packets or RFI excision (a potential future enhancement).  The centre (DC) ultrafine channel is excluded when averaging and the centre output channel values are re-scaled accordingly.  Note that only 200 Hz of bandwidth is lost in this process, rather than a complete output channel.  The averaged output channel data is then transferred back to host memory.
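
A hedged sketch of the fscrunch averaging for a single baseline/polarization spectrum follows; the occupancy weighting is omitted, FFT channel ordering (DC bin at index 0) is assumed, and fscrunch is assumed to be greater than 1.

```python
import numpy as np

N_ULTRAFINE = 6_400   # 200 Hz ultrafine channels

def fscrunch_average(vis: np.ndarray, fscrunch: int) -> np.ndarray:
    """Average ultrafine channels into output channels of width 200*fscrunch Hz.

    vis: visibilities for one baseline/pol, shape (N_ULTRAFINE,), DC bin assumed
    at index 0 (FFT ordering).  Occupancy weights omitted; fscrunch > 1 assumed.
    """
    out = vis.reshape(N_ULTRAFINE // fscrunch, fscrunch).mean(axis=1)
    # Exclude the DC ultrafine channel from its output channel and re-scale, so
    # only 200 Hz of bandwidth is lost rather than the whole output channel.
    out[0] = (out[0] * fscrunch - vis[0]) / (fscrunch - 1)
    return out

vis = np.arange(N_ULTRAFINE, dtype=np.complex64)
print(fscrunch_average(vis, 4).shape)    # (1600,) output channels of 800 Hz each
```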


xGPU utilizes data ordering that is optimized for execution speed.  The visibility set is re-ordered into a more intuitive triangular order by the CPU: [time][baseline][channel][pol].  The re-ordered visibility sets (one per integration time) are then written to the output ring buffer.


Visibility Data Capture

The data capture process, running on each MWAX Server, reads visibility data off the output PSRDADA ring buffer and writes the data into FITS format. The data capture process breaks up large visibility sets into files of up to approximately 5 GB each, in order to optimize data transfer speeds while keeping the individual visibility file sizes manageable. The FITS files are written onto a separate partition on the MWAX Server disk storage.


Transfer to Curtin Data Centre Temporary Storage

An archiving process (mwax_mover), running on each MWAX Server, watches the directory to which the data capture software writes the output visibility FITS files, as well as the directory to which the voltage data is written, and adds these files to a queue.


mwax_mover iterates through each file in the queue, first creating a record in the MWA metadata database for each file and then attempting to transfer it to an MWAcache Server at the Curtin B206 data centre via the MRO-to-Perth 100 Gbps link.  The transfer is performed using xrootd, a high-performance, reliable and scalable file transfer protocol.  In the event that the 100 Gbps link to the Curtin B206 data centre is down, or there are issues with the MWAcache Servers, the mwax_mover process will requeue the files and attempt to transfer them once the systems or link are back online.
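
As a purely illustrative sketch of this queue-and-retry behaviour (not the actual mwax_mover implementation), with `record` and `transfer` standing in for the metadata database insert and the xrootd copy:

```python
import queue
import time

def archive_loop(paths, record, transfer, retry_delay_s=60):
    """Record each file once, then transfer it, re-queueing on failure.

    `record` and `transfer` are placeholders for the metadata database insert
    and the xrootd copy to an MWAcache Server; this is not mwax_mover itself.
    """
    q = queue.Queue()
    for path in paths:
        record(path)          # create the MWA metadata database record once
        q.put(path)
    while not q.empty():
        path = q.get()
        try:
            transfer(path)    # e.g. copy to an MWAcache Server over the 100 Gbps link
        except OSError:
            q.put(path)       # link or server down: re-queue and back off
            time.sleep(retry_delay_s)
```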


There are ten MWAcache Servers (eight in use, with two spares); each MWAcache Server receives three coarse channels (one each from three different MWAX Servers).  The MWAcache Servers each run an instance of xrootd to facilitate reliable, high-speed transfers from the MWAX Servers.


Transfer and Storage at Pawsey Long Term Archive

The mwax_mover instance running on each MWAcache Server at Curtin University will be configured to automatically forward all visibility and voltage files to the Pawsey Front-End (FE) Servers, where the data will be stored in Pawsey’s hierarchical storage management system (HSM).  Once data has been successfully transferred to Pawsey, it is moved to a cache on the MWAcache Servers, which is also managed by mwax_mover.  mwax_mover will cache data up to a configurable high-water mark (e.g. 85% of volume capacity) and will remove the oldest data to remain under that threshold.
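
A minimal sketch of such high-water-mark eviction, assuming an oldest-first policy by file modification time (the directory path and 85% threshold are configurable placeholders):

```python
import os
import shutil

def trim_cache(cache_dir: str, high_water: float = 0.85) -> None:
    """Delete the oldest cached files until volume usage is below the high-water mark."""
    usage = shutil.disk_usage(cache_dir)
    used = usage.used
    # Oldest files first, by modification time.
    paths = sorted(
        (os.path.join(cache_dir, name) for name in os.listdir(cache_dir)),
        key=os.path.getmtime,
    )
    for path in paths:
        if used / usage.total <= high_water:
            break
        used -= os.path.getsize(path)
        os.remove(path)       # only data already archived at Pawsey should be cached here
```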


The NGAS servers running at Pawsey will execute the commands needed to write the visibilities and voltages to Pawsey’s tape subsystem. The NGAS software then updates the MWA metadata database to tag the files as being safely archived at Pawsey.


In the event that there is an issue with the systems at Pawsey (e.g. monthly maintenance or any other system error), or the 100 Gbps link between the Curtin B206 data centre and Pawsey is down, mwax_mover will requeue the transfer and try again at a later time.  Each MWAcache Server has approximately 250 TB of usable storage space to store visibilities and voltage data temporarily until they can be successfully archived.


Researchers will access correlated data using the existing MWA All-Sky Virtual Observatory data portal (MWA ASVO), where users will either process the raw visibilities themselves or opt to have the MWA ASVO convert the data to CASA measurement set or uvfits format (with or without applying calibration solutions).  Science teams wishing to utilize the high time resolution voltage data will use the existing “voltdownload” utility to obtain the raw voltage files.  In a future release, the MWA ASVO will be upgraded to make use of any data cached on the MWAcache Servers in B206, bypassing the need to stage files from the Pawsey tape subsystem and allowing data to be delivered to users with less delay.

