Introduction
The MWAX system replaces the previous fine PFB, Voltage Capture System (VCS) (and media converter), correlator, and on-site archive of the Murchison Widefield Array (MWA). All of the fielded instrument hardware (tiles, beamformers, receivers) remains the same, as described in The Murchison Widefield Array: The SKA Low Frequency Precursor by Tingay et al. (2013), and the Phase II description paper: The Phase II Murchison Widefield Array: Design Overview by Wayth et al. (2018).
Info: The new MWAX correlator system is described in detail in Morrison et al. (2023): https://arxiv.org/abs/2303.11557
The diagram below shows a high-level overview of the complete signal chain, including the main MWAX components: media conversion, and the correlator & online archive.
...
Top-Level Architecture
MWAX is located on site at the MRO and comprises 24 new servers (plus 2 spares) and 10 repurposed existing on-site servers. Together the equipment occupies three racks. Output visibilities are transferred to Curtin University’s B206 data centre, and ultimately the Pawsey Supercomputing Centre, via existing fibre-optic links.
This configuration is capable of simultaneous fine channelisation, cross-correlation, frequency/time averaging, and buffer storage for up to 256 tiles and up to 24 coarse channels of 1.28 MHz each. At the time of writing, the MWA has 16 original receivers plus 2 next-generation National Instruments (NI) receivers, which together process 144 tiles. This limits the inputs to MWAX such that it currently correlates only 144 of the 256 tiles deployed in the field.
The MWAX correlator design employs a multicast architecture, allowing each receiver to send its stream of high time resolution data to any number of multicast consumers with no additional load on the sender. Initially there will be 24 multicast receivers (one for each coarse channel, for correlation and voltage capture); however, the multicast architecture allows additional multicast receivers to utilise the same high time resolution data for other purposes. For example, RFI monitoring, transient detection, and external instruments such as Breakthrough Listen could commensally consume some or all of the high time resolution data without impacting the operation of the telescope.
The MWAX correlator employs the “FX” correlation architecture, where the input time samples for each signal path (antenna and polarisation) are fine-channelised prior to cross-correlation, reducing the correlation process to complex multiplications in the frequency domain. Whereas the legacy correlator utilised a polyphase filterbank for the F-engine, the MWAX correlator employs the FFT (using the cuFFT implementation). For the X-engine, development time/cost was minimised by utilising the existing open-source GPU correlator library “xGPU” (the same library at the heart of the legacy correlator, as well as others). The standard xGPU library is used essentially unchanged for MWAX, with the exception of a small but crucial change to improve speed and reduce bus/memory traffic, found to be essential to making the design scalable to 256T and beyond (see MWAX xGPU on GitHub).
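To make the “FX” approach concrete, here is a minimal, illustrative numpy sketch (not MWAX code) that fine-channelises two signal paths with an FFT and then cross-multiplies and accumulates them per channel:

```python
import numpy as np

def fx_correlate(x, y, n_chan):
    """Toy FX correlation of two time series (not MWAX code).

    Both inputs are split into segments of n_chan samples and FFT'd
    (the "F" stage), then cross-multiplied and accumulated per channel
    (the "X" stage), yielding one complex visibility spectrum.
    """
    n_seg = len(x) // n_chan
    vis = np.zeros(n_chan, dtype=np.complex128)
    for s in range(n_seg):
        seg = slice(s * n_chan, (s + 1) * n_chan)
        X = np.fft.fft(x[seg])          # F stage for path 1
        Y = np.fft.fft(y[seg])          # F stage for path 2
        vis += X * np.conj(Y)           # X stage: multiply-accumulate
    return vis / n_seg

# Example: two noise-like signal paths, 6,400-channel fine FFT.
rng = np.random.default_rng(0)
x = rng.standard_normal(64000) + 1j * rng.standard_normal(64000)
y = np.roll(x, 3)                       # a delayed copy of x
print(fx_correlate(x, y, 6400)[:3])
```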
The 24 coarse channels are processed by 24 GPU-accelerated compute nodes referred to as “MWAX Servers”, with two hot spares. Each MWAX Server implements the functions shown in the figure above. The real-time data flows on the MWAX Servers are managed through the use of input and output ring buffers that decouple the computational workflow from the input source and output destinations. These ring buffers are established and accessed using the open-source ring buffer library “PSRDADA”.
MWAX has been designed such that the exact same code used for real-time operation on the dedicated MWAX Servers at the MRO can also be installed on compute nodes with lower-specification GPUs (e.g. at Pawsey) to provide an offline mode, operating below real-time speed (although it is a requirement that the GPU supports CUDA). Note: The modes available to the offline correlator depend heavily on the server and GPU hardware it is executed on. See: MWAX Offline Correlator
The output visibility modes of MWAX for 128, 136 and 144 tiles that run real-time on the as-built MWAX hardware configuration are listed at this page: MWAX Correlator Modes (128T)
Signal Path/Data Flow
In this section we describe the flow of signals from the MWA tiles and receivers to the media conversion (medconv) servers, the MWAX Correlator, and then into long term storage at the MWA Archive at the Pawsey Supercomputing Centre.
...
MWAX Media Conversion
Each of the MWA's existing 16 receivers in the field sends eight tiles' worth of 24 coarse channels, over a total of 48 fibre optic cables, using the Xilinx RocketIO protocol. The fibre optic cables terminate in the MRO Control Building, where two bundles of three fibres connect to a media conversion (medconv) server via custom Xilinx FPGA cards (colloquially known as EDT cards, after Engineering Design Team, the company that manufactures them). Six independent edt2udp processes on each medconv server convert the RocketIO data into Ethernet UDP packets, which are sent to the Cisco Nexus 9504 switch as multicast data, with each coarse channel assigned a multicast address. This provides the "corner-turn": each of the six processes on each medconv server sends one third of the coarse channels for one sixteenth of the tiles in the array.
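To illustrate the multicast addressing idea, the sketch below maps a coarse channel number to a multicast group and sends a UDP packet to it. The group addresses, port number and payload are assumptions for illustration only, not the real MWAX values or packet format.

```python
import socket

# Hypothetical addressing scheme for illustration only: the real MWAX
# multicast groups, ports and packet format are not shown here.
def coarse_channel_group(coarse_chan: int) -> str:
    """Map a coarse channel number (0-23) to an illustrative multicast group."""
    return f"239.255.90.{coarse_chan}"

MCAST_PORT = 59000  # assumed port for this sketch

def send_voltage_packet(coarse_chan: int, payload: bytes) -> None:
    """Send one UDP packet to the multicast group for a coarse channel."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Keep multicast traffic on the local network segment.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(payload, (coarse_channel_group(coarse_chan), MCAST_PORT))
    sock.close()

# Example: one dummy packet for coarse channel 7.
send_voltage_packet(7, b"\x00" * 4096)
```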
Info: The next-generation receivers (such as the NI receivers) do not need media conversion, and therefore send data directly to the Cisco Nexus 9504 switch.
MWAX Correlator
This section describes the data flow within the MWAX correlator servers. The functions and data flow between components are shown in the diagram below:
...
MWAX UDP Capture + Voltage Capture To Disk
As is standard with IP multicast, any device on the voltage network can “join” the multicast for one or more coarse channels and a copy of the relevant stream will be passed to it at almost no cost to the Cisco Nexus 9504 switch.
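A minimal sketch of how a consumer could “join” one coarse-channel multicast stream, using the illustrative group address and port assumed in the earlier sketch (not the real MWAX values):

```python
import socket
import struct

MCAST_GRP = "239.255.90.7"   # illustrative group for one coarse channel (assumed)
MCAST_PORT = 59000           # assumed port, matching the earlier sketch

# Create a UDP socket, bind to the multicast port, and join the group.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))
mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GRP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# Receive a few packets from the joined coarse-channel stream.
for _ in range(3):
    data, addr = sock.recvfrom(65536)
    print(f"received {len(data)} bytes from {addr}")
```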
For MWAX, an array of 24 (+2 spare) identically configured MWAX Servers is connected to the core Cisco 9504 switch (on site at the MRO) via 40 Gb Ethernet. Each MWAX Server is responsible for one coarse channel of bandwidth 1.28 MHz. For a 256T array, the input data volume is approximately 11 Gbps per coarse channel.
The mwax_u2s process receives packets from the multicast stream, which are then assembled in shared memory (RAM) into 8 second blocks of high time resolution voltage data based on their time and source (known as a “sub-observation”). At the completion of each 8 second block, the RAM file is closed and made available to the mwax_subfile_distributor process.
Depending on the current observing mode, the block may be handled by the mwax_subfile_distributor process in one of the following ways:
Retained in RAM for a period to satisfy triggered ‘buffer dump’ commands
Written immediately to disk for voltage capture mode
Passed to the FX Engine for cross-correlation via a PSRDADA ring buffer
A 256T sub-observation buffer for one coarse channel, covering the 8 seconds, is approximately 11 GB in size.
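A quick back-of-envelope check of the quoted data rate and buffer size, assuming dual-polarisation complex samples of 8-bit real and 8-bit imaginary at 1.28 Msample/s per signal path:

```python
# Rough size/rate check for one coarse channel of a 256-tile array,
# assuming complex (8+8 bit) samples at 1.28 Msample/s per signal path.
TILES = 256
POLS = 2                       # X and Y polarisations per tile
SAMPLE_RATE = 1.28e6           # samples per second per coarse channel
BYTES_PER_SAMPLE = 2           # 8-bit real + 8-bit imaginary
SUB_OBS_SECONDS = 8

paths = TILES * POLS                                   # 512 signal paths
rate_bytes = paths * SAMPLE_RATE * BYTES_PER_SAMPLE    # bytes per second
rate_gbps = rate_bytes * 8 / 1e9                       # ~10.5 Gbps payload (plus packet overhead -> ~11 Gbps)
subobs_gb = rate_bytes * SUB_OBS_SECONDS / 1e9         # ~10.5 GB, quoted above as ~11 GB

print(f"{rate_gbps:.1f} Gbps payload, {subobs_gb:.1f} GB per 8 s sub-observation")
```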
MWAX Correlator FX Engine
The MWAX correlator FX Engine (mwax_db2correlate2db) is implemented as a PSRDADA client: a single process that reads from the input ring buffer and writes to the output ring buffer, while working in a closely coupled manner with a single GPU device, which can be any standard NVIDIA/CUDA-based GPU card.
The figure below shows the processing stages and data flows within the MWAX correlator FX Engine process.
...
The FX Engine treats individual 8 second sub-observations as independent work units. Most of its mode settings are able to change on-the-fly from one sub-observation to the next. Each 8 second sub-observation file contains 160 blocks of 50 ms of input data. An additional block of metadata (of the same size as a 50 ms data block) is prepended to the data blocks, making a total of 161 blocks per sub-observation file. At the start of processing each new sub-observation file, the metadata block is parsed to configure the operating parameters for the following 160 data blocks.
Each 50 ms data block consists of 64,000 time samples for each signal path presented at the input (number of tiles x 2 polarisations). The 256T correlator configuration supports up to 512 signal paths. Input data is transferred to GPU memory and promoted from 8-bit integers to 32-bit floats. The 64,000 time samples of each path are partitioned into 10 sub-blocks of 6,400 samples (5 ms), each of which is FFT’d on the GPU using cuFFT, resulting in 10 time samples on each of 6,400 ultrafine channels of resolution 200 Hz.
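A minimal numpy sketch of this fine channelisation step for a single signal path (illustrative only: MWAX performs this on the GPU with cuFFT):

```python
import numpy as np

SAMPLES_PER_BLOCK = 64000    # 50 ms at 1.28 Msample/s
NFFT = 6400                  # ultrafine channels: 1.28 MHz / 6400 = 200 Hz each

# One 50 ms block of complex voltage samples for a single signal path
# (random data stands in for real receiver samples).
rng = np.random.default_rng(1)
block = (rng.standard_normal(SAMPLES_PER_BLOCK)
         + 1j * rng.standard_normal(SAMPLES_PER_BLOCK)).astype(np.complex64)

# Partition into 10 sub-blocks of 6,400 samples (5 ms each) and FFT each one:
# the result is 10 time samples on each of 6,400 ultrafine (200 Hz) channels.
sub_blocks = block.reshape(10, NFFT)
ultrafine = np.fft.fft(sub_blocks, axis=1)   # shape (10 time samples, 6400 channels)
print(ultrafine.shape)
```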
The design supports the ability to apply delays to each signal path to point the telescope to a specified correlation pointing centre. This involves the integer sample component of the required delays being applied using whole-sample shifts as the sub-observation data file is assembled (mwax_u2s), with the residual fractional delays being applied within the FX Engine. The required fractional delay values for each signal path are passed to the FX Engine via the prepended metadata block written to the input ring buffer. Delays are applied by multiplying the frequency-domain samples of each sub-block by a phase gradient, whose complex gain values are taken from a pre-computed look-up table to increase speed. Delay corrections can be static over the entire sub-observation (e.g. to eliminate fixed cable delays) or dynamic over the sub-observation (with a 5 ms resolution) to implement fringe stopping, i.e. to keep the correlation pointing centre on a fixed RA/Dec.
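To illustrate the fractional-delay step, the sketch below applies a residual delay as a phase gradient across the ultrafine channels of one 5 ms sub-block. This is a conceptual sketch only; the real FX Engine takes the complex gains from a pre-computed look-up table.

```python
import numpy as np

NFFT = 6400
SAMPLE_RATE = 1.28e6                      # coarse channel sample rate (Hz)

def apply_fractional_delay(spectrum, tau):
    """Apply a residual delay tau (seconds) to one FFT'd 5 ms sub-block.

    A time delay corresponds to a linear phase gradient across frequency:
    multiply each ultrafine channel by exp(-2j*pi*f*tau).
    """
    freqs = np.fft.fftfreq(NFFT, d=1.0 / SAMPLE_RATE)   # channel frequencies (Hz)
    return spectrum * np.exp(-2j * np.pi * freqs * tau)

# Example: delay one sub-block by a quarter of a sample (~0.195 microseconds).
rng = np.random.default_rng(2)
sub_block = np.fft.fft(rng.standard_normal(NFFT) + 1j * rng.standard_normal(NFFT))
delayed = apply_fractional_delay(sub_block, 0.25 / SAMPLE_RATE)
```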
The FFT’d data is then transposed to place it in the order that xGPU requires (slowest-to-fastest changing): [time][channel][tile][polarisation]. As the data is re-ordered, it is written directly into xGPU’s input holding buffer in GPU memory. The data from five 50 ms blocks is aggregated in this buffer, corresponding to an xGPU “gulp size” of 250 ms. The minimum integration time is one gulp, i.e. 250 ms. The integration time can be any multiple of 250 ms for which there is an integer number of gulps over the full 8 second sub-observation.
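The following numpy sketch shows the re-ordering and gulp aggregation using deliberately reduced array sizes (the pre-transpose ordering shown is an assumption for illustration):

```python
import numpy as np

# Reduced sizes for this sketch (the real system uses up to 256 tiles and
# 6,400 ultrafine channels); the re-ordering logic is the same.
N_TIME, N_CHAN, N_TILE, N_POL = 10, 64, 8, 2

# FFT output for one 50 ms block, assumed here to be ordered
# [tile][pol][time][channel] before the transpose.
ffted = np.zeros((N_TILE, N_POL, N_TIME, N_CHAN), dtype=np.complex64)

# Transpose to xGPU's required slowest-to-fastest order:
# [time][channel][tile][polarisation].
xgpu_order = np.ascontiguousarray(ffted.transpose(2, 3, 0, 1))

# Five 50 ms blocks are aggregated into one 250 ms "gulp" before correlation.
gulp = np.concatenate([xgpu_order] * 5, axis=0)
print(gulp.shape)   # (50, 64, 8, 2) in this reduced example
```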
Visibility Channelisation
xGPU places the computed visibilities for each baseline, with 200 Hz resolution (6,400 channels), in GPU memory. A GPU function then performs channel averaging according to the “fscrunch” factor specified in the PSRDADA header, reducing the number of output channels to (6400/fscrunch), each of width (200*fscrunch) Hz. For example, with fscrunch = 50, there will be 128 output visibility channels of 10 kHz each.
The output visibility channels are "centre symmetric" in the way their boundaries are aligned within the coarse channel bandwidth. The centre output fine channel is centred symmetrically on the centre of the coarse channel (as was the case with the legacy correlator/fine PFB). Remaining output channels extend above and below the centre channel symmetrically. In cases where there is an odd number of output channels across the coarse channel, there are full-width channels at the lowest and highest ends of the coarse channel. In cases where there is an even number of output channels across the coarse channel, there are half-width channels at the lowest and highest ends of the coarse channel. See: MWA Fine Channel Centre Frequencies
The channel averaging process involves the summation of the complex visibility values for all the ultrafine channels comprising each output fine channel. That is, each output visibility channel is formed by the sum of fscrunch complex values. Note that for the centre output fine channel, the centre (DC) ultrafine channel is excluded from the summation to remove any DC component present in the coarse channel, and the output value is re-scaled accordingly to maintain a consistent output magnitude with other channels. Only 200 Hz of bandwidth is lost in this process, rather than a complete output channel (as was the case with the legacy correlator where an entire 10 kHz was lost).
During the channel averaging process, each visibility has a multiplicative weight applied to normalise the magnitude of output visibilities to a desired absolute scale. This normalisation factor is automatically adjusted according to the selected integration time and fscrunch factor such that output visibility values remain at a consistent magnitude. In future, this normalisation factor will be able to be optionally combined with additional baseline-specific weighting factors based on the input data occupancy, i.e. taking account of any input data blocks that were missing due to lost UDP packets or pre-correlation RFI excision (a potential future enhancement). The weighting factors applied to each baseline (which are common to all fine channels of that baseline) are placed in the output ring buffer along with the visibility values themselves, allowing downstream processes to track what weightings were applied. At present no visibility weightings are applied and the output weight values are all set to 1.0.
The averaged output channel data is then transferred back to host memory, where it is re-ordered before writing to the output ring buffer.
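A simplified numpy sketch of the channel averaging for one baseline and polarisation product. The channel-boundary alignment and normalisation here are simplified assumptions; the definitive behaviour is as described above and at MWA Fine Channel Centre Frequencies.

```python
import numpy as np

N_ULTRAFINE = 6400      # 200 Hz ultrafine channels across 1.28 MHz
FSCRUNCH = 50           # -> 128 output channels of 10 kHz each

def average_channels(vis_ultrafine, dc_bin, fscrunch=FSCRUNCH):
    """Average ultrafine visibilities into output fine channels.

    vis_ultrafine: 6,400 complex ultrafine-channel visibilities for one
    baseline/polarisation product, assumed here to be arranged so that
    consecutive groups of `fscrunch` bins form each output channel
    (the real boundary alignment is centre-symmetric).
    dc_bin: index of the DC ultrafine channel within that arrangement.
    """
    n_out = N_ULTRAFINE // fscrunch
    out = vis_ultrafine.reshape(n_out, fscrunch).sum(axis=1)

    # Exclude the DC ultrafine channel from the output channel containing it,
    # and re-scale that channel so its magnitude stays consistent.
    dc_out = dc_bin // fscrunch
    out[dc_out] -= vis_ultrafine[dc_bin]
    out[dc_out] *= fscrunch / (fscrunch - 1)

    # A simple overall normalisation stands in for the automatic scaling
    # described above (which also accounts for integration time).
    return out / fscrunch

vis = np.ones(N_ULTRAFINE, dtype=np.complex64)               # dummy visibilities
print(average_channels(vis, dc_bin=N_ULTRAFINE // 2).shape)  # (128,)
```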
Visibility Re-ordering
xGPU utilises a particular data ordering that is optimised for execution speed. The visibility set is re-ordered by the CPU into a more intuitive triangular order: [time][baseline][channel][polarisation]. See: MWAX Visibility File Format
The re-ordered visibility sets (one per integration time) are then written to the output ring buffer.
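For illustration, a triangular baseline ordering (each antenna paired with every antenna of equal or higher index, autocorrelations included) can be enumerated as below; the definitive ordering and layout are documented at MWAX Visibility File Format.

```python
# Enumerate baselines in a triangular order: for each antenna a1,
# pair it with every antenna a2 >= a1 (auto-correlations included).
def triangular_baselines(n_ants):
    baselines = []
    for a1 in range(n_ants):
        for a2 in range(a1, n_ants):
            baselines.append((a1, a2))
    return baselines

n_ants = 144
baselines = triangular_baselines(n_ants)
# n_ants * (n_ants + 1) / 2 baselines including autos: 10,440 for 144 tiles.
print(len(baselines), baselines[:3])
```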
Visibility Data Capture
The data capture process (mwax_db2fits), running on each MWAX Server, reads visibility data off the output PSRDADA ring buffer and writes the data into FITS format. The data capture process breaks up large visibility sets into files of up to approximately 10 GB each, in order to optimise data transfer speeds while keeping the individual visibility file sizes manageable. The FITS files are written onto a separate partition of the MWAX Server's disk storage.
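As a purely illustrative example of writing visibilities to FITS with astropy (this is not the MWAX visibility format; the real layout and header keywords are described at MWAX Visibility File Format):

```python
import numpy as np
from astropy.io import fits

# Hypothetical example only: a small block of visibilities for one
# integration, stored as separate real/imaginary planes in an image HDU.
# The real MWAX FITS layout and header keywords differ.
n_baselines, n_chan, n_pol = 10440, 128, 4
vis = np.zeros((n_baselines, n_chan, n_pol), dtype=np.complex64)

hdu = fits.PrimaryHDU(np.stack([vis.real, vis.imag]).astype(np.float32))
hdu.header["INTTIME"] = (0.25, "integration time in seconds (example value)")
hdu.header["FSCRUNCH"] = (50, "channel averaging factor (example value)")
hdu.writeto("example_visibilities.fits", overwrite=True)
```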
Transfer to Curtin Data Centre
...
Cache Storage
Each MWAX server has enough disk storage for around 30 TB of visibilities plus 30 TB of voltage data, effectively replacing the need for a separate "Online Archive" cluster of servers, as the legacy MWA had. In normal operating modes and schedule, this means the MWA can continue to observe for a week or two even if the link to Perth is offline: data will continue to be stored on disk until the link is online again, and will then begin transmission to Perth.
An archiving process (mwax_subfile_distributor), running on each MWAX Server, watches the directories to which the data capture software (mwax_db2fits) writes the output visibility FITS files, as well as the directory to which the voltage data is written, and adds these files to a queue.
mwax_subfile_distributor iterates through each file in the queue, computes a checksum, then creates a record in the MWA metadata database for each file. It then attempts to transfer the files to an mwacache server at the Curtin B206 data centre via the MRO-to-Perth 100 Gbps link. The transfer is performed using xrootd, a high performance, reliable and scalable file transfer protocol. In the event that the 100 Gbps link to the Curtin B206 data centre is down, or there are issues with the mwacache servers, the mwax_subfile_distributor process will requeue the files and attempt to transfer them when the systems or link are back online.
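The following is a conceptual sketch of this checksum, queue and retry pattern. The transfer callable and retry interval are hypothetical placeholders; the real system uses xrootd transfers and records checksums in the MWA metadata database.

```python
import hashlib
import queue
import time

def checksum(path):
    """Compute an MD5 checksum of a file in chunks."""
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
    return md5.hexdigest()

def archive_loop(files, transfer, retry_seconds=300):
    """Checksum each file, then transfer it, re-queueing on failure.

    `transfer` is a hypothetical callable that returns True on success
    (in MWAX this role is played by an xrootd transfer to an mwacache server).
    """
    q = queue.Queue()
    for path in files:
        q.put(path)
    while not q.empty():
        path = q.get()
        md5 = checksum(path)          # recorded in the metadata database in the real system
        if not transfer(path, md5):
            q.put(path)               # requeue and try again later
            time.sleep(retry_seconds)
```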
There are 10 mwacache servers (8 in use, with 2 spares) and each mwacache server receives 3 coarse channels (one each from 3 different MWAX servers). The mwacache servers each run an instance of xrootd server to facilitate reliable, high-speed transfer from the MWAX servers.
Transfer and Storage at Pawsey Long Term Archive
The mwacache_archiver instance running on each mwacache server at Curtin University is configured to automatically forward all visibility and voltage files to the Pawsey LTS (long term storage) systems: Acacia or Banksia. mwacache_archiver updates the MWA metadata database to tag the files as being safely archived at Pawsey.
In the event that there is an issue with the storage systems at Pawsey (e.g. monthly maintenance or any other system error), or the 100 Gbps link between the Curtin B206 data centre and Pawsey is down, mwacache_archiver will requeue the transfer and try again at a later time. Each mwacache server has approximately 250 TB of usable storage space to store visibilities and voltage data temporarily until they can be successfully archived.
Researchers will be able to access visibilities and voltages using the existing MWA All-Sky Virtual Observatory data portal (MWA ASVO), where users will either download and process the raw data themselves (especially in the case of voltage data) or they may opt to have the MWA ASVO convert the data to CASA measurement set or uvfits format (with or without applying calibration solutions), averaging, etc. For more information see: Data Access.
More information
See the child pages of this page for further details.