...
The MWAX correlator replaces the previous Voltage Capture System (VCS), FX correlator, and on-site archive of the Murchison Widefield Array (MWA). All of the fielded instrument hardware (tiles, beamformers, receivers) remains the same, as described in The Murchison Widefield Array: The SKA Low Frequency Precursor by Tingay et al. (2013), and the Phase II description paper: The Phase II Murchison Widefield Array: Design Overview by Wayth et al. (2018). The diagram below shows a high-level overview of the complete signal chain, including the main MWAX components: media conversion and the correlator.
...
The MWA's existing 16 receivers in the field each send 8 tiles' worth of 24 coarse channels, over a total of 48 fibre optic cables, using the Xilinx RocketIO protocol. The fibre optic cables terminate in the MRO Control Building, where two bundles of three fibres connect to each media conversion (medconv) server via custom Xilinx FPGA cards. Six independent processes on each medconv server convert the RocketIO data into Ethernet UDP packets, which are sent out to our Cisco Nexus 9504 switch as multicast data, with each coarse channel assigned its own multicast address. This provides the "corner-turn": each of the six processes on each medconv server sends one third of the coarse channels for one eighth of the tiles in the array.
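Because each coarse channel has its own multicast address, any downstream consumer can subscribe to just the channels it needs. The sketch below shows the general pattern of joining a multicast group and reading UDP packets in Python; the group address, port, and buffer size are placeholders for illustration, not the real MWAX assignments.

```python
import socket
import struct

# Hypothetical multicast assignment for one coarse channel -- the real
# per-channel addresses and port are site configuration, not shown here.
MCAST_GROUP = "239.255.90.0"
MCAST_PORT = 59000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Ask the kernel (and, via IGMP, the switch) to deliver this group here.
mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GROUP),
                   socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    packet, sender = sock.recvfrom(9000)  # jumbo-frame-sized receive buffer
    # ... hand the voltage payload for this coarse channel to the consumer
```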
...
For MWAX, an array of 24 (+2 spare) identically configured MWAX Servers is connected to the core Cisco Nexus 9504 switch (on site at the MRO) via 40 Gb Ethernet. Each MWAX Server is responsible for one coarse channel of bandwidth 1.28 MHz. For a 256T array, the input data volume is approximately 11 Gbps per coarse channel.
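The ~11 Gbps figure can be sanity-checked with a back-of-the-envelope calculation. Assuming 8-bit real + 8-bit imaginary voltage samples per polarisation (the sample width is an assumption here, and packet overhead is ignored):

```python
TILES = 256           # 256T array
POLS = 2              # X and Y polarisations per tile
SAMPLE_RATE = 1.28e6  # complex samples/s for one 1.28 MHz coarse channel
BITS_PER_SAMPLE = 16  # assumed 8-bit real + 8-bit imaginary

bits_per_second = TILES * POLS * SAMPLE_RATE * BITS_PER_SAMPLE
print(f"{bits_per_second / 1e9:.1f} Gbps")  # ~10.5 Gbps; with UDP/Ethernet overhead this approaches 11 Gbps
```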
...
Each MWAX Server has enough disk storage for around 30 TB of visibilities plus 30 TB of voltage data, effectively replacing the need for the separate "Online Archive" cluster of servers that the legacy MWA had. In normal operating modes and schedules, this means the MWA can continue to observe for a week or two even if the link to Perth is offline: data will continue to be stored on disk until the link is back online, and will then be transmitted to Perth.
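A rough endurance estimate makes the "week or two" figure concrete. The average output rate below is purely an assumed placeholder (the real rate depends on observing mode, integration time, and duty cycle):

```python
DISK_BYTES = 30e12           # 30 TB visibility buffer per MWAX Server
AVG_RATE_BYTES_PER_S = 25e6  # assumed average visibility output rate (placeholder)

seconds = DISK_BYTES / AVG_RATE_BYTES_PER_S
print(f"{seconds / 86400:.1f} days of buffering")  # ~13.9 days at this assumed rate
```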
...
mwax_mover iterates through each file in the queue, computes a checksum, and creates a record in the MWA metadata database for each file. mwax_mover then attempts to transfer the files to an MWAcache Server at the Curtin 206 data centre via the MRO-to-Perth 100 Gbps link. The transfer is performed using xrootd, a high-performance, reliable, and scalable file transfer protocol. In the event that the 100 Gbps link to the Curtin 206 data centre is down, or there are issues with the MWAcache Servers, the mwax_mover process will requeue the files and attempt to transfer them once the link or systems are back online.
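A minimal sketch of the queue / checksum / transfer / requeue loop described above is shown below. This is illustrative only, not the actual mwax_mover implementation: the metadata-database call is a stub, the destination URL is hypothetical, and the transfer shells out to xrdcp, the standard xrootd copy client.

```python
import hashlib
import os
import queue
import subprocess
import time

CACHE_URL = "root://mwacache.example/incoming/"  # hypothetical xrootd endpoint

def md5sum(path: str) -> str:
    """Checksum the file so the archive can verify integrity end to end."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def register_in_metadata_db(path: str, checksum: str) -> None:
    # Stub: the real system inserts a row into the MWA metadata database.
    print(f"registered {path} ({checksum})")

def process_queue(file_queue: "queue.Queue[str]") -> None:
    while True:
        path = file_queue.get()
        register_in_metadata_db(path, md5sum(path))
        # Copy to the cache server; xrdcp returns non-zero on failure.
        result = subprocess.run(
            ["xrdcp", path, CACHE_URL + os.path.basename(path)])
        if result.returncode != 0:
            file_queue.put(path)  # link or cache servers down: retry later
            time.sleep(60)
```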
There are ten MWAcache Servers (eight in use, with two spares), and each MWAcache Server receives three coarse channels (one each from three different MWAX Servers). The MWAcache Servers each run an instance of xrootd to facilitate reliable, high-speed transfers from the MWAX Servers.
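The fan-in from 24 MWAX Servers to 8 active MWAcache Servers implies a 3-to-1 mapping of coarse channels to cache servers. A simple consecutive grouping like the one below would produce such a mapping, though the actual assignment scheme and hostnames are assumptions for illustration:

```python
MWAX_CHANNELS = list(range(1, 25))  # 24 coarse channels, one per MWAX Server
CACHE_SERVERS = [f"mwacache{n:02d}" for n in range(1, 9)]  # 8 active servers (hypothetical names)

# Group consecutive channels in threes: 1-3 -> mwacache01, 4-6 -> mwacache02, ...
mapping = {ch: CACHE_SERVERS[(ch - 1) // 3] for ch in MWAX_CHANNELS}
for ch, server in mapping.items():
    print(f"coarse channel {ch:2d} -> {server}")
```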
...