Acacia is a disk-based object storage system powered by Dell hardware, named after Australia's national floral emblem, the Golden Wattle (Acacia pycnantha). It provides 60 PB of high-speed object storage for hosting research data online. This multi-tiered cluster separates different types of data to improve data availability.
Detailed information on how to use Acacia can be found at https://support.pawsey.org.au/documentation/display/US/Acacia+-+Common+Usage.
Setting up
Creating keys
- Go to https://portal.pawsey.org.au/origin and log in using your Pawsey username and password
- Go to the ACACIA tab.
- Click View Keys.
- You will need to create a key for each "Storage" which you would like to use. Select the desired Storage Name from the dropdown, click Create New Key, and click Yes when prompted to confirm.
- You will be shown an Access ID and a Secret Key that you will need later. The Access ID is easily obtained on the portal website, but the Secret Key will not be shown again. COPY THE ACCESS ID AND SECRET KEY AND KEEP THEM SOMEWHERE SAFE.
- If you lose the secret key, the easiest recovery method is to simply delete the key and create a new one.
Pawsey environment
The next step is to set up your environment on Pawsey so that you can access the Acacia "S3" system from Garrawarla, etc. The following clients are available for doing so:
- MinIO Client ("mc"): as of March 2024, the MinIO client is no longer supported or available on Pawsey systems.
- rclone
The setup for each client is different, as detailed in the following subsections. rclone has been reported to be the more robust choice for very large file transfers, which mc did not seem able to handle.
rclone
Create the following file at $HOME/.config/rclone/rclone.conf (the default rclone configuration location):
[<ALIAS_NAME>]
type = s3
provider = Other
access_key_id = <ACCESS_ID>
secret_access_key = <SECRET_KEY>
endpoint = https://projects.pawsey.org.au
acl = public-read-write
bucket_acl = public-read-write
- ALIAS_NAME can be anything, but a sensible choice is the Storage Name (e.g. mwasci)
- ACCESS_ID is the Access ID of the key created to access the given storage
- SECRET_KEY is the Secret Key given to you at the time you created the key.
To use rclone commands, you will need to load the rclone module:
module load rclone
You are now set up to use rclone to move data to/from your Acacia storage.
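To illustrate, the session below sketches some common rclone operations, assuming the alias mwasci from the configuration above and a hypothetical bucket name. Each command is echoed as a dry run; remove the leading echo to actually execute it.

```shell
# Sketch of a typical rclone session against Acacia.
# "mwasci" is the alias from rclone.conf; "my-bucket" and the local
# path are hypothetical placeholders. Commands are echoed (dry run);
# remove the leading "echo" to run them for real.
ALIAS=mwasci
BUCKET=my-bucket

echo rclone lsd "${ALIAS}:"                                              # list buckets
echo rclone mkdir "${ALIAS}:${BUCKET}"                                   # create a bucket
echo rclone copy /scratch/myproject/results "${ALIAS}:${BUCKET}/results" # upload a directory
echo rclone ls "${ALIAS}:${BUCKET}"                                      # list objects
```

rclone copy preserves the local directory structure under the given object prefix, which makes it convenient for staging whole result directories.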
AWS Client
Configuration for aws
user@setonix:~> mkdir -p $HOME/.aws
user@setonix:~> tee -a $HOME/.aws/credentials <<EOF
[<profilename>]
aws_access_key_id=<ACCESS_ID>
aws_secret_access_key=<SECRET_KEY>
EOF
user@setonix:~> tee -a $HOME/.aws/config <<EOF
[profile <profilename>]
output=json
EOF
To use aws client commands, you will need to load the aws client module:
module load awscli/1.16.308
To create a new bucket using AWS S3 CLI, use the S3 mb command:
> aws --endpoint-url=https://projects.pawsey.org.au --profile=<PROFILE_NAME> s3 mb s3://<BUCKET_NAME>
Where:
<PROFILE_NAME>
is the name you gave to the account credentials when configuring AWS S3 CLI.
<BUCKET_NAME>
is the name you want to give your bucket, subject to naming requirements.
To delete a bucket using AWS S3 CLI, use the S3 rb command:
> aws --endpoint-url=https://projects.pawsey.org.au --profile=<PROFILE_NAME> s3 rb s3://<BUCKET_NAME>
Where:
<PROFILE_NAME>
is the name you gave to the account credentials when configuring AWS S3 CLI.
<BUCKET_NAME>
is the name of the bucket you want to remove.
To list the buckets in your account using AWS S3 CLI, use the S3 ls command:
> aws --endpoint-url=https://projects.pawsey.org.au --profile=<PROFILE_NAME> s3 ls
Where:
<PROFILE_NAME>
is the name you gave to the account credentials when configuring AWS S3 CLI.
To list the objects in a bucket:
> aws --endpoint-url=https://projects.pawsey.org.au --profile=<PROFILE_NAME> s3 ls s3://<BUCKET_NAME>
Where:
<BUCKET_NAME>
is the name of the bucket you want to list the objects within.
To list objects in a pseudo folder:
> aws --endpoint-url=https://projects.pawsey.org.au --profile=<PROFILE_NAME> s3 ls s3://<BUCKET_NAME>/<PREFIX>
Where:
<PREFIX>
is the name of the pseudo folder you want to list the objects within.
To upload an object using AWS S3 CLI, use the S3 cp command:
> aws --endpoint-url=https://projects.pawsey.org.au --profile=<PROFILE_NAME> s3 cp <SOURCE> <TARGET>
Where:
<SOURCE>
is the filesystem path and name of the file you want to upload.
<TARGET>
is the key of the object on Acacia, so bucket name, pseudo folder (optional), and object name. You can specify any object name to “rename” the file on upload.
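As a hypothetical worked example (the profile, bucket, pseudo folder, and file names are invented for illustration), the upload command could be assembled as follows, echoed as a dry run; remove the echo to execute it:

```shell
# Upload /scratch/myproject/obs.fits into bucket "my-bucket" under the
# pseudo folder "2024/", renaming it on upload. The profile "mwasci"
# and all paths are hypothetical. Echoed as a dry run.
PROFILE=mwasci
SOURCE=/scratch/myproject/obs.fits
TARGET=s3://my-bucket/2024/obs_v2.fits

echo aws --endpoint-url=https://projects.pawsey.org.au \
    --profile="${PROFILE}" s3 cp "${SOURCE}" "${TARGET}"
```

Note that the pseudo folder ("2024/") does not need to be created in advance; it exists implicitly as part of the object key.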
To download an object from Acacia:
> aws --endpoint-url=https://projects.pawsey.org.au --profile=<PROFILE_NAME> s3 cp <SOURCE> <TARGET>
Where:
<SOURCE>
is the key of the object on Acacia you want to download, so bucket name, pseudo folder (optional), and object name.
<TARGET>
is the filesystem path where you want the object to be downloaded to.
To delete an object using AWS S3 CLI, use the S3 rm command:
> aws --endpoint-url=https://projects.pawsey.org.au --profile=<PROFILE_NAME> s3 rm s3://<BUCKET_NAME>/<OBJECT_NAME>
Where:
<PROFILE_NAME>
is the name you gave to the account credentials when configuring AWS S3 CLI.
<BUCKET_NAME>
is the name of the bucket containing the object.
<OBJECT_NAME>
is the name of the object to remove.
The following commands remove all objects in a bucket, or all objects under a given pseudo folder, respectively:
> aws --endpoint-url=https://projects.pawsey.org.au --profile=<PROFILE_NAME> s3 rm s3://<BUCKET_NAME> --recursive
> aws --endpoint-url=https://projects.pawsey.org.au --profile=<PROFILE_NAME> s3 rm s3://<BUCKET_NAME>/<PREFIX> --recursive
Where:
<PROFILE_NAME>
is the name you gave to the account credentials when configuring AWS S3 CLI.
<BUCKET_NAME>
is the name of the bucket containing the objects.
<PREFIX>
is the name of the pseudo folder whose objects you want to remove.
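Recursive deletion cannot be undone, so it is worth previewing it first. The aws CLI supports a --dryrun flag for s3 rm that lists what would be deleted without deleting anything. The sketch below uses hypothetical names and echoes the command as a further safeguard; remove the echo to run it:

```shell
# Preview removal of everything under pseudo folder "2024/" in
# "my-bucket". Profile, bucket, and prefix are hypothetical.
# --dryrun makes aws list the deletions without performing them;
# the command is additionally echoed here as a dry run.
PROFILE=mwasci
BUCKET=my-bucket
PREFIX=2024

echo aws --endpoint-url=https://projects.pawsey.org.au \
    --profile="${PROFILE}" s3 rm "s3://${BUCKET}/${PREFIX}" --recursive --dryrun
```

Inspect the --dryrun listing, then repeat the command without --dryrun to perform the deletion.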
To share an object using AWS S3 CLI, use the presign command:
> aws --endpoint-url=https://projects.pawsey.org.au --profile=<PROFILE_NAME> s3 presign s3://<BUCKET_NAME>/<OBJECT_NAME>
Where:
<PROFILE_NAME>
is the name you gave to the account credentials when configuring AWS S3 CLI.
<BUCKET_NAME>
is the name of the bucket containing the object.
<OBJECT_NAME>
is the name of the object to share.
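A presigned URL expires after one hour by default; the aws CLI's --expires-in option sets a different lifetime in seconds. A hypothetical example (profile and object names invented), echoed as a dry run; remove the echo to generate the URL:

```shell
# Generate a download URL for a hypothetical object that stays valid
# for 24 hours (86400 seconds). Echoed as a dry run.
PROFILE=mwasci
OBJECT=s3://my-bucket/2024/obs_v2.fits

echo aws --endpoint-url=https://projects.pawsey.org.au \
    --profile="${PROFILE}" s3 presign "${OBJECT}" --expires-in 86400
```

Anyone holding the resulting URL can download the object until it expires, with no Pawsey credentials required.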
Example workflows
Some example workflows are given at https://support.pawsey.org.au/documentation/display/US/Supercomputing+project+example. These examples are designed for Setonix, but can be used almost exactly as is from hpc-data. The only change required is the name of the partition, which is "copy" on Setonix but "copyq" elsewhere:
#SBATCH --partition=copyq
Note that these example scripts may require access to /scratch, which is accessible from hpc-data, but not from garrawarla or galaxy.
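Tying this together, a minimal staging job for a copyq-style partition might look like the following sketch. The project code, rclone alias, and bucket name are placeholders to adapt, not values taken from the linked examples:

```shell
#!/bin/bash -l
#SBATCH --partition=copyq        # "copy" on Setonix, "copyq" elsewhere
#SBATCH --account=<PROJECT>      # your Pawsey project code
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

# Hypothetical sketch: stage results from /scratch to Acacia using the
# rclone alias configured earlier; all names are placeholders.
module load rclone
rclone copy /scratch/<PROJECT>/results <ALIAS_NAME>:<BUCKET_NAME>/results
```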