Air Quality Modeling on the Cloud

From LADCO Wiki

Objectives

LADCO is seeking to understand the best practices for submitting and managing multiprocessor computing jobs on a cloud computing platform. In particular, LADCO would like to develop a WRF production environment that utilizes cloud-based computing. The goal of this project is to prototype a WRF production environment on a public, on-demand high performance computing service in the cloud to create a WRF platform-as-a-service (PaaS) solution. The WRF PaaS must meet the following objectives:

  • Configurable computing and storage that scale, as needed, to meet the needs of different WRF applications
  • Configurable WRF options to enable changing grids, simulation periods, physics options, and input data
  • Flexible cloud deployment from a command line interface to initiate computing clusters and spawn WRF jobs in the cloud
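As a rough sketch, the command-line deployment described above can look like the following with the AWS ParallelCluster CLI (v2 syntax; the cluster name, key file, and job script are examples, and the exact commands depend on the ParallelCluster version and scheduler in use):

```shell
# Launch a cluster from a saved configuration (names are examples)
pcluster create ladco-wrf -c ~/.parallelcluster/config

# Log in to the head node
pcluster ssh ladco-wrf -i ~/ladco.key

# From the head node, submit a WRF run to the scheduler
sbatch run_wrf.sbatch
```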

Cloud Modeling Projects

Ramboll Modeling on the Cloud Contract Results

Working with AWS Parallel Cluster

WRF on the Cloud

Tips and Tricks

Adding ssh users to a pcluster instance

AWS documentation for adding users with OpenLDAP

The post_install setting in the pcluster configuration file should point to the post_install_users.sh script described in the AWS documentation linked above. Alternatively, if you already have an instance running, you can run the script with sudo to apply the settings:

  > sudo ./post_install_users.sh
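For reference, the post_install setting itself is a one-line entry in the cluster section of the configuration file; a sketch in ParallelCluster v2 INI syntax (the S3 bucket path is an example):

```
[cluster default]
# ... other cluster settings ...
# Run the user-setup script on each node after boot (example path)
post_install = s3://ladco-wrf/scripts/post_install_users.sh
```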

After running the script, use the add-user.sh and add-key.sh scripts described in the link above.

 # Usage (using zac as an example)
 > sudo ./add-user.sh zac {####, e.g., 2001}
 > sudo ./add-key.sh zac ladco.key
 # Add the user to the /etc/passwd file
 > sudo getent passwd zac
 > sudo vi /etc/passwd
 # Change the user's primary group to ladco
 > sudo usermod -g ladco zac

Using AWS S3 for offline storage

Data are moved off of the compute servers to the AWS Simple Storage Service (S3) for intermediate- to long-term storage. The AWS CLI is used to access and manage the data on S3.

View the S3 commands, with an example for the copy (cp) command:

 > aws s3 help
 > aws s3 cp help

List the S3 buckets

 > aws s3 ls
 2019-02-06 21:18:09 ladco-wrf
 > aws s3 ls ladco-wrf/
 PRE 24Apr2019/
 PRE 24Jan2019/
 PRE LADCO_2016_WRFv39_APLX/
 PRE LADCO_2016_WRFv39_YNT/
 PRE LADCO_2016_WRFv39_YNT_GFS/
 PRE LADCO_2016_WRFv39_YNT_NAM/
 PRE aws-reports/

Copy a file from one of the S3 buckets to a location on the compute server:

 aws s3 cp s3://ladco-wrf/LADCO_2016_WRFv39_YNT_GFS/wrfout_d01_2016-06-10_00:00:00 /data2/wrf3.9.1/LADCO_2016_WRFv39_YNT_GFS/
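To move an entire run directory rather than a single file, the aws s3 sync command copies recursively and skips files that already match on the destination; a sketch using the same example paths:

```shell
# Recursively copy a WRF output directory up to S3 (paths are examples)
aws s3 sync /data2/wrf3.9.1/LADCO_2016_WRFv39_YNT_GFS/ s3://ladco-wrf/LADCO_2016_WRFv39_YNT_GFS/
```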


Increase size of in-use volume

Add New Volume to Running Instance

From the AWS Console

  • Go to Volumes
  • Create a new Volume
  • Under the Actions menu, select Attach Volume
  • Select the instance to which to attach the new volume

From the AWS Instance

  • Check that the volume is available
lsblk
  • Confirm that the volume is empty (assuming the volume is attached as /dev/xvdf)
sudo file -s /dev/xvdf

If the command returns the following, the volume is empty (it has no filesystem):

/dev/xvdf: data
  • Format the volume to an ext4 filesystem
sudo mkfs -t ext4 /dev/xvdf
  • Create a new directory and mount the volume
sudo mkdir /newdata
sudo mount /dev/xvdf /newdata
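Note that this mount does not survive a reboot. If the volume should come back automatically, an entry can be added to /etc/fstab; a sketch, where the UUID must be taken from the blkid output and the nofail option keeps the instance booting even if the volume is detached:

```shell
# Find the filesystem UUID of the new volume
sudo blkid /dev/xvdf

# Append an fstab entry (replace <uuid> with the value blkid printed)
echo 'UUID=<uuid> /newdata ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab

# Verify the entry mounts without errors
sudo mount -a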

Copy output files from EC2 volume to S3 Glacier

#!/bin/csh -f

set PROJECT = LADCO_2016_WRFv39_YNT_GFS
set NEW_YN = Y 

if ( $NEW_YN == Y ) then
   # Create a new storage vault
   aws glacier create-vault --vault-name $PROJECT --account-id -

   # Add tags to describe vault (10 tags max)
   aws glacier add-tags-to-vault --account-id - --vault-name $PROJECT --tags model="WRFv3.9.1",simyear=2016,stdate=20160610,endate=20160619,awsinst=ec2-ondemand,desc="LADCO YNT GFS Test Run"
endif

# Upload files to the storage vault
set datadir = /data/wrf3.9.1/${PROJECT}/output_full/wrf_out/2016
cd $datadir
foreach f ( *wrfout* )
   echo "Copying $f"
   aws glacier upload-archive --account-id - --vault-name $PROJECT --body $f
end

# Remove files on ec2 after the files are all uploaded
# rm -f $datadir/*wrfout*
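One caveat with the upload loop above: Glacier does not store file names, and an archive can later be retrieved only by the ArchiveId returned from its upload. A hedged sketch of capturing the IDs into a manifest (same csh style as the script; the manifest file name is an example):

```shell
# Capture the archiveId returned by each upload so files can be
# retrieved later (Glacier identifies archives only by ID, not name)
foreach f ( *wrfout* )
   echo "Copying $f"
   set aid = `aws glacier upload-archive --account-id - --vault-name $PROJECT --body $f --output text --query archiveId`
   echo "$f $aid" >> $datadir/glacier_manifest.txt
end
```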