Custom mountpoints? #111
If we consider the Lustre mountpoint to be created by a process external to Terraform, it is relatively straightforward to implement a Lustre client class that mounts custom mountpoints. Here is a quick draft:

```puppet
class profile::lustre::client (Hash[String, Any] $mountpoints) {
  yumrepo { 'aws-fsx':
    # Single quotes keep $basearch literal for yum, rather than
    # letting Puppet try to interpolate it.
    descr    => 'AWS FSx Packages - $basearch',
    baseurl  => 'https://fsx-lustre-client-repo.s3.amazonaws.com/el/7/x86_64/',
    enabled  => 1,
    gpgkey   => 'https://fsx-lustre-client-repo-public-keys.s3.amazonaws.com/fsx-rpm-public-key.asc',
    gpgcheck => 1,
  }
  yumrepo { 'aws-fsx-src':
    descr    => 'AWS FSx Source - $basearch',
    baseurl  => 'https://fsx-lustre-client-repo.s3.amazonaws.com/el/7/SRPMS/',
    enabled  => 1,
    gpgkey   => 'https://fsx-lustre-client-repo-public-keys.s3.amazonaws.com/fsx-rpm-public-key.asc',
    gpgcheck => 1,
  }
  package { ['kmod-lustre-client', 'lustre-client']:
    ensure  => installed,
    require => Yumrepo['aws-fsx'],
  }
  # Defaults applied to every mount resource created below.
  $defaults = {
    'ensure'  => present,
    'fstype'  => 'lustre',
    'options' => 'noatime,flock',
    'require' => [
      Package['kmod-lustre-client'],
      Package['lustre-client'],
    ],
  }
  # Create the mountpoint directories before mounting.
  file { keys($mountpoints):
    ensure => 'directory',
    mode   => '0755',
  }
  create_resources('mount', $mountpoints, $defaults)
}
```

The mountpoints could then be defined in hieradata. Here is an example of a mountpoint definition for the preceding `profile::lustre::client` class:

```yaml
profile::lustre::client::mountpoints:
  '/lustre1':
    name: '/lustre1'
    device: 'fs-0bd8c38cb68312484.fsx.ca-central-1.amazonaws.com@tcp:/zaaanbmw'
  '/lustre2':
    name: '/lustre2'
    device: 'fs-1ce9c38cb68312484.fsx.ca-central-1.amazonaws.com@tcp:/pdddmrmw'
```

You can then add the class to the compute node definition:

```puppet
node /^[a-z0-9-]*node\d+$/ {
  include profile::consul::client
  include profile::base
  include profile::metrics::exporter
  include profile::rsyslog::client
  include profile::cvmfs::client
  include profile::gpu
  include profile::singularity
  include profile::jupyterhub::node
  include profile::nfs::client
  include profile::lustre::client # <- add Lustre custom mountpoints
  include profile::slurm::node
  include profile::freeipa::client
}
```
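For reference, each `mount` resource created this way manages a line in `/etc/fstab`. A sketch of the line corresponding to the `/lustre1` entry above, with the options coming from the `$defaults` hash:

```
fs-0bd8c38cb68312484.fsx.ca-central-1.amazonaws.com@tcp:/zaaanbmw  /lustre1  lustre  noatime,flock  0  0
```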
@cmd-ntrf many thanks for this. Something that does occur to me is a complication with VPCs. As you know we need to specify a VPC when creating an FSx filesystem. Given that at the moment this needs to happen outwith Terraform and MC, and MC by default creates its own VPC for the cluster, I'm wondering how to handle the routing between the two. A workaround for now would be to mount the filesystem manually, having created it specifically to use the VPC that MC has created.
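One way to handle the routing between the two VPCs would be VPC peering. A hedged Terraform sketch, where the resource names, variables, and CIDRs are all assumptions for illustration ("mc" stands for the Magic Castle VPC, and the FSx VPC is pre-existing):

```hcl
# Hypothetical: peer the MC-created VPC with the VPC holding the FSx filesystem.
resource "aws_vpc_peering_connection" "mc_fsx" {
  vpc_id      = aws_vpc.mc.id      # assumed name of the MC VPC resource
  peer_vpc_id = var.fsx_vpc_id     # assumed input variable
  auto_accept = true               # works when both VPCs are in the same account
}

# Route traffic destined for the FSx VPC through the peering connection.
resource "aws_route" "to_fsx" {
  route_table_id            = aws_vpc.mc.main_route_table_id
  destination_cidr_block    = var.fsx_vpc_cidr   # assumed input variable
  vpc_peering_connection_id = aws_vpc_peering_connection.mc_fsx.id
}
```

The FSx filesystem's security group would also need to allow inbound TCP on port 988 (the Lustre port) from the cluster's subnets.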
Quick update: I am drafting a PR to add support for cloud provider filesystems using a new variable. An example for AWS is available here: The creation of the filesystem resources is functional; what is missing is the Puppet code to mount the filesystem.
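For readers unfamiliar with what such a filesystem resource looks like, a rough sketch using the standard Terraform AWS provider (attribute values and the subnet name are placeholders, not taken from the PR):

```hcl
resource "aws_fsx_lustre_file_system" "scratch" {
  storage_capacity = 1200                     # in GiB; 1200 is the minimum for scratch deployments
  subnet_ids       = [aws_subnet.private.id]  # hypothetical subnet resource name
  deployment_type  = "SCRATCH_2"
}
```

The `dns_name` and `mount_name` attributes exported by this resource are what the Puppet side would need (they form the `device` string, e.g. `<dns_name>@tcp:/<mount_name>`).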
Related to #36 and re-directed here from the software-stack repo: We would like to supplement the EBS-backed NFS storage with FSx on AWS (lots of abbreviations there...).
Is there support currently for custom mountpoints, such that they would be added to new compute node instances as they are provisioned? If not, where should we start looking in the Puppet code?
I know you plan to add things like FSx in the future but hopefully this would be enough for us in the meantime.