Storage Providers

Connecting storage and platforms...


Overview

This page reviews the storage providers and platforms supported by libStorage.

Client/Server Configuration

Before proceeding with the examples below, please read the section on client/server configurations.

Amazon

libStorage includes support for multiple Amazon Web Services (AWS) storage services.

Elastic Block Storage

The AWS EBS driver registers a storage driver named ebs with the libStorage service registry and is used to connect and manage AWS Elastic Block Storage volumes for EC2 instances.

Note

For backwards compatibility, the driver also registers a storage driver named ec2. The use of ec2 in config files is deprecated but functional. The ec2 driver will be removed in 0.7.0, at which point all instances of ec2 in config files must use ebs instead.
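
For example, a configuration that still uses the deprecated name would be migrated by renaming the key (a minimal sketch):

# deprecated; removed in 0.7.0
ec2:
  accessKey: XXXXXXXXXX

# use instead
ebs:
  accessKey: XXXXXXXXXX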

Note

The EBS driver does not yet support snapshots or tags, which were previously supported in REX-Ray v0.3.3.

The EBS driver is made possible by the official AWS SDK for Go.

Requirements

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

ebs:
  accessKey:      XXXXXXXXXX
  secretKey:      XXXXXXXXXX
  region:         us-east-1
  maxRetries:     10
  kmsKeyID:       arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef
  statusMaxAttempts:  10
  statusInitialDelay: 100ms
  statusTimeout:      2m

Configuration Notes

For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.
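
As an illustrative sketch of that transformation (assuming the standard mapping of nested properties to upper-case, underscore-delimited names), the ebs properties above would correspond to environment variables and CLI flags like so:

# property        environment variable   CLI flag
# ebs.accessKey   EBS_ACCESSKEY          --ebsAccessKey
# ebs.secretKey   EBS_SECRETKEY          --ebsSecretKey
$ export EBS_ACCESSKEY=XXXXXXXXXX
$ export EBS_SECRETKEY=XXXXXXXXXX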

Activating the Driver

To activate the AWS EBS driver please follow the instructions for activating storage drivers, using ebs as the driver name.

Troubleshooting

Examples

Below is a full config.yml file that works with AWS EBS.

libstorage:
  # The libstorage.service property tells a libStorage client to send its
  # requests to the given service by default. It is not used by the server.
  service: ebs
  server:
    services:
      ebs:
        driver: ebs
        ebs:
          accessKey:      XXXXXXXXXX
          secretKey:      XXXXXXXXXX
          region:         us-east-1

Elastic File System

The AWS EFS driver registers a storage driver named efs with the libStorage service registry and is used to connect and manage AWS Elastic File Systems.

Requirements

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

efs:
  accessKey:      XXXXXXXXXX
  secretKey:      XXXXXXXXXX
  securityGroups:
  - sg-XXXXXXX
  - sg-XXXXXX0
  - sg-XXXXXX1
  region:              us-east-1
  tag:                 test
  disableSessionCache: false
  statusMaxAttempts:  6
  statusInitialDelay: 1s
  statusTimeout:      2m

Configuration Notes

For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.

Runtime Behavior

The AWS EFS storage driver creates one EFS FileSystem per volume and provides the root of the file system as an NFS mount point. Volumes are not attached to instances directly, but rather are exposed to each subnet by creating a MountPoint in each VPC subnet. When detaching a volume from an instance, no action is taken, as there is no good way to determine whether other instances in the same subnet are using the MountPoint being detached. There is no charge for MountPoints, so they are removed only once the whole volume is deleted.

By default, all EFS instances are provisioned in the generalPurpose performance mode. The maxIO EFS type can be provisioned by providing maxIO as the volumetype.
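
For example, assuming the service is consumed via Docker's volume driver integration, a maxIO file system might be requested at create time as follows (the volume name and driver wiring are placeholders):

$ docker volume create --driver=rexray --name=efs-maxio \
    --opt=volumetype=maxIO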

It is possible to mount the same volume to multiple containers on a single EC2 instance, as well as to use a single volume across multiple EC2 instances at the same time.

Note

Each EFS FileSystem can be accessed only from a single VPC at a time.

Activating the Driver

To activate the AWS EFS driver please follow the instructions for activating storage drivers, using efs as the driver name.

Troubleshooting

Examples

Below is a full config.yml file that works with AWS EFS.

libstorage:
  # The libstorage.service property tells a libStorage client to send its
  # requests to the given service by default. It is not used by the server.
  service: efs
  server:
    services:
      efs:
        driver: efs
        efs:
          accessKey:      XXXXXXXXXX
          secretKey:      XXXXXXXXXX
          securityGroups:
          - sg-XXXXXXX
          - sg-XXXXXX0
          - sg-XXXXXX1
          region:         us-east-1
          tag:            test

Simple Storage Service

The AWS S3FS driver registers a storage driver named s3fs with the libStorage service registry and provides the ability to mount Amazon Simple Storage Service (S3) buckets as filesystems using the s3fs FUSE command.

Unlike the other AWS-related drivers, the S3FS storage driver does not need to be deployed to or used by an EC2 instance. Any client can take advantage of Amazon's S3 buckets.

Requirements

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

Server-Side Configuration

s3fs:
  region:           us-east-1
  accessKey:        XXXXXXXXXX
  secretKey:        XXXXXXXXXX
  disablePathStyle: false

Client-Side Configuration

s3fs:
  cmd:            s3fs
  options:
  - XXXX
  - XXXX
  accessKey:      XXXXXXXXXX
  secretKey:      XXXXXXXXXX

For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.

Runtime Behavior

The AWS S3FS storage driver can create new buckets as well as remove existing ones. Buckets are mounted to clients as filesystems using the s3fs FUSE command. For clients to correctly mount and unmount S3 buckets the s3fs command should be in the path of the executor or configured via the s3fs.cmd property in the client-side REX-Ray configuration file.

The client must also have access to the AWS credentials used for mounting and unmounting S3 buckets. These credentials can be stored in the client-side REX-Ray configuration file or provided via any means available to the s3fs command.
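
For example, one such means native to the s3fs command is its password file, which keeps the keys out of the REX-Ray configuration entirely (a minimal sketch; s3fs requires the file to be readable only by its owner):

$ echo 'XXXXXXXXXX:XXXXXXXXXX' > ~/.passwd-s3fs
$ chmod 600 ~/.passwd-s3fs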

Activating the Driver

To activate the AWS S3FS driver please follow the instructions for activating storage drivers, using s3fs as the driver name.

Examples

Below is a full config.yml file that works with AWS S3FS.

libstorage:
  # The libstorage.service property tells a libStorage client to send its
  # requests to the given service by default. It is not used by the server.
  service: s3fs
  server:
    services:
      s3fs:
        driver: s3fs
        s3fs:
          accessKey:      XXXXXXXXXX
          secretKey:      XXXXXXXXXX

Ceph

libStorage includes support for the following Ceph storage technologies.

RADOS Block Device

The Ceph RBD driver registers a driver named rbd with the libStorage driver manager and is used to connect and mount RADOS Block Devices from a Ceph cluster.

Requirements

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

rbd:
  defaultPool: rbd
  testModule: true

Configuration Notes

Runtime behavior

The Ceph RBD driver only works when the client and server are on the same node. There is no way for a centralized libStorage server to attach volumes to clients; therefore, the libStorage server must be running on each node that wishes to mount RBD volumes.

The RBD driver uses the format <pool>.<name> for the volume ID. This allows the driver to use multiple pools. During a volume create, if the volume ID is given as <pool>.<name>, a volume named <name> will be created in the <pool> storage pool. If no pool is referenced, the defaultPool will be used.

Both pool and name may only contain alphanumeric characters, underscores, and dashes.

When querying volumes, the driver will return all RBDs present in all pools in the cluster, prefixing each volume with the appropriate <pool>. value.

All RBD creates are done using the default 4MB object size and the "layering" feature bit, to ensure the greatest compatibility with kernel clients.
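
For example, assuming the REX-Ray CLI fronts this service, a volume could be created in a pool other than the default by encoding the pool into the volume ID (the pool and volume names here are hypothetical):

$ rexray volume create mypool.data01 --size=1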

Activating the Driver

To activate the Ceph RBD driver please follow the instructions for activating storage drivers, using rbd as the driver name.

Troubleshooting

Examples

Below is a full config.yml file that works with RBD.

libstorage:
  # The libstorage.service property tells a libStorage client to send its
  # requests to the given service by default. It is not used by the server.
  service: rbd
  server:
    services:
      rbd:
        driver: rbd
        rbd:
          defaultPool: rbd

Caveats

OpenStack

In another welcome community contribution, libStorage includes support for OpenStack storage.

Cinder

The Cinder driver registers a storage driver named cinder with the libStorage driver manager and is used to connect and manage storage on Cinder-compatible instances.

Configuration

The following is an example configuration with most fields populated for illustration. For a running example see the Examples section.

cinder:
  authURL:              https://domain.com/openstack
  userID:               0
  userName:             myusername
  password:             mypassword
  tenantID:             0
  tenantName:           customer
  domainID:             0
  domainName:           corp
  regionName:           USNW
  availabilityZoneName: Gold

Configuration Notes

For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.

Activating the Driver

To activate the Cinder driver please follow the instructions for activating storage drivers, using cinder as the driver name.

Examples

Below is a full config.yml file that works with Cinder.

libstorage:
  # The libstorage.service property tells a libStorage client to send its
  # requests to the given service by default. It is not used by the server.
  service: cinder
  server:
    services:
      cinder:
        driver: cinder
cinder:
  authUrl: https://keystoneHost:35357/v2.0/
  username: username
  password: password
  tenantName: tenantName

Dell EMC

libStorage includes support for several Dell EMC storage platforms.

Isilon

The Isilon driver registers a storage driver named isilon with the libStorage service registry and is used to connect and manage Isilon NAS storage. The driver creates logical volumes in directories on the Isilon cluster. Volumes are exported via NFS and restricted to a single client at a time. Quotas can also be used to ensure that a volume directory doesn't exceed a specified size.

Configuration

The following is an example configuration of the Isilon driver. For a running example see the Examples section.

isilon:
  endpoint: https://endpoint:8080
  insecure: true
  username: username
  group: groupname
  password: password
  volumePath: /libstorage
  nfsHost: nfsHost
  dataSubnet: subnet
  quotas: true

For information on the equivalent environment variable and CLI flag names please see the section on how configuration properties are transformed.

Extra Parameters

The following configurable items are specific to this driver.

Optional Parameters

The following items are not required, but are available to this driver.

Activating the Driver

To activate the Isilon driver please follow the instructions for activating storage drivers, using isilon as the driver name.

Examples

Below is a full config.yml file that works with Isilon.

libstorage:
  # The libstorage.service property tells a libStorage client to send its
  # requests to the given service by default. It is not used by the server.
  service: isilon
  server:
    services:
      isilon:
        driver: isilon
        isilon:
          endpoint: https://endpoint:8080
          insecure: true
          username: username
          password: password
          volumePath: /libstorage
          nfsHost: nfsHost
          dataSubnet: subnet
          quotas: true

Instructions

It is expected that the volumePath already exists on the Isilon system. In this example, volumes would be created as directories under /ifs/volumes/libstorage. It is not necessary to export this path. The dataSubnet parameter is required so the Isilon driver can restrict access to attached volumes to the host on which REX-Ray is running.
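
For example, assuming the default /ifs/volumes tree, the path could be created from the Isilon cluster shell before starting REX-Ray (a minimal sketch):

# run on the Isilon cluster
mkdir -p /ifs/volumes/libstorage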

If quotas are enabled, a SmartQuotas license must also be enabled on the Isilon cluster for the capacity size functionality of libStorage to work.

A SnapshotIQ license must be enabled on the Isilon cluster for the snapshot functionality of libStorage to work.

Caveats

The Isilon driver is not without its caveats. The account used to access the Isilon cluster must hold the privileges assigned below; these RBAC rights can be set from the Isilon console:

# create RBAC role
isi auth roles create --name libstorage_roles
# assign privileges to the role
isi auth roles modify libstorage_roles --add-priv ISI_PRIV_NS_IFS_ACCESS
isi auth roles modify libstorage_roles --add-priv ISI_PRIV_LOGIN_PAPI
isi auth roles modify libstorage_roles --add-priv ISI_PRIV_NFS
isi auth roles modify libstorage_roles --add-priv ISI_PRIV_IFS_RESTORE
isi auth roles modify libstorage_roles --add-priv ISI_PRIV_QUOTA
isi auth roles modify libstorage_roles --add-priv ISI_PRIV_SNAPSHOT
# add the user to the role
isi auth roles modify libstorage_roles --add-user libstorage

ScaleIO

The ScaleIO driver registers a storage driver named scaleio with the libStorage service registry and is used to connect and manage ScaleIO storage.

Requirements

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

scaleio:
  endpoint:             https://host_ip/api
  apiVersion:           "2.0"
  insecure:             false
  useCerts:             true
  userName:             admin
  password:             mypassword
  systemID:             0
  systemName:           sysv
  protectionDomainID:   0
  protectionDomainName: corp
  storagePoolID:        0
  storagePoolName:      gold
  thinOrThick:          ThinProvisioned

Configuration Notes

For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.

Runtime Behavior

The storageType field that is configured per volume is considered the ScaleIO Storage Pool. This can be configured by default with the storagePoolName setting. It is important that you create unique names for your Storage Pools on the same ScaleIO platform; otherwise, when specifying storageType, the driver may choose at random which protectionDomain the pool comes from.

The availabilityZone field represents the ScaleIO Protection Domain.
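
As an illustration, a per-volume Storage Pool might be requested by passing storageType at create time, assuming Docker's volume driver integration passes options through to libStorage (the pool and volume names are hypothetical):

$ docker volume create --driver=rexray --name=vol01 \
    --opt=storageType=gold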

Configuring the Gateway

Activating the Driver

To activate the ScaleIO driver please follow the instructions for activating storage drivers, using scaleio as the driver name.

Troubleshooting

Examples

Below is a full config.yml file that works with ScaleIO.

libstorage:
  # The libstorage.service property tells a libStorage client to send its
  # requests to the given service by default. It is not used by the server.
  service: scaleio
  server:
    services:
      scaleio:
        driver: scaleio
        scaleio:
          endpoint: https://gateway_ip/api
          insecure: true
          userName: username
          password: password
          systemName: tenantName
          protectionDomainName: protectionDomainName
          storagePoolName: storagePoolName

DigitalOcean

Thanks to the efforts of our tremendous community, libStorage also has built-in support for DigitalOcean!

DO Block Storage

The DigitalOcean Block Storage (DOBS) driver registers a driver named dobs with the libStorage service registry and is used to attach and mount DigitalOcean block storage devices to DigitalOcean instances.

Requirements

The DigitalOcean block storage driver has the following requirements:

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

dobs:
  token:  123456
  region: nyc1
  statusMaxAttempts: 10
  statusInitialDelay: 100ms
  statusTimeout: 2m
  convertUnderscores: false

Configuration Notes

Note

The DigitalOcean service currently supports block storage volumes in specific regions only. Make sure to use a supported region.

The standard environment variable for the DigitalOcean access token is DIGITALOCEAN_ACCESS_TOKEN. However, the environment variable mapped to this driver's dobs.token property is DOBS_TOKEN. This choice was made to ensure that the driver must be explicitly configured for access instead of detecting a default token that may not be intended for the driver.
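
For example, the token can be supplied through the driver-specific environment variable instead of the configuration file (a minimal sketch; the token value is a placeholder):

$ export DOBS_TOKEN=123456
# the driver now reads the token as if dobs.token were set in config.yml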

Examples

Below is a full config.yml file that works with DOBS.

libstorage:
  # The libstorage.service property tells a libStorage client to send its
  # requests to the given service by default. It is not used by the server.
  service: dobs
  server:
    services:
      dobs:
        driver: dobs
        dobs:
          token: 123456
          region: nyc1

FittedCloud

In another example of the great community shared by the libStorage project, the talented people at FittedCloud have provided a driver for their EBS optimizer.

EBS Optimizer

The FittedCloud EBS Optimizer driver registers a storage driver named fittedcloud with the libStorage service registry and provides the ability to connect and manage thin-provisioned EBS volumes for EC2 instances.

Note

This version of the FittedCloud driver only supports configurations where the client and server are on the same host. The libStorage server must be running on each node, alongside the FittedCloud Agent.

Note

This version of the FittedCloud driver does not support co-existing with the ebs driver on the same host. As a result it also doesn't support optimizing existing EBS volumes. See the Examples section below for a running example.

Note

The FittedCloud driver does not yet support snapshots or tags.

Requirements

This driver has the following requirements:

Getting Started

Before starting, please make sure to register as a user by visiting the FittedCloud customer website. Once an account is activated, it will be assigned a user ID, which can be found on the Settings page after logging in to the website.

The following commands will download and install the latest FittedCloud Agent software. The flags -o S -m enable new thin volumes to be created via the docker command instead of optimizing existing EBS volumes. Please replace <User ID> with a FittedCloud user ID.

$ curl -skSL 'https://customer.fittedcloud.com/downloadsoftware?ver=latest' \
  -o fcagent.run
$ sudo bash ./fcagent.run -- -o S -m -d <User ID>

Please refer to the FittedCloud website for more details.

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

ebs:
  accessKey:      XXXXXXXXXX
  secretKey:      XXXXXXXXXX
  kmsKeyID:       abcd1234-a123-456a-a12b-a123b4cd56ef
  statusMaxAttempts:  10
  statusInitialDelay: 100ms
  statusTimeout:      2m

Configuration Notes

Examples

The following example illustrates how to configure the FittedCloud driver:

libstorage:
  # The libstorage.service property tells a libStorage client to send its
  # requests to the given service by default. It is not used by the server.
  service: fittedcloud
  server:
    services:
      fittedcloud:
        driver: fittedcloud
ebs:
  accessKey:  XXXXXXXXXX
  secretKey:  XXXXXXXXXX

Additional information on configuring the FittedCloud driver may be found at this location.

Google

libStorage ships with support for Google Compute Engine (GCE) as well.

GCE Persistent Disk

The Google Compute Engine Persistent Disk (GCEPD) driver registers a driver named gcepd with the libStorage service registry and is used to connect and mount Google Compute Engine (GCE) persistent disks with GCE machine instances.

Requirements

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

gcepd:
  keyfile: /etc/gcekey.json
  zone: us-west1-b
  defaultDiskType: pd-ssd
  tag: rexray
  statusMaxAttempts:  10
  statusInitialDelay: 100ms
  statusTimeout:      2m
  convertUnderscores: false

Configuration Notes

Runtime behavior

Activating the Driver

To activate the GCEPD driver please follow the instructions for activating storage drivers, using gcepd as the driver name.

Troubleshooting

Examples

Below is a full config.yml file that works with GCE.

libstorage:
  # The libstorage.service property tells a libStorage client to send its
  # requests to the given service by default. It is not used by the server.
  service: gcepd
  server:
    services:
      gcepd:
        driver: gcepd
        gcepd:
          keyfile: /etc/gcekey.json
          tag: rexray

Caveats

Microsoft

Microsoft Azure support is included with libStorage as well.

Azure Unmanaged Disk

The Microsoft Azure Unmanaged Disk (Azure UD) driver registers a driver named azureud with the libStorage service registry and is used to connect and mount Azure unmanaged disks from Azure page blob storage with Azure virtual machines.

Requirements

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

azureud:
  subscriptionID: abcdef01-2345-6789-abcd-ef0123456789
  resourceGroup: testgroup
  tenantID: usernamehotmail.onmicrosoft.com
  storageAccount: username
  storageAccessKey: XXXXXXXX
  clientID: 123def01-2345-6789-abcd-ef0123456789
  clientSecret: XXXXXXXX
  certPath:
  container: vhds
  useHTTPS: true
Configuration Notes

Runtime Behavior

Activating the Driver

To activate the Azure UD driver please follow the instructions for activating storage drivers, using azureud as the driver name.

Troubleshooting

Examples

Below is a full config.yml file that works with Azure UD.

libstorage:
  # The libstorage.service property tells a libStorage client to send its
  # requests to the given service by default. It is not used by the server.
  service: azureud
  server:
    tasks:
      exeTimeout: 120s
    services:
      azureud:
        driver: azureud
        azureud:
          subscriptionID: abcdef01-2345-6789-abcd-ef0123456789
          resourceGroup: testgroup
          tenantID: usernamehotmail.onmicrosoft.com
          storageAccount: username
          storageAccessKey: XXXXXXXX
          clientID: 123def01-2345-6789-abcd-ef0123456789
          clientSecret: XXXXXXXX

Caveats

VirtualBox

The VirtualBox driver registers a storage driver named virtualbox with the libStorage service registry and is used by VirtualBox's VMs to connect and manage volumes provided by VirtualBox.

Prerequisites

In order to leverage the virtualbox driver, the libStorage client or agent must be located on each VM that will consume external volumes. The driver leverages the vboxwebsrv HTTP SOAP API, a process that must be started on the VirtualBox host (e.g. OS X) using vboxwebsrv -H 0.0.0.0 -v, optionally adding -b to run it in the background. This allows the VMs running libStorage to remotely make calls to the underlying VirtualBox application. Connectivity can be tested from a VM with telnet virtualboxip 18083, where virtualboxip is the value to place in the endpoint field.

Leveraging authentication for the VirtualBox web server is optional. The HTTP SOAP API can have authentication disabled by running VBoxManage setproperty websrvauthlibrary null.
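
Putting the above together, the host-side setup described in this section might look like the following (host and VM names are placeholders):

# on the VirtualBox host: optionally disable web-service authentication
VBoxManage setproperty websrvauthlibrary null
# on the VirtualBox host: start the SOAP API (add -b to run in the background)
vboxwebsrv -H 0.0.0.0 -v
# from the VM: verify connectivity to the endpoint
telnet virtualboxip 18083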

Hot-plugging is required, which limits the usefulness of this driver to SATA controllers only. Ensure that your VM has this controller pre-created and that it is named SATA; otherwise, the controllerName field must be populated with the name of the controller you wish to use. The port count must be set manually, as it cannot be increased while the VMs are running. A count of 30 is suggested.

VirtualBox 5.0.10+ must be used.

Note

For a VirtualBox VM to work successfully, the following three points are of the utmost importance:

  1. The SATA controller should be named SATA.
  2. The SATA controller's port count must allow for additional connections.
  3. The MAC address must not match that of any other VirtualBox VM's MAC addresses or the MAC address of the host.

The REX-Ray Vagrantfile has a section that automatically configures these options.

Configuration

The following is an example configuration of the VirtualBox driver.
The localMachineNameOrId parameter is for development use, where you can force libStorage to use a specific VM identity. Choose a volumePath in which to store the volume files or virtual disks. This path should be created ahead of time, as shown after the example below.

virtualbox:
  endpoint: http://virtualboxhost:18083
  userName: optional
  password: optional
  tls: false
  volumePath: $HOME/VirtualBox/Volumes
  controllerName: name
  localMachineNameOrId: forDevelopmentUse
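
Because the volumePath should exist before any volumes are created, it might be prepared on the VirtualBox host first (a minimal sketch using the path from the example above):

$ mkdir -p "$HOME/VirtualBox/Volumes"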

For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.

Activating the Driver

To activate the VirtualBox driver please follow the instructions for activating storage drivers, using virtualbox as the driver name.

Examples

Below is a full config.yml file that works with VirtualBox.

libstorage:
  # The libstorage.service property tells a libStorage client to send its
  # requests to the given service by default. It is not used by the server.
  service: virtualbox
  server:
    services:
      virtualbox:
        driver: virtualbox
        virtualbox:
          endpoint:       http://10.0.2.2:18083
          tls:            false
          volumePath:     $HOME/VirtualBox/Volumes
          controllerName: SATA

Caveats