Storage Providers

Connecting storage and platforms...


Overview

This page reviews the storage providers and platforms supported by libStorage.

Client/Server Configuration

Before proceeding with the examples below, please review the section on client/server configurations.
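
In practice, a client that talks to a shared libStorage server typically points at the server's address and names the service it intends to use. The following is a minimal sketch only; the host address and service name are placeholders, and the client/server configuration section remains the authoritative reference.

libstorage:
  # address of the remote libStorage server (placeholder value)
  host:    tcp://127.0.0.1:7979
  # name of a service defined on that server (placeholder value)
  service: isilon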

Dell EMC Isilon

The Isilon driver registers a storage driver named isilon with the libStorage driver manager and is used to connect and manage Isilon NAS storage. The driver creates logical volumes in directories on the Isilon cluster. Volumes are exported via NFS and restricted to a single client at a time. Quotas can also be used to ensure that a volume directory doesn't exceed a specified size.

Configuration

The following is an example configuration of the Isilon driver.

isilon:
  endpoint: https://endpoint:8080
  insecure: true
  username: username
  group: groupname
  password: password
  volumePath: /libstorage
  nfsHost: nfsHost
  dataSubnet: subnet
  quotas: true

For information on the equivalent environment variable and CLI flag names please see the section on how configuration properties are transformed.
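
As a rough illustration of that transformation (the referenced section is authoritative), a nested property such as isilon.volumePath generally maps to an environment variable and a CLI flag along the following lines; the exact names shown in the comments are an assumption based on that convention.

isilon:
  volumePath: /libstorage
  # property:             isilon.volumePath
  # environment variable: ISILON_VOLUMEPATH
  # CLI flag:             --isilonVolumePath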

Extra Parameters

The following items are configurable and specific to this driver.

Optional Parameters

The following items are not required but are available to this driver.

Activating the Driver

To activate the Isilon driver please follow the instructions for activating storage drivers, using isilon as the driver name.

Examples

Below is a full config.yml file that works with Isilon.

libstorage:
  server:
    services:
      isilon:
        driver: isilon
        isilon:
          endpoint: https://endpoint:8080
          insecure: true
          username: username
          password: password
          volumePath: /libstorage
          nfsHost: nfsHost
          dataSubnet: subnet
          quotas: true

Instructions

It is expected that the volumePath already exists within the Isilon system. In this example, created volumes would appear as directories under /ifs/volumes/libstorage. It is not necessary to export this volume. The dataSubnet parameter is required so the Isilon driver can restrict access to attached volumes to the host on which REX-Ray is running.

If quotas are enabled, a SmartQuotas license must also be enabled on the Isilon cluster for the capacity size functionality of libStorage to work.

A SnapshotIQ license must be enabled on the Isilon cluster for the snapshot functionality of libStorage to work.

Caveats

The Isilon driver is not without its caveats:

Dell EMC ScaleIO

The ScaleIO driver registers a storage driver named scaleio with the libStorage driver manager and is used to connect and manage ScaleIO storage.

Requirements

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

scaleio:
  endpoint:             https://host_ip/api
  apiVersion:           "2.0"
  insecure:             false
  useCerts:             true
  userName:             admin
  password:             mypassword
  systemID:             0
  systemName:           sysv
  protectionDomainID:   0
  protectionDomainName: corp
  storagePoolID:        0
  storagePoolName:      gold
  thinOrThick:          ThinProvisioned

Configuration Notes

For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.

Runtime Behavior

The storageType field that is configured per volume is considered the ScaleIO Storage Pool. This can be configured by default with the storagePool setting. It is important that you create unique names for your Storage Pools on the same ScaleIO platform. Otherwise, when a storageType is specified, the driver may choose at random which Protection Domain the pool comes from.

The availabilityZone field represents the ScaleIO Protection Domain.
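
Tying those two mappings to the sample configuration above (the names gold and corp come from that example and are placeholders), a volume request would resolve roughly as follows:

# illustrative mapping only
# storageType:      gold  -->  Storage Pool      "gold"
# availabilityZone: corp  -->  Protection Domain "corp"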

Configuring the Gateway

Activating the Driver

To activate the ScaleIO driver please follow the instructions for activating storage drivers, using scaleio as the driver name.

Troubleshooting

Examples

Below is a full config.yml file that works with ScaleIO.

libstorage:
  server:
    services:
      scaleio:
        driver: scaleio
        scaleio:
          endpoint: https://gateway_ip/api
          insecure: true
          userName: username
          password: password
          systemName: tenantName
          protectionDomainName: protectionDomainName
          storagePoolName: storagePoolName

VirtualBox

The VirtualBox driver registers a storage driver named virtualbox with the libStorage driver manager and is used by VirtualBox's VMs to connect and manage volumes provided by VirtualBox.

Prerequisites

In order to leverage the virtualbox driver, the libStorage client must be located on each VM that you wish to be able to consume external volumes. The driver leverages the vboxwebsrv HTTP SOAP API, a process that must be started on the VirtualBox host (e.g. OS X) using vboxwebsrv -H 0.0.0.0 -v, optionally adding -b to run it in the background. This allows the VMs running libStorage to remotely make calls to the underlying VirtualBox application. Connectivity can be tested from a VM with telnet virtualboxip 18083, where virtualboxip is the value you would put in the endpoint field.

Authentication for the VirtualBox web service is optional. The HTTP SOAP API can have authentication disabled by running VBoxManage setproperty websrvauthlibrary null.

Hot-plugging is required, which limits this driver to SATA controllers only. Ensure that your VM has such a controller pre-created and that it is named SATA; otherwise, the controllerName field must be populated with the name of the controller you wish to use. The port count must be set manually, as it cannot be increased while the VMs are running. A count of 30 is suggested.

VirtualBox 5.0.10+ must be used.

Configuration

The following is an example configuration of the VirtualBox driver.
The localMachineNameOrId parameter is for development use where you force libStorage to use a specific VM identity. Choose a volumePath to store the volume files or virtual disks. This path should be created ahead of time.

virtualbox:
  endpoint: http://virtualboxhost:18083
  userName: optional
  password: optional
  tls: false
  volumePath: $HOME/VirtualBox/Volumes
  controllerName: name
  localMachineNameOrId: forDevelopmentUse

For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.

Activating the Driver

To activate the VirtualBox driver please follow the instructions for activating storage drivers, using virtualbox as the driver name.

Examples

Below is a full config.yml file that works with VirtualBox.

libstorage:
  server:
    services:
      virtualbox:
        driver: virtualbox
        virtualbox:
          endpoint:       http://10.0.2.2:18083
          tls:            false
          volumePath:     $HOME/VirtualBox/Volumes
          controllerName: SATA

Caveats

AWS EBS

The AWS EBS driver registers a storage driver named ebs with the libStorage driver manager and is used to connect and manage AWS Elastic Block Storage volumes for EC2 instances.

Note

For backwards compatibility, the driver also registers a storage driver named ec2. The use of ec2 in config files is deprecated but functional.
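
For instance, an existing service definition that still uses the deprecated name might look like the sketch below; new configurations should reference ebs instead.

libstorage:
  server:
    services:
      ec2:
        driver: ec2   # deprecated alias for the ebs driver; still functional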

Note

The EBS driver does not yet support snapshots or tags, as previously supported in REX-Ray v0.3.3.

The EBS driver is made possible by the official AWS SDK for Go.

Requirements

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

ebs:
  accessKey:      XXXXXXXXXX
  secretKey:      XXXXXXXXXX
  region:         us-east-1
  kmsKeyID:       arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef

Configuration Notes

For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.

Activating the Driver

To activate the AWS EBS driver please follow the instructions for activating storage drivers, using ebs as the driver name.

Troubleshooting

Examples

Below is a full config.yml file that works with AWS EBS.

libstorage:
  server:
    services:
      ebs:
        driver: ebs
        ebs:
          accessKey:      XXXXXXXXXX
          secretKey:      XXXXXXXXXX
          region:         us-east-1

AWS EFS

The AWS EFS driver registers a storage driver named efs with the libStorage driver manager and is used to connect and manage AWS Elastic File Systems.

Requirements

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

efs:
  accessKey:      XXXXXXXXXX
  secretKey:      XXXXXXXXXX
  securityGroups:
  - sg-XXXXXXX
  - sg-XXXXXX0
  - sg-XXXXXX1
  region:              us-east-1
  tag:                 test
  disableSessionCache: false

Configuration Notes

For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.

Runtime Behavior

The AWS EFS storage driver creates one EFS FileSystem per volume and exposes the root of that filesystem as an NFS mount point. Volumes are not attached to instances directly; instead, a volume is exposed to each subnet by creating a MountPoint in each VPC subnet. When detaching a volume from an instance, no action is taken, as there is no good way to determine whether other instances in the same subnet are still using the MountPoint being detached. Because there is no charge for MountPoints, they are removed only once the whole volume is deleted.

By default, all EFS file systems are provisioned in the generalPurpose performance mode. The maxIO performance mode can be provisioned by passing maxIO as the volumetype.

It is possible to mount the same volume to multiple containers on a single EC2 instance, as well as to use a single volume across multiple EC2 instances at the same time.

NOTE: Each EFS FileSystem can be accessed only from a single VPC at a time.

Activating the Driver

To activate the AWS EFS driver please follow the instructions for activating storage drivers, using efs as the driver name.

Troubleshooting

Examples

Below is a full config.yml file that works with AWS EFS.

libstorage:
  server:
    services:
      efs:
        driver: efs
        efs:
          accessKey:      XXXXXXXXXX
          secretKey:      XXXXXXXXXX
          securityGroups:
          - sg-XXXXXXX
          - sg-XXXXXX0
          - sg-XXXXXX1
          region:         us-east-1
          tag:            test

AWS S3FS

The AWS S3FS driver registers a storage driver named s3fs with the libStorage driver manager and provides the ability to mount Amazon Simple Storage Service (S3) buckets as filesystems using the s3fs FUSE command.

Unlike the other AWS-related drivers, the S3FS storage driver does not need to be deployed to or used by an EC2 instance. Any client can take advantage of Amazon's S3 buckets.

Requirements

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

Server-Side Configuration

s3fs:
  region:           us-east-1
  accessKey:        XXXXXXXXXX
  secretKey:        XXXXXXXXXX
  disablePathStyle: false

Client-Side Configuration

s3fs:
  cmd:            s3fs
  options:
  - XXXX
  - XXXX
  accessKey:      XXXXXXXXXX
  secretKey:      XXXXXXXXXX

For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.

Runtime Behavior

The AWS S3FS storage driver can create new buckets as well as remove existing ones. Buckets are mounted to clients as filesystems using the s3fs FUSE command. For clients to correctly mount and unmount S3 buckets the s3fs command should be in the path of the executor or configured via the s3fs.cmd property in the client-side REX-Ray configuration file.

The client must also have access to the AWS credentials used for mounting and unmounting S3 buckets. These credentials can be stored in the client-side REX-Ray configuration file or provided via any means available to the s3fs command.
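
For example, a client-side configuration might point the driver at a specific s3fs binary and carry the credentials directly; the binary path below is only a placeholder.

s3fs:
  cmd:       /usr/local/bin/s3fs
  accessKey: XXXXXXXXXX
  secretKey: XXXXXXXXXX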

Activating the Driver

To activate the AWS S3FS driver please follow the instructions for activating storage drivers, using s3fs as the driver name.

Examples

Below is a full config.yml file that works with AWS S3FS.

libstorage:
  server:
    services:
      s3fs:
        driver: s3fs
        s3fs:
          accessKey:      XXXXXXXXXX
          secretKey:      XXXXXXXXXX

Ceph RBD

The Ceph RBD driver registers a driver named rbd with the libStorage driver manager and is used to connect and mount RADOS Block Devices from a Ceph cluster.

Requirements

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

rbd:
  defaultPool: rbd

Configuration Notes

Runtime behavior

The Ceph RBD driver only works when the client and server are on the same node. There is no way for a centralized libStorage server to attach volumes to clients; therefore, the libStorage server must be running on each node that wishes to mount RBD volumes.

The RBD driver uses the format <pool>.<name> for the volume ID. This allows the driver to work with multiple pools. During volume creation, if the volume ID is given as <pool>.<name>, a volume named <name> will be created in the <pool> storage pool. If no pool is referenced, the defaultPool will be used.
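
As a brief illustration of this naming scheme (the pool name ssd below is hypothetical), with defaultPool left at rbd:

# volume ID "ssd.myvol"  -->  RBD image "myvol" in pool "ssd"
# volume ID "myvol"      -->  RBD image "myvol" in pool "rbd" (the defaultPool)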

When querying volumes, the driver will return all RBDs present in all pools in the cluster, prefixing each volume with the appropriate <pool>. value.

All RBD images are created using the default 4MB object size and with the "layering" feature bit set, to ensure the greatest compatibility with kernel clients.

Activating the Driver

To activate the Ceph RBD driver please follow the instructions for activating storage drivers, using rbd as the driver name.

Troubleshooting

Examples

Below is a full config.yml file that works with RBD.

libstorage:
  server:
    services:
      rbd:
        driver: rbd
        rbd:
          defaultPool: rbd

Caveats

GCE Persistent Disk

The Google Compute Engine Persistent Disk (GCEPD) driver registers a driver named gcepd with the libStorage driver manager and is used to connect and mount Google Compute Engine (GCE) persistent disks with GCE machine instances.

Requirements

Configuration

The following is an example with all possible fields configured. For a running example see the Examples section.

gcepd:
  keyfile: /etc/gcekey.json
  zone: us-west1-b
  defaultDiskType: pd-ssd
  tag: rexray

Configuration Notes

Runtime behavior

Activating the Driver

To activate the GCEPD driver please follow the instructions for activating storage drivers, using gcepd as the driver name.

Troubleshooting

Examples

Below is a full config.yml file that works with GCE.

libstorage:
  server:
    services:
      gcepd:
        driver: gcepd
        gcepd:
          keyfile: /etc/gcekey.json
          tag: rexray

Caveats