Storage Providers
Connecting storage and platforms...
Overview
This page reviews the storage providers and platforms supported by `libStorage`.
Client/Server Configuration
Regarding the examples below, please read the provision about client/server configurations before proceeding.
Dell EMC Isilon
The Isilon driver registers a storage driver named `isilon` with the
`libStorage` driver manager and is used to connect and manage Isilon NAS
storage. The driver creates logical volumes in directories on the Isilon
cluster. Volumes are exported via NFS and restricted to a single client at a
time. Quotas can also be used to ensure that a volume directory doesn't exceed
a specified size.
Configuration
The following is an example configuration of the Isilon driver.
```yaml
isilon:
  endpoint: https://endpoint:8080
  insecure: true
  username: username
  group: groupname
  password: password
  volumePath: /libstorage
  nfsHost: nfsHost
  dataSubnet: subnet
  quotas: true
```
For information on the equivalent environment variable and CLI flag names please see the section on how configuration properties are transformed.
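As a rough illustration (assuming the usual transformation, where a dotted, camelCase property key maps to an upper-case, underscore-delimited environment variable), the `isilon.endpoint` and `isilon.volumePath` properties above could also be supplied like this:

```sh
# Hypothetical sketch: environment-variable equivalents of the YAML keys
# isilon.endpoint and isilon.volumePath. Verify the exact names against the
# configuration-properties section before relying on them.
export ISILON_ENDPOINT=https://endpoint:8080
export ISILON_VOLUMEPATH=/libstorage
```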
Extra Parameters
The following items are configurable specific to this driver.
- `volumePath` represents the location under `/ifs/volumes` to allow volumes to be created and removed.
- `nfsHost` is the configurable NFS server hostname or IP (often a SmartConnect name) used when mounting exports.
- `dataSubnet` is the subnet the REX-Ray driver is running on. This is used for the NFS export host ACLs.
Optional Parameters
The following items are not required, but available to this driver.
- `insecure` defaults to `false`.
- `group` defaults to the group of the user specified in the configuration. Only use this option if you need volumes to be created with a different group.
- `volumePath` defaults to `""`. This will have all new volumes created directly under `/ifs/volumes`.
- `quotas` defaults to `false`. Set to `true` if you have a SmartQuotas license enabled.
Activating the Driver
To activate the Isilon driver please follow the instructions for
activating storage drivers, using `isilon` as the driver name.
Examples
Below is a full `config.yml` file that works with Isilon.
```yaml
libstorage:
  server:
    services:
      isilon:
        driver: isilon
        isilon:
          endpoint: https://endpoint:8080
          insecure: true
          username: username
          password: password
          volumePath: /libstorage
          nfsHost: nfsHost
          dataSubnet: subnet
          quotas: true
```
Instructions
It is expected that the `volumePath` exists already within the Isilon system.
This example would reflect a directory created under `/ifs/volumes/libstorage`
for created volumes. It is not necessary to export this volume. The `dataSubnet`
parameter is required so the Isilon driver can restrict access to attached
volumes to the host that REX-Ray is running on.
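For example, a minimal way to pre-create that directory (assuming you can SSH to a node in the Isilon cluster and that `/ifs/volumes` already exists; the host and path below are placeholders to adjust for your environment):

```sh
# Hypothetical sketch: create the volumePath directory on the cluster
# before starting libStorage. The path must match the volumePath setting.
ssh root@endpoint "mkdir -p /ifs/volumes/libstorage"
```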
If `quotas` are enabled, a SmartQuotas license must also be enabled on the
Isilon cluster for the capacity size functionality of `libStorage` to work.

A SnapshotIQ license must be enabled on the Isilon cluster for the snapshot
functionality of `libStorage` to work.
Caveats
The Isilon driver is not without its caveats:
- The account used to access the Isilon cluster must be in a role with the
  following privileges:
    - Namespace Access (ISI_PRIV_NS_IFS_ACCESS)
    - Platform API (ISI_PRIV_LOGIN_PAPI)
    - NFS (ISI_PRIV_NFS)
    - Restore (ISI_PRIV_IFS_RESTORE)
    - Quota (ISI_PRIV_QUOTA) (if `quotas` are enabled)
    - Snapshot (ISI_PRIV_SNAPSHOT) (if snapshots are used)
Dell EMC ScaleIO
The ScaleIO driver registers a storage driver named `scaleio` with the
`libStorage` driver manager and is used to connect and manage ScaleIO storage.
Requirements
- The ScaleIO REST Gateway is required for the driver to function.
- The `libStorage` client or application that embeds the `libStorage` client must reside on a host that has the SDC client installed. The command `/opt/emc/scaleio/sdc/bin/drv_cfg --query_guid` should be executable and should return the local SDC GUID.
- The official Oracle Java Runtime Environment (JRE) is required. During testing, use of the Open Java Development Kit (JDK) resulted in unexpected errors.
Configuration
The following is an example with all possible fields configured. For a running
example see the Examples
section.
```yaml
scaleio:
  endpoint: https://host_ip/api
  apiVersion: "2.0"
  insecure: false
  useCerts: true
  userName: admin
  password: mypassword
  systemID: 0
  systemName: sysv
  protectionDomainID: 0
  protectionDomainName: corp
  storagePoolID: 0
  storagePoolName: gold
  thinOrThick: ThinProvisioned
```
Configuration Notes
- The `apiVersion` can optionally be set here to force certain API behavior. The default is to retrieve the endpoint API, and pass this version during calls.
- `insecure` should be set to `true` if you have not loaded the SSL certificates on the host. A successful wget or curl should be possible without SSL errors to the API `endpoint` in this case.
- `useCerts` should only be set if you want to leverage the internal SSL certificates. This would be useful if you are deploying the REX-Ray binary on a host that does not have any certificates installed.
- `systemID` takes priority over `systemName`.
- `protectionDomainID` takes priority over `protectionDomainName`.
- `storagePoolID` takes priority over `storagePoolName`.
- `thinOrThick` determines whether to provision as the default `ThinProvisioned`, or `ThickProvisioned`.
For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.
Runtime Behavior
The `storageType` field that is configured per volume is considered the
ScaleIO Storage Pool. This can be configured by default with the `storagePool`
setting. It is important that you create unique names for your Storage Pools
on the same ScaleIO platform. Otherwise, when specifying `storageType` it
may choose at random which `protectionDomain` the pool comes from.

The `availabilityZone` field represents the ScaleIO Protection Domain.
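As a hedged illustration of how these per-volume fields might be supplied through a container integration (assuming a Docker volume plugin, such as REX-Ray, that passes `--opt` values through as volume create fields; the option names below simply mirror the field names above and should be verified against your integration's documentation):

```sh
# Hypothetical sketch: request a volume from the "gold" Storage Pool in the
# "corp" Protection Domain via Docker's volume API.
docker volume create --driver rexray --name myvol \
  --opt storageType=gold \
  --opt availabilityZone=corp
```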
Configuring the Gateway
- Install the `EMC-ScaleIO-gateway` package.
- Edit the `/opt/emc/scaleio/gateway/webapps/ROOT/WEB-INF/classes/gatewayUser.properties` file and append the proper MDM IP addresses to the `mdm.ip.addresses=` parameter.
- By default the password is the same as your administrative MDM password.
- Start the gateway with `service scaleio-gateway start`.
- With 1.32 we have noticed a restart of the gateway may be necessary as well after an initial install, with `service scaleio-gateway restart`.
Activating the Driver
To activate the ScaleIO driver please follow the instructions for
activating storage drivers, using `scaleio` as the driver name.
Troubleshooting
- Verify your parameters for `system`, `protectionDomain`, and `storagePool` are correct.
- Verify that you have the ScaleIO SDC service installed with `rpm -qa EMC-ScaleIO-sdc`.
- Verify that the following command returns the local SDC GUID: `/opt/emc/scaleio/sdc/bin/drv_cfg --query_guid`.
- Ensure that you are able to open a TCP connection to the gateway with the address that you will be supplying below in the `gateway_ip` parameter. For example, `telnet gateway_ip 443` should open a successful connection. Removing the `EMC-ScaleIO-gateway` package and reinstalling can force re-creation of self-signed certs, which may help resolve gateway problems. Also try restarting the gateway with `service scaleio-gateway restart`.
- Ensure that you have the correct authentication credentials for the gateway. This can be done with a curl login. You should receive an authentication token in return: `curl --insecure --user admin:XScaleio123 https://gw_ip:443/api/login`
- Please review the gateway log at `/opt/emc/scaleio/gateway/logs/catalina.out` for errors.
Examples
Below is a full `config.yml` file that works with ScaleIO.
```yaml
libstorage:
  server:
    services:
      scaleio:
        driver: scaleio
        scaleio:
          endpoint: https://gateway_ip/api
          insecure: true
          userName: username
          password: password
          systemName: tenantName
          protectionDomainName: protectionDomainName
          storagePoolName: storagePoolName
```
VirtualBox
The VirtualBox driver registers a storage driver named `virtualbox` with the
`libStorage` driver manager and is used by VirtualBox's VMs to connect and
manage volumes provided by VirtualBox.
Prerequisites
In order to leverage the `virtualbox` driver, the `libStorage` client must
be located on each VM that you wish to be able to consume external volumes.
The driver leverages the `vboxwebsrv` HTTP SOAP API, which is a process that
must be started from the VirtualBox host (i.e. OS X) using
`vboxwebsrv -H 0.0.0.0 -v`, or additionally with `-b` for running in the
background. This allows the VMs running `libStorage` to remotely make calls to
the underlying VirtualBox application. A test for connectivity can be done with
`telnet virtualboxip 18083` from the VM. The `virtualboxip` is what you
would put in the `endpoint` value.
Leveraging authentication for the VirtualBox webserver is optional. The HTTP
SOAP API can have authentication disabled by running
`VBoxManage setproperty websrvauthlibrary null`.
Hot-Plugging is required, which limits the usefulness of this driver to SATA
only. Ensure that your VM has pre-created this controller and it is
named `SATA`. Otherwise the `controllerName` field must be populated
with the name of the controller you wish to use. The port count must be set
manually as it cannot be increased when the VMs are on. A count of 30
is suggested.
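One way to pre-create such a controller while the VM is powered off is sketched below (the VM name `myvm` is a placeholder; `VBoxManage storagectl` is the standard CLI for adding storage controllers, but confirm the flags against your VirtualBox version):

```sh
# Hypothetical sketch: add a SATA controller named "SATA" with 30 ports
# to the powered-off VM "myvm".
VBoxManage storagectl myvm --name SATA --add sata --portcount 30
```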
VirtualBox 5.0.10+ must be used.
Configuration
The following is an example configuration of the VirtualBox driver.
The `localMachineNameOrId` parameter is for development use where you force
`libStorage` to use a specific VM identity. Choose a `volumePath` to store the
volume files or virtual disks. This path should be created ahead of time.
```yaml
virtualbox:
  endpoint: http://virtualboxhost:18083
  userName: optional
  password: optional
  tls: false
  volumePath: $HOME/VirtualBox/Volumes
  controllerName: name
  localMachineNameOrId: forDevelopmentUse
```
For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.
Activating the Driver
To activate the VirtualBox driver please follow the instructions for
activating storage drivers, using `virtualbox` as the driver name.
Examples
Below is a working `config.yml` file that works with VirtualBox.
```yaml
libstorage:
  server:
    services:
      virtualbox:
        driver: virtualbox
        virtualbox:
          endpoint: http://10.0.2.2:18083
          tls: false
          volumePath: $HOME/VirtualBox/Volumes
          controllerName: SATA
```
Caveats
- Snapshot and create volume from volume functionality is not available yet with this driver.
- The driver supports VirtualBox 5.0.10+
AWS EBS
The AWS EBS driver registers a storage driver named `ebs` with the
`libStorage` driver manager and is used to connect and manage AWS Elastic Block
Storage volumes for EC2 instances.
Note
For backwards compatibility, the driver also registers a storage driver
named `ec2`. The use of `ec2` in config files is deprecated but functional.
Note
The EBS driver does not yet support snapshots or tags, as previously supported in REX-Ray v0.3.3.
The EBS driver is made possible by the official Amazon Go AWS SDK.
Requirements
- AWS account
- VPC - EBS can be accessed within VPC
- AWS Credentials
Configuration
The following is an example with all possible fields configured. For a running
example see the Examples
section.
```yaml
ebs:
  accessKey: XXXXXXXXXX
  secretKey: XXXXXXXXXX
  region: us-east-1
  kmsKeyID: arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef
```
Configuration Notes
- The `accessKey` and `secretKey` configuration parameters are optional and should be used when explicit AWS credentials configuration needs to be provided. The EBS driver uses the official Go AWS SDK library and supports all other ways of providing access credentials, like environment variables or instance profile IAM permissions (see the credentials sketch after this list).
- `region` represents the AWS region where EBS volumes should be provisioned. See the official AWS documentation for the list of supported regions.
- If the `kmsKeyID` field is specified it will be used as the encryption key for all volumes that are created with a truthy encryption request field.
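For example, the standard AWS SDK environment variables can be used instead of placing keys in `config.yml` (these variable names are the SDK's own defaults, not libStorage-specific):

```sh
# Provide AWS credentials to the libStorage server process via the
# environment rather than the accessKey/secretKey properties.
export AWS_ACCESS_KEY_ID=XXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXX
```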
For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.
Activating the Driver
To activate the AWS EBS driver please follow the instructions for
activating storage drivers, using `ebs` as the driver name.
Troubleshooting
- Make sure that the AWS credentials (user or role) used by the `libStorage` server instance that will be making calls to the AWS API have the following AWS permissions: `ec2:AttachVolume`, `ec2:CreateVolume`, `ec2:CreateSnapshot`, `ec2:CreateTags`, `ec2:DeleteVolume`, `ec2:DeleteSnapshot`, `ec2:DescribeAvailabilityZones`, `ec2:DescribeInstances`, `ec2:DescribeVolumes`, `ec2:DescribeVolumeAttribute`, `ec2:DescribeVolumeStatus`, `ec2:DescribeSnapshots`, `ec2:CopySnapshot`, `ec2:DescribeSnapshotAttribute`, `ec2:DetachVolume`, `ec2:ModifySnapshotAttribute`, `ec2:ModifyVolumeAttribute`, `ec2:DescribeTags`. A hedged policy sketch follows this list.
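One way to grant those actions is an inline IAM policy attached to the instance role. The role and policy names below are placeholders, and whether you scope `Resource` more tightly is up to your environment; this is only a sketch of the permission set listed above:

```sh
# Hypothetical sketch: write the policy document and attach it to the IAM
# role used by the libStorage server instance.
cat > /tmp/libstorage-ebs-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume", "ec2:CreateVolume", "ec2:CreateSnapshot",
        "ec2:CreateTags", "ec2:DeleteVolume", "ec2:DeleteSnapshot",
        "ec2:DescribeAvailabilityZones", "ec2:DescribeInstances",
        "ec2:DescribeVolumes", "ec2:DescribeVolumeAttribute",
        "ec2:DescribeVolumeStatus", "ec2:DescribeSnapshots",
        "ec2:CopySnapshot", "ec2:DescribeSnapshotAttribute",
        "ec2:DetachVolume", "ec2:ModifySnapshotAttribute",
        "ec2:ModifyVolumeAttribute", "ec2:DescribeTags"
      ],
      "Resource": "*"
    }
  ]
}
EOF
aws iam put-role-policy --role-name my-libstorage-role \
  --policy-name libstorage-ebs \
  --policy-document file:///tmp/libstorage-ebs-policy.json
```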
Examples
Below is a working `config.yml` file that works with AWS EBS.
```yaml
libstorage:
  server:
    services:
      ebs:
        driver: ebs
        ebs:
          accessKey: XXXXXXXXXX
          secretKey: XXXXXXXXXX
          region: us-east-1
```
AWS EFS
The AWS EFS driver registers a storage driver named `efs` with the
`libStorage` driver manager and is used to connect and manage AWS Elastic File
Systems.
Requirements
- AWS account
- VPC - EFS can be accessed within VPC
- AWS Credentials
Configuration
The following is an example with all possible fields configured. For a running
example see the Examples
section.
```yaml
efs:
  accessKey: XXXXXXXXXX
  secretKey: XXXXXXXXXX
  securityGroups:
    - sg-XXXXXXX
    - sg-XXXXXX0
    - sg-XXXXXX1
  region: us-east-1
  tag: test
  disableSessionCache: false
```
Configuration Notes
- The `accessKey` and `secretKey` configuration parameters are optional and should be used when explicit AWS credentials configuration needs to be provided. The EFS driver uses the official Go AWS SDK library and supports all other ways of providing access credentials, like environment variables or instance profile IAM permissions.
- `region` represents the AWS region where EFS should be provisioned. See the official AWS documentation for the list of supported regions.
- `securityGroups` is a list of security groups attached to `MountPoint` instances. If no security groups are provided the default VPC security group is used.
- `tag` is used to partition multiple services within a single AWS account and is used as a prefix for EFS names in the format `[tagprefix]/volumeName`.
- `disableSessionCache` is a flag that can be used to disable the session cache. If the session cache is disabled then a new AWS connection is established with every API call.
For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.
Runtime Behavior
The AWS EFS storage driver creates one EFS FileSystem per volume and provides the root
of the filesystem as an NFS mount point. Volumes aren't attached to instances
directly but rather exposed to each subnet by creating a `MountPoint` in each VPC
subnet. When detaching a volume from an instance no action is taken, as there isn't a
good way to figure out whether other instances in the same subnet are using the
`MountPoint` that is being detached. There is no charge for a `MountPoint`,
so they are removed only once the whole volume is deleted.
By default all EFS instances are provisioned with the `generalPurpose` performance mode.
The `maxIO` EFS type can be provisioned by providing the `maxIO` flag as the `volumetype`.
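As a hedged illustration (assuming a container integration such as a Docker volume plugin that passes `--opt` values through as volume create fields; verify the exact option name against your integration's documentation):

```sh
# Hypothetical sketch: request an EFS filesystem in maxIO performance mode.
docker volume create --driver rexray --name myefsvol --opt volumetype=maxIO
```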
It's possible to mount the same volume to multiple containers on a single EC2 instance, as well as to use a single volume across multiple EC2 instances at the same time.

NOTE: Each EFS FileSystem can be accessed only from a single VPC at a time.
Activating the Driver
To activate the AWS EFS driver please follow the instructions for
activating storage drivers, using `efs` as the driver name.
Troubleshooting
- Make sure that the AWS credentials (user or role) used by the `libStorage` server instance that will be making calls to the AWS API have the following AWS permissions:
    - `elasticfilesystem:CreateFileSystem`
    - `elasticfilesystem:CreateMountTarget`
    - `ec2:DescribeSubnets`
    - `ec2:DescribeNetworkInterfaces`
    - `ec2:CreateNetworkInterface`
    - `elasticfilesystem:CreateTags`
    - `elasticfilesystem:DeleteFileSystem`
    - `elasticfilesystem:DeleteMountTarget`
    - `ec2:DeleteNetworkInterface`
    - `elasticfilesystem:DescribeFileSystems`
    - `elasticfilesystem:DescribeMountTargets`
Examples
Below is a working `config.yml` file that works with AWS EFS.
```yaml
libstorage:
  server:
    services:
      efs:
        driver: efs
        efs:
          accessKey: XXXXXXXXXX
          secretKey: XXXXXXXXXX
          securityGroups:
            - sg-XXXXXXX
            - sg-XXXXXX0
            - sg-XXXXXX1
          region: us-east-1
          tag: test
```
AWS S3FS
The AWS S3FS driver registers a storage driver named `s3fs` with the
`libStorage` driver manager and provides the ability to mount Amazon Simple
Storage Service (S3) buckets as filesystems using the `s3fs` FUSE command.
Unlike the other AWS-related drivers, the S3FS storage driver does not need to be deployed on or used by an EC2 instance. Any client can take advantage of Amazon's S3 buckets.
Requirements
- AWS account
- The `s3fs` FUSE command must be present on client nodes.
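A quick way to verify this prerequisite on a client node (package names vary by distribution; the `s3fs`/`s3fs-fuse` package names below are assumptions and may differ on your platform):

```sh
# Check that the s3fs FUSE binary is available on the client.
which s3fs || echo "s3fs not found; install it, e.g. apt-get install s3fs or yum install s3fs-fuse"
```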
Configuration
The following is an example with all possible fields configured. For a running
example see the Examples
section.
Server-Side Configuration
```yaml
s3fs:
  region: us-east-1
  accessKey: XXXXXXXXXX
  secretKey: XXXXXXXXXX
  disablePathStyle: false
```
- The `accessKey` and `secretKey` configuration parameters are optional and should be used when explicit AWS credentials configuration needs to be provided. The S3FS driver uses the official Go AWS SDK library and supports all other ways of providing access credentials, like environment variables or instance profile IAM permissions.
- `region` represents the AWS region where S3FS buckets should be provisioned. Please see the official AWS documentation for the list of supported regions.
- The `disablePathStyle` property disables the use of the path style for bucket endpoints. The path style is more stable with regards to regions than bucket URI FQDNs, but the path style is also less performant.
Client-Side Configuration
```yaml
s3fs:
  cmd: s3fs
  options:
    - XXXX
    - XXXX
  accessKey: XXXXXXXXXX
  secretKey: XXXXXXXXXX
```
- The `cmd` property defaults simply to `s3fs` with the assumption that the `s3fs` binary will be in the path. This value can also be the absolute path to the `s3fs` binary.
- `options` is a list of options to pass to the `s3fs` command. Please see the official documentation for a full list of CLI options. The `-o` prefix should not be provided in the configuration file.
- The credential properties can be defined on the client via the configuration file and will be supplied to the `s3fs` process via environment variables. However, the `s3fs` command will also look in all the usual places for the credentials if they're not in this file.
For information on the equivalent environment variable and CLI flag names please see the section on how non top-level configuration properties are transformed.
Runtime Behavior
The AWS S3FS storage driver can create new buckets as well as remove existing
ones. Buckets are mounted to clients as filesystems using the `s3fs` FUSE
command. For clients to correctly mount and unmount S3 buckets the `s3fs`
command should be in the path of the executor or configured via the `s3fs.cmd`
property in the client-side REX-Ray configuration file.
The client must also have access to the AWS credentials used for mounting and
unmounting S3 buckets. These credentials can be stored in the client-side
REX-Ray configuration file or via any means available to the `s3fs` command.
Activating the Driver
To activate the AWS S3FS driver please follow the instructions for
activating storage drivers, using `s3fs` as the driver name.
Examples
Below is a working `config.yml` file that works with AWS S3FS.
```yaml
libstorage:
  server:
    services:
      s3fs:
        driver: s3fs
        s3fs:
          accessKey: XXXXXXXXXX
          secretKey: XXXXXXXXXX
```
Ceph RBD
The Ceph RBD driver registers a driver named `rbd` with the `libStorage` driver
manager and is used to connect and mount RADOS Block Devices from a Ceph
cluster.
Requirements
- The `ceph` and `rbd` binary executables must be installed on the host
- The `rbd` kernel module must be installed
- A `ceph.conf` file must be present in its default location (`/etc/ceph/ceph.conf`)
- The ceph `admin` key must be present in `/etc/ceph/`
Configuration
The following is an example with all possible fields configured. For a running
example see the Examples
section.
```yaml
rbd:
  defaultPool: rbd
```
Configuration Notes
- The `defaultPool` parameter is optional, and defaults to "rbd". When set, all volume requests that do not reference a specific pool will use the `defaultPool` value as the destination storage pool.
Runtime behavior
The Ceph RBD driver only works when the client and server are on the same node.
There is no way for a centralized `libStorage` server to attach volumes to
clients, therefore the `libStorage` server must be running on each node that
wishes to mount RBD volumes.
The RBD driver uses the format of `<pool>.<name>` for the volume ID. This allows
for the use of multiple pools by the driver. During a volume create, if the
volume ID is given as `<pool>.<name>`, a volume named *name* will be created in
the *pool* storage pool. If no pool is referenced, the `defaultPool` will be
used.
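A hedged example of the naming convention (assuming a Docker volume plugin such as REX-Ray that forwards the volume name as the libStorage volume ID; the pool and volume names are placeholders):

```sh
# Hypothetical sketch: create "myvol" in the "ssdpool" pool, then create
# "othervol" in the configured defaultPool.
docker volume create --driver rexray --name ssdpool.myvol
docker volume create --driver rexray --name othervol
```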
When querying volumes, the driver will return all RBDs present in all pools in
the cluster, prefixing each volume with the appropriate `<pool>.` value.
All RBD creates are done using the default 4MB object size, and using the "layering" feature bit to ensure greatest compatibility with the kernel clients.
Activating the Driver
To activate the Ceph RBD driver please follow the instructions for
activating storage drivers, using `rbd` as the driver name.
Troubleshooting
- Make sure that the `ceph` and `rbd` commands work without extra parameters for ID, key, and monitors. All configuration must come from `ceph.conf`.
- Check the status of the ceph cluster with the `ceph -s` command.
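A few quick checks along those lines (standard Ceph and Linux commands; the pool name is a placeholder):

```sh
# Confirm the rbd kernel module is loaded.
lsmod | grep rbd

# Confirm cluster health and that RBD commands work with no extra auth flags.
ceph -s
rbd ls rbd
```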
Examples
Below is a full `config.yml` that works with RBD.
```yaml
libstorage:
  server:
    services:
      rbd:
        driver: rbd
        rbd:
          defaultPool: rbd
```
Caveats
- Snapshot and copy functionality is not yet implemented
- libStorage Server must be running on each host to mount/attach RBD volumes
- There are not yet options for using non-admin cephx keys or changing RBD create features
- Volume pre-emption is not supported. Ceph does not provide a method to forcefully detach a volume from a remote host -- only a host can attach and detach volumes from itself.
- RBD advisory locks are not yet in use. A volume is returned as "unavailable" if it has a watcher other than the requesting client. Until advisory locks are in place, it may be possible for a client to attach a volume that is already attached to another node. Mounting and writing to such a volume could lead to data corruption.
GCE Persistent Disk
The Google Compute Engine Persistent Disk (GCEPD) driver registers a driver
named `gcepd` with the `libStorage` driver manager and is used to connect and
mount Google Compute Engine (GCE) persistent disks with GCE machine instances.
Requirements
- GCE account
- The libStorage server must be running on a GCE instance created with a Service
  Account with appropriate permissions, or a Service Account credentials file
  in JSON format must be supplied. If not using the Compute Engine default
  Service Account with the Cloud Platform/"all cloud APIs" scope, create a new
  Service Account via the IAM Portal. This Service Account requires the
  `Compute Engine/Instance Admin`, `Compute Engine/Storage Admin`, and
  `Project/Service Account Actor` roles. Then create/download a new private key
  in JSON format. See creating a service account for details. The libStorage
  service must be restarted in order for permissions changes on a service
  account to take effect.
Configuration
The following is an example with all possible fields configured. For a running
example see the Examples
section.
```yaml
gcepd:
  keyfile: /etc/gcekey.json
  zone: us-west1-b
  defaultDiskType: pd-ssd
  tag: rexray
```
Configuration Notes
- The `keyfile` parameter is optional. It specifies a path on disk to a file containing the JSON-encoded Service Account credentials. This file can be downloaded from the GCE web portal. If `keyfile` is specified, the GCE instance's service account is not considered, and is not necessary. If `keyfile` is not specified, the application will try to lookup application default credentials. This has the effect of looking for credentials in the priority described here.
- The `zone` parameter is optional, and configures the driver to only allow access to the given zone. Creating and listing disks from other zones will be denied. If a zone is not specified, the zone from the client Instance ID will be used when creating new disks.
- The `defaultDiskType` parameter is optional and specifies what type of disk to create, either `pd-standard` or `pd-ssd`. When not specified, the default is `pd-ssd`.
- The `tag` parameter is optional, and causes the driver to create or return disks that have a matching tag. The tag is implemented by using the GCE label functionality available in the beta API. The value of the `tag` parameter is used as the value for a label with the key `libstoragetag`. Use of this parameter is encouraged, as the driver will only return volumes that have been created by the driver, which is most useful to eliminate listing the boot disks of every GCE disk in your project/zone. If you wish to "expose" previously created disks to the `GCEPD` driver, you can edit the labels on the existing disk to have a key of `libstoragetag` and a value matching that given in `tag` (see the sketch after this list).
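A hedged sketch of labeling an existing disk so the driver will return it (the disk name, zone, and tag value are placeholders; depending on your SDK version the command may live under `gcloud beta compute` instead):

```sh
# Hypothetical sketch: add the libstoragetag label to an existing disk so it
# matches a driver configured with tag: rexray.
gcloud compute disks add-labels my-existing-disk \
  --zone us-west1-b \
  --labels libstoragetag=rexray
```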
Runtime behavior
- The GCEPD driver enforces the GCE requirements for disk sizing and naming. Disks must be created with a minimum size of 10GB. Disk names must adhere to the regular expression of `[a-z]([-a-z0-9]*[a-z0-9])?`, which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
- If the `zone` parameter is not specified in the driver configuration, and a request is received to list all volumes that does not specify a zone in the InstanceID header, volumes from all zones will be returned.
- By default, all disks will be created with type `pd-ssd`, which creates an SSD based disk. If you wish to create disks that are not SSD-based, change the default via the driver config, or the type can be changed at creation time by using the `Type` field of the create request.
Activating the Driver
To activate the GCEPD driver please follow the instructions for
activating storage drivers, using `gcepd` as the driver name.
Troubleshooting
- Make sure that the JSON credentials file as specified in the `keyfile` configuration parameter is present and accessible, or that you are running in a GCE instance created with a Service Account attached. Whether using a `keyfile` or the Service Account associated with the GCE instance, the Service Account must have the appropriate permissions as described in Configuration Notes.
Examples
Below is a full `config.yml` that works with GCE.
```yaml
libstorage:
  server:
    services:
      gcepd:
        driver: gcepd
        gcepd:
          keyfile: /etc/gcekey.json
          tag: rexray
```
Caveats
- Snapshot and copy functionality is not yet implemented
- Most GCE instances can have up to 64 TB of total persistent disk space attached. Shared-core machine types or custom machine types with less than 3.75 GB of memory are limited to 3 TB of total persistent disk space. Total persistent disk space for an instance includes the size of the root persistent disk. You can attach up to 16 independent persistent disks to most instances, but instances with shared-core machine types or custom machine types with less than 3.75 GB of memory are limited to a maximum of 4 persistent disks, including the root persistent disk. See GCE Disks docs for more details.
- If running the libStorage server in a mode where volume mounts will not be performed on the same host where the libStorage server is running, it should be possible to use a Service Account without the `Service Account Actor` role, but this has not been tested. Note that if persistent disk mounts are to be performed on any GCE instances that have a Service Account associated with them, the `Service Account Actor` role is required.