OpenStack Introduction for Ubuntu Part IV

This is the fourth post in the OpenStack introduction series. The first part covers general concepts, basic configuration, and Identity Service installation; the second part continues with the Image and Compute services; the third part covers the dashboard and Block Storage configuration. This part is about the Object Storage service. A complete installation guide can be found at this link.

Add Object Storage

Object Storage service

The Object Storage service is a storage system for large amounts of unstructured data through a RESTful HTTP API. It includes the following components:

  • Proxy servers (swift-proxy-server). Accept Object Storage API and raw HTTP requests to upload files, modify metadata, and create containers.
  • Account servers (swift-account-server). Manage accounts defined with the Object Storage service.
  • Container servers (swift-container-server). Manage the mapping of containers, or folders, within the Object Storage service.
  • Object servers (swift-object-server). Manage actual objects, such as files, on the storage nodes.
  • Periodic processes (auditors, updaters, reapers). Perform general maintenance tasks.
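
The RESTful API that the proxy exposes can be exercised directly with curl. The sketch below only composes the storage URL; the token, tenant ID, and container name are placeholder assumptions, and the actual requests (left commented out) require a real token from the Identity Service:

```shell
# Placeholder values (assumptions; obtain a real token from Keystone)
TOKEN="AUTH_tk_example"
TENANT_ID="demo"
STORAGE_URL="http://swift-proxy:8080/v1/AUTH_${TENANT_ID}"

# On a live cluster, create a container, upload an object, list the container:
# curl -i -X PUT -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/mycontainer"
# curl -i -X PUT -H "X-Auth-Token: $TOKEN" -T test.txt "$STORAGE_URL/mycontainer/test.txt"
# curl -i        -H "X-Auth-Token: $TOKEN" "$STORAGE_URL/mycontainer"
echo "$STORAGE_URL/mycontainer"
```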

System Requirements

This guide does not cover hardware sizing, but for a real production environment you need to study carefully the system requirements defined at this link.

Plan networking for Object Storage

This network will have one proxy node and three storage nodes with the following IP addresses:

    192.168.0.13    swift-proxy
    192.168.0.14    storage1
    192.168.0.15    storage2
    192.168.0.16    storage3

The parameter STORAGE_LOCAL_NET_IP, used in the configuration below, is the local IP address of each storage node.

Other networking options can be found at this link.

Example Object Storage installation architecture

  • Node: A host machine that runs one or more OpenStack Object Storage services.
  • Proxy node: Runs Proxy services.
  • Storage node: Runs Account, Container, and Object services.
  • Ring: A set of mappings between OpenStack Object Storage data to physical devices.
  • Replica: A copy of an object. By default, three copies are maintained in the cluster.
  • Zone: A logically separate section of the cluster, related to independent failure characteristics.

Note: for this guide we will install one proxy node, which runs the swift-proxy-server process, and three storage nodes, which run the swift-account-server, swift-container-server, and swift-object-server processes that control storage of the account databases, the container databases, and the actual stored objects.

Edit /etc/hosts on all nodes (controller, block1, storage1, ...):

    127.0.0.1       localhost
    192.168.0.10    controller
    192.168.0.11    compute1
    192.168.0.12    block1
    192.168.0.13    swift-proxy
    192.168.0.14    storage1
    192.168.0.15    storage2
    192.168.0.16    storage3

Edit /etc/hostname on each node and set its hostname to swift-proxy, storage1, storage2, or storage3 accordingly.

General Installation steps

Add OpenStack repositories

   # apt-get install python-software-properties
   # add-apt-repository cloud-archive:havana 
   # apt-get update && apt-get dist-upgrade
   # reboot

Create a swift user that the Object Storage Service can use to authenticate with the Identity Service. Execute the following commands on the controller node:

   # keystone user-create --name=swift --pass=SWIFT_PASS \
     --email=swift@example.com
   # keystone user-role-add --user=swift --tenant=service --role=admin

    +----------+----------------------------------+
    | Property |              Value               |
    +----------+----------------------------------+
    |  email   |        swift@example.com         |
    | enabled  |               True               |
    |    id    | b64f304b791d485ea960d8a0296bb63d |
    |   name   |              swift               |
    +----------+----------------------------------+

Create a service entry for the Object Storage Service:

   # keystone service-create --name=swift --type=object-store \
     --description="Object Storage Service"

    +-------------+----------------------------------+
    |   Property  |              Value               |
    +-------------+----------------------------------+
    | description |      Object Storage Service      |
    |      id     | e54246ea9eb64d5ca002cbf5481dd5eb |
    |     name    |              swift               |
    |     type    |           object-store           |
    +-------------+----------------------------------+

Specify an API endpoint for the Object Storage Service by using the returned service ID.

   # keystone endpoint-create \
     --service-id=the_service_id_above \
     --publicurl='http://swift-proxy:8080/v1/AUTH_%(tenant_id)s' \
     --internalurl='http://swift-proxy:8080/v1/AUTH_%(tenant_id)s' \
     --adminurl=http://swift-proxy:8080

    +-------------+-----------------------------------------------+
    |   Property  |                     Value                     |
    +-------------+-----------------------------------------------+
    |   adminurl  |            http://swift-proxy:8080            |
    |      id     |        e8f5a0654b7f4af6a9c2357250dd5c63       |
    | internalurl | http://swift-proxy:8080/v1/AUTH_%(tenant_id)s |
    |  publicurl  | http://swift-proxy:8080/v1/AUTH_%(tenant_id)s |
    |    region   |                   regionOne                   |
    |  service_id |        e54246ea9eb64d5ca002cbf5481dd5eb       |
    +-------------+-----------------------------------------------+

Create the configuration directory on all swift nodes:

   # mkdir -p /etc/swift 

Create /etc/swift/swift.conf on all nodes:

    [swift-hash]
    # random unique string that can never change (DO NOT LOSE)
    swift_hash_path_suffix = afLIeftgibit

Note: The suffix value in /etc/swift/swift.conf should be set to some random string of text to be used as a salt when hashing to determine mappings in the ring. This file must be the same on every node in the cluster!
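
One way to generate a suitable suffix is with openssl; this is just a sketch, and any sufficiently long random string works:

```shell
# Generate a 32-character random hex string for swift_hash_path_suffix
SUFFIX=$(openssl rand -hex 16)
echo "swift_hash_path_suffix = $SUFFIX"
```

Generate it once, paste the same value into /etc/swift/swift.conf on every node, and keep a copy somewhere safe.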

Install and configure Storage nodes

Install Storage node packages:

   # apt-get install swift swift-account swift-container swift-object xfsprogs 

For each device on the node that you want to use for storage, set up the XFS volume (/dev/sdb is used as an example). Use a single partition per drive.

    # fdisk /dev/sdb
    # mkfs.xfs /dev/sdb1
    # echo "/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
    # mkdir -p /srv/node/sdb1
    # mount /srv/node/sdb1
    # chown -R swift:swift /srv/node
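
If a node has several drives, the steps above repeat per device. A sketch, assuming hypothetical devices sdb1, sdc1, and sdd1 (the destructive commands are left commented out so the loop only prints the fstab entries):

```shell
# Assumed device names; adjust to the node's actual hardware
for dev in sdb1 sdc1 sdd1; do
  fstab_line="/dev/$dev /srv/node/$dev xfs noatime,nodiratime,nobarrier,logbufs=8 0 0"
  echo "$fstab_line"            # on a real node: append to /etc/fstab
  # mkfs.xfs "/dev/$dev"
  # mkdir -p "/srv/node/$dev"
  # mount "/srv/node/$dev"
  # chown -R swift:swift "/srv/node/$dev"
done
```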

Create /etc/rsyncd.conf:

    uid = swift
    gid = swift
    log file = /var/log/rsyncd.log
    pid file = /var/run/rsyncd.pid
    address = STORAGE_LOCAL_NET_IP
     
    [account]
    max connections = 2
    path = /srv/node/
    read only = false
    lock file = /var/lock/account.lock
     
    [container]
    max connections = 2
    path = /srv/node/
    read only = false
    lock file = /var/lock/container.lock
     
    [object]
    max connections = 2
    path = /srv/node/
    read only = false
    lock file = /var/lock/object.lock

Edit the following line in /etc/default/rsync:

    RSYNC_ENABLE=true

Start the rsync service:

   # service rsync start
   # mkdir -p /var/swift/recon
   # chown -R swift:swift /var/swift/recon

Install and configure the proxy node

The proxy server takes each request and looks up locations for the account, container, or object and routes the requests correctly. The proxy server also handles API requests. You enable account management by configuring it in the /etc/swift/proxy-server.conf file.

Install swift-proxy service:

   # apt-get install swift-proxy memcached python-keystoneclient python-swiftclient python-webob 

Configure memcached to listen on the proxy node's local, non-public network interface. In the /etc/memcached.conf file, change

   -l 127.0.0.1 

to:

   -l PROXY_LOCAL_NET_IP 

Restart the memcached service:

   # service memcached restart 

Create /etc/swift/proxy-server.conf:

   [DEFAULT]
    bind_port = 8080
    user = swift
     
    [pipeline:main]
    pipeline = healthcheck cache authtoken keystoneauth proxy-server
     
    [app:proxy-server]
    use = egg:swift#proxy
    allow_account_management = true
    account_autocreate = true
     
    [filter:keystoneauth]
    use = egg:swift#keystoneauth
    operator_roles = Member,admin,swiftoperator
     
    [filter:authtoken]
    paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
     
    # Delaying the auth decision is required to support token-less
    # usage for anonymous referrers ('.r:*').
    delay_auth_decision = true
     
    # cache directory for signing certificate
    signing_dir = /home/swift/keystone-signing
     
    # auth_* settings refer to the Keystone server
    auth_protocol = http
    auth_host = controller
    auth_port = 35357
     
    # the service tenant and swift username and password created in Keystone
    admin_tenant_name = service
    admin_user = swift
    admin_password = SWIFT_PASS
     
    [filter:cache]
    use = egg:swift#memcache
    memcache_servers = PROXY_LOCAL_NET_IP

    [filter:catch_errors]
    use = egg:swift#catch_errors
     
    [filter:healthcheck]
    use = egg:swift#healthcheck 

Create the account, container, and object rings. The builder command creates a builder file with a few parameters.

    # cd /etc/swift
    # swift-ring-builder account.builder create 18 3 1
    # swift-ring-builder container.builder create 18 3 1
    # swift-ring-builder object.builder create 18 3 1 

The first parameter, 18, is the partition power: the ring will have 2^18 = 262,144 partitions. Set this value based on the total amount of storage you expect the entire ring to use. The value 3 is the number of replicas of each object, and the last value, 1, is the minimum number of hours before a partition can be moved more than once.
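
The arithmetic can be checked in the shell. With 3 replicas spread over the 3 devices used in this guide (one sdb1 per storage node), each device ends up holding roughly partitions × replicas / devices partition replicas; the exact assignment is decided at rebalance time:

```shell
PART_POWER=18
REPLICAS=3
DEVICES=3                                   # sdb1 on each of the three storage nodes
PARTITIONS=$((2 ** PART_POWER))
echo "partitions: $PARTITIONS"              # 262144
echo "per device: $((PARTITIONS * REPLICAS / DEVICES))"
```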

For every storage device on each node add entries to each ring:

    # swift-ring-builder account.builder add zZONE-STORAGE_LOCAL_NET_IP:6002[RSTORAGE_REPLICATION_NET_IP:6005]/DEVICE 100
    # swift-ring-builder container.builder add zZONE-STORAGE_LOCAL_NET_IP_1:6001[RSTORAGE_REPLICATION_NET_IP:6004]/DEVICE 100
    # swift-ring-builder object.builder add zZONE-STORAGE_LOCAL_NET_IP_1:6000[RSTORAGE_REPLICATION_NET_IP:6003]/DEVICE 100

In this guide, every storage node has a partition in zone 1. Their IP addresses are 192.168.0.14, 192.168.0.15, and 192.168.0.16, and no replication network is used. The mount point of each partition is /srv/node/sdb1 and the path in /etc/rsyncd.conf is /srv/node/, so the DEVICE is sdb1 and the commands are:


    # swift-ring-builder account.builder add z1-192.168.0.14:6002/sdb1 100
    # swift-ring-builder container.builder add z1-192.168.0.14:6001/sdb1 100
    # swift-ring-builder object.builder add z1-192.168.0.14:6000/sdb1 100

    # swift-ring-builder account.builder add z1-192.168.0.15:6002/sdb1 100
    # swift-ring-builder container.builder add z1-192.168.0.15:6001/sdb1 100
    # swift-ring-builder object.builder add z1-192.168.0.15:6000/sdb1 100

    # swift-ring-builder account.builder add z1-192.168.0.16:6002/sdb1 100
    # swift-ring-builder container.builder add z1-192.168.0.16:6001/sdb1 100
    # swift-ring-builder object.builder add z1-192.168.0.16:6000/sdb1 100

Verify the ring contents for each ring:

   # cd /etc/swift/

   # swift-ring-builder account.builder
   # swift-ring-builder container.builder
   # swift-ring-builder object.builder 

    account.builder, build version 3
    262144 partitions, 3.000000 replicas, 1 regions, 1 zones, 3 devices, 100.00 balance
    The minimum number of hours before a partition can be reassigned is 1
    Devices:    id  region  zone      ip address  port  replication ip  replication port      name weight partitions balance meta
                 0       1     1    192.168.0.14  6002    192.168.0.14              6002      sdb1 100.00          0 -100.00 
                 1       1     1    192.168.0.15  6002    192.168.0.15              6002      sdb1 100.00          0 -100.00 
                 2       1     1    192.168.0.16  6002    192.168.0.16              6002      sdb1 100.00          0 -100.00 

Rebalance the rings:

    # swift-ring-builder account.builder rebalance
    # swift-ring-builder container.builder rebalance
    # swift-ring-builder object.builder rebalance

Copy the account.ring.gz, container.ring.gz, and object.ring.gz files to each of the Proxy and Storage nodes in /etc/swift.
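
Distributing the ring files can be scripted. A sketch assuming the hostnames from the /etc/hosts entries above and working SSH access between the nodes (the scp command is left commented out so the loop only prints its targets):

```shell
# Node names taken from this guide's /etc/hosts; adjust as needed
copied=0
for node in storage1 storage2 storage3; do
  echo "would copy *.ring.gz to $node:/etc/swift/"
  # scp /etc/swift/*.ring.gz "$node:/etc/swift/"
  copied=$((copied + 1))
done
echo "targets: $copied"
```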

Make sure the swift user owns all configuration files:

   # chown -R swift:swift /etc/swift 

Restart the proxy service:

    # service swift-proxy stop
    # service swift-proxy start

Start services on the storage nodes

    # swift-init all start

Verify the installation

You can run these commands from the proxy server or any server that has access to the Identity Service.

   # swift stat
        Account: AUTH_fe13b472ad9e43e2aa8c71e7cc1c5f7c
     Containers: 0
        Objects: 0
          Bytes: 0
   Content-Type: text/plain; charset=utf-8
    X-Timestamp: 1402842465.82440
    X-Put-Timestamp: 1402842465.82440

Create and upload a test.txt file, then download it back:

    $ echo "this is a test" > test.txt
    $ swift upload myfiles test.txt

    $ swift download myfiles
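
To confirm the round trip preserved the data, compare checksums: Swift stores the MD5 of each object as its ETag, so the downloaded file should hash to the same value as the original. A local sketch of the comparison (cp stands in for the actual download):

```shell
# Simulate the original and the downloaded copy of the test file
echo "this is a test" > test.txt
cp test.txt downloaded.txt              # stands in for: swift download myfiles test.txt
orig=$(md5sum test.txt | awk '{print $1}')
down=$(md5sum downloaded.txt | awk '{print $1}')
[ "$orig" = "$down" ] && echo "checksums match"
```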