# Configure an Object Storage deployment
To configure stand-alone Object Storage for RPCO, you update the following files:

- `/etc/openstack_deploy/openstack_user_config.yml`
- `/etc/openstack_deploy/user_osa_variables_overrides.yml`
- `/etc/openstack_deploy/conf.d/swift.yml`

For direct links to these files, see Object Storage configuration files.
## Update the stand-alone Object Storage configuration files
1. In the `openstack_user_config.yml` file, set the `identity_hosts` and `shared-infra_hosts` entries to the following values to set up the Identity and infrastructure services. The following example shows `node1` through `node3`. The values must be adjusted for all `identity_hosts` and `shared-infra_hosts` entries.

```yaml
# identity_hosts:
#   node1:
#     ip: 192.168.100.1
#   node2:
#     ip: 192.168.100.2
#   node3:
#     ip: 192.168.100.3
# shared-infra_hosts:
#   node1:
#     ip: 192.168.100.1
#   node2:
#     ip: 192.168.100.2
#   node3:
#     ip: 192.168.100.3
```
**Note:** In the `openstack_user_config.yml` file, do not set up any `compute_hosts` or `storage_hosts` entries. When the hosts are not set up in the `openstack_user_config.yml` file, the Ansible playbooks do not deploy them.

2. In the `user_osa_variables_overrides.yml` file, update the following user variables.

**Note:** The `rackspace_cloud` variables are needed only if you are using the Rackspace Monitoring service.

```yaml
# container_openstack_password:
# keystone_auth_admin_password:
# keystone_auth_admin_token:
# keystone_container_mysql_password:
# keystone_service_password:
# memcached_encryption_key:
# mysql_debian_sys_maint_password:
# mysql_root_password:
# rabbitmq_cookie_token:
# rabbitmq_password:
# rackspace_cloud_api_key:
# rackspace_cloud_auth_url:
# rackspace_cloud_password:
# rackspace_cloud_tenant_id:
# rackspace_cloud_username:
# rackspace_cloudfiles_tenant_id:
# rpc_support_holland_password:
# swift_container_mysql_password:
# swift_hash_path_prefix:
# swift_hash_path_suffix:
# swift_service_password:
# swift_dispersion_user:
# swift_dispersion_password:
# service_syslog:
# galera_root_password:
```
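These are deployment-specific secrets. As a minimal, hypothetical sketch (the values below are placeholders, not defaults), a few filled-in entries might look like this:

```yaml
# Placeholder values for illustration only; generate a strong random
# string for each secret in your deployment.
swift_service_password: "8f2d1e6c0a5b4d9e7c3f1a2b"
# The hash path prefix and suffix seed object placement in the rings and
# must not change after the cluster stores data.
swift_hash_path_prefix: "4b3a2c1d0e9f8a7b6c5d4e3f"
swift_hash_path_suffix: "a1b2c3d4e5f60718293a4b5c"
```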
3. In the `/etc/openstack_deploy/conf.d/swift.yml` file, update the `global_overrides` values as shown in the following example. The parameters are explained following the example.

```yaml
# global_overrides:
#   swift:
#     part_power: 8
#     weight: 100
#     min_part_hours: 1
#     repl_number: 3
#     storage_network: 'br-storage'
#     replication_network: 'br-repl'
#     drives:
#       - name: sdc
#       - name: sdd
#       - name: sde
#       - name: sdf
#     mount_point: /mnt
#     account:
#     container:
#     storage_policies:
#       - policy:
#           name: gold
#           index: 0
#           default: True
#       - policy:
#           name: silver
#           index: 1
#           repl_number: 3
#           deprecated: True
```
**part_power**
Set the partition power value based on the total amount of storage that the entire ring will use. Multiply the maximum number of drives used with this Object Storage installation by 100, and round that value up to the closest power of 2. For example, a maximum of 6 drives, times 100, equals 600. The nearest power of 2 above 600 is 2 to the power of 10 (1024), so the partition power is 10. The partition power can't be changed after the Object Storage rings are built.
**weight**
The default weight is 100. If the drives are different sizes, set the weight value to avoid uneven distribution of data. For example, a 1 TB disk would have a weight of 100, while a 2 TB drive would have a weight of 200.
**min_part_hours**
The default value is 1. Set the minimum partition hours to the amount of time to lock a partition's replicas after a partition has been moved. Moving multiple replicas at the same time might make data inaccessible. This value can be set separately in the `swift`, `container`, `account`, and `policy` sections, with the value in other sections superseding the value in the `swift` section.

**Note:** For account and container rings, `min_part_hours` and `repl_number` are the only values that can be set. Setting them in this section overrides the defaults for the specific ring (see the sketch after this parameter list).

**repl_number**
The default value is 3. Set the replication number to the number of replicas of each object. This value can be set separately in the `swift`, `container`, `account`, and `policy` sections, with the value in the other sections superseding the value in the `swift` section.

**Note:** For account and container rings, `min_part_hours` and `repl_number` are the only values that can be set. Setting them in this section overrides the defaults for the specific ring.

**storage_network**
By default, the swift services listen on the default management IP address. Optionally, specify the interface of the storage network.

**Note:** If the `storage_network` value is not set but the per-host `storage_ips` value is set (or the `storage_ips` value is not set on the storage network interface), the proxy server can't connect to the storage services.

**replication_network**
Optionally, specify a dedicated replication network interface so that dedicated replication can be set up. If this value is not specified, no dedicated replication network is set.

**Note:** As with the storage network, if the `repl_ip` value is not set on the replication network interface, replication does not work properly.

**drives**
Set the default drives per host. This is useful when all hosts have the same drives. These values can be overridden for each host.
**mount_point**
Set the `mount_point` value to the location where the swift drives are mounted. For example, with a mount point of `/mnt` and a drive of `sdc`, a drive is mounted at `/mnt/sdc` on the swift host. This value can be overridden for each host.

**storage_policies**
Storage policies determine on which hardware data is stored, how the data is stored across that hardware, and in which region the data resides. Each storage policy must have a unique `name` and a unique `index` value. There must be a storage policy with an index of 0 in the `swift.yml` file to use any legacy containers created before storage policies were instituted.

**default**
Set the `default` value to `yes` for at least one policy. This policy is the default storage policy for any non-legacy containers that are created.

**deprecated**
Set the `deprecated` value to `yes` to turn off a storage policy so that it can no longer be used for new containers.
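A minimal sketch of the per-ring overrides described above (the override values here are hypothetical, not recommendations); only `min_part_hours` and `repl_number` are valid in the `account` and `container` sections:

```yaml
# global_overrides:
#   swift:
#     part_power: 10        # for example, 6 drives x 100 = 600, rounded up to 2^10 = 1024
#     repl_number: 3        # default replica count for all rings
#     min_part_hours: 1     # default partition lock time for all rings
#     account:
#       min_part_hours: 2   # applies to the account ring only
#     container:
#       repl_number: 4      # applies to the container ring only
```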
4. In the `/etc/openstack_deploy/conf.d/swift.yml` file, update the Object Storage proxy hosts values by setting the IP address of the hosts to which Ansible will connect to deploy the swift-proxy containers. The `swift-proxy_hosts` value should match the infra nodes.

```yaml
# swift-proxy_hosts:
#   infra-node1:
#     ip: 192.0.2.1
#   infra-node2:
#     ip: 192.0.2.2
#   infra-node3:
#     ip: 192.0.2.3
```
5. (Optional) In the `/etc/openstack_deploy/conf.d/swift.yml` file, update the Object Storage hosts values as shown in the following example. The parameters are explained following the example.

```yaml
# swift_hosts:
#   swift-node1:
#     ip: 192.0.2.4
#     container_vars:
#       swift_vars:
#         zone: 0
#   swift-node2:
#     ip: 192.0.2.5
#     container_vars:
#       swift_vars:
#         zone: 1
#   swift-node3:
#     ip: 192.0.2.6
#     container_vars:
#       swift_vars:
#         zone: 2
#   swift-node4:
#     ip: 192.0.2.7
#     container_vars:
#       swift_vars:
#         zone: 3
#   swift-node5:
#     ip: 192.0.2.8
#     container_vars:
#       swift_vars:
#         storage_ip: 198.51.100.8
#         repl_ip: 203.0.113.8
#         zone: 4
#         region: 3
#         weight: 200
#         groups:
#           - account
#           - container
#           - silver
#         drives:
#           - name: sdb
#             storage_ip: 198.51.100.9
#             repl_ip: 203.0.113.9
#             weight: 75
#             groups:
#               - gold
#           - name: sdc
#           - name: sdd
#           - name: sde
#           - name: sdf
```
**swift_hosts**
Specify the hosts to be used as the storage nodes. The `ip` value is the address of the host to which Ansible connects. Set the name and IP address of each Object Storage host. The `swift_hosts` section is not required.

**swift_vars**
Contains the values specific to the Object Storage hosts.
**storage_ip and repl_ip**
These values are based on the IP addresses of the host's storage network or replication network. For example, if the storage network is `br-storage` and `node1` has an IP address of 198.51.100.1 on `br-storage`, then that IP address is used for the `storage_ip` value. If only the `storage_ip` value is specified, the `repl_ip` value defaults to the `storage_ip` value. If neither is specified, both default to the host IP address.

**Note:** Overriding these values on a host or drive basis can cause problems if the IP address that the service listens on is based on a specified storage network or replication network and the ring is set to a different IP address.
**zone**
The default is `0`. Optionally, set the Object Storage zone for the ring.

**region**
Optionally, set the Object Storage region for the ring.
**weight**
The default weight is `100`. If the drives are different sizes, set the weight value to avoid uneven distribution of data. You can specify this value on a host or drive basis (if specified on both, the drive setting takes precedence).

**groups**
Set this value to list the rings to which a host’s drive belongs. You can set this value on a per-drive basis, which overrides the host setting.
**drives**
Set the names of the drives on this Object Storage host. At least one name must be specified.

**weight**
The default weight is `100`. If the drives are different sizes, set the weight value to avoid uneven distribution of data. You can specify this value on a host or drive basis (if specified on both, the drive setting takes precedence).
In the following example, `swift-node5` shows values in the `swift_hosts` section that override the global values. Groups are set, which overrides the global settings for drive `sdb`. The weight is overridden for the host and specifically adjusted on drive `sdb`. Also, the `storage_ip` and `repl_ip` values are set differently for `sdb`.

```yaml
# swift-node5:
#   ip: 192.0.2.8
#   container_vars:
#     swift_vars:
#       storage_ip: 198.51.100.8
#       repl_ip: 203.0.113.8
#       zone: 4
#       region: 3
#       weight: 200
#       groups:
#         - account
#         - container
#         - silver
#       drives:
#         - name: sdb
#           storage_ip: 198.51.100.9
#           repl_ip: 203.0.113.9
#           weight: 75
#           groups:
#             - gold
#         - name: sdc
#         - name: sdd
#         - name: sde
#         - name: sdf
```
6. Verify that the `swift.yml` file is in the `/etc/openstack_deploy/conf.d/` folder.

7. (Optional) If you are using the built-in HAProxy as the load balancer, add the following variables to the `user_osa_variables_overrides.yml` file to ensure that only the appropriate load balancer endpoints are set.

```yaml
# haproxy_service_configs:
#   - service:
#       haproxy_service_name: repo_all
#       haproxy_backend_nodes: "{{ groups['pkg_repo'] }}"
#       haproxy_port: 8181
#       haproxy_backend_port: 8181
#       haproxy_balance_type: http
#   - service:
#       haproxy_service_name: galera
#       haproxy_backend_nodes: "{{ [groups['galera_all'][0]] }}"  # list expected
#       haproxy_backup_nodes: "{{ groups['galera_all'][1:] }}"
#       haproxy_port: 3306
#       haproxy_balance_type: tcp
#       haproxy_timeout_client: 5000s
#       haproxy_timeout_server: 5000s
#       haproxy_backend_options:
#         - "mysql-check user {{ galera_monitoring_user }}"
#   - service:
#       haproxy_service_name: swift_proxy
#       haproxy_backend_nodes: "{{ groups['swift_proxy'] }}"
#       haproxy_balance_alg: source
#       haproxy_port: 8080
#       haproxy_balance_type: http
#   - service:
#       haproxy_service_name: keystone_admin
#       haproxy_backend_nodes: "{{ groups['keystone_all'] }}"
#       haproxy_port: 35357
#       haproxy_balance_type: http
#       haproxy_backend_options:
#         - "forwardfor"
#         - "httpchk"
#         - "httplog"
#   - service:
#       haproxy_service_name: keystone_service
#       haproxy_backend_nodes: "{{ groups['keystone_all'] }}"
#       haproxy_port: 5000
#       haproxy_balance_type: http
#       haproxy_backend_options:
#         - "forwardfor"
#         - "httpchk"
#         - "httplog"
```
## Allow Identity users to use Object Storage
Users with the `_member_` role are authorized Identity (keystone) users. You can allow these users to create containers and upload objects to Object Storage by opening the `user_osa_variables_overrides.yml` file for editing and setting `swift_allow_all_users` to `True`.
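For example, a minimal override might look like this:

```yaml
# In /etc/openstack_deploy/user_osa_variables_overrides.yml
swift_allow_all_users: True
```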
**Note:** If this value is `False`, then by default, only users with the `admin` or `swiftoperator` role are allowed to create containers or manage projects.

When the back-end type for the Image service (glance) is set to `swift`, the Image service can access the Object Storage cluster regardless of whether this value is `True` or `False`.
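As a hypothetical sketch of the setting that the preceding paragraph refers to (the exact variable can vary by release, so treat this as an assumption to verify against your deployment), the Image service back-end type is typically selected with a user variable such as:

```yaml
# In /etc/openstack_deploy/user_osa_variables_overrides.yml
glance_default_store: swift
```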