Apache CloudStack Cloud Operation
This document presents the concept of cloud computing environments, in
the context of private clouds, and an introduction to the Apache CloudStack cloud
orchestration platform, focusing on its operation process.
Last update: February 13, 2025
Revision:
Contents
1 Introduction 19
1.1 Private cloud platforms . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.1.1 Basic functionalities . . . . . . . . . . . . . . . . . . . . . . . 20
1.2 Apache CloudStack history . . . . . . . . . . . . . . . . . . . . . . . 22
2 Apache CloudStack basic concepts 23
2.1 Control Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.2 Compute Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.3 Data Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.4 Logical organization of Apache CloudStack . . . . . . . . . . . . . . 25
2.5 Apache CloudStack components . . . . . . . . . . . . . . . . . . . . 26
3 Apache CloudStack functionalities 28
3.1 Home dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.2 VM settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.2.1 General settings . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.2.2 Miscellaneous settings for internal usage . . . . . . . . . . . 30
3.2.3 VM settings specific for KVM . . . . . . . . . . . . . . . . . . 30
3.2.4 VM settings specific for VMware . . . . . . . . . . . . . . . . 30
3.2.5 Settings specific for XenServer internal use . . . . . . . . . . 31
3.2.6 Settings specific for Mac OS X - Guest . . . . . . . . . . . . 31
3.2.7 Global settings for VMs . . . . . . . . . . . . . . . . . . . . . 31
3.2.7.1 User access restriction . . . . . . . . . . . . . . . . 31
3.2.7.2 Extra settings metadata . . . . . . . . . . . . . . . . 32
3.2.7.3 VM statistics retention . . . . . . . . . . . . . . . . 33
3.3 Volume management . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.3.1 Volume migration with stopped VM (cold migration) . . . . 35
3.3.2 Volume migration with running VMs (hot migration) . . . . 36
3.3.3 Volume migration via CLI . . . . . . . . . . . . . . . . . . . . 37
3.3.4 Importing a volume to another zone or account . . . . . . . 39
3.4 Virtual router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
3.4.1 Stopping the virtual router . . . . . . . . . . . . . . . . . . . 42
3.4.2 Restarting the virtual router . . . . . . . . . . . . . . . . . . 43
3.4.3 Destroying a virtual router . . . . . . . . . . . . . . . . . . . 44
3.4.4 Default offering definition for virtual router . . . . . . . . . 44
3.5 Public IPs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.5.1 Public IP reserved for system VMs . . . . . . . . . . . . . . . 46
3.6 IPv6 support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.6.1 Isolated networks and VPC tiers . . . . . . . . . . . . . . . 46
3.6.2 Shared networks . . . . . . . . . . . . . . . . . . . . . . . . . 52
3.7 Host, storage and network tags . . . . . . . . . . . . . . . . . . . . . 54
3.7.1 Host tags . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.7.2 Storage tags . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.7.3 Network tags . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.7.4 Flexible tags . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.8 Managing instance deployment based on their operating system . 63
3.8.1 Hosts Preference OS . . . . . . . . . . . . . . . . . . . . . . . 63
3.8.2 Flexible guest OS . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.9 Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.9.1 Snapshot settings . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.10 Event and alert audit . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.10.1 Alert e-mails . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
3.10.2 Searching and filtering alerts . . . . . . . . . . . . . . . . . . 68
3.10.3 Removing or archiving alerts . . . . . . . . . . . . . . . . . . 68
3.10.4 Event and alert removal automation . . . . . . . . . . . . . 69
3.11 Service offerings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
3.11.1 Compute offerings . . . . . . . . . . . . . . . . . . . . . . . . 69
3.12 Network offerings - throttling . . . . . . . . . . . . . . . . . . . . . . 70
3.12.1 System offerings . . . . . . . . . . . . . . . . . . . . . . . . . 71
3.12.1.1 Creating system offerings . . . . . . . . . . . . . . 72
3.12.1.2 Editing system offerings . . . . . . . . . . . . . . . 74
3.12.1.3 Removing system offerings . . . . . . . . . . . . . . 75
3.12.1.4 Changing the system offering of a system VM . . . 76
3.12.2 Backup offerings . . . . . . . . . . . . . . . . . . . . . . . . . 76
3.12.2.1 Enabling backup offerings . . . . . . . . . . . . . . 77
3.12.2.2 Importing backup offerings . . . . . . . . . . . . . 78
3.12.2.3 Backup offering removal . . . . . . . . . . . . . . . 80
3.12.2.4 Using backup offerings . . . . . . . . . . . . . . . . 80
3.12.3 IOPS and BPS limitation in disk offerings . . . . . . . . . . . 83
3.13 Storage management within Apache CloudStack . . . . . . . . . . 87
3.13.1 Primary storage . . . . . . . . . . . . . . . . . . . . . . . . . . 87
3.13.1.1 Adding a primary storage . . . . . . . . . . . . . . . 88
3.13.1.2 Disabling a primary storage . . . . . . . . . . . . . 91
3.13.1.3 Maintenance mode for primary storage . . . . . . 92
3.13.1.4 Behaviour after restarting hosts . . . . . . . . . . . 92
3.13.1.5 Local storage usage . . . . . . . . . . . . . . . . . . 93
3.13.2 Secondary storage . . . . . . . . . . . . . . . . . . . . . . . . 98
3.13.2.1 Adding secondary storages . . . . . . . . . . . . . . 99
3.13.2.2 Data migration between secondary storages . . . 100
3.13.2.3 Read-only mode for secondary storage . . . . . . 101
3.13.2.4 Read-write mode for secondary storage . . . . . . 102
3.13.2.5 Secondary storage removal . . . . . . . . . . . . . 103
3.14 Resource allocation for secondary storage . . . . . . . . . . . . . . 103
4 Apache CloudStack settings 108
4.1 Settings scopes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.2 Global settings that control the primary storage usage . . . . . . . 109
4.3 Settings for limiting resources . . . . . . . . . . . . . . . . . . . . 111
4.4 Settings that control Kubernetes usage . . . . . . . . . . . . . . . . 111
4.4.1 Enabling Kubernetes integration . . . . . . . . . . . . . . . . 111
4.4.2 Kubernetes cluster creation . . . . . . . . . . . . . . . . . 112
5 UI customization 113
5.1 Changing logo and other elements . . . . . . . . . . . . . . . . . . . 113
5.2 Changing logo when resizing the page . . . . . . . . . . . . . . . . . 115
5.3 Theme management . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
5.3.1 Theme creation . . . . . . . . . . . . . . . . . . . . . . . . . . 117
5.3.2 Theme list . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
5.3.3 Updating a theme . . . . . . . . . . . . . . . . . . . . . . . . 120
5.3.3.1 CSS field . . . . . . . . . . . . . . . . . . . . . . . . . 121
5.3.3.2 JSON settings . . . . . . . . . . . . . . . . . . . . . . 121
5.3.4 Removing a theme . . . . . . . . . . . . . . . . . . . . . . . . 123
5.3.5 Common UI customization examples . . . . . . . . . . . . . 123
5.3.5.1 Creating themes with external stylization file . . . 123
5.3.5.2 Notes about style conflicts . . . . . . . . . . . . . . 123
5.3.5.3 Adding fonts . . . . . . . . . . . . . . . . . . . . . . 124
5.3.5.4 Using CSS variables . . . . . . . . . . . . . . . . . . 124
5.3.5.5 Login page . . . . . . . . . . . . . . . . . . . . . . . 124
5.3.5.6 Header stylization . . . . . . . . . . . . . . . . . . . 126
5.3.5.7 Sidebar stylization . . . . . . . . . . . . . . . . . . . 127
5.3.5.8 Cards and dashboard graphs stylization . . . . . . 129
5.3.5.9 Listings and links stylization . . . . . . . . . . . . . 130
5.4 Redirection to external links . . . . . . . . . . . . . . . . . . . . . . . 131
6 Resources consumption accounting 133
6.1 Usage Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
6.1.1 Usage Server setup . . . . . . . . . . . . . . . . . . . . . . . . 133
6.1.2 Usage type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
6.2 Quota . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
6.2.1 Quota setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
6.2.2 Tariffs management . . . . . . . . . . . . . . . . . . . . . . . 139
6.2.2.1 Creating tariffs . . . . . . . . . . . . . . . . . . . . . 140
6.2.2.2 Tariff details . . . . . . . . . . . . . . . . . . . . . . 142
6.2.2.3 Editing tariffs . . . . . . . . . . . . . . . . . . . . . . 142
6.2.2.4 Removing tariffs . . . . . . . . . . . . . . . . . . . . 143
6.2.3 Activation rules . . . . . . . . . . . . . . . . . . . . . . . . . . 143
6.2.3.1 Default presets for all resource types . . . . . . . . 145
6.2.3.2 Presets for the RUNNING_VM type . . . . . . . . . 147
6.2.3.3 Presets for the ALLOCATED_VM type . . . . . . . 148
6.2.3.4 Presets for the VOLUME type . . . . . . . . . . . . 149
6.2.3.5 Presets for the TEMPLATE and ISO type . . . . . . 149
6.2.3.6 Presets for the SNAPSHOT type . . . . . . . . . . . 150
6.2.3.7 Presets for the NETWORK_OFFERING type . . . . . 151
6.2.3.8 Presets for the VM_SNAPSHOT type . . . . . . . . 151
6.2.3.9 Presets for the BACKUP type . . . . . . . . . . . . . 151
6.2.3.10 Presets for the NETWORK USAGE type . . . . . . . 152
6.2.3.11 Presets for the BACKUP OBJECT type . . . . . . . . 152
6.2.3.12 Verifying presets via API . . . . . . . . . . . . . . . 152
6.2.3.13 Presets for the other resources . . . . . . . . . . . 152
6.2.3.14 Expressions examples . . . . . . . . . . . . . . . . . 153
6.2.4 Credits management . . . . . . . . . . . . . . . . . . . . . . . 156
6.2.4.1 Adding/removing credits . . . . . . . . . . . . . . . 157
6.2.5 Active accounts . . . . . . . . . . . . . . . . . . . . . . . . . . 158
6.2.6 Managing e-mail templates from Quota . . . . . . . . . . . 159
6.2.6.1 Notes about using the Quota plugin . . . . . . . . 161
7 Operation 162
7.1 Debugging issues and troubleshooting process . . . . . . . . . . . 162
7.1.1 Debugging via logs . . . . . . . . . . . . . . . . . . . . . . . . 162
7.1.2 Debugging via web interface . . . . . . . . . . . . . . . . . . 162
7.1.3 Debugging network problems . . . . . . . . . . . . . . . . . 163
7.1.4 Log files path . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
7.1.5 Log level increase on Management Servers and KVM Agents 164
7.1.6 Troubleshooting process . . . . . . . . . . . . . . . . . . . . 167
7.2 Host failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
7.3 Apache CloudStack services management . . . . . . . . . . . . . . 175
7.3.1 Managing the cloudstack-management . . . . . . . . . . . . 175
7.3.2 Managing cloudstack-agent (for KVM hypervisor) . . . . . . 176
7.3.3 Managing cloudstack-usage . . . . . . . . . . . . . . . . . . . 178
7.4 System VMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
7.4.1 Console Proxy Virtual Machine . . . . . . . . . . . . . . . . . 179
7.4.2 Secondary Storage Virtual Machine . . . . . . . . . . . . . . 180
7.4.3 Virtual Router . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
7.4.3.1 Virtual Router health checks . . . . . . . . . . . . . . 180
7.4.4 Accessing the system VMs . . . . . . . . . . . . . . . . . . . . 184
7.4.5 Randomizing the system VMs passwords . . . . . . . . . . . 184
7.4.6 URL for CPVM and SSVM consumption . . . . . . . . . . . . 185
7.5 Enabling VM computational resource increases . . . . . . . . . . . 186
7.6 Overprovisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
7.7 Updating the Apache CloudStack . . . . . . . . . . . . . . . . . . . . 189
7.7.1 Major versions updates . . . . . . . . . . . . . . . . . . . . . 189
7.7.2 Updates within the same major version . . . . . . . . . . . 191
7.8 SSL certificate update in the environment (nginx and ACS) . . . . . 191
7.8.1 Root and intermediary certificates extraction . . . . . . . . 192
7.8.2 Key conversion to PKCS#8 . . . . . . . . . . . . . . . . . . 192
7.8.3 Adding certificates in nginx . . . . . . . . . . . . . . . . . . . 192
7.8.4 Adding certificates in the Apache CloudStack . . . . . . . . 192
7.9 SSH key pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
8 KVM virtualizer 200
8.1 KVM installation and CloudStack Agent . . . . . . . . . . . . . . . . 201
8.2 KVM and CloudStack Agent setup . . . . . . . . . . . . . . . . . . . 202
8.3 KVM operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 204
8.4 KVM’s CPU topology . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
8.5 CPU control with KVM . . . . . . . . . . . . . . . . . . . . . . . . . . 206
9 VMware virtualizer 209
9.1 Creating datastores . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
9.2 Installing the ESXi hosts . . . . . . . . . . . . . . . . . . . . . . . . . 210
9.2.1 Default ESXi hosts installation . . . . . . . . . . . . . . . . . 211
9.2.2 ESXi hosts basic settings . . . . . . . . . . . . . . . . . . . . . 214
9.2.3 Advanced ESXi hosts settings . . . . . . . . . . . . . . . . . . 219
9.2.3.1 Adding new IPs . . . . . . . . . . . . . . . . . . . . . 220
9.2.3.2 Adding new datastores . . . . . . . . . . . . . . . . 222
9.2.3.3 Adding the license key . . . . . . . . . . . . . . . . 224
9.3 vCenter installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
9.3.1 Adding license key . . . . . . . . . . . . . . . . . . . . . . . . 235
9.3.2 Adding multiple ESXi hosts . . . . . . . . . . . . . . . . . . . 237
9.3.3 Removing the Linux graphical interface . . . . . . . . . . . . 241
9.4 Adding VMware cluster in the Apache CloudStack . . . . . . . . . . 242
9.5 Problems when adding a VMware cluster . . . . . . . . . . . . . . . 247
9.6 Importing VMware VM to the Apache CloudStack . . . . . . . . . . 251
9.6.1 Importing VMs via UI . . . . . . . . . . . . . . . . . . . . . . . 251
9.6.2 Importing VMs via API . . . . . . . . . . . . . . . . . . . . . . 254
10 Conclusion 256
Appendix A Terminology 258
List of Figures
1 Cloud computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2 Example of a fault tolerant Apache CloudStack architecture . . . . 23
3 Logical organization of Apache CloudStack . . . . . . . . . . . . . . 26
4 Home dashboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5 VM settings tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
6 Global settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
7 Example of the query result with the total size of the vm_stats
table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
8 Migrating volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
9 Confirming operation . . . . . . . . . . . . . . . . . . . . . . . . . . 36
10 Migrating the VM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
11 Configuring the migration . . . . . . . . . . . . . . . . . . . . . . . . 37
12 Getting to the virtual routers tab . . . . . . . . . . . . . . . . . . . . 41
13 Verifying if the VR were correctly created . . . . . . . . . . . . . . . 41
14 Browsing to the virtual routers . . . . . . . . . . . . . . . . . . . . . 42
15 Stopping the virtual router . . . . . . . . . . . . . . . . . . . . . . . 42
16 Confirming operation . . . . . . . . . . . . . . . . . . . . . . . . . . 43
17 Confirming that the virtual router stopped . . . . . . . . . . . . . . 43
18 Restarting the virtual router . . . . . . . . . . . . . . . . . . . . . . . 43
19 Destroying the virtual router . . . . . . . . . . . . . . . . . . . . . . 44
20 Confirming operation . . . . . . . . . . . . . . . . . . . . . . . . . . 44
21 Confirming virtual router removal . . . . . . . . . . . . . . . . . . . 44
22 Steps to define the default offering for the virtual router . . . . . . 45
23 Beginning to add the IPv6 interval for the public network . . . . . 47
24 Adding an IPv6 interval for the public network . . . . . . . . . . . . 47
25 A /52 prefix allows 4096 IPv6 subnetworks in /64 block . . . . . . . 47
26 Beginning to add the IPv6 interval for the guest network . . . . . . 48
27 Adding an IPv6 interval for the guest network . . . . . . . . . . . . 48
28 Creating VPC offering with IPv6 support . . . . . . . . . . . . . . . . 49
29 Creating offering for VPC tiers with IPv6 support . . . . . . . . . . 49
30 VM’s IPv6 autoconfiguration validation (via SLAAC) . . . . . . . . . 50
31 Display via UI of the routes that must be added to the edge
router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
32 Display via API of the routes that must be added to the edge
router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
33 Creating an isolated network offering with IPv6 support . . . . . . 51
34 VM’s IPv6 autoconfiguration validation (via SLAAC) . . . . . . . . . 51
35 Display via UI of the routes that must be added to the edge
router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
36 Display via API of the routes that must be added to the edge
router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
37 Shared network creation from IPv4 info . . . . . . . . . . . . . . . . 53
38 Shared network creation from IPv6 info . . . . . . . . . . . . . . . . 53
39 Field for updating storage tags and host tags of a compute offering 54
40 Accessing the host editing . . . . . . . . . . . . . . . . . . . . . . . . 61
41 Creating the flexible tag in the host . . . . . . . . . . . . . . . . . . 61
42 Accessing the primary storage editing . . . . . . . . . . . . . . . . . 62
43 Creating the flexible tag in the primary storage . . . . . . . . . . . 63
44 Access to the host to be configured . . . . . . . . . . . . . . . . . . 64
45 Access to the host’s editing form . . . . . . . . . . . . . . . . . . . . 64
46 Host’s Preference OS configurations . . . . . . . . . . . . . . . . . . 65
47 Guest OS rule configuration for the host . . . . . . . . . . . . . . . 65
48 Accessing alerts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
49 Archiving or deleting an alert . . . . . . . . . . . . . . . . . . . . . . 69
50 Default system offerings . . . . . . . . . . . . . . . . . . . . . . . . . 72
51 Starting to create a new system offering . . . . . . . . . . . . . . . 72
52 Creating a new system offering - 1 - continues . . . . . . . . . . . . 73
53 Creating a new system offering - 2 . . . . . . . . . . . . . . . . . . . 73
54 Starting to edit the system offering . . . . . . . . . . . . . . . . . . 74
55 Editing the system offering . . . . . . . . . . . . . . . . . . . . . . . 74
56 Changing system offering order . . . . . . . . . . . . . . . . . . . . 75
57 Starting the system offering removal . . . . . . . . . . . . . . . . . 75
58 Removing a system offering . . . . . . . . . . . . . . . . . . . . . . . 75
59 Backup offering tab . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
60 Starting to import a new backup offering . . . . . . . . . . . . . . . 79
61 Importing a new backup offering . . . . . . . . . . . . . . . . . . . . 79
62 Starting backup offering removal . . . . . . . . . . . . . . . . . . . . 80
63 Removing the backup offering . . . . . . . . . . . . . . . . . . . . . 80
64 Starting to assign a VM to a backup offering . . . . . . . . . . . . . 80
65 Assigning a VM to a backup offering . . . . . . . . . . . . . . . . . . 81
66 Starting manual backup for a VM . . . . . . . . . . . . . . . . . . . . 81
67 Performing a manual backup for a VM . . . . . . . . . . . . . . . . 81
68 Starting backup scheduling . . . . . . . . . . . . . . . . . . . . . . . 81
69 Scheduling backups . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
70 Starting to remove backup offering from the VM . . . . . . . . . . 82
71 Removing backup offering from the VM . . . . . . . . . . . . . . . . 82
72 Selecting a QoS type . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
73 Limiting the BPS rate . . . . . . . . . . . . . . . . . . . . . . . . . . 85
74 Accessing the primary storage addition menu . . . . . . . . . . . . 89
75 Details for adding a primary storage . . . . . . . . . . . . . . . . . . 89
76 Adding a primary storage with NFS . . . . . . . . . . . . . . . . . . 90
77 Adding a primary storage with Shared Mount Point . . . . . . . . . 91
78 Disabling a primary storage . . . . . . . . . . . . . . . . . . . . . . . 92
79 Enabling the maintenance mode . . . . . . . . . . . . . . . . . . . . 92
80 Enabling the usage of local storage for system VMs . . . . . . . . . 93
81 Accessing the zone to be edited . . . . . . . . . . . . . . . . . . . . 94
82 Accessing the zone editing menu . . . . . . . . . . . . . . . . . . . . 94
83 Enabling the usage of local storage for user’s VMs . . . . . . . . . . 95
84 Adding a compute offering with highlight in their Storage Type . . 96
85 Adding a disk offering with highlight in the Storage Type . . . . . . 97
86 Cold volume migration to local storage . . . . . . . . . . . . . . . . 98
87 Adding a new secondary storage . . . . . . . . . . . . . . . . . . . . 100
88 Details when adding a new secondary storage . . . . . . . . . . . 100
89 Migrating data between secondary storages . . . . . . . . . . . . . 101
90 Details for migrating data between secondary storages . . . . . . 101
91 Defining a secondary storage as read-only . . . . . . . . . . . . . . 102
92 Defining a secondary storage as read-write . . . . . . . . . . . . . . 102
93 Deleting a secondary storage . . . . . . . . . . . . . . . . . . . . . . 103
94 Confirming secondary storage removal . . . . . . . . . . . . . . . . 103
95 Global setting endpoint.url . . . . . . . . . . . . . . . . . . . . . . . 112
96 Customized banner . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
97 Customized footer . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
98 Whole logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
99 Cut logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
100 Mini logo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
101 Customized login page . . . . . . . . . . . . . . . . . . . . . . . . . . 126
102 Stylized header . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
103 Stylized sidebar . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
104 Stylized closed sidebar . . . . . . . . . . . . . . . . . . . . . . . . . . 129
105 Stylized dashboard on root admin account . . . . . . . . . . . . . . 130
106 Stylized dashboard on user account . . . . . . . . . . . . . . . . . . 130
107 Stylized listing and links . . . . . . . . . . . . . . . . . . . . . . . . . 131
108 Link with an icon attribute . . . . . . . . . . . . . . . . . . . . . . . . 132
109 Accessing the settings . . . . . . . . . . . . . . . . . . . . . . . . . . 134
110 Editing the settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
111 Quota plugin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
112 List of active tariffs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
113 Listing filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
114 Tariff creation form . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
115 Tariff details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
116 Possible actions for active tariffs . . . . . . . . . . . . . . . . . . . . 142
117 Tariff editing form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
118 Removing a tariff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
119 List of active accounts . . . . . . . . . . . . . . . . . . . . . . . . . . 157
120 Form for adding/removing credits . . . . . . . . . . . . . . . . . . . 157
121 List of active accounts . . . . . . . . . . . . . . . . . . . . . . . . . . 158
122 Listing filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
123 List of e-mail templates from Quota . . . . . . . . . . . . . . . . . . 160
124 Editing the e-mail template . . . . . . . . . . . . . . . . . . . . . . . 160
125 Quota state for admin accounts and custom-account . . . . . . . . 161
126 VM created through the UI . . . . . . . . . . . . . . . . . . . . . . . 168
127 Identifying the command sent. . . . . . . . . . . . . . . . . . . . . . 168
128 Identifying the logid for the desired process. . . . . . . . . . . . . . 169
129 Console Proxy Virtual Machine . . . . . . . . . . . . . . . . . . . . . . 179
130 Secondary Storage Virtual Machine . . . . . . . . . . . . . . . . . . . 180
131 Virtual Router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
132 Health checks display for a certain VR. . . . . . . . . . . . . . . . . . 181
133 Adding SSL certificate via UI . . . . . . . . . . . . . . . . . . . . . . . 193
134 Accessing the SSH key pairs sections . . . . . . . . . . . . . . . . . . 194
135 Creating a SSH key pair . . . . . . . . . . . . . . . . . . . . . . . . . . 195
136 SSH key pair automatic creation . . . . . . . . . . . . . . . . . . . . . 196
137 Creating the SSH key . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
138 Creating instance and implementing the SSH key . . . . . . . . . . 199
139 KVM virtualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
140 Virtualizers comparison . . . . . . . . . . . . . . . . . . . . . . . . . 201
141 Output example for the command virsh schedinfo . . . . . . . . . 207
142 Initial ESXi installation screen . . . . . . . . . . . . . . . . . . . . . . 211
143 Accept terms of use . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
144 Choosing the disk to install the system . . . . . . . . . . . . . . . . 212
145 Selecting the keyboard layout . . . . . . . . . . . . . . . . . . . . . . 212
146 Creating the root password . . . . . . . . . . . . . . . . . . . . . . . 212
147 Potential warning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
148 Confirm installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
149 Host installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
150 Restarting the host . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
151 Initial login . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
152 Root user is the default, and its password is the same set during
installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
153 Section Configure Management Network . . . . . . . . . . . . . . . . 215
154 Section IPv4 Configuration . . . . . . . . . . . . . . . . . . . . . . . . 216
155 Static IPv4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
156 Applying changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
157 Section Troubleshooting Options . . . . . . . . . . . . . . . . . . . . . 217
158 Shell access on the host . . . . . . . . . . . . . . . . . . . . . . . . . 218
159 SSH access on the host . . . . . . . . . . . . . . . . . . . . . . . . . . 218
160 URL for accessing the web interface . . . . . . . . . . . . . . . . . . 219
161 Login screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
162 Available NICs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
163 Virtual switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
164 Configuring the new virtual switch . . . . . . . . . . . . . . . . . . . 221
165 VM Kernel NICs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
166 Configuring the new VM Kernel NIC . . . . . . . . . . . . . . . . . . 222
167 Datastores . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
168 Datastore types supported. . . . . . . . . . . . . . . . . . . . . . . . 223
169 Configuring the new datastore . . . . . . . . . . . . . . . . . . . . . 223
170 Finish the datastore setup . . . . . . . . . . . . . . . . . . . . . . . . 224
171 Accessing the license . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
172 Verifying if the license is valid . . . . . . . . . . . . . . . . . . . . . . 225
173 Adding the license . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
174 Possible operation types . . . . . . . . . . . . . . . . . . . . . . . . . 227
175 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227
176 Terms of use . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
177 Available deploy types . . . . . . . . . . . . . . . . . . . . . . . . . . 228
178 Adding ESXi host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
179 Adding the root password . . . . . . . . . . . . . . . . . . . . . . . . 229
180 Infrastructure size . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
181 Available datastores . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
182 Configuring the vCenter network . . . . . . . . . . . . . . . . . . . . 231
183 Finish configuring the vCenter . . . . . . . . . . . . . . . . . . . . . 231
184 Start configuring the PSC . . . . . . . . . . . . . . . . . . . . . . . . 232
185 Applying appliance basic settings . . . . . . . . . . . . . . . . . . . . 232
186 Creating and setting up a new SSO . . . . . . . . . . . . . . . . . . . 233
187 VMware improvement program . . . . . . . . . . . . . . . . . . . . 233
188 Finishing the setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . 234
189 Finishing the installation . . . . . . . . . . . . . . . . . . . . . . . . . 234
190 vSphere home screen . . . . . . . . . . . . . . . . . . . . . . . . . . 235
191 Licenses screen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
192 Adding the license . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
193 Changing the license name . . . . . . . . . . . . . . . . . . . . . . . 236
194 Finish adding the license . . . . . . . . . . . . . . . . . . . . . . . . . 237
195 Starting to add a new datacenter/folder . . . . . . . . . . . . . . . . 238
196 Naming the new datacenter . . . . . . . . . . . . . . . . . . . . . . . 238
197 Adding a new host to the datacenter . . . . . . . . . . . . . . . . . 239
198 Adding the IP to the host . . . . . . . . . . . . . . . . . . . . . . . . . 239
199 User and password of the host . . . . . . . . . . . . . . . . . . . . . 239
200 Host details overview . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
201 Host license . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
202 Host lockdown mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
203 VM location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
204 Final details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
205 Adding a VMware datacenter . . . . . . . . . . . . . . . . . . . . . . 242
206 Configuring a VMware datacenter in Apache CloudStack . . . . . . 243
207 Accessing the physical networks details in Apache CloudStack . . 243
208 Updating networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
209 Adding the VMware network mapping in the Apache CloudStack . 246
210 Configuring the VMware cluster in Apache CloudStack . . . . . . . 247
211 Tag error message on the UI when trying to add the vCluster . . 248
212 vCenter zone selection . . . . . . . . . . . . . . . . . . . . . . . . . . 249
213 Removing the cloud.zone attribute . . . . . . . . . . . . . . . . . . . 250
214 Selecting and erasing the vCluster from the database . . . . . . . 251
215 Accessing the Tools view to import the VMs . . . . . . . . . . . . . . 252
216 Checking VMware cluster VMs . . . . . . . . . . . . . . . . . . . . . 252
217 Importing a VM to the Apache CloudStack . . . . . . . . . . . . . . 253
218 Details for VM import . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
219 VM successfully imported . . . . . . . . . . . . . . . . . . . . . . . . 254
List of Tables
1 createGuiTheme parameters . . . . . . . . . . . . . . . . . . . . . . 118
2 listGuiThemes parameters . . . . . . . . . . . . . . . . . . . . . . . 119
3 updateGuiTheme parameters . . . . . . . . . . . . . . . . . . . . . . 120
4 jsonconfiguration attributes . . . . . . . . . . . . . . . . . . . . . . . 122
5 jsonconfiguration plugin attributes . . . . . . . . . . . . . . . . . . 122
6 Usage types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
Notes
This document doesn’t cover all the topics, subjects and use cases for Apache
CloudStack. Therefore, in case you don’t find what you’re looking for, submit
a review or inclusion request through GitLab.
This document is periodically updated, so stay tuned for new updates.
1. Introduction
During the last few years, there has been huge growth in the usage of cloud
computing (private, public and/or hybrid), intensifying opportunities in the dig-
ital services market. To supply this demand, the infrastructure needed to create
and maintain this kind of environment grows constantly, directly impacting
management and operation costs.
Cloud computing environments are complex and heterogeneous, possess-
ing dynamic variables and many components that must be intertwined and man-
aged. To manage this kind of environment efficiently, it is necessary to use
tools capable of orchestrating the infrastructure, helping during the
implementation, maintenance and management of cloud services and systems.
Apache CloudStack and OpenStack are the main open-source alternatives
for creating private cloud environments, highlighted by their robustness, project
maturity and the range of technologies and functionalities they support. Such
characteristics translate into hundreds of companies adopting these platforms
around the world, including USP, UNICAMP, Dell, Deutsche Telekom,
Apple, Planetel, Locaweb and many others.
SC Clouds comes in aid of this need, providing consultancy, support and con-
tinued development to cloud infrastructure providers and companies that have
cloud technology as the main pillar for creating and providing services. The
planning and execution of cloud computing environments that provide infra-
structure as a service is carried out in partnership with clients. Our profes-
sionals have full mastery of the Apache CloudStack and OpenStack platforms,
being responsible for various new features and bug fixes in these projects. Cur-
rently, SC Clouds helps with management, planning, deployment, support, bug
fixing and implementing new functionalities for companies in Germany, Brazil,
Italy, Mexico and Switzerland.
1.1. Private cloud platforms
Cloud orchestration platforms, such as Apache CloudStack and OpenStack,
are used to provide private, public and hybrid clouds in companies and institu-
tions around the globe. They are capable of connecting different computational
systems (storage, network and virtualizers), creating the infrastructure-as-a-service
abstraction for the system’s users and administrators.
Both OpenStack and CloudStack are free, open-source software for cloud
environment orchestration. Both share the goal of managing computational
resources, working with the virtualization concept to provide resources on de-
mand, focusing on the provision of infrastructure-as-a-service resources.
Figure 1: Cloud computing
1.1.1. Basic functionalities
Both CloudStack and OpenStack offer the basic functionalities needed for the
provision of cloud computing services. Some of the provided functionalities are:
Computing services
VM provisioning and management of their resources (networks, mem-
ory, CPU, disks, etc.);
Management of the images from which the VMs will be created;
Support for multiple hypervisors, like KVM, XenServer and VMware.
Network services
Management and configuration of shared and dedicated networks;
Creation of VLANs, VXLANs (overlay networks) and other topologies;
Definition of access and routing policies;
Management of QoS, load balancers and firewalls.
Storage services
Disk management;
Block- and object-based storage;
Backups;
SDS (Software Defined Storage) support, e.g. Ceph/RBD.
Resource monitoring services
Event monitoring;
Display of available and used resources;
Pricing and billing.
Open REST APIs, standardized and documented, to manage all services,
their automation and integration with other tools;
Authentication service with native LDAP integration;
Support for SAML2 and OpenID Connect federated systems;
Containers-as-a-service.
Within the basic functionalities of a cloud orchestration system, Apache Cloud-
Stack has limitations regarding federated systems, since it supports only
the SAML2 protocol.
1.2. Apache CloudStack history
The CloudStack project started in 2008 as a startup project known as VMOps.
The company changed its name to Cloud.com and released a stable CloudStack
version in 2010, under the GNU GPLv3 license (General Public License version
3). In 2011, Cloud.com was acquired by Citrix and in 2012 the project was do-
nated to the Apache foundation. Over the last few years, CloudStack has grown
and become a reference orchestration tool, being used by many organi-
zations, such as Globo, RNP, UNICAMP, USP, Locaweb, CGEE, Apple, Dell, Nokia
and Disney.
2. Apache CloudStack basic concepts
This section aims to present the basic concepts of Apache CloudStack,
its logical organization and the components that make up the orchestrator.
The CloudStack architecture is prepared for both vertical scaling (increasing a
machine’s resources) and horizontal scaling (adding new machines, either Man-
agement Servers or CloudStack Agents).
Figure 2: Example of a fault tolerant Apache CloudStack architecture
A minimal Apache CloudStack deployment consists of a Control Plane,
a Compute Plane and a Data Plane.
2.1. Control Plane
The Control Plane refers to the layer composed of a group of services re-
sponsible for keeping the cloud’s available resources under control, such as physical
servers, storages and network switches. It is also on the Control Plane that the
configuration and management of virtual networks, routing and load balancers
are performed. Among the basic components for deploying the Control Plane
are:
Management Server: responsible for managing and orchestrating the avail-
able resources in the infrastructure, besides providing the
REST APIs and a management web interface. It can be duplicated/repli-
cated to provide high availability for the API and UI; however, a web proxy
is necessary for this setup, with sticky sessions for the UI.
Load balancer: CloudStack allows the use of a load balancer that provides
a virtual IP to distribute load between the Management Servers
and allow the use of sticky sessions. It is the environment administrator’s
responsibility to create the load balancer rules for the Management Servers
based on the environment’s needs.
Database: CloudStack uses various settings that define how the cloud will
be managed. These settings are stored in a database (MariaDB or MySQL),
which can be handled by admin users¹.
Galera Cluster: in the context of database clustering, Galera Cluster is a
virtually synchronous, multi-primary replication cluster. By offering syn-
chronous replication and allowing reads and writes on any clus-
ter node, Galera comes as a way to ensure fault tolerance and consistency
between all database nodes.
Usage Server (optional): aside from the essential components for deploy-
ment, the Usage Server is an optional service responsible for processing
resource consumption events in the infrastructure, making it possible for
external agents to charge for these events. This topic will be detailed in
section 6.1.
2.2. Compute Plane
The Compute Plane is the structure responsible for sustaining the infras-
tructure offered as a service, acting during computational resource allocation
and workload distribution between the available hosts in the environment. It is the
Compute Plane that deals with creating, migrating and monitoring the VMs.
Physical servers destined for this service must support virtualization, since it is
where the created VMs will be allocated.
CloudStack Agent: only necessary when using the KVM hypervisor, the
CloudStack Agent performs the connection between the Management Server
and KVM.
¹ We don’t recommend performing changes in the database without the supervision of an SC
Clouds member or previous knowledge, because this kind of change may cause inconsistencies
in CloudStack’s processes and workflows.
2.3. Data Plane
The Data Plane is the set of elements responsible for storage management (ap-
pliances or, when using SDSs, e.g. Ceph/RBD). The main concepts about stor-
ages will be addressed in section 3.13.
2.4. Logical organization of Apache CloudStack
The logical organization used by CloudStack can be split into:
Region: the broadest cloud implementation unit. Groups one or more zones.
Zone: represents a single datacenter, composed of one or more pods
and secondary storages.
Pod: represents a single rack in the datacenter.
Cluster: consists of one or more homogeneous hosts that share the same
primary storage. Ideally, the hosts share the same processor family, lev-
eled to the functionalities of the oldest generation.
Host: a server where the user VMs are executed. When using KVM as the
virtualizer, it will also have the CloudStack Agent installed.
Primary Storage: operates at host, cluster or zone level. Responsible
for storing the disks (also called volumes) utilized by the VMs.
Secondary Storage: operates at zone level. Responsible for storing VM
templates, ISOs, backups and snapshots.
Figure 3: Logical organization of Apache CloudStack
In figure 3 it is possible to visualize the ACS hierarchy, in which it can be observed
that the outermost component of the structure is a region that groups
one or more zones. The zones represent datacenters composed of pods,
while pods are composed of clusters, clusters are composed of hosts, and these
act as active processing nodes.
2.5. Apache CloudStack components
The CloudStack structure is formed by the following components:
1. Management Servers;
2. Hosts;
3. Network: the network that connects the ACS components. It’s recommended
to make use of network segregation, physically or virtually (via broadcast
domain), providing more security;
4. Primary storage;
5. Secondary storage;
6. Virtual router: System VM which emulates a router, implementing the guest
network services and making possible, for example, internet access;
7. Console proxy virtual machine: System VM responsible for providing console
view for any VM via web interface;
8. Secondary storage virtual machine: System VM responsible for managing
the secondary storages of a zone. Provides functions such as download,
registration and upload of volumes, templates and ISOs.
3. Apache CloudStack functionalities
This section briefly introduces the main Apache CloudStack func-
tionalities regarding environment operation (RootAdmin users) and how to use
them. For information about the platform’s end-user consumption, consult
the Apache CloudStack Cloud Consumption document.
3.1. Home dashboard
This is the default Apache CloudStack dashboard, where the infrastructure
details are shown.
Figure 4: Home dashboard
3.2. VM settings
The VMs have both general settings and settings specific to the hypervisor.
It’s recommended to keep certain settings untouched to prevent issues, such as
those for internal usage.
This section will present settings available only at the operation level. For VM
settings available at the usage level, check the Apache CloudStack usage documen-
tation.
To change a VM’s settings, it is first necessary to stop the VM. Then, its settings
can be modified by accessing the Settings tab.
Figure 5: VM settings tab
3.2.1. General settings
Setting Description
rootdisksize Defines the root disk size for the VM. If a service offer-
ing has its root disk size configured, this parameter is
overwritten by the service offering.
rootDiskController Defines the disk controller used by the VM’s root disk.
dataDiskController Defines the disk controller used by the VM’s data disks.
nameonhypervisor Defines the VM name within the hypervisor.
3.2.2. Miscellaneous settings for internal usage
Setting Description
cpuOvercommitRatio Defines the CPU overcommit ratio, which
allows more CPU cores to be allocated
than the number of physically available
cores in the host machine.
memoryOvercommitRatio Defines the memory overcommit ratio,
which allows more memory to be allocated
than the amount physically available
in the host machine.
Message.ReservedCapacityFreed.Flag Internal flag used to indicate if the re-
sources reserved for a VM were released.
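The effect of these ratios can be illustrated with a small sketch (the function name and values below are hypothetical; the calculation simply mirrors how an overcommit ratio scales a host's capacity):

```python
# Sketch: how an overcommit ratio scales a host's allocatable capacity.
# Hypothetical helper; values are illustrative only.

def allocatable_capacity(physical: float, overcommit_ratio: float) -> float:
    """Capacity considered allocatable on a host given an overcommit ratio."""
    return physical * overcommit_ratio

# Host with 16 physical cores and cpuOvercommitRatio = 2:
print(allocatable_capacity(16, 2))      # up to 32 vCPUs may be allocated
# Host with 64 GiB of RAM and memoryOvercommitRatio = 1.5:
print(allocatable_capacity(64, 1.5))    # up to 96 GiB may be allocated
```

Overcommitting relies on the assumption that VMs rarely use all of their allocated resources at the same time; overly aggressive ratios may lead to resource contention on the host.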
3.2.3. VM settings specific for KVM
Setting Description
kvm.vnc.port Defines the VNC port used by the VM in the KVM environ-
ment.
kvm.vnc.address Defines the IP used to access the VM via NIC in a KVM
environment.
video.hardware Defines the virtual video adapter used by the VM.
video.ram Defines the amount of video RAM available for the VM.
io.policy Defines the strategy used to deal with data I/O.
iothreads Allows the usage of specific threads for I/O.
When setting video.hardware with the value virtio, Windows VMs start to
utilize the VirtIO driver, making it possible the usage of new console resolution
in the VM. More information related to KVM can be found in Section 8.
3.2.4. VM settings specific for VMware
Setting Description
svga.vramSize Defines the amount of video memory available for
the VM.
nestedVirtualizationFlag Enables or disables nested virtualization within
the VM.
ramReservation Defines the minimum RAM amount allocated to the
VM.
More information related to VMware can be found in section 9.
3.2.5. Settings specific for XenServer internal use
Setting Description
hypervisortoolsversion Defines the hypervisor version used by the VM.
platform Defines the platform type in which the VM is being
executed, such as Linux or Windows.
timeoffset Defines the timezone for the VM.
3.2.6. Settings specific for Mac OSX - Guest
Setting Description
smc.present Enables or disables SMC support in the VM.
firmware Defines the firmware used by the VM.
3.2.7. Global settings for VMs
Beyond the settings that affect VMs individually, Apache CloudStack pro-
vides some global settings related to VMs, so that the environment may be
customized as a whole.
3.2.7.1. User access restriction
It’s possible to restrict accounts of the user type from modifying certain settings
in their VMs. There are two restriction options, defined through the follow-
ing global settings:
user.vm.denied.details: prevents adding, editing and visualizing settings;
user.vm.readonly.details: prevents adding or editing settings, but they are
still visible to the users in the Settings tab.
Figure 6: Global settings
3.2.7.2. Extra settings metadata
It’s possible to add extra properties to the deployment of a VM, as long as the
enable.additional.vm.configuration setting is enabled (it’s necessary to restart
the Management Server for changes to this setting to be applied).
The cloud administrator may define, through the following global settings,
which commands users may add to their VMs, according to their respective
hypervisors.
allow.additional.vm.configuration.list.kvm
Additional KVM settings must be provided in XML format, encoded as URL.
For example:
<memoryBacking> <hugepages/> </memoryBacking>
As URL:
%3CmemoryBacking%3E%0D%0A++%3Chugepages%2F%3E%0D%0A%3C%2FmemoryBacking%3E
allow.additional.vm.configuration.list.vmware
Additional VMware settings are key=value pairs, encoded as URL. For ex-
ample:
hypervisor.cpuid.v0=FALSE
As URL:
hypervisor.cpuid.v0%3DFALSE
allow.additional.vm.configuration.list.xenserver
Additional XenServer settings are vm-param-set command parameters,
in the form of key=value pairs, encoded as URL. For example:
HVM-boot-policy=
PV-bootloader=pygrub
PV-args=hvc0w
As URL:
HVM-boot-policy%3D%0APV-bootloader%3Dpygrub%0APV-args%3Dhvc0
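The URL encoding shown in the examples above can be produced programmatically. A minimal sketch in Python, using the standard library's urllib.parse.quote_plus (which encodes spaces as +, matching the encoded KVM example):

```python
from urllib.parse import quote_plus

# KVM: an XML fragment with CRLF line breaks and two-space indentation,
# matching the <memoryBacking> example above.
kvm_xml = "<memoryBacking>\r\n  <hugepages/>\r\n</memoryBacking>"
print(quote_plus(kvm_xml))
# %3CmemoryBacking%3E%0D%0A++%3Chugepages%2F%3E%0D%0A%3C%2FmemoryBacking%3E

# VMware: a key=value pair.
print(quote_plus("hypervisor.cpuid.v0=FALSE"))
# hypervisor.cpuid.v0%3DFALSE
```

The encoded string is what goes into the VM's extra configuration; the plain form is only shown for readability.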
3.2.7.3. VM statistics retention
VM statistics retention is governed by two global settings:
vm.stats.interval: interval, in milliseconds, between each data
collection. The default value for this setting is 60000 milliseconds, equiv-
alent to 1 minute.
vm.stats.max.retention.time: retention time, in minutes, for the data in
the database. The default value for this setting is 720 minutes, equivalent
to 12 hours.
Based on the values defined in these settings, it’s possible to calculate the max-
imum size of the statistics records within the database with the following formula:
storageSpace = (retention · 60000 / interval) · MSs · VMs · recordSize
storageSpace: space, in bytes, necessary to store the statistics of the
VMs;
retention: value of the vm.stats.max.retention.time setting;
interval: value of the vm.stats.interval setting;
MSs: number of Management Servers running in the environment;
VMs: number of VMs running in the environment;
recordSize: estimated size, in bytes, of each record in the database.
Therefore, in an environment with 2 Management Servers and 1000 VMs, in
which the retention time is 10 minutes, the collection interval is 30 seconds
and the size of each record is 400 bytes, the maximum size of the records in
the database will be 16 MB:
(10 · 60000 / 30000) · 2 · 1000 · 400 = 16000000 B = 16 MB
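The formula can be checked with a short script (the function below is illustrative; the parameter names follow the variables defined in the text):

```python
def vm_stats_storage_bytes(retention_min: int, interval_ms: int,
                           management_servers: int, vms: int,
                           record_size: int) -> float:
    """Maximum space used by VM statistics records, per the formula above."""
    records_per_vm = (retention_min * 60000) / interval_ms
    return records_per_vm * management_servers * vms * record_size

# Example from the text: 2 MSs, 1000 VMs, 10 min retention,
# 30 s (30000 ms) collection interval, 400-byte records.
print(vm_stats_storage_bytes(10, 30000, 2, 1000, 400))  # 16000000.0 bytes = 16 MB
```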
Furthermore, it’s possible to track the physical growth of the table with the
following command in the database:
SELECT TABLE_NAME AS 'Table',
DATA_LENGTH AS 'Data size (B)',
INDEX_LENGTH AS 'Index size (B)',
ROUND((DATA_LENGTH + INDEX_LENGTH) / 1024) AS 'Total size (KiB)'
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'cloud'
AND TABLE_NAME = 'vm_stats';
Figure 7: Example for the query return with the total size for the vm_stats table
Additionally, it should be highlighted that the MySQL database reports value changes
for each 16 KiB of data. Thus, considering that each record has a size of 400
bytes, changes in those values will be observed approximately every 40 records.
3.3. Volume management
In this section, procedures related to operational volume management within
ACS² will be addressed.
3.3.1. Volume migration with stopped VM (cold migration)
By default, this operation is allowed only to root admin users. It’s impor-
tant to note that the VM’s host must have access to the volume’s destination
storage.
Figure 8: Migrating volumes
² Other procedures related to volume management can be found in the Apache CloudStack
usage documentation.
Figure 9: Confirming operation
3.3.2. Volume migration with running VMs (hot migration)
By default, this operation is allowed only to root admin users. If the uti-
lized hypervisor is KVM, to perform a hot volume migration it’s necessary to also
migrate the VM to which the volume belongs. More details about the limitations
of hot migration are found in the next item.
Figure 10: Migrating the VM
Figure 11: Configuring the migration
3.3.3. Volume migration via CLI
There are two possible cases for volume migration: migration with stopped
VMs (cold migration) and migration with running VMs (hot migration).
In cold volume migration, the volume is copied to the secondary storage and
then to the primary storage, while in hot migration the volume is directly mi-
grated to the target primary storage, which makes it faster and cheaper for the
environment.
To perform a cold migration, just use the following command:
migrate volume volumeid=<volume_id>
storageid=<destination_storage_id>
On the other hand, for hot volume migration, some details are important to
note:
If the hypervisor used is VMware or XenServer, the command above can
be used. However, if the hypervisor used is KVM, it’s necessary to perform
a VM migration at the same time. The KVM hypervisor has a
feature called live block copy, which allows migrating the volumes of a
running VM to another storage without the need to migrate the VM
to another host; after the migration, the XML and the VM process
are updated to target the new storage. However, this functionality isn’t
currently used in ACS. So, ACS has a limitation that prevents volume mi-
gration in running VMs only for those executed with KVM; consequently,
the volume migration between storages while the VM is running needs to
be performed together with a VM migration between hosts, so that the
XML and the VM process are updated. There’s a known issue with XFS,
which may have its partitions corrupted during the host migration,
making it necessary to execute a recovery process for these volumes;
If the VM has data disks that are too big, the ACS timeout settings must be
checked, because very big volumes may cause a timeout, resulting in failures
in the migration process;
KVM has some timeout settings that deal with VM migrations. If the VM
possesses a huge amount of RAM, the migration process may fail.
The main global settings that affect migrations are:
enable.storage.migration: enables volume migrations;
secstorage.max.migrate.sessions: maximum number of data copy tasks that an
SSVM may perform concurrently;
secstorage.session.max: maximum number of requests that may be queued for
an SSVM. When this value is exceeded, a new SSVM is created. Moreover, when the
number of requests is far from reaching this value and there are inactive SSVMs,
they are destroyed;
storage.pool.max.waitseconds: wait timeout for storage pool synchro-
nization operations;
migratewait: VM migration timeout. After this time limit, the migration is
cancelled;
kvm.storage.offline.migration.wait: timeout for cold volume migrations
(only applies to KVM), and
kvm.storage.online.migration.wait: timeout for hot volume migrations (only
applies to KVM).
The KVM agent settings may be changed in the file /etc/cloudstack/agent/agent.properties,
with the main settings affecting the migration
process being:
vm.migrate.pauseafter: wait time for the completion of hot migrations.
After this limit, the VM is paused to complete the migration;
vm.migrate.downtime: time that the VM will stay paused to complete the
migration, and
vm.migrate.speed: migration speed (in MB/s). By default, ACS tries to es-
timate the network transfer speed.
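As a sketch, these settings could be adjusted in the agent configuration file mentioned above (the values below are purely illustrative, not recommendations; check the agent documentation for the exact units of each setting):

```properties
# Illustrative values only; consult the agent documentation for exact units.
# Wait time for hot migration completion before the VM is paused:
vm.migrate.pauseafter=600
# Time the VM may remain paused to complete the migration:
vm.migrate.downtime=2
# Migration speed cap in MB/s (when unset, ACS estimates the network speed):
vm.migrate.speed=100
```

After changing these settings, the CloudStack Agent service must be restarted for them to take effect.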
To perform a hot volume migration together with the VM migration, use the command:
migrate virtualmachinewithvolume virtualmachineid=<vm_id> hostid=<destination_host_id> \
migrateto[0].volume=<volume_id> migrateto[0].pool=<target_storage_id>
3.3.4. Importing a volume to another zone or account
There are some situations in which it is necessary to import volumes to another
zone and/or account. Via UI, this can be achieved following these steps:
1. Acquire the volume download link from the Apache CloudStack usage docu-
ment³;
2. Copy and paste the URL in the volume upload URL field⁴;
3. Configure the volume to belong to the desired zone and/or account.
It’s also possible to perform this procedure via API. For this, follow these
steps:
1. Use the following command to generate the URL:
extract volume zoneid=<zone_id> id=<volume_id> mode=HTTP_DOWNLOAD
2. To perform the importation, use the command⁵:
upload volume zoneid=<destination_zone_id> name=<volume_name> format=<volume_format> \
url=<previously_generated_URL> domainid=<destination_domain_id> account=<destination_account_id>
3.4. Virtual router
This section will exclusively address virtual router operations. For more in-
formation related to guest networks, access the Apache CloudStack usage doc-
umentation.
When a VM uses an isolated network, or when a shared network is created,
Apache CloudStack automatically creates a virtual router for it.
³ More information may be found in the section Volume download in the Apache CloudStack
usage document.
⁴ More information may be found in the section Online volume upload via URL in the Apache
CloudStack usage document.
⁵ The parameters domainid and account are optional; however, the account depends on the
domainid.
Figure 12: Getting to the virtual routers tab
Figure 13: Verifying if the VR was correctly created
3.4.1. Stopping the virtual router
Figure 14: Browsing to the virtual routers
Figure 15: Stopping the virtual router
Figure 16: Confirming operation
Here, it is important to take notice of the Force option. If selected, VMs that
utilize the router will be forced to stop. This option is recommended only when
trying to stop the router resulted in an error.
Figure 17: Confirming that the virtual router stopped
3.4.2. Restarting the virtual router
Figure 18: Restarting the virtual router
3.4.3. Destroying a virtual router
It’s possible to destroy a VR, but it will be recreated when the network is
restarted. When destroying a VR, all services provided by it will be interrupted;
therefore, running VMs that utilize it may show errors and faults.
Figure 19: Destroying the virtual router
Figure 20: Confirming operation
Figure 21: Confirming virtual router removal
3.4.4. Default offering definition for virtual router
When creating a new network, there’s a sequence of actions established by
CloudStack to define which offering (section 3.11) the VR will utilize, determined
through the following steps:
1. If the network offering used in the network creation specifies a service
offering, it will be used in the VR creation. Otherwise, the next step will be
verified:
2. If the router.service.offering setting, at account level, defines the offering,
it will be used in the VR creation. Otherwise, the next step will be verified:
3. If the router.service.offering global setting defines the offering, it will be
used in the VR creation. Otherwise, the next step will be verified:
4. The default system offering will be used (System Offering For Software Router
or System Offering For Software Router - Local Storage).
Figure 22: Steps to define the default offering for the virtual router
It’s important to note that if an already existing network is updated to use
a new network offering, or the network is restarted with clean up, the new VRs
will also follow the same steps to define which offering will be used.
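The selection sequence above can be sketched as a simple fallback chain (hypothetical function and parameter names, for illustration only; this is not CloudStack's actual code):

```python
# Sketch of the VR offering selection order described in the text.
# Function and parameter names are hypothetical.

DEFAULT_SYSTEM_OFFERING = "System Offering For Software Router"

def select_vr_offering(network_offering_svc=None,
                       account_router_service_offering=None,
                       global_router_service_offering=None):
    """Return the service offering the VR will use, following the
    fallback order: network offering -> account-level setting ->
    global setting -> default system offering."""
    for candidate in (network_offering_svc,
                      account_router_service_offering,
                      global_router_service_offering):
        if candidate is not None:
            return candidate
    return DEFAULT_SYSTEM_OFFERING

# The account-level setting wins when the network offering defines none:
print(select_vr_offering(account_router_service_offering="custom-offering"))
# custom-offering
```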
3.5. Public IPs
This section will approach only the topics relevant at the public IP operation
level. For more information, access the Apache CloudStack Cloud Consumption doc-
ument.
3.5.1. Public IP reserved for system VMs
CloudStack operates with reserved IPs for system VMs. These reservations are
not strict, so if the only free public IPs are those reserved for system VMs, ACS
will use them for user VMs. To change this ACS behaviour and restrict the reserved
public IPs to system VMs only, it is necessary to change the global
setting system.vm.public.ip.reservation.mode.strictness to true.
3.6. IPv6 support
ACS provides support for IPv6 in shared and isolated networks, as well as VPC
tiers. Currently, it’s not possible to create any of the mentioned networks with
IPv6 only; all of them need to be created with an IPv4 + IPv6 dual-stack. The
next subsections detail the requirements and limitations related to support for
shared networks and isolated networks/VPC tiers.
3.6.1. Isolated networks and VPCs tiers
Currently, ACS only supports IPv6 through SLAAC configuration, not sup-
porting stateless or stateful DHCPv6, making it necessary to manually add routes
in the edge router for each network created in the environment. The addition of
routes in the edge router should only be made after the network creation via
ACS. Furthermore, public networks are allocated using a whole /64 block for
IPv6, making it necessary that the prefix be /64 or lower. The following
steps are mandatory to enable IPv6 in isolated networks/VPC tiers:
1. Enable the global setting ipv6.offering.enabled.
2. Add the IPv6 interval for the public network.
Figure 23: Beginning to add the IPv6 interval for the public network
Figure 24: Adding an IPv6 interval for the public network
Figure 25: A /52 prefix allows 4096 IPv6 subnetworks in /64 block
3. Add the IPv6 prefix, which should be /64 or lower.
Figure 26: Beginning to add the IPv6 interval for the guest network
Figure 27: Adding an IPv6 interval for the guest network
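The relation between the guest prefix and the number of /64 networks it yields (as noted in Figure 25 for a /52 prefix) can be verified with the standard ipaddress module (the 2001:db8::/52 prefix below is a documentation address used only as an example):

```python
import ipaddress

# A /52 guest prefix split into /64 guest networks; each isolated
# network/VPC tier consumes one /64 block.
prefix = ipaddress.ip_network("2001:db8::/52")
count = sum(1 for _ in prefix.subnets(new_prefix=64))
print(count)  # 4096 possible /64 guest networks (2 ** (64 - 52))
```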
Furthermore, it’s necessary to create offerings for VPCs and their tiers, spec-
ifying both protocols; the same applies to isolated networks. Below is shown a
step-by-step guide on how to create tiers with IPv6 support.
1. Create VPC offering with IPv6 support.
Figure 28: Creating VPC offering with IPv6 support
2. Create network offering for tiers with IPv6 support.
Figure 29: Creating offering for VPC tiers with IPv6 support
3. The VM allocated to the network will use the SLAAC protocol, assigning an
unused IPv6 address to it.
Figure 30: VM’s IPv6 autoconfiguration validation (via SLAAC)
4. It is necessary to add the informed routes to the edge router, via UI or via
API.
Figure 31: Exhibition via UI of the routes that must be added to the edge router
Figure 32: Exhibition via API of the routes that must be added to the edge router
The process to create isolated networks is similar, differing only in the pro-
cess of network offering creation.
1. Create an isolated network offering with IPv6 support.
Figure 33: Creating an isolated network offering with IPv6 support
2. The VM allocated to the network will use the SLAAC protocol, assigning an
unused IPv6 address to it.
Figure 34: VM’s IPv6 autoconfiguration validation (via SLAAC)
3. It is necessary to add the informed routes to the edge router, via UI or via
API.
Figure 35: Exhibition via UI of the routes that must be added to the edge router
Figure 36: Exhibition via API of the routes that must be added to the edge router
3.6.2. Shared networks
Unlike isolated networks and VPC tiers, it’s not necessary to perform
any configuration within ACS to use shared networks with IPv6; it’s enough
to create a network of the shared type while specifying the gateway, CIDR and
IP range used by the network. This information must be specified for both IP
protocol versions since, currently, it’s not possible to create a network using only
IPv6.
Shared network management is performed directly through the network
router, being done separately from ACS. VMs will also use the SLAAC protocol
for IPv6 autoconfiguration, so the gateway needs to have the Router
Advertisements service enabled, informing the IPv6 network prefix. With this
information, the VM itself will obtain a valid IP within the network. Below
is shown an example of creating a dual-stack shared network.
Figure 37: Shared network creation from IPv4 info
Figure 38: Shared network creation from IPv6 info
3.7. Host, storage and network tags
Host tags and storage tags, despite their names, don’t relate to resource
tags; they are a functionality to direct resource allocation, such as in which host
a VM will be deployed or in which storage a volume will be created.
There are many reasons for using tags (such as directing volumes to a better
quality storage, based on the offering). Each resource type has a different be-
haviour for the tags.
Figure 39: Field for updating storage tags and host tags of a compute offering
3.7.1. Host tags
Host tags are responsible for directing VMs to compatible hosts. They are validated against the host tags informed in the compute offerings (section 3.11.1) or in the system offerings (section 3.12.1).
To explain the host tag behaviour, some examples with two hosts (Host1 and Host2) will be presented:
1. Tags organization:
Host1: h1
Host2: h2
Offering: h1
When creating a VM with the offering, the deployment will be performed on Host1, since it has a tag compatible with the offering.
2. Tags organization:
Host1: h1
Host2: h2,h3
Offering: h3
Hosts accept a tag list, using a comma (,) as separator. Therefore, in this example, Host2 has the tags h2 and h3. When creating a VM with the offering, the deployment will be performed on Host2, because it has a tag compatible with the offering.
3. Tags organization:
Host1: h1
Host2: h2,h3
Offering: h2,h3
Contrary to hosts, offerings do not accept a list of host tags; therefore, in this example, h2,h3 is a single tag in the offering. Neither host has a compatible tag, so no deployment is possible for a VM with this offering. However, CloudStack ignores this behaviour when a host is manually selected and allows the deployment.
4. Tags organization:
Host1: h1
Host2: h2,h3
Offering: (no tags)
When the offering does not have any tags, the VM deployment may be performed on any host.
5. Tags organization:
Host1: (no tags)
Host2: h2
Offering: h3
Neither host has a compatible tag, so deploying VMs with this offering is not possible. However, CloudStack ignores this behaviour when a host is manually selected and allows the deployment.
Example | Host tags | Offering tag | Behaviour
1 | Host1: h1; Host2: h2 | h1 | Deploy on Host1
2 | Host1: h1; Host2: h2,h3 | h3 | Deploy on Host2
3 | Host1: h1; Host2: h2,h3 | h2,h3 | No deployment, unless one of the hosts is manually selected
4 | Host1: h1; Host2: h2,h3 | (no tags) | Deploy on any host
5 | Host1: (no tags); Host2: h2 | h3 | No deployment, unless one of the hosts is manually selected
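The matching behaviour described in the examples above can be sketched as follows. This is an illustrative model, not ACS source code: hosts accept a comma-separated tag list, while the offering's host tag is treated as a single tag.

```javascript
// Sketch of host tag matching: a host is eligible when the offering has no
// tag, or when the offering's (single) tag appears in the host's tag list.
function hostMatches(hostTags, offeringTag) {
  if (!offeringTag) return true; // example 4: an offering without tags fits any host
  const tags = hostTags ? hostTags.split(',') : [];
  return tags.includes(offeringTag); // the offering tag is a single tag
}

console.log(hostMatches('h1', 'h1'));       // example 1: Host1 is eligible
console.log(hostMatches('h2,h3', 'h3'));    // example 2: Host2 is eligible
console.log(hostMatches('h2,h3', 'h2,h3')); // example 3: false, 'h2,h3' is one tag
console.log(hostMatches('h2,h3', ''));      // example 4: true, no tag requirement
```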
3.7.2. Storage tags
Storage tags are responsible for directing volumes to compatible primary storages. They are validated against the storage tags informed in the disk offerings [6], compute offerings (section 3.11.1) or system offerings (section 3.12.1).
To exemplify the behaviour of storage tags, some examples are presented:
1. Tags organization:
Storage: A
Offering: A,B
[6] Information about disk offerings can be checked in the Apache CloudStack usage documentation.
Both storages and offerings accept a list of tags, using a comma (,) as separator. Therefore, in this example the offering has the tags A and B. It will not be possible to allocate the volume, because all offering tags need to exist in the storage: although the storage has the tag A, it does not have the tag B.
2. Tags organization:
Storage: A,B,C,D,X
Offering: A,B,C
In this example it will be possible to allocate the volume, because all of the offering tags exist in the storage.
3. Tags organization:
Storage: A,B,C
Offering: (no tags)
In this example it will be possible to allocate the volume, because the offering does not have any tag requirements.
4. Tags organization:
Storage: (no tags)
Offering: D,E
In this example it will not be possible to allocate the volume, because the storage does not have tags and therefore does not fulfill the tag requirements of the offering.
Example | Storage tags | Offering tags | Behaviour
1 | A | A,B | The volume will not be allocated
2 | A,B,C,D,X | A,B,C | The volume will be allocated
3 | A,B,C | (no tags) | The volume will be allocated
4 | (no tags) | D,E | The volume will not be allocated
In summary, if the offering has tags, the storage must have all of them for the volume to be allocated. If the offering does not have any tags, the volume can be allocated, regardless of whether the storage has tags.
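The summary above amounts to a subset check, sketched below (illustrative, not ACS source code): every offering tag must exist in the storage's tag list.

```javascript
// Sketch of storage tag matching: all offering tags must be present
// in the storage's comma-separated tag list.
function storageMatches(storageTags, offeringTags) {
  const storage = storageTags ? storageTags.split(',') : [];
  const offering = offeringTags ? offeringTags.split(',') : [];
  return offering.every(tag => storage.includes(tag));
}

console.log(storageMatches('A', 'A,B'));           // example 1: false, B is missing
console.log(storageMatches('A,B,C,D,X', 'A,B,C')); // example 2: true
console.log(storageMatches('A,B,C', ''));          // example 3: true, no requirements
console.log(storageMatches('', 'D,E'));            // example 4: false
```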
3.7.3. Network tags
Network tags are responsible for directing virtual networks to compatible physical networks. They are validated against the network tags informed in the network offerings [7].
Some examples will be presented to explain the behaviour of network tags:
1. Tags organization:
Physical network: A
Offering: A,B
Physical networks and offerings do not accept lists of tags, i.e., there can be only one tag for each resource. In this case, the value A,B set for the offering corresponds to a single tag. As the tags of the offering and the physical network are different, it will not be possible to direct the virtual network to this physical network.
2. Tags organization:
Physical network: B
Offering: B
In this example the physical network and the offering have the same tag; therefore it will be possible to direct the virtual network to the physical network.
3. Tags organization:
Physical network: C
[7] Information about network offerings can be found in the Apache CloudStack usage documentation.
Offering: (no tags)
In this example it will be possible to direct the virtual network to the physical network, because the offering does not have any tags as requirements.
4. Tags organization:
Physical network: (no tags)
Offering: D
In this example it will not be possible to direct the virtual network to the physical network, because the physical network does not have any tags and, consequently, does not fulfill the offering requirements.
Example | Network tag | Offering tag | Behaviour
1 | A | A,B | Will not be directed
2 | B | B | Will be directed
3 | C | (no tags) | Will be directed
4 | (no tags) | D | Will not be directed
In summary, directing the virtual network to the physical network will not be possible if the offering has a tag different from the physical network's. If the offering does not have any tag, directing will be possible, regardless of whether the physical network has tags.
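Since both sides hold at most a single tag, the network tag rule reduces to a plain equality check, sketched below (illustrative, not ACS source code):

```javascript
// Sketch of network tag matching: a single tag on each side, so the
// match is equality, unless the offering has no tag at all.
function networkMatches(physicalTag, offeringTag) {
  if (!offeringTag) return true; // example 3: offering without tag requirements
  return physicalTag === offeringTag;
}

console.log(networkMatches('A', 'A,B')); // example 1: false, 'A,B' is one tag
console.log(networkMatches('B', 'B'));   // example 2: true
console.log(networkMatches('C', ''));    // example 3: true
console.log(networkMatches('', 'D'));    // example 4: false
```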
3.7.4. Flexible tags
When defining tags for a resource (e.g. a host), offerings with those tags will be directed to this resource. However, offerings without tags can also be directed to it, so that, even after adding tags to a resource with the purpose of making it exclusive to certain offerings, this exclusivity can be bypassed.
Furthermore, the default tag system only allows the user to inform a simple list of tags, without the possibility of creating more complex rules, such as verifying if the offering has certain pairs of tags.
To circumvent these situations, ACS allows hosts and storages to have tags that are rules written in JavaScript, also known as flexible tags. With flexible tags, the role of tags is inverted: instead of the host or storage needing to have the offering's tag in order to be selected, the offering needs to have the tag of the resource to which it will be directed. This inversion means that offerings without tags cannot be directed to any resource. This way, operators have finer control over the placement of VMs and volumes within the environment.
The configuration of rules on hosts is done through the updateHost API, informing the rule in the hosttags field. Likewise, the configuration of rules on storages is done through the updateStoragePool API, informing the rule in the tags field. For the informed tag to be effectively interpreted as JavaScript, it is necessary to set the istagarule parameter to true every time one of these APIs is used.
It is important to highlight that tags in compute offerings or disk offerings are injected as a list. Therefore, when validating an offering with the tags A,B, during processing there will be a tags variable, in which tags[0] will be the tag A and tags[1] will be the tag B.
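This evaluation can be emulated as below. This is an illustrative sketch, not ACS source code: the offering tags are injected as the tags list and the rule is interpreted as a JavaScript expression returning a boolean.

```javascript
// Sketch: evaluate a flexible-tag rule against an offering's tag list.
function evaluateRule(rule, offeringTags) {
  const tags = offeringTags.split(','); // offering tags arrive as a list
  return Boolean(Function('tags', `return (${rule});`)(tags));
}

// Offering with tags A,B validated against hypothetical resource rules:
console.log(evaluateRule("tags[0] == 'A' && tags[1] == 'B'", 'A,B')); // true
console.log(evaluateRule("tags[0] == 'Premium'", 'Standard'));        // false
```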
Example of updateHost API usage to update a host with a tag using JS:
update host id=3d7d8532-d0cf-476c-a36e-1b936d780abb istagarule=true hosttags="tags[0] == 'Premium'"
Via UI:
Select the host and click on edit:
Figure 40: Accessing the host editing
Create the flexible tag:
Figure 41: Creating the flexible tag in the host
Example of updateStoragePool API usage to update a primary storage with a tag using JS:
update storagepool id=dc916dc7-cc1c-3f3d-905b-b198daf15a79 istagarule=true tags="tags.length == 0 ||
tags[0] === 'test'"
Via UI:
Select the primary storage and click on edit:
Figure 42: Accessing the primary storage editing
Create the flexible tag:
Figure 43: Creating the flexible tag in the primary storage
It’s important to mention that flexible tags are not compatible with Quota
activation rules.
3.8. Managing instance deployment based on their operating system
Apache CloudStack provides two functionalities to direct instance allocation based on their operating system: Preference OS and flexible guest OS.
3.8.1. Hosts Preference OS
Preference OS for hosts is a configuration that lets the operator define a preferred operating system for a certain host. This way, hosts that have this preference configured and share the same operating system as the VM gain higher priority during the allocation process. The opposite is also valid: hosts with the Preference OS configuration set to an operating system different from the VM's will be given lower priority compared to the other hosts in the environment.
However, this functionality only defines the priority order during host allocation; VMs with operating systems differing from the host configuration may still be allocated on it. Hence the importance of emphasizing that this configuration does not make it possible to isolate a host for a given operating system.
To access this configuration through the UI:
Figure 44: Access to the host to be configured
Figure 45: Access to the host’s editing form
Figure 46: Host’s Preference OS configurations
3.8.2. Flexible guest OS
The Flexible guest OS functionality allows the creation of rules, written in JavaScript, that filter instances based on their operating systems during the process of allocating a VM on a host. Thus, using this functionality, it is possible to dedicate a host to one or more operating systems.
To access the configuration through the UI, access the host’s editing form:
Figure 47: Guest OS rule configuration for the host
For the definition of the rules, the variable vmGuestOs is provided, containing the operating system of the instance to be allocated, as a string. Thus, to dedicate a host to Windows VMs, the following rule can be added:
vmGuestOs.toLowerCase().indexOf('windows') >= 0
To dedicate the host to more than one operating system, the logical OR operator (|| in JavaScript) can be used, as such:
// dedicate the host to Ubuntu and Debian VMs
vmGuestOs.toLowerCase().indexOf('ubuntu') >= 0 ||
vmGuestOs.toLowerCase().indexOf('debian') >= 0
It is also possible to add rules that forbid allocating certain operating systems on the host, for example:
// host accepts only VMs that don't use Ubuntu
vmGuestOs.toLowerCase().indexOf('ubuntu') == -1
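The rules above can be checked against sample guest OS names as below (a hypothetical check; the OS name strings are illustrative, and ACS provides vmGuestOs as a string during allocation):

```javascript
// Wrap the two rules above as functions over the vmGuestOs string.
const windowsOnly = vmGuestOs => vmGuestOs.toLowerCase().indexOf('windows') >= 0;
const noUbuntu = vmGuestOs => vmGuestOs.toLowerCase().indexOf('ubuntu') == -1;

console.log(windowsOnly('Windows Server 2022 (64-bit)')); // true
console.log(windowsOnly('Ubuntu 22.04 LTS'));             // false
console.log(noUbuntu('Debian GNU/Linux 12'));             // true
console.log(noUbuntu('Ubuntu 20.04'));                    // false
```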
3.9. Snapshots
Snapshots are recovery points for a whole VM or just a single disk. They
can be used as a way to recover from failures or even to create templates and
new volumes. In some hypervisors, such as VMware and KVM, they behave like
backups.
This section covers only operation-level topics. For information related to snapshot usage, check the Apache CloudStack usage documentation.
3.9.1. Snapshot settings
The main snapshot settings are:
Setting | Description | Default value
max.account.snapshots | Maximum number of snapshots per account | 20
max.domain.snapshots | Maximum number of snapshots per domain | Infinite
max.project.snapshots | Maximum number of snapshots per project | 20
vmsnapshot.max | Maximum number of snapshots for a VM | 10
3.10. Event and alert audit
Events are generated whenever a user performs an action within the cloud or when the state of a resource, virtual or physical, changes. Policy-based events are called alerts by Apache CloudStack.
Using events, it is possible to monitor tasks, jobs and processes in CloudStack, including their possible errors.
The main event and alert configurations are:
Name | Description | Default value
event.purge.delay | Days until a created event is removed. The value 0 means events are never removed. | 15
event.purge.interval | Interval, in seconds, between runs of the job that removes old events | 86400 (one day)
alert.purge.delay | Days until a created alert is removed. The value 0 means alerts are never removed. | 0
alert.purge.interval | Interval, in seconds, between runs of the job that removes old alerts | 86400 (one day)
alert.smtp.host | Host on which the SMTP server is running |
alert.smtp.password | SMTP server password |
alert.smtp.port | SMTP server port |
alert.smtp.username | SMTP server user |
alert.email.sender | E-mail address shown as sender |
alert.email.addresses | E-mail recipient, which can be a comma-separated list |
3.10.1. Alert e-mails
Alerts for critical events can be configured to send e-mails to the cloud administrators. Possible scenarios in which an alert e-mail will be sent to the cloud administrators are:
- The Management Server has low CPU, memory or storage;
- The Management Server could not communicate with a host for at least 3 minutes;
- A host has low CPU, memory or storage.
3.10.2. Searching and filtering alerts
To view and search for alerts in the web interface:
Figure 48: Accessing alerts
3.10.3. Removing or archiving alerts
When selecting the checkbox of an alert, it is possible to remove or archive it:
Figure 49: Archiving or deleting an alert
No matter which option is chosen, a pop-up will appear asking for confirma-
tion.
3.10.4. Event and alert removal automation
ACS provides ways to automate event and alert removal through the global settings event.purge.delay and alert.purge.delay, respectively. The value of these settings is the number of days after which an event/alert is removed, counted from its creation date.
So, if the setting is set to 5, events/alerts will be automatically removed by ACS 5 days after their creation. The default value for event.purge.delay is 15 and for alert.purge.delay is 0. If they are set to 0, events/alerts are never removed.
It is advised to increase event retention to make it possible to audit the environment over larger time periods, such as 90 or 180 days.
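For example, extending retention to 90 days can be done with the updateConfiguration API, shown here via CloudMonkey (a sketch; the setting names come from the table above):

```shell
update configuration name=event.purge.delay value=90
update configuration name=alert.purge.delay value=90
```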
3.11. Service offerings
To make it possible to create VMs, virtualizers require specifications of their characteristics, such as CPU, memory size or disk to allocate, among other definitions. In CloudStack these VM characteristics are grouped and standardized in the form of service offerings, in order to ease VM specification, resource usage monitoring and, if applicable, usage charging.
3.11.1. Compute offerings
Compute offerings are offerings dedicated to the computational resources of a VM, like CPU and RAM. A compute offering also contains a disk offering, which defines the characteristics of the VM's root disk (when created from a template).
There are settings that limit some aspects of compute offerings, such as:

Setting | Description | Default value
vm.serviceoffering.cpu.cores.max | Maximum number of CPU cores during offering or VM creation. 0 implies no limit. | 0
vm.serviceoffering.ram.size.max | Maximum RAM memory during offering or VM creation. 0 implies no limit. | 0
For information related to compute offering usage, check the Apache CloudStack Cloud Consumption documentation.
3.12. Network offerings - throttling
Throttling is the process of controlling network access and bandwidth based on certain rules. CloudStack controls this behaviour in guest networks through the network rate parameter (the default transfer rate, measured in Mbps, allowed in a guest network). This parameter determines the maximum limits for network usage; if the current usage is higher than the defined limits, access will not be granted.
Through bandwidth limitations, it is possible to control network congestion and accounts using the network beyond the defined limits. The network rate for your cloud can be adjusted in the following ways:
- Network offering
- Service offering
- Global settings
If the network rate is defined as NULL in the service offering, the value set in the global setting vm.network.throttling.rate will be applied. If it is defined as NULL in the network offering, the value set in the global setting network.throttling.rate will be considered.
For public, storage and management networks, the network rate is defined as 0 by default, which implies that they have unlimited bandwidth. For guest networks, the network rate is defined as NULL by default.
The network rate defined for a network offering used by a specific network within CloudStack is applied to the traffic shaping policy of a port group; the VRs of that network, as well as its instances, connect to this port group by default. However, an instance deployed with a compute offering that defines its own network rate will instead be connected to the port group associated with the traffic shaping policy of that rate, and that network rate will be used.
3.12.1. System offerings
System offerings are similar to compute offerings, but intended for system VMs: console proxy VMs, secondary storage VMs and virtual routers. CloudStack comes with default offerings intended for each of those VMs. However, the root admin user may create new offerings and change which one each of those machines uses.
An example is when a guest network is created, in which the virtual router uses the system offering associated with its network offering. When creating a new network offering, it is possible to define the system offering whose resources will be supplied by the virtual router. This way, all VRs in this network will use this new system offering.
Figure 50: Default system offerings
3.12.1.1. Creating system offerings
Adding a new system offering:
Figure 51: Starting to create a new system offering
By clicking on the Add System Service Offering button, a form for inserting the information will open:
Figure 52: Creating a new system offering - 1 - continues
Via the UI, the System VM Type option shows only the following types: Domain router, Console proxy and Secondary storage VM. However, via the API it is also possible to create offerings of the internalloadbalancervm type.
Figure 53: Creating a new system offering - 2
Host tags will be used to direct the system VMs to compatible hosts (check
section 3.7.1).
Storage tags will be used to direct volumes to compatible storages (check
section 3.7.2).
3.12.1.2. Editing system offerings
After the creation of a system offering, only a few of its attributes can be changed.
Editing:
Figure 54: Starting to edit the system offering
Figure 55: Editing the system offering
Change order:
Figure 56: Changing system offering order
3.12.1.3. Removing system offerings
To remove a system offering, access it and click on the trash can icon (default
offerings can’t be removed):
Figure 57: Starting the system offering removal
Figure 58: Removing a system offering
If there are system VMs running under the system offering, they will keep working normally, as CloudStack copies the offering's characteristics to the system VMs.
3.12.1.4. Changing the system offering of a system VM
It is possible to change the offering that a system VM uses with the changeServiceForSystemVm API [8][9]. With the root admin logged in on CloudMonkey:
stop systemvm id=<system_vm_id>
change serviceforsystemvm id=<system_vm_id> serviceofferingid=<system_offering_id>
start systemvm id=<system_vm_id>
However, if the VM is recreated, it will use the default offering again. To change the system offering used by a system VM upon creation, it is necessary to indicate the offering uuid in the global settings. The null value indicates that the default CloudStack offering will be used.
Setting | Description | Default value
consoleproxy.service.offering | uuid of the offering used by console proxy VMs by default | null
internallbvm.service.offering | uuid of the offering used by internal load balancer VMs by default | null
router.service.offering | uuid of the offering used by virtual routers by default | null
secstorage.service.offering | uuid of the offering used by secondary storage VMs by default | null
After changing the values and restarting the Management Servers, destroying the system VMs will cause CloudStack to recreate them with the new offerings.
3.12.2. Backup offerings
These offerings are intended for importing backup offerings from providers outside CloudStack. Currently, the only supported provider is Veeam, for VMware. CloudStack has a fake provider (Dummy) just to satisfy logical requirements; however, when offerings are imported from this provider, nothing will actually happen.

[8] To change the service offering, the system VM must not be running. When turned on, it will already be using the new offering.
[9] When stopping a system VM, it will automatically start again after some time. To keep it shut down, it is necessary to disable the zone.
3.12.2.1. Enabling backup offerings
To enable the backup offering feature, it is necessary to set the backup.framework.enabled global setting to true. Besides this, there are other settings for this feature:
Setting | Description | Default value
backup.framework.enabled | Defines whether the backup offering feature is enabled | false
backup.framework.provider.plugin | Defines which provider will be used | dummy
backup.framework.sync.interval | Defines the interval, in seconds, for task synchronization between CloudStack and the provider | 300
The settings backup.framework.enabled and backup.framework.provider.plugin work within the zone scope.
There are also specific settings for the Veeam provider:
Setting | Description | Default value
backup.plugin.veeam.password | Password for Veeam access |
backup.plugin.veeam.request.timeout | Timeout, in seconds, for Veeam requests | 300
backup.plugin.veeam.url | Veeam URL | https://localhost:9398/api/
backup.plugin.veeam.username | Username for Veeam access | administrator
backup.plugin.veeam.validate.ssl | If set to true, validates the SSL certificate in https/ssl connections with the Veeam API | false
Furthermore, the Veeam server needs to have SSH installed and running on port 22.
It is also important to highlight that, when using Veeam, templates containing special characters should not be used. The reason is that using regex to escape such characters causes errors on Veeam.
The following are considered special characters: blank space, \, #, $, (), *, +, ., ?, [], ^, {}, | and accented letters. The characters "-" and "_" are not considered special characters and can be used normally.
With the backup.framework.enabled global setting set to true, after restarting CloudStack a new sub-tab will show up under Service Offerings:
Figure 59: Backup offering tab
3.12.2.2. Importing backup offerings
To import a backup offering:
Figure 60: Starting to import a new backup offering
When clicking on the Import Backup Offering button, a form will show up to fill in the needed information:
Figure 61: Importing a new backup offering
The values shown in the External Id field are related to the provider configured for the selected zone. If the Allow User Driven Backups field is checked, the user may schedule backups or perform them manually.
3.12.2.3. Backup offering removal
It is not possible to edit a backup offering, only to remove it:
Figure 62: Starting backup offering removal
Figure 63: Removing the backup offering
It is not possible to remove a backup offering if it is already in use by a VM.
3.12.2.4. Using backup offerings
To use a backup offering, select a VM and:
Figure 64: Starting to assign a VM to a backup offering
Figure 65: Assigning a VM to a backup offering
To perform a manual backup you just need to select the Start Backup option:
Figure 66: Starting manual backup for a VM
Figure 67: Performing a manual backup for a VM
To schedule a backup, select the Configure Backup Schedule option:
Figure 68: Starting backup scheduling
Figure 69: Scheduling backups
To remove a VM from the backup offering, select the Remove VM From
Backup Offering option:
Figure 70: Starting to remove backup offering from the VM
Figure 71: Removing backup offering from the VM
For more details check the official documentation.
3.12.3. IOPS and BPS limitation in disk offerings
While creating a disk offering, it is possible to limit the IOPS and BPS rates of volumes through the QoS type option.
Figure 72: Selecting a QoS type
When setting IOPS and BPS rate limits, it is important to pay attention to the following details:
I/O operation size control:
When the operator defines limits for IOPS rates, all I/O operations (data input and output), regardless of size, are treated similarly and, as a result, users can exploit this to bypass previously set limits. To prevent this, the operator needs to specify the block size of each operation. Supposing that the IOPS limit for a VM is 1000, the virtualizer will apply a limit of 1000 I/O operations per second, regardless of the size of those I/O requests. Therefore, requests with bigger blocks will benefit from this. To illustrate this case, some tests with the fio tool were performed.
A VM was deployed with a disk offering limiting it to 1000 IOPS for reading and writing. We executed the fio command specifying a block size of 64 KiB and then executed it again with a block size of 128 KiB. Even though the total number of I/O operations differs between the two executions, both performed at around 1000 IOPS, due to the execution time of each one. Using the bigger block, twice the read and write throughput was achieved, even with both executions following the same 1000 IOPS limit. The table below briefly shows a comparison of the discussed values.
Block size | Read (IOPS) | Read (MiB/s) | Write (IOPS) | Write (MiB/s)
64 KiB | 1008 | 63 | 337 | 21.1
128 KiB | 1016 | 127 | 346 | 43.3
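The fio runs described above can be reproduced with invocations along these lines (a sketch: device path, job names and runtime are illustrative placeholders):

```shell
# read test with 64 KiB blocks against the rate-limited volume (placeholder device)
fio --name=read64k --filename=/dev/vdb --rw=read --bs=64k --direct=1 \
    --runtime=60 --time_based --ioengine=libaio
# repeat with 128 KiB blocks to compare throughput under the same IOPS cap
fio --name=read128k --filename=/dev/vdb --rw=read --bs=128k --direct=1 \
    --runtime=60 --time_based --ioengine=libaio
```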
Currently, ACS does not expose the operation block size; however, the same objective may be achieved by setting limits for reads and writes in BPS (parameters: bytesreadrate, byteswriterate).
Figure 73: Limiting the BPS rate
Considering the same case above, specifying diskBytesReadRate as 104857600 (100 MiB) and diskBytesWriteRate as 31457280 (30 MiB), the usage with a block size of 128 KiB would be limited by these values. This way, when both BPS and IOPS limits are set, the VM will stop at the first limit reached. In this case, using the 128 KiB block, the BPS limit would be reached first, so the VM would never reach 1000 IOPS. The BPS limitation would have no effect for the 64 KiB block size, since the IOPS limit would be reached first. The table below briefly shows the previously mentioned data.
Block size | Read (IOPS) | Read (MiB/s) | Write (IOPS) | Write (MiB/s)
64 KiB | 1008 | 63 | 337 | 21.1
128 KiB | 710 | 88.8 | 242 | 30.3
Analyzing the situation above, it is advisable to use the parameters diskBytesReadRate and diskBytesWriteRate together with IOPS limits to restrict the read and write speed of the volumes.
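Following this advice, a disk offering combining both kinds of limits could be created via the createDiskOffering API, shown here in CloudMonkey (a sketch; offering name, display text and disk size are illustrative, the rate values come from the scenario above):

```shell
create diskoffering name=limited-qos displaytext="1000 IOPS, 100/30 MiB/s" disksize=20 \
    bytesreadrate=104857600 byteswriterate=31457280 iopsreadrate=1000 iopswriterate=1000
```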
I/O bursts:
A burst occurs when the read and write limits are exceeded for a certain amount of time, to enable better performance during intense tasks.
This functionality is present in ACS, but it is only available through the API [10] (createServiceOffering and createDiskOffering). The parameters that enable these settings are:

[10] For more details about sending requests directly to the API, see ??.
Parameter | Description
bytesreadratemax | Defines the maximum value for the read burst, in BPS
bytesreadratemaxlength | Defines the maximum duration of the read burst, in seconds
byteswriteratemax | Defines the maximum value for the write burst, in BPS
byteswriteratemaxlength | Defines the maximum duration of the write burst, in seconds
iopsreadratemax | Defines the maximum value for the read burst, in IOPS
iopsreadratemaxlength | Defines the maximum duration of the read burst, in seconds
iopswriteratemax | Defines the maximum value for the write burst, in IOPS
iopswriteratemaxlength | Defines the maximum duration of the write burst, in seconds
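As a sketch, a disk offering allowing a read burst of 2000 IOPS for up to 60 seconds on top of a 1000 IOPS base rate could be created as follows (offering name, display text, disk size and values are illustrative):

```shell
create diskoffering name=burst-offering displaytext="IOPS read burst" disksize=20 \
    iopsreadrate=1000 iopsreadratemax=2000 iopsreadratemaxlength=60
```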
3.13. Storage management within Apache CloudStack
The goal of this topic is to show in more detail the purpose of each storage type present in ACS, with their respective scopes, protocols and providers.
3.13.1. Primary storage
The primary storage is used to store the disks of hosted virtual machines. It can operate at host, cluster or zone level, with the possibility of adding multiple primary storages to clusters and zones. At least one primary storage must exist per cluster for the proper operation of the orchestrator.
CloudStack was designed to work with a wide variety of storage systems and is also capable of using local disks on the virtualizer hosts, as long as the selected virtualizer supports this. The support for storage types for virtual disks depends on the virtualizer used.
Storage type | XenServer | vSphere | KVM
NFS | Supported | Supported | Supported
iSCSI | Supported | Supported via VMFS | Supported via cluster file system
Fiber Channel | Supported via storage repositories | Supported | Supported via cluster file system
Local disk | Supported | Supported | Supported
ACS offers support for the following protocols:
- NFS;
- Shared Mount Point;
- RBD;
- CLVM;
- Gluster;
- Linstor; and
- Custom.
CloudStack allows administrators to manage primary storages based on the needs of the cloud environment. They can be added, enabled, disabled or set in maintenance mode, where each of these states has different effects on storage management.
3.13.1.1. Adding a primary storage
Before creating primary storages, it is necessary to define the storage type and the protocol used [11]. After this, access to the storage from the hosts must be guaranteed, validating read and write from each of them. To access the primary storage addition menu:

[11] The table in section 3.13.1 shows the storage types supported by each virtualizer.
Figure 74: Accessing the primary storage addition menu
Figure 75: Details for adding a primary storage
Addition with the NFS protocol:
To operate with NFS, it is only necessary to inform the NFS server address and the path exported by the server. With this protocol, it is not necessary to mount the primary storage on the host manually, since ACS will perform this procedure.
Figure 76: Adding a primary storage with NFS
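The same addition can be done via the createStoragePool API, shown here in CloudMonkey (a sketch; the ids, pool name, server address and export path are placeholders):

```shell
create storagepool zoneid=<zone_id> podid=<pod_id> clusterid=<cluster_id> \
    name=nfs-primary url=nfs://192.168.0.10/export/primary scope=cluster
```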
Addition with the Shared Mount Point protocol:
A shared mount point is a path in the local file system of each host in a given cluster. The path must be the same for all hosts in the cluster, such as /mnt/primary-storage. ACS will not mount the primary storage like when using NFS; therefore, the operator must ensure that the storage is properly made available.
Figure 77: Adding a primary storage with Shared Mount Point
3.13.1.2. Disabling a primary storage
When disabling a primary storage, running VMs with volumes in this storage pool will not be affected; however, new volumes and VMs will not be allocated on it. The migration of volumes to a disabled primary storage is only possible via API, because through the UI the primary storage will be shown as Not suitable. Volumes present in a disabled primary storage can still be migrated out of it through either the API or the UI.
Figure 78: Disabling a primary storage
3.13.1.3. Maintenance mode for primary storage
When setting a primary storage in maintenance mode, all VMs with volumes on it will be stopped and the volumes will be migrated to other primary storages in the Up state. Furthermore, it will not be possible to allocate new VMs and volumes on this primary storage. When starting a stopped VM that has its volume in a primary storage under maintenance, Apache CloudStack will automatically migrate the volume to an appropriate primary storage. It is important to note that the CloudStack Agent will not unmount the primary storage under maintenance.
Figure 79: Enabling the maintenance mode
3.13.1.4. Behaviour after restarting hosts
If the CloudStack Agent is restarted while the primary storage is either disabled or in maintenance mode, it will not be automatically mounted. If its state is changed after the CloudStack Agent restarts, the primary storage will behave in one of the following ways, depending on the initial state:
Maintenance mode: if the primary storage leaves maintenance mode, it will be mounted again.
Disabled: if the primary storage is enabled, it will be necessary to restart the CloudStack Agent for it to be mounted again. Alternatively, the setting mount.disabled.storage.pool can be enabled, which makes disabled storage pools be mounted automatically in case of host reboot.
3.13.1.5. Local storage usage
In the global settings there is a parameter called system.vm.use.local.storage,
which indicates whether system VMs (CPVM, SSVM, VR) may use local or shared
disks. When the value of this parameter is set to true, the system VMs' data will
be stored on an available local disk; if there are no enabled local disks, the
Management Server will show an insufficient capacity error message during
system VM initialization.
Figure 80: Enabling the usage of local storage for system VMs
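As an alternative to the UI, this setting can be changed via CloudMonkey with the updateConfiguration API (a sketch; as described further below, the Management Server must be restarted afterwards):

```
update configuration name=system.vm.use.local.storage value=true
```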
To utilize local storage for users' VMs, it is necessary to enable the function-
ality for the zone. This can be achieved by accessing the zone editing menu and
enabling the option Enable local storage for user VMs.
Figure 81: Accessing the zone to be edited
Figure 82: Accessing the zone editing menu
Figure 83: Enabling the usage of local storage for user’s VMs
After changing the system.vm.use.local.storage setting and the Enable local
storage for user VMs option, it is necessary to restart the Management Server
services. Section 7.3.1 exemplifies this process in more detail.
For a new instance to make use of local disks, in addition to enabling the
setting that allows this behaviour in the zone, the service offerings used for
creating the instance must also be set to utilize local disks, and there must be
local disks enabled. Otherwise, the Management Server will display an
insufficient capacity error message, since there is no available resource to
allocate the instance. Service offerings cannot be edited directly; therefore,
they must be created with the local storage option checked. The
images below show where to find this option when creating a new offering.
Figure 84: Adding a compute offering with highlight in their Storage Type
Figure 85: Adding a disk offering with highlight in the Storage Type
With the option Enable local storage for user VMs enabled, the available local
storages will be listed under the "Infrastructure > Primary storage" menu. To mi-
grate a stopped user VM's volumes to local storage, it is necessary to perform cold
migration of the volumes, selecting the available local storages, as the image
below shows. However, to migrate user VMs that are running, it is necessary
to perform hot migration of the volumes. Both processes are exemplified in
more detail in Section 3.3.
Figure 86: Cold volume migration to local storage
For system VMs and virtual routers, it is possible to perform the migration via
the migrateVirtualMachineWithVolume API, informing the VM ID, the destination
host ID, the volume ID and the destination storage ID. For example:
> migrate virtualmachinewithvolume virtualmachineid=<VM-id> hostid=<host-id>
migrateto[0].volume=<volume-id> migrateto[0].pool=<target-storage-id>
To get the IDs of the system VMs' volumes, it is necessary to use the listVolumes
API, setting the listall and listsystemvms parameters to true. For example:
> list volumes zoneid=<zone-id> listall=true listsystemvms=true
Therefore, the settings must be configured so that both system VMs and user
VMs utilize the local storage feature; otherwise, there will be insufficient
capacity errors when allocating resources for new instances.
3.13.2. Secondary storage
The secondary storage items are available to all hosts at the same hierar-
chical level of this storage, which is defined zone-wide. This storage type is used
to store:
Templates: operating system images that may be used to initialize in-
stances and can include additional configuration information, such as in-
stalled applications.
ISOs: disk images containing data or bootable media for operating sys-
tems.
Snapshots: saved copies of a VM's data (either the whole VM or a specific
volume) that may be used for data recovery or for creating new templates.
Each zone has at least one secondary storage, which is shared among all
pods in the zone. CloudStack offers functionality to automatically replicate
the secondary storage across zones in a fault-tolerant way. ACS was also
designed to operate with any scalable secondary storage system; the only
requirement is that the secondary storage system supports the NFS
protocol.
CloudStack provides plugins that allow storing objects in OpenStack Ob-
ject Storage (swift.openstack.org) and in Amazon Simple Storage Service (S3).
When using those storage plugins, it is necessary to configure the Swift or S3
storage and then set up an NFS secondary storage in each zone. The NFS storage
must be kept in each zone because most hypervisors cannot directly mount
S3 storages. This NFS storage acts, in each zone, as a staging area through
which all templates and other secondary storage data travel before being
forwarded to Swift or S3. The Swift or S3 storage acts as a cloud-wide resource,
making templates and other data available to any zone in the cloud.
The following operations are available for secondary storage management:
3.13.2.1. Adding secondary storages
When creating a new zone, the first secondary storage is created as part of the
process. If needed, the operator may create additional secondary storages.
Figure 87: Adding a new secondary storage
Figure 88: Details while adding a new secondary storage
3.13.2.2. Data migration between secondary storages
CloudStack allows migrating data from one secondary storage to another,
choosing between two migration policies:
Complete: migrates all data from one secondary storage to another;
Balance: migrates only a portion of the data, keeping the storages bal-
anced.
Figure 89: Migrating data between secondary storages
Figure 90: Details for migrating data between secondary storages
3.13.2.3. Read-only mode for secondary storage
It is possible to define a secondary storage as read-only, preventing any additional
templates, ISOs and snapshots from being stored in it.
Figure 91: Defining a secondary storage as read-only
3.13.2.4. Read-write mode for secondary storage
If read-only mode is selected, the option to reactivate read-write mode
becomes available; it must be reactivated before new data can be stored in
the secondary storage.
Figure 92: Defining a secondary storage as read-write
3.13.2.5. Secondary storage removal
Lastly, it’s also possible to delete secondary storages:
Figure 93: Deleting a secondary storage
Figure 94: Confirming secondary storage removal
3.14. Resource allocation for secondary storage
CloudStack has a functionality called Secondary Storage Selectors that
allows specifying in which secondary storage templates, ISOs, volumes and vol-
ume snapshots will be allocated. Currently, the only way to use this feature
is by sending requests directly to the API through CloudMonkey. The Secondary
Storage Selectors use, through the heuristicrule parameter, a conditional block
written in JavaScript as the activation rule. It is crucial to emphasize
that, when specifying an activation rule, the secondary storage attributes will
always be present for any kind of resource; on the other hand, the snapshot, iso,
template and volume variables will only be available for their respective types,
based on the purpose parameter of the createSecondaryStorageSelector
API.
The table below shows the possible attributes for each available resource:

Resource | Attributes
Secondary Storage | id, usedDiskSize, totalDiskSize, protocol
Snapshot | size, hypervisorType
ISO/Template | format, hypervisorType
Volume | size, format
To use this feature, the operator first needs to create the selector via the
createSecondaryStorageSelector API. The created selector will specify the
resource type (ISO, snapshot, template or volume), whose activation rule
will be validated when zone allocation is performed, and the zone where
the rule will be applied. It is important to highlight that it is only
possible to have one activation rule per type within the same zone. Some
use cases are shown below:
1. Allocating a resource type for a specific secondary storage:
function findStorageWithSpecificId(pool) {
return pool.id === '7432f961-c602-4e8e-8580-2496ffbbc45d';
}
secondaryStorages.filter(findStorageWithSpecificId)[0].id
2. Dedicating storage pools to a specific template format:
function directToDedicatedQCOW2Pool(pool) {
return pool.id === '7432f961-c602-4e8e-8580-2496ffbbc45d';
}
function directToDedicatedVHDPool(pool) {
return pool.id === '1ea0109a-299d-4e37-8460-3e9823f9f25c';
}
if (template.format === 'QCOW2') {
secondaryStorages.filter(directToDedicatedQCOW2Pool)[0].id
} else if (template.format === 'VHD') {
secondaryStorages.filter(directToDedicatedVHDPool)[0].id
}
3. Directing a volume snapshot with KVM to a specific secondary storage:
if (snapshot.hypervisorType === 'KVM') {
'7432f961-c602-4e8e-8580-2496ffbbc45d';
}
4. Directing resources to a specific domain:
if (account.domain.id == '52d83793-26de-11ec-8dcf-5254005dcdac') {
'1ea0109a-299d-4e37-8460-3e9823f9f25c'
} else if (account.domain.id == 'c1186146-5ceb-4901-94a1-dd1d24bd849d') {
'7432f961-c602-4e8e-8580-2496ffbbc45d'
}
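Beyond fixed IDs, the attributes from the table above also allow capacity-based rules. The sketch below is hypothetical code, not from the source; it assumes each pool exposes usedDiskSize and totalDiskSize as listed above, and picks the secondary storage with the most free space:

```javascript
// Hypothetical activation rule helper: given the pools visible to the rule
// (each exposing id, usedDiskSize and totalDiskSize, per the attribute table),
// return the ID of the secondary storage with the most free space.
function leastUsedStorageId(secondaryStorages) {
    return secondaryStorages
        .slice() // copy, so the injected array is not mutated
        .sort(function (a, b) {
            var freeA = a.totalDiskSize - a.usedDiskSize;
            var freeB = b.totalDiskSize - b.usedDiskSize;
            return freeB - freeA; // descending by free space
        })[0].id;
}

// Sample pools, for illustration only.
var pools = [
    { id: '7432f961-c602-4e8e-8580-2496ffbbc45d', usedDiskSize: 800, totalDiskSize: 1000 },
    { id: '1ea0109a-299d-4e37-8460-3e9823f9f25c', usedDiskSize: 100, totalDiskSize: 1000 }
];
var chosen = leastUsedStorageId(pools);
```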
The Secondary Storage Selectors feature has the following APIs: listSecondary
StorageSelectors, createSecondaryStorageSelector, updateSecondaryStorage
Selector and removeSecondaryStorageSelector.
The listSecondaryStorageSelectors API lists all available Secondary Storage
Selectors and has the following parameters:
Parameter | Description | Obligatory | Default value
zoneid | ID of the zone used in the selector search | Yes | -
purpose | Type of the object used in the search. Valid options are ISO, SNAPSHOT, TEMPLATE and VOLUME | Yes | -
showremoved | Specifies whether removed selectors shall be listed | No | false
Usage example of the listSecondaryStorageSelectors API:
list secondarystorageselectors zoneid=<zone-id> purpose=<object-type>
The createSecondaryStorageSelector API creates a Secondary Storage
Selector and has the following parameters:
Parameter | Description | Obligatory
name | Name to identify the selector | Yes
description | Selector description | Yes
zoneid | Zone where the selector will be active | Yes
purpose | Object type on which the selector will operate. Valid options are ISO, SNAPSHOT, TEMPLATE and VOLUME | Yes
heuristicrule | The rule, specified in JavaScript, that will direct the object to a specific secondary storage; the rule must always return a text containing the UUID of a valid secondary storage | Yes
Usage example of the createSecondaryStorageSelector API:
create secondarystorageselector name=<selector name> description=<description> zoneid=<zone id>
heuristicrule=<activation rule> purpose=<object type>
The updateSecondaryStorageSelector API updates a Secondary Storage
Selector and has the following parameters:
Parameter | Description | Obligatory
id | Identifier of the secondary storage selector | Yes
heuristicrule | The new rule, specified in JavaScript; the rule must always return a text containing the UUID of a valid secondary storage | Yes
Usage example of the updateSecondaryStorageSelector API:
update secondarystorageselector id=<selector id> heuristicrule=<activation rule>
The removeSecondaryStorageSelector API removes a Secondary Storage
Selector and has the following parameter:
Parameter | Description | Obligatory
id | Identifier of the selector to be removed | Yes
Usage example of the removeSecondaryStorageSelector API:
remove secondarystorageselector id=<selector id>
4. Apache CloudStack settings
This section presents concepts related to the ACS settings, as well as settings
that are frequently changed for environment customization.
4.1. Settings scopes
One of the main characteristics of Apache CloudStack is its flexibility
regarding environment settings. This flexibility is achieved, among other rea-
sons, by using settings scopes, which are groups of settings that affect Cloud-
Stack's behaviour at different levels. This grants more flexibility in the ACS con-
figuration and helps optimize the performance and efficiency of the system as
a whole.
ACS provides settings in the following scopes: Global, Zone, Cluster, Domain,
Account, ManagementServer, StoragePool and ImageStore.
The Global scope is the broadest one, embracing all the environment set-
tings. The settings defined under this scope are applied to all of the cloud subdi-
visions, allowing the user to uniformly customize the settings for the whole en-
vironment. Similarly, other scopes indicate that settings belonging to them are
applied to all resources within their categories. For example, when the value
for a setting within the Account scope is changed, this change will only affect
the current account. In turn, if the change was made in the Domain scope, the
change would affect all members within that domain.
Beyond setting scopes, there are other settings that are useful during the
ACS customization process.
enable.account.settings.for.domain: allows settings under the account
scope to also be visible at the domain level. Furthermore, if an account
setting is set by the administrator, that will be the value considered;
otherwise, the value from the domain scope will be considered. If the
value under the domain scope isn't set either, the value from the global
setting will be selected. If this setting is set to false and a setting at the
account level isn't set by the administrator, the value considered will be
that of the global setting.
enable.domain.settings.for.child.domain: if enabled, this setting allows
child domains (subdomains) to inherit the settings defined by their
parent domain. For example, if a parent domain defines a certain setting
to limit CPU usage to 50%, all of its child domains will inherit this
setting, unless they have a specific setting that overrides the one from the
parent domain. This eases domain and account configuration and makes
the process more efficient, allowing administrators to define settings
at a higher level and propagate them to lower levels. By default, this
setting is disabled.
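The lookup order described for enable.account.settings.for.domain can be sketched as follows (a simplified illustration, not ACS code; accountValue, domainValue and globalValue stand for the values stored at each scope, with undefined meaning "not set by the administrator"):

```javascript
// Simplified illustration of the setting lookup order described above.
// Not actual ACS code: the three value parameters stand for the values
// stored at the account, domain and global scopes, respectively.
function resolveSetting(enableAccountSettingsForDomain, accountValue, domainValue, globalValue) {
    if (enableAccountSettingsForDomain) {
        if (accountValue !== undefined) return accountValue; // account scope wins
        if (domainValue !== undefined) return domainValue;   // then the domain scope
        return globalValue;                                  // finally the global setting
    }
    // Feature disabled: an unset account value falls back directly to the global setting.
    return accountValue !== undefined ? accountValue : globalValue;
}
```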
4.2. Global settings that control the primary storage usage
The following global settings control CloudStack's behaviour when a stor-
age is close to running out of capacity:
Setting | Description | Default value
cluster.storage.allocated.capacity.notificationthreshold | Value between 0 and 1 indicating the storage allocation percentage that triggers alerts for low available storage space | 0.75
cluster.storage.capacity.notificationthreshold | Value between 0 and 1 indicating the storage usage percentage that triggers alerts for low available storage space | 0.75
pool.storage.allocated.capacity.disablethreshold | Value between 0 and 1 indicating the storage allocation percentage above which the pool disables itself | 0.85
pool.storage.capacity.disablethreshold | Value between 0 and 1 indicating the storage usage percentage above which the pool disables itself | 0.85
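The interplay of these thresholds can be sketched as below (an illustration only, not ACS code; the default values from the table are hard-coded for brevity):

```javascript
// Illustration of how the thresholds above interact (not ACS code).
// allocatedFraction and usedFraction are values between 0 and 1.
function storageActions(allocatedFraction, usedFraction) {
    return {
        // Either notification threshold reached: send a low-storage-space alert.
        notify: allocatedFraction >= 0.75 || usedFraction >= 0.75,
        // Either disable threshold reached: the pool disables itself.
        disable: allocatedFraction >= 0.85 || usedFraction >= 0.85
    };
}
```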
More information about primary storage management can be found in Section 3.13.1.
4.3. Settings for limiting resources
Setting | Description
max.account.public.ips | Maximum number of public IPs for an account
max.account.snapshots | Maximum number of snapshots for an account
max.account.templates | Maximum number of templates for an account
max.account.user.vms | Maximum number of VMs for an account
max.account.volumes | Maximum number of disks for an account
max.template.iso.size | Maximum size for a template or ISO (GB)
max.volume.size.gb | Maximum size for disks (GB)
max.account.cpus | Maximum number of CPUs for an account
max.account.ram | Maximum amount of RAM (MB) for an account
max.account.primary.storage | Maximum space (GB) in the primary storage that may be used by an account
max.account.secondary.storage | Maximum space (GB) in the secondary storage that may be used by an account
max.project.cpus | Maximum number of CPUs for a project
max.project.ram | Maximum amount of RAM (MB) for a project
max.project.primary.storage | Maximum space (GB) in the primary storage that may be used by a project
max.project.secondary.storage | Maximum space (GB) in the secondary storage that may be used by a project
4.4. Settings that control Kubernetes usage
This section covers only the global settings necessary for using Kuber-
netes. For information related to the usage of Kubernetes, see the Apache
CloudStack usage documentation.
4.4.1. Enabling Kubernetes integration
Firstly, it is important to pay attention to the versions used of both Cloud-
Stack and Kubernetes. New Kubernetes versions are released more frequently
than CloudStack versions; therefore, it is possible that new Kubernetes versions
won't work properly with the current CloudStack version.
The Kubernetes integration is disabled by default. To enable it, access the
global settings and change the setting cloud.kubernetes.service.enabled to true
and then restart the Management Server.
Once the integration is enabled, the new APIs will be available and new
Kubernetes tabs will show up in the UI.
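As an alternative to the UI, the setting can be changed via CloudMonkey with the updateConfiguration API (a sketch; the Management Server must still be restarted afterwards):

```
update configuration name=cloud.kubernetes.service.enabled value=true
```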
4.4.2. Kubernetes clusters creation
To create a Kubernetes cluster, the Management Server must have access
to the public IPs of the networks used for provisioning the cluster's virtual
machines. For that, it is necessary to set the global setting endpoint.url to the
domain used to access the ACS, as shown in the example below:
Figure 95: Global setting endpoint.url
For a new network to be created when none is selected during the creation
of the Kubernetes cluster, the global setting cloud.kubernetes.cluster.network.
offering must be defined, set to the desired offering to be used as default.
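Both settings can also be changed via CloudMonkey (a sketch; the URL and the offering name are placeholders):

```
update configuration name=endpoint.url value=https://cloud.example.com/client/api
update configuration name=cloud.kubernetes.cluster.network.offering value=<network-offering-name>
```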
5. UI customization
This section shows how to customize the Apache CloudStack web interface
according to different preferences.
5.1. Changing logo and other elements
Note: this is the old GUI customization model. We recommend using the
theme management system (Section 5.3) for greater customization and better
manageability.
It is possible to change/customize some aspects of the CloudStack web
interface, such as the logo, banner, page title and footers, among other elements.
This section presents the logo and banner customization process. It is
advisable to back up the folder containing the following settings,
in case it becomes necessary to revert the changes made.
To customize logos and banner, the procedure is the following:
1. Login into the Management Server via ssh:
user@machine:~$ ssh username@<management-server-ip>
# A practical example:
user@machine:~$ ssh root@192.168.103.168
2. Browse to the directory /usr/share/cloudstack-management/webapp/assets:
user@machine:~$ cd /usr/share/cloudstack-management/webapp/assets
# In case a "Permission denied" error occurs, execute the command:
user@machine:~$ sudo su
# And then browse to the assets directory again.
3. In this directory, the images used by CloudStack may be found. To change
the logo or banner, just replace the files logo.svg and banner.svg,
respectively, and update them in the config.json settings file. To replace
these images, upload the new logo and banner to the Management Server
via scp.
4. Repeat this process on each Management Server in the infrastruc-
ture.
5. If any procedure on the Management Servers results in a "Permission de-
nied" error, execute the command:
user@machine:~$ sudo su
Then repeat the process that caused the error.
Some notes:
There is a file named cloud.ico in the directory /usr/share/cloudstack-man
agement/webapp, responsible for the icon shown in the browser tab, which
can also be changed.
In this same directory, there is a file named config.json that can be used
to change the page title, as well as the footers and other details.
The CloudStack UI has a cache period. If changes aren't visible, it is rec-
ommended to clear the cache and check again whether the changes were
applied.
As the CloudStack web interface, at least until the writing of this doc-
ument, is relatively new and still under development, it may be
necessary to repeat this procedure when the current CloudStack version
is updated.
An example of CloudStack customization:
{
"apiBase": "/client/api",
"docBase": "http://docs.cloudstack.apache.org/en/latest",
"appTitle": "SC Clouds",
"footer": "Welcome to the portal with custom aspects",
"logo": "assets/logo.svg",
"banner": "assets/banner.svg",
"error": {
"403": "assets/403.png",
"404": "assets/404.png",
"500": "assets/500.png"
},
// ...
}
Figure 96: Customized banner
Figure 97: Customized footer
5.2. Changing logo when resizing the page
When the user resizes the page to some extent, or clicks to collapse the side
menu, the configured environment logo is cropped to fit the menu size.
The default ACS logo is designed so that, when cropped, it becomes a kind of
"mini logo"; the same cannot be said for other logos, as shown below:
Figure 98: Whole logo
Figure 99: Cut logo
To customize the logo cropping behaviour, the settings in the file /usr/share/
cloudstack-management/webapp/config.json must be changed on each of the
Management Servers of the environment.
The properties related to this improvement (mini logo) are shown below. Notice
that other properties in this file were omitted with [...] for better readability:
{
[...]
"logo": "assets/logo.svg",
"minilogo": "assets/mini-logo.svg",
[...]
"theme": {
"@logo-background-color": "#ffffff",
"@mini-logo-background-color": "#ffffff",
[...]
"@logo-width": "256px",
"@logo-height": "64px",
"@mini-logo-width": "80px",
"@mini-logo-height": "64px"
}
}
Figure 100: Mini logo
Note: To apply the effects the page’s cache must be refreshed.
5.3. Theme management
Themes are an alternative for UI customization in CloudStack. They have
distinct functionalities and purposes, designed, but not limited, to meet the
needs of cloud providers that wish to implement a reseller model and pro-
vide a white-label cloud system. It is possible to manage themes at different
application levels, allowing greater customization freedom.
This functionality allows managing themes at the account, domain and internet
common name levels. It is possible to set up CSS rules and attributes in JSON for-
mat that will be used to load the theme dynamically. If no theme is registered,
the GUI will use the current environment settings.
The APIs that allow managing themes are presented below:
5.3.1. Theme creation
The createGuiTheme API is accessible only to root admin accounts and
allows creating themes at different scopes. This API has the following
parameters:
Parameter | Description | Obligatory | Default value
name | Name to identify the theme | Yes | -
description | Theme description | No | null
css | CSS imported by the GUI when the access settings match those of the theme | No | null
jsonconfiguration | JSON containing the settings to be imported by the GUI when the theme access settings match. More details about the JSON settings may be found in Section 5.3.3.2 | No | null
commonnames | Set of internet common names, comma separated, that have access to the theme | No | null
domainids | Set of IDs, comma separated, of the domains that have access to the theme | No | null
accountids | Set of IDs, comma separated, of the accounts that have access to the theme | No | null
ispublic | Defines whether theme access is open to anyone, when only commonnames is informed. If domainids or accountids are informed, it is considered false | No | true

Table 1: createGuiTheme parameters
If commonnames, domainids and accountids aren't informed, the created
theme will be the default one; only one default theme can exist, and it is
automatically public. If there is no corresponding theme, the GUI will use the
current environment settings as fallback.
Although the css and jsonconfiguration fields are not mandatory, at least
one of them must be informed to create the theme.
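This constraint can be expressed as a small validation sketch (hypothetical code, not part of ACS):

```javascript
// Hypothetical validation sketch (not ACS code): a theme can only be
// created when at least one of css or jsonconfiguration is informed.
function canCreateTheme(params) {
    return Boolean(params.css || params.jsonconfiguration);
}
```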
Subsection 5.3.5 exemplifies theme creation and common interface cus-
tomizations.
5.3.2. Theme list
The listGuiThemes API is accessible to any user and searches for themes
based on the parameters and the requesting user's access. It has the following
parameters:
Parameter | Description | Obligatory | Default value
id | Theme ID | No | null
name | Theme name | No | null
commonname | Internet common name to filter by | No | null
domainid | Domain ID to filter by | No | null
accountid | Account ID to filter by | No | null
listall | If used, lists all themes | No | false
listremoved | If used, lists removed themes | No | false
ispublic | If used, lists public themes. By default, all will be listed | No | true
listonlydefaulttheme | Lists only the default theme. If set to true, all other parameters will be ignored | No | false

Table 2: listGuiThemes parameters
To allow the theme to be shown on the login page, it is possible to call this
API without authentication; however, there are limitations to this use case. An
unauthenticated call to the listGuiThemes API always retrieves the default theme
or the latest active public theme that corresponds to the internet common
name used in the request. In addition, all API parameters are ignored.
One possible way to make queries is shown in the example below:
(admin) > list guithemes listall=true
{
"count": 1,
"guiThemes": [
{
"created": "2023-09-15T16:20:18+0000",
"css": ".layout.ant-layout .header {background: purple;}",
"id": "a9a05158-2870-48b7-877a-626badde4b28",
"ispublic": true,
"jsonconfiguration": "{\"banner\":\
"https://res.cloudinary.com/sc-clouds/image/upload/v1645707938
/identidade/scclouds-logo_f56v2y.png\", \
"logo\":\"https://res.cloudinary.com/sc-clouds/image/upload/v1645707938
/identidade/scclouds-logo_f56v2y.png\", \"favicon\":\
"https://gitlab.scclouds.com.br/uploads/-/system/appearance/favicon
/1/scclouds-avatar.ico\"}",
"name": "example-theme"
}
]
}
5.3.3. Updating a theme
The updateGuiTheme API is accessible only to root admin accounts and
allows updating a theme already added. It has the following parameters:
Parameter | Description | Obligatory | Default value
id | ID of the theme to update | Yes | -
name | Name to identify the theme | No | null
description | Theme description | No | null
css | CSS imported by the GUI when the access settings match those of the theme | No | null
jsonconfiguration | JSON containing the settings to be imported by the GUI when the theme access settings match. More details about the JSON settings may be found in Section 5.3.3.2 | No | null
commonnames | Set of internet common names, comma separated, that can obtain the theme | No | null
domainids | Set of IDs, comma separated, of the domains that can obtain the theme | No | false
accountids | Set of IDs, comma separated, of the accounts that can obtain the theme | No | false
ispublic | Boolean parameter that defines whether theme access is open to anyone, when only commonnames is informed. If domainids or accountids are informed, it is considered false | No | true

Table 3: updateGuiTheme parameters
Below is an update example for the theme listed in Section 5.3.2. We
highlight that the update will overwrite the css and jsonconfiguration fields,
as shown in the API call return. Besides that, to remove a parame-
ter, it is necessary to explicitly pass it as an empty string.
(admin) > update guitheme id=a9a05158-2870-48b7-877a-626badde4b28
css='.layout.ant-layout .header {background: orange;}'
{
"guiThemes": {
"created": "2023-09-15T16:20:18+0000",
"css": ".layout.ant-layout .header {background: orange;}",
"id": "a9a05158-2870-48b7-877a-626badde4b28",
"ispublic": true,
"jsonconfiguration": "{\"banner\":\"https://res.cloudinary.com/sc-clouds/image
/upload/v1645707938/identidade/scclouds-logo_f56v2y.png\", \
"logo\":\"https://res.cloudinary.com/sc-clouds/image/upload/v1645707938
/identidade/scclouds-logo_f56v2y.png\", \"favicon\":\
"https://gitlab.scclouds.com.br/uploads/-/system/appearance/favicon/1
/scclouds-avatar.ico\"}",
"name": "example-theme"
}
}
5.3.3.1. CSS field
In the CSS field, it is possible to add customized styles to the CloudStack UI, di-
rectly applied to the HTML tags or within <style> tags, using the CSS lan-
guage. Another option is to use @import, which makes it possible to add a style
through a URL to the CSS file.
@import <url|string> <media-queries-list>
url: a URL or string that represents the location of the resource to import.
The URL may be absolute or relative.
media-queries-list: comma-separated list of media queries that condition
the application of the CSS rules defined in the given URL.
Using @import may be advantageous to ease adding and organizing themes,
as ACS does not have a mechanism to manage CSS files, which limits the
operator's possibilities. Importing from a versioned repository is simpler and
easier, despite small disadvantages such as the need to perform additional
HTTP requests, which have little to no performance impact.
5.3.3.2. JSON settings
The JSON settings follow the current ACS standard, config.json; however,
they are limited to the following attributes:
Attribute | Description
appTitle | Web page title
favicon | Web page favicon
footer | Web page footer
loginFooter | Web page login footer
logo | Local or external link to the image presented as the left bar logo
minilogo | Local or external link to the image presented as the left bar logo miniature, when collapsed
banner | Local or external link to the image presented as the login background
error.403 | Local or external link to the image presented when receiving a 403 error
error.404 | Local or external link to the image presented when receiving a 404 error
error.500 | Local or external link to the image presented when receiving a 500 error
plugins | Set of plugin objects

Table 4: jsonconfiguration attributes
The plugin object structure is as follows:
Attribute Description
name Plugin name.
path Link to the web page to add the plugin.
icon Icon for the plugin.
isExternalLink Determines if the plugin refers to an external link.
Table 5: jsonconfiguration plugin attributes
jsonconfiguration example:
{
"appTitle": "CloudStack",
"favicon": "cloud.ico",
"footer": "Licensed under the Apache License, Version 2.0.",
"loginFooter": "",
"logo": "assets/logo.svg",
"minilogo": "assets/mini-logo.svg",
"banner": "assets/banner.svg",
"error": {
"403": "assets/403.png",
"404": "assets/404.png",
"500": "assets/500.png"
},
"plugins": [
{
"name": "Apache CloudStack",
"path": "https://cloudstack.apache.org/",
"isExternalLink": "true",
"icon": "https://cloudstack.apache.org/images/favicon.ico"
}]
}
If other attributes are specified, they are ignored.
5.3.4. Removing a theme
The removeGuiTheme API is accessible only to root admin accounts and
allows removing themes. The theme ID must be informed for removal,
as shown in the example below:
(admin) > remove guitheme id=a9a05158-2870-48b7-877a-626badde4b28
{
"success": true
}
5.3.5. Common UI customization examples
This topic presents the most common UI customization examples,
as well as general tips to ease the stylization process.
5.3.5.1. Creating themes with external stylization file
Firstly, it is necessary to execute the createGuiTheme API via CloudMonkey to
create the theme. As mentioned in topic 5.3.3.1, it is advisable to use the CSS
@import at-rule to configure the stylization. Through that, it is possible
to keep the CSS properties in a separate file instead of inserting them via
the terminal.
create guitheme name="theme" css="@import url('<css-file-url>')"
5.3.5.2. Notes about style conflicts
When styling the ACS interface it’s common that conflicts may appear between
styles that are being inserted and those already under use. They occur when
two or more selectors have conflicting styles applied to a same element.
To solve this issue it’s necessary that the selector being inserted have the
highest specificity possible. As a last resort can be used the !important rule.
123
Styles declared with it will be preferred over any other style. As shown in the
following examples, using this rule turns to be a common practice.
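To make the idea concrete, the sketch below contrasts the two approaches against a hypothetical default rule (the selectors and colors here are illustrative, not actual ACS defaults):

```css
/* Hypothetical default rule shipped with the UI: */
.ant-btn { background-color: #1890ff; }

/* Option 1: win by specificity -- a more specific selector overrides it: */
body .ant-layout button.ant-btn { background-color: #3E7C59; }

/* Option 2: last resort -- !important beats any specificity: */
.ant-btn { background-color: #3E7C59 !important; }
```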
5.3.5.3. Adding fonts
In CSS, fonts can be imported. For this example, the Poppins font was used.
@import url("<font-url>");
* {
font-family: 'Poppins' , Courier !important;
}
5.3.5.4. Using CSS variables
It’s recommend adding CSS variables to abstract property values and keep con-
sistence during stylization. For that, they can be inserted in the CSS file, prefer-
ably in the :root selector. The variables used in this example are the following:
:root {
--main-green: #3E7C59;
--main-red: #ae0000;
--main-default: #327355;
--main-primary-light: #db4e2d22;
--main-primary-dark: #db4e2d;
--main-secondary-light: #E9782622;
--main-secondary-dark: #E97826;
--main-gray-light: #eee;
--main-linear-colora: #ddd;
--main-linear-colorb: #ddd;
--main-image-menu: url("<figure-url>");
--main-image-login: url("<figure-url>");
--main-image-icon: url("<figure-url>");
}
5.3.5.5. Login page
/* Login page logo */
.userLayout .user-layout-header {
min-height: 100px;
background-image: var(--main-image-login);
background-repeat: no-repeat;
background-size: 250px;
background-position: top center;
}
/* ACS logo hidden by default */
.userLayout .user-layout-header > img {
display: none;
}
/* Page background-color, excluding footer */
.ant-tabs-tab,
.user-layout {
background-color: var(--main-gray-light) !important;
}
/*
background-color applied in the HTML tag.
However, in practice, the property will be applied in the login page footer
*/
html {
background-color: var(--main-secondary-dark);
}
/* continuous border under the tabs */
.ant-tabs-top-bar .ant-tabs-nav-scroll {
border-bottom: 3px solid var(--main-primary-dark);
width: 90.5%;
margin: 0 auto;
}
/* Tabs title color ("Portal login", "Single sign-on", ...) */
.user-layout .ant-tabs-top-bar .ant-tabs-nav-scroll .ant-tabs-tab {
color: var(--main-primary-dark);
border-bottom: 0px solid var(--main-primary-dark);
}
/* active tab border-bottom */
.user-layout .ant-tabs-top-bar .ant-tabs-nav-scroll .ant-tabs-tab.ant-tabs-tab-active {
border-width: 3px;
}
/* Tab styles when :hover */
.user-layout .ant-tabs-top-bar .ant-tabs-nav-scroll .ant-tabs-tab:hover {
background-color: var(--main-secondary-dark) !important;
color: #fff;
border-width: 3px;
}
/* Necessary to remove the ACS default border */
.ant-tabs-ink-bar.ant-tabs-ink-bar-no-animated {
display: none !important;
}
/* All buttons, except for those that have the class .ant-btn-icon-only*/
button.ant-btn:not(.ant-btn-icon-only){
background-color: var(--main-primary-dark) !important;
color: #fff;
border: 0px;
}
/* :hover buttons */
button.ant-btn:not(.ant-btn-icon-only):hover {
background-color: var(--main-secondary-dark) !important;
color: #fff;
}
/* Inputs */
.ant-input-affix-wrapper,
.ant-input-affix-wrapper input {
background-color: var(--main-gray-light) !important;
}
/* Inputs placeholder */
.ant-input-search .ant-input-search.input-search input::placeholder {
color: #666;
}
Figure 101: Customized login page
5.3.5.6. Header stylization
/* Header colors */
.layout.ant-layout .header {
background: linear-gradient(var(--main-linear-colora), var(--main-linear-colorb)) !important;
box-shadow: 0px 0px 10px #00000055;
}
/* Header icon */
.layout.ant-layout .header .user-menu > span *,
.layout.ant-layout .header .anticon-menu-unfold,
.layout.ant-layout .header .anticon-menu-fold {
color: #000 !important;
}
/* :hover header icons */
.layout.ant-layout .header .anticon-menu-unfold:hover,
.layout.ant-layout .header .anticon-menu-fold:hover,
.layout.ant-layout .header .user-menu > span:hover {
background-color: var(--main-primary-dark) !important;
color: #fff !important;
}
/* User menu icons */
.layout.ant-layout .header .user-menu > span:hover * {
color: #fff !important;
}
/* Dropdown triggers when active */
.layout.ant-layout .header .user-menu > span.ant-dropdown-open {
background-color: var(--main-primary-light) !important;
border-bottom: 5px solid var(--main-primary-dark) !important;
}
Figure 102: Stylized header
5.3.5.7. Sidebar stylization
/* background sidebar */
aside {
background: linear-gradient(var(--main-linear-colora), var(--main-linear-colorb)) !important;
}
.sider.light .ant-menu-light, .sider.light .ant-menu-submenu > .ant-menu {
background-color: transparent;
}
/* Navigation links colors */
.sider.light .ant-menu-light a,
.sider.light .ant-menu-submenu > .ant-menu-submenu-title {
color: #000 !important;
}
/* border-right for the active link */
.ant-menu-vertical .ant-menu-item::after,
.ant-menu-vertical-left .ant-menu-item::after,
.ant-menu-vertical-right .ant-menu-item::after,
.ant-menu-inline .ant-menu-item::after {
border-color: var(--main-primary-dark) !important;
}
/* Active link background */
.ant-menu:not(.ant-menu-horizontal) .ant-menu-item.ant-menu-item-selected {
background-color: var(--main-primary-light) !important;
}
/* background link :hover */
.ant-menu:not(.ant-menu-horizontal) .ant-menu-item:hover,
.ant-menu:not(.ant-menu-horizontal) .ant-menu-submenu div:hover {
background-color: var(--main-primary-dark) !important;
}
/* :hover icons color*/
.ant-menu:not(.ant-menu-horizontal) span.ant-menu-title-content:hover .custom-icon path {
fill: #fff !important;
}
/* :hover links color*/
.ant-menu:not(.ant-menu-horizontal) .ant-menu-item-active:hover a,
.ant-menu:not(.ant-menu-horizontal) .ant-menu-submenu span.ant-menu-title-content:hover{
color: #fff !important;
}
/* Sidebar logo */
.ant-layout-sider.light.ant-fixed-sidemenu > div > div{
height: 100px !important;
background-image: var(--main-image-menu);
background-repeat: no-repeat;
background-size: 200px;
background-position: 25px 25px;
margin-bottom: 20px;
}
/* Closed sidebar logo*/
.ant-layout-sider.light.ant-fixed-sidemenu.ant-layout-sider-collapsed > div > div{
height: 70px !important;
background-image: var(--main-image-icon);
background-repeat: no-repeat;
background-size: 30px;
background-position: 25px 20px;
}
/* Hide the ACS default logo */
.ant-layout-sider img {
display: none;
}
Figure 103: Stylized sidebar
Figure 104: Stylized closed sidebar
5.3.5.8. Cards and dashboard graphs stylization
/* Card "Running VMs" */
.usage-dashboard .ant-row .ant-col:nth-child(1) .ant-card.usage-dashboard-chart-card {
background-color: var(--main-green) !important;
}
/* Card "Stopped VMs" */
.usage-dashboard .ant-row .ant-col:nth-child(2) .ant-card.usage-dashboard-chart-card {
background-color: var(--main-red) !important;
}
/* Cards text: "RunningVMs" and "Stopped VMs" */
.usage-dashboard .ant-row .ant-col:nth-child(1) .ant-card.usage-dashboard-chart-card h2,
.usage-dashboard .ant-row .ant-col:nth-child(1) .ant-card.usage-dashboard-chart-card h3,
.usage-dashboard .ant-row .ant-col:nth-child(2) .ant-card.usage-dashboard-chart-card h2,
.usage-dashboard .ant-row .ant-col:nth-child(2) .ant-card.usage-dashboard-chart-card h3 {
color: #fff !important;
}
/* Graphs percentages */
.ant-progress-circle.ant-progress-status-normal .ant-progress-text {
color: var(--main-default) !important;
font-weight: bold;
}
/* Graph filling color */
.ant-progress.ant-progress-status-normal path.ant-progress-circle-path {
stroke: var(--main-default) !important;
}
/* Color for graph section without filling */
.ant-progress path.ant-progress-circle-trail{
stroke: var(--main-gray-light) !important;
}
Figure 105: Stylized dashboard on root admin account
Figure 106: Stylized dashboard on user account
5.3.5.9. Listings and links stylization
/* Links color */
a {
color: var(--main-primary-dark);
}
/* :hover link color */
a:hover {
color: var(--main-secondary-dark);
}
/* Dropdown links color */
.ant-dropdown .ant-dropdown-menu-item:hover {
background-color: var(--main-primary-light) !important;
}
/* Dropdown :hover links color */
.ant-dropdown .ant-dropdown-menu-item:hover * {
color: #000;
}
/* :hover listings items color*/
.ant-table-tbody > tr:hover > td {
background: var(--main-secondary-light) !important;
}
Figure 107: Stylized listing and links
5.4. Redirection to external links
There are two properties in the config.json settings file that define redirections to external links.
The externalLinks property consists of a list of elements containing the title, link and icon attributes.
The external link redirection feature is controlled by the following rules:
if the properties are undefined, the redirection button will not be shown to the user;
the link attribute of the externalLinks property is mandatory in this context, and empty elements or those without the link attribute will not be considered;
the title and icon attributes are optional; if the icon attribute is undefined, the default icon will be applied;
when the title attribute is undefined, the link will be shown in its place.
Once the title, link and icon are defined, two possible behaviours can be observed:
1. Only one element is defined: in this context a button will be shown which, when clicked, will redirect the user to the link defined in the settings.
Figure: Only one defined element
2. More than one element is defined: a dropdown list will be shown, containing all the configured links.
Figure: More than one defined element
"externalLinksIcon": "",
"externalLinks": [
{
"title": "",
"link": "http://apache.org",
"icon": "https://www.apache.org/icons/apache.png"
},
{
"title": "Will not be shown",
"link": "",
"icon": ""
},
{
"title": "CloudStack",
"link": "http://cloudstack.org",
"icon": "myIcons/cloudstack.png"
}
]
The externalLinksIcon property, also optional, defines an icon used to compose the button shown when more than one external
link is informed. In the example above this property was omitted; therefore, the default
icon was displayed. It is also possible to use local images in the icon attribute, as well as external links.
Figure 108: Link with an icon attribute
More information about UI customization can be accessed on GitHub.
6. Resources consumption accounting
The module for accounting computational resource consumption is subdivided into two mechanisms: usage collection (Usage Server) and usage accounting (Quota), with the second acting complementarily to the first.
The Usage Server mechanism periodically performs the identification and collection of usage data from the environment resources.
The Quota mechanism is a plugin that allows the management of a tariff model over computational resource consumption, guided by a one-to-many relation, making it possible to define multiple tariffs for the same kind of resource. Each tariff uses the resource type, consumption volume and user characteristics to evaluate and calculate cost estimates.
6.1. Usage Server
The Usage Server (cloudstack-usage service) is an optional CloudStack component, responsible for generating records of the resources used in the infrastructure; those records are saved in a separate database, called cloud_usage.
It is used to monitor users' resource consumption, allowing the implementation of reporting or billing services. It works by collecting data from events emitted by CloudStack and using this data to create resource usage reports.
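Once the Usage Server is running, the generated records can be inspected through the listUsageRecords API; for example, via CloudMonkey (the dates below are illustrative):

```
(admin) > list usagerecords startdate=2025-01-01 enddate=2025-01-31
```

The optional type parameter filters the records by one of the usage types presented further in this chapter.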
6.1.1. Usage Server setup
The commands to enable and start the Usage Server service are:
user@scclouds:~$ systemctl enable cloudstack-usage.service
user@scclouds:~$ systemctl start cloudstack-usage.service
After being enabled, it is necessary to set it up in CloudStack, by accessing the CloudStack settings:
Figure 109: Accessing the settings
The main settings are:
Setting Description Default Value
enable.usage.server Enables the service. false
publish.usage.events Enables publishing usage events.
usage.timezone Timezone used by the Usage Server. GMT
usage.sanity.check.interval Interval, in days, between error checks in the Usage Server.
usage.snapshot.virtualsize.select Analyzes the virtual size (true) or physical size (false) of snapshots. false
usage.stats.job.aggregation.range Interval, in minutes, in which the data will be aggregated. 1440
usage.stats.job.exec.time Time scheduled to start the data aggregation job. 00:15
Figure 110: Editing the settings
As soon as the desired settings are saved and applied, it’s necessary to
restart the Management Server and Usage Server services using the commands:
user@scclouds:~$ systemctl restart cloudstack-management.service
user@scclouds:~$ systemctl restart cloudstack-usage.service
6.1.2. Usage type
The Usage Server monitors the consumption of various resources; to differentiate records, each report is sent along with the usage type parameter, indicating the type of resource accounted during the aggregation period.
It is possible to list all usage types via the API, as follows:
(admin) > list usagetypes
Type Name Description
1 RUNNING_VM Verifies the execution time of a VM during the usage record period. If the VM is updated during the period, you will receive separate records for the updated VM.
2 ALLOCATED_VM Verifies the total time interval between creating a VM and its destruction.
3 IP_ADDRESS Shows the public IP address used by the account.
4 NETWORK_BYTES_SENT Verifies the number of bytes sent by the VMs of a certain account. The individual traffic sent by each VM is not tracked.
5 NETWORK_BYTES_RECEIVED Verifies the number of bytes received by the VMs of a certain account. The individual traffic received by each VM is not tracked.
6 VOLUME Verifies the total time interval between creating a disk volume and its destruction.
7 TEMPLATE Verifies the total time interval between creating a template (from a snapshot or upload) and its destruction. The template size is also returned.
8 ISO Verifies the total time interval between uploading an ISO and its destruction. The ISO size is also returned.
9 SNAPSHOT Verifies the total time interval between creating a snapshot and its destruction.
10 SECURITY_GROUP Verifies security group usage.
11 LOAD_BALANCER_POLICY Verifies the total time interval between creating a load balancer rule and its removal. Whether any VM used this rule is not tracked.
12 PORT_FORWARDING_RULE Verifies the total time interval between creating a port forwarding rule and its removal.
13 NETWORK_OFFERING Verifies the total time interval between assigning a network offering to a VM and its removal.
14 VPN_USERS Verifies the time interval between creating a VPN user and its removal.
21 VM_DISK_IO_READ Shows the amount of disk read operations of the VM.
22 VM_DISK_IO_WRITE Shows the amount of disk write operations of the VM.
23 VM_DISK_BYTES_READ Shows the amount of bytes read from the VM disk.
24 VM_DISK_BYTES_WRITE Shows the amount of bytes written to the VM disk.
25 VM_SNAPSHOT Shows the amount of storage used by VM snapshots.
26 VOLUME_SECONDARY Shows the amount of secondary storage used by VM volumes.
27 VM_SNAPSHOT_ON_PRIMARY Shows the amount of primary storage used by VM snapshots.
28 BACKUP Shows the amount of storage used by VM backups.
29 VPC Verifies the total time interval between creating a VPC and its destruction.
30 NETWORK Verifies the total time interval between creating a network and its destruction.
31 BACKUP_OBJECT Shows the storage used by backup objects.
Table 6: Usage types
A list of possible resource limitation settings may be found in section 4.3.
6.2. Quota
The Quota plugin is a service that extends the Usage Server functionalities, making it possible to assign monetary values to computational resource consumption in the control reports.
6.2.1. Quota setup
The main settings for this service are:
Name Description Default Value
quota.currency.symbol Currency symbol used to measure the resource usage. $
quota.enable.service Enables the Quota plugin. false
quota.statement.period Interval in which the Quota details are sent via e-mail, with possible values being: bimonthly (0), monthly (1), quarterly (2), half-yearly (3) and annual (4). 1
quota.usage.smtp.connection.timeout Connection timeout with the SMTP server. 60
quota.usage.smtp.host Host which holds the SMTP server.
quota.usage.smtp.password SMTP server password.
quota.usage.smtp.port SMTP server port.
quota.usage.smtp.sender Issuing e-mail.
quota.usage.smtp.useAuth Use authentication with the SMTP server.
quota.usage.smtp.user SMTP server user.
quota.enable.enforcement Makes resource manipulation unavailable for the account when it reaches the Quota limit. false
After changing these settings, it’s necessary to restart the Management Server
and the Usage Server to apply them:
user@scclouds:~$ systemctl restart cloudstack-management.service
user@scclouds:~$ systemctl restart cloudstack-usage.service
After this restart, the Quota menu will be available in the UI:
Figure 111: Quota plugin
From it, it will be possible to manage tariffs, credits and Quota e-mail templates, and to visualize reports.
6.2.2. Tariffs management
In the submenu Tariff it’s possible to visualize and manage the Quota tariffs.
When accessing the menu, a list of active system tariffs will be shown:
Figure 112: List of active tariffs
It’s also possible to list tariffs already removed, just by selecting the Remov
ed option in the filter, or All to list everything:
Figure 113: Listing filters
The operator may create new tariffs or edit/remove existing ones.
6.2.2.1. Creating tariffs
To create a new tariff, use the Create Quota tariff button, which will open the following form:
Figure 114: Tariff creation form
The operator must inform the Name, Usage type and Tariff value fields. The other fields are optional.
In the Processing period field, when choosing the Monthly option, a new field named Execute on will appear, where it is necessary to set a day of the month between 1 and 28, indicating the date on which the tariff will be processed monthly.
It’s possible to set rules to defined when a tariff must be applied. The docu-
mentation about activation rules may be found in the section Activation rules.
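Tariffs can also be created through the API instead of the form; below is a sketch via CloudMonkey, assuming the quotaTariffCreate command of the Quota plugin (the name and value are illustrative, and usagetype=6 corresponds to VOLUME; parameter availability may vary with the ACS version):

```
(admin) > quota tariffcreate name=ssd-volume usagetype=6 value=0.5
```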
6.2.2.2. Tariff details
When selecting a tariff in the listing, it is possible to visualize its details and the actions that may be executed:
Figure 115: Tariff details
6.2.2.3. Editing tariffs
After creating a tariff, the operator may change some of its information:
Figure 116: Possible actions for active tariffs
Figure 117: Tariff editing form
Changes made to a tariff only take effect in processing cycles that occur after the change.
6.2.2.4. Removing tariffs
When needed, the operator can remove tariffs so that they are no longer considered in the Quota processing:
Figure 118: Removing a tariff
6.2.3. Activation rules
Activation rules are logical expressions used to define which tariffs are applied based on which resources are being used, possibly including specific tariffs for specific clients.
Through activation rules it is possible to use resource tags as a way to identify different resource types within the same category and then apply unique tariffs to each one (for example, applying a greater tariff if the storage is of SSD type).
Due to its accessibility for both understanding and writing, compared to other programming languages, JavaScript was chosen for creating the activation rules. Therefore, the expressions must be written specifically in JavaScript ECMAScript 5.1 code and need to follow, besides the language syntax, the following rules:
The expression processing engine is instantiated only once per processing cycle; therefore, reserved words such as const, var or let must not be used to declare variables when creating activation rules, as this would cause an "Identifier has already been declared" error and make it impossible to process such rules. Instead of const a = 1;, for example, a = 1; should be used.
ACS expects these expressions to return a boolean value (true/false) or a numeric value. It deduces the result type by applying the following rules:
if the result is a number, such as 1, 2.5 and so on, the result will be used as the tariff value, instead of the value defined in the Tariff value field;
if the result isn't a number, ACS will try to convert it to a boolean. If the result is true, the value set in the Tariff value field will be used in the calculation. Otherwise, if the result is false or if the expression doesn't result in a valid boolean, the tariff won't be applied in the calculation;
if the tariff doesn't have an expression to be evaluated or if the expression is empty, the tariff will always be applied.
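These rules can be sketched in plain JavaScript; the function and parameter names below are hypothetical (this is not ACS source code), with null standing for "tariff not applied":

```javascript
// Hypothetical sketch of how ACS interprets an activation-rule result.
// `configuredValue` stands for the value set in the "Tariff value" field.
function resolveTariff(result, configuredValue) {
    if (typeof result === 'number') {
        return result;          // a numeric result replaces the configured value
    }
    if (result === true) {
        return configuredValue; // boolean true applies the configured value
    }
    return null;                // false or no valid boolean: tariff not applied
}
```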
Some variables will be pre-created (referred to as presets throughout the text) in the expression context to provide greater flexibility to operators. Each resource type will have a series of presets corresponding to its characteristics:
6.2.3.1. Default presets for all resource types
Variable Description
account.id uuid of the account that owns the resource.
account.name name of the account that owns the resource.
account.role.id uuid of the role of the account that owns the resource (if it exists).
account.role.name name of the role of the account that owns the resource (if it exists).
account.role.type type of the role of the account that owns the resource (if it exists).
domain.id uuid of the domain of the account that owns the resource.
domain.name name of the domain of the account that owns the resource (if it exists).
domain.path path of the domain of the account that owns the resource.
project.id uuid of the project that owns the resource (if it exists).
project.name name of the project that owns the resource (if it exists).
resourceType type of the resource.
value.accountResources List containing the rest of the account's resources of the same type that are valid in the period under processing.
zone.id uuid of the zone of the account that owns the resource.
zone.name name of the zone of the account that owns the resource.
processedData.id uuid of the resource.
processedData.name name of the resource.
processedData.startDate Start date of the processed data.
processedData.endDate End date of the processed data.
processedData.usageValue Usage of the resource during the period.
processedData.aggregatedTariffs Aggregation of all tariffs applied by Quota over the resource during the period.
processedData.tariffs.value Tariff value.
processedData.tariffs.id uuid of the tariff.
processedData.tariffs.name name of the tariff.
lastTariffs List of objects containing id and value attributes of previous tariffs.
6.2.3.2. Presets for the RUNNING_VM type
Variable Description
value.host.id uuid of the host on which the VM is running.
value.host.name name of the host on which the VM is running.
value.host.tags List of tags of the host on which the VM is running. Example: ["a", "b"].
value.host.isTagARule Boolean indicating if the tag used is a rule.
value.id uuid of the VM.
value.name name of the VM.
value.osName name of the OS of the VM.
value.computeOffering.customized Boolean indicating if the compute offering of the VM is customizable.
value.computeOffering.id uuid of the compute offering of the VM.
value.computeOffering.name name of the compute offering of the VM.
value.computingResources.cpuNumber Current number of vCPUs of the VM.
value.computingResources.cpuSpeed Current CPU speed of the VM (in MHz).
value.computingResources.memory Current amount of memory of the VM (in MiB).
value.tags VM tags, in the format key:value. Example: {"a":"b", "c":"d"}.
value.template.id uuid of the VM template.
value.template.name name of the VM template.
value.hypervisorType type of hypervisor of the VM.
6.2.3.3. Presets for the ALLOCATED_VM type
Variable Description
value.id uuid of the VM.
value.name name of the VM.
value.osName name of the OS of the VM.
value.computeOffering.customized Boolean indicating if the compute offering of the VM is customizable.
value.computeOffering.id uuid of the compute offering of the VM.
value.computeOffering.name name of the compute offering of the VM.
value.tags VM tags, in the format key:value. Example: {"a":"b", "c":"d"}.
value.template.id uuid of the VM template.
value.template.name name of the VM template.
value.hypervisorType type of hypervisor of the VM.
6.2.3.4. Presets for the VOLUME type
Variable Description
value.diskOffering.id uuid of the disk offering of the volume.
value.diskOffering.name name of the disk offering of the volume.
value.id uuid of the volume.
value.name name of the volume.
value.provisioningType Resource provisioning type. Values for this setting may be: thin, sparse or fat.
value.storage.id uuid of the storage where the volume is located.
value.storage.isTagARule Boolean indicating if the tag used is a rule.
value.storage.name name of the storage where the volume is located.
value.storage.scope scope of the storage where the volume is located. Values for this setting may be: ZONE or CLUSTER.
value.storage.tags List of tags of the storage where the volume is located. Example: ["a", "b"].
value.tags Volume tags, in the format key:value. Example: {"a":"b", "c":"d"}.
value.size size of the volume (in MiB).
value.volumeFormat Volume format. Values for this setting may be: RAW, VHD, VHDX, OVA and QCOW2.
6.2.3.5. Presets for the TEMPLATE and ISO types
Variable Description
value.id uuid of the template/ISO.
value.name name of the template/ISO.
value.osName name of the template's/ISO's OS.
value.tags Template/ISO tags, in the format key:value. Example: {"a":"b", "c":"d"}.
value.size size of the template/ISO (in MiB).
6.2.3.6. Presets for the SNAPSHOT type
Variable Description
value.id uuid of the snapshot.
value.name name of the snapshot.
value.size size of the snapshot (in MiB).
value.snapshotType type of snapshot. Values for this setting may be: MANUAL, HOURLY, DAILY, WEEKLY or MONTHLY.
value.storage.id uuid of the storage where the snapshot is located.
value.storage.isTagARule Boolean indicating if the tag used is a rule.
value.storage.name name of the storage where the snapshot is located.
value.storage.scope scope of the storage where the snapshot is located. Values for this setting may be: ZONE or CLUSTER.
value.storage.tags List of tags of the storage where the snapshot is located. Example: ["a", "b"].
value.tags Snapshot tags, in the format key:value. Example: {"a":"b", "c":"d"}.
value.hypervisorType hypervisor in which the resource was deployed. Values for this setting may be: XenServer, KVM, VMware, Hyper-V, BareMetal, Ovm, Ovm3 and LXC.
Notes:
If the global setting snapshot.backup.to.secondary is set to false, the values of the presets value.storage.id and value.storage.name will be from the primary storage. Otherwise, they will be from the secondary storage.
Hosts or storages using the flexible tags feature will have the isTagARule variable set to true and will have their tag variable empty.
If the global setting snapshot.backup.to.secondary is set to false, the values of the presets value.storage.scope and value.storage.tags will be from the primary storage. Otherwise, they will not exist.
6.2.3.7. Presets for the NETWORK_OFFERING type
Variable Description
value.id uuid of the network offering.
value.name name of the network offering.
value.tag tag of the network offering.
6.2.3.8. Presets for the VM_SNAPSHOT type
Variable Description
value.id uuid of the VM snapshot.
value.name name of the VM snapshot.
value.tags VM snapshot tags, in the format key:value. Example: {"a":"b", "c":"d"}.
value.vmSnapshotType type of the VM snapshot. Values for this setting may be: Disk or DiskAndMemory.
value.hypervisorType Hypervisor in which the resource was deployed. Values for this setting may be: XenServer, KVM, VMware, Hyper-V, BareMetal, Ovm, Ovm3 and LXC.
6.2.3.9. Presets for the BACKUP type
Variable Description
value.size size of the backup.
value.virtualSize virtual size of the VM.
value.backupOffering.id uuid of the backup offering.
value.backupOffering.name name of the backup offering.
value.backupOffering.externalId external id of the backup offering.
Notes:
The measurement unit of the presets value.size and value.virtualSize varies for each backup provider. For example, values informed by Veeam are in bytes.
6.2.3.10. Presets for the NETWORK USAGE type
Variable Description
value.id uuid of the network.
value.name name of the network.
value.state state of the network. Values for this setting may be: Allocated, Configured, Implementing, Implemented, Shutdown and Destroyed.
6.2.3.11. Presets for the BACKUP OBJECT type
Variable Description
value.id uuid of the backup object.
value.name name of the backup object.
value.size size of the resource, in MiB.
value.virtualSize virtual size of the backup.
value.backupOffering.id uuid of the backup offering.
value.backupOffering.name name of the backup offering.
value.backupOffering.externalId external id of the backup offering.
value.virtualMachine.id uuid of the VM.
value.virtualMachine.name name of the VM.
6.2.3.12. Verifying presets via API
An API call can be made to verify all the presets for each resource, but only admin users have access to this command. Just choose a usageType to check its variable names and descriptions, as shown below:
(admin) > quota listpresetvariables usagetype=<usageType_id>
6.2.3.13. Presets for the other resources
The specific presets for each resource type were defined based on use cases; therefore, not all resource properties are present, and not all resources have specific presets beyond the default ones. Other presets may be added as use cases appear.
6.2.3.14. Expressions examples
As described at the beginning of this section, activation rules are logical expressions, written in JavaScript, created to attend to specific use cases.
It is advisable that the environment administrator create their own expressions based on their needs, paying attention to the rules described at the beginning of this section and also to the language syntax. The following examples are just some demonstrations of what is possible to achieve with activation rules.
1. Apply a tariff to only one account (available to all resource types):
if (account.id == 'b29e84da-ed2e-47dc-9785-49231de8ff07') {
true
} else {
false
}
Or simply:
account.id == 'b29e84da-ed2e-47dc-9785-49231de8ff07'
2. Apply a tariff if the account currently has more than 20 resources of the same type (available for all resource types):
value.accountResources.filter(function (resource) {
    return resource.domainId == 'b5ea6ffb-fa80-455e-8b38-c9b7e3900cfd';
}).length > 20
3. Return the tariff value based on the amount of resources that the account currently owns (available for all resource types). Note that if no value is given in the else branch, the expression may result in undefined and the tariff will not be applied:
resourcesLength = value.accountResources.filter(function (resource) {
    return resource.domainId == 'b5ea6ffb-fa80-455e-8b38-c9b7e3900cfd';
}).length
if (resourcesLength > 40) {
20
} else if (resourcesLength > 10) {
25
} else {
30
}
4. Apply the tariff for a certain OS (available for the RUNNING_VM and ALLOCATED_VM types):
['Windows 10 (32-bit)',
'Windows 10 (64-bit)',
'Windows 2000 Advanced Server'].indexOf(value.osName) !== -1
5. Storage tags validation (available for VOLUME and SNAPSHOT):
value.storage.tags.indexOf('SSD') !== -1
&& value.storage.tags.indexOf('NVME') !== -1
6. Host tags validation (available for RUNNING_VM):
value.host.tags.indexOf('CPU platinum') !== -1
7. Public IPs validation. Public IPs are connected to the VPC or isolated
networks (not directly to the user VM). Every first public IP in a VPC or
isolated network is a source NAT; if the public IP is allocated by the user,
the preset resourceType will be null. Therefore, if the first public IP is
free of charge for the user, it’s possible to prevent charging source NAT
IPs (available for IP ADDRESS):
resourceType !== 'SourceNat'
8. Return the tariff value if the storage in use is of HDD type (available for
VOLUME and SNAPSHOT):
useHdd = false
if (value.storage) {
for (i = 0; i < value.storage.tags.length; i++) {
if (value.storage.tags[i].indexOf('hdd') !== -1) {
useHdd = true
break
}
}
}
if (useHdd) {
0.3
} else {
0
}
9. For a more complex example, the following expression represents the li-
censing cost for Windows OS and has some peculiarities (available for
ALLOCATED_VM):
The amount charged will be based on the number of vCPUs assigned
to the VM;
A minimum of 4 vCPUs is charged;
Beyond 4 vCPUs, the charge grows in pairs of vCPUs (2). In other
words, if the number of vCPUs is odd, the charge will be rounded up
to an even value. For example, the charge for 5 vCPUs is equal to that
for 6 vCPUs, for 7 vCPUs it is equal to that for 8, and so on;
The charge will be monthly.
TOTAL_CORES_PER_PACKAGES = 2;
MINIMUM_NUMBER_OF_VCPUS_CHARGED = 4;
OPERATING_SYSTEM_NAME = "windows";
windows_operating_system_monthly_price = 36;
calculate_number_of_license_packages = function (vcpus) {
    // Charge at least the minimum number of vCPUs
    vcpus = Math.max(vcpus, MINIMUM_NUMBER_OF_VCPUS_CHARGED);
    // Odd vCPU counts are rounded up to the next pair
    return (vcpus + vcpus % TOTAL_CORES_PER_PACKAGES) /
        TOTAL_CORES_PER_PACKAGES;
};
if (value.osName.toLocaleLowerCase()
    .indexOf(OPERATING_SYSTEM_NAME) >= 0) {
    calculate_number_of_license_packages(
        value.computingResources.cpuNumber) *
        windows_operating_system_monthly_price
} else {
    0
}
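The charging rules listed above can be checked outside ACS. The sketch below reimplements the package calculation in plain shell, as a standalone illustration of the minimum and pair-rounding rules (it is not part of the activation rule itself):

```shell
# Number of license packages charged for a given vCPU count:
# a minimum of 4 vCPUs is charged, and odd counts round up to the next pair.
packages() {
    v=$1
    [ "$v" -lt 4 ] && v=4           # minimum of 4 vCPUs charged
    echo $(( (v + v % 2) / 2 ))     # one package per pair of vCPUs
}
packages 3   # minimum applies: charged as 4 vCPUs -> 2 packages
packages 5   # rounded up to 6 vCPUs -> 3 packages
packages 8   # 4 packages
```

Multiplying the package count by the monthly price per package yields the tariff value returned by the expression.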
6.2.4. Credits management
In the Summary sub-menu it’s possible to check the Quota reports and man-
age the accounts’ credits.
Figure 119: List of active accounts
6.2.4.1. Adding/removing credits
To add/remove credits from an account, use the Add credits button, which will
open the following form:
Figure 120: Form for adding/removing credits
Notes:
To remove credits from an account, use the - operator before the tariff
value, in the Value field.
The Min Balance field indicates the minimum limit for the account bal-
ance.
Checking the Enforce Quota field will cause accounts that reach their limits
to have their balances blocked.
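Via CloudMonkey, the same operation can be performed through the Quota plugin’s quotaCredits API. The sketch below uses illustrative values (account, domain id, and amounts are hypothetical); a negative value removes credits, mirroring the - operator in the form’s Value field:

```
(admin) > quota credits account=admin domainid=52d83793-26de-11ec-8dcf-5254005dcdac value=100 min_balance=-10 quota_enforce=true
```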
6.2.5. Active accounts
The Summary sub-menu shows, by default, the current state of the active
accounts (last balance + credits):
Figure 121: List of active accounts
It’s also possible to list accounts that have already been removed by selecting
the filter option Removed accounts, or All to list everything:
Figure 122: Listing filters
Via CloudMonkey, this query may be performed in the following way:
(admin) > quota summary account=admin domainid=52d83793-26de-11ec-8dcf-5254005dcdac listall=true
{
"count": 1,
"summary": [
{
"account": "admin",
"accountid": "af16aaed-26de-11ec-8dcf-5254005dcdac",
"accountremoved": false,
"balance": 124.49,
"currency": "$",
"domain": "/",
"domainid": "52d83793-26de-11ec-8dcf-5254005dcdac",
"domainremoved": false,
"enddate": "2023-10-06T12:00:00+0000",
"quota": 8.33871072,
"quotaenabled": true,
"startdate": "2023-10-01T12:00:00+0000",
"state": "ENABLED"
}
]
}
6.2.6. Managing e-mail templates from Quota
It’s possible to define notification templates for the Quota mechanism, which
will be sent to users based on four pre-defined situations. For each user it’s pos-
sible to define when the notifications will be sent.
In the Email Template sub-menu it’s possible to visualize and manage the
e-mail templates that the Quota plugin will send:
Figure 123: List of e-mail templates from Quota
By default, the Quota plugin will send e-mails to an account in the following
scenarios:
Low credits;
No credits;
Credits added;
Balance in use.
For each scenario there’s an e-mail template that will be sent. To edit one of
these templates, just select it in the listing:
Figure 124: Editing the e-mail template
6.2.6.1. Notes about using the Quota plugin
1. To add an account to the Quota plugin processing, it’s necessary to add
credits to it at least once. Furthermore, the account-level setting
quota.account.enabled must be set to true. The Quota state for a specific
account may be checked through the Summary sub-menu, in the Quota
state column.
Figure 125: Quota state for admin accounts and custom-account
2. For accounts with a credit balance lower than zero to become blocked, the
global setting quota.enable.enforcement must be set to true and the op-
tion Enforce Quota must be used when adding credits to the account.
3. A blocked account still has access to the resources that are already allo-
cated. It can deallocate them, but can neither allocate new resources nor
reallocate old ones. In other words, if users have their account blocked,
they can destroy or stop their VMs but can’t start/restart them. If any is-
sue occurs with the VM (like a shutdown caused by lack of CPU or RAM),
the user will have to acquire more credits to restart the VM.
4. The check for credits on accounts, followed by a block (if applicable), is per-
formed at an interval defined by the setting
usage.stats.job.aggregation.range, whose default value performs the check once a day.
7. Operation
This section shows some of the main basic operations that operators need
to know for troubleshooting problems in ACS.
7.1. Debugging issues and troubleshooting process
There is a range of situations in which problems may occur in ACS. Here, some
recommendations on how to find the origin of most problems will be
presented, as frequently it’s possible to solve them without the need to open a
new issue.
7.1.1. Debugging via logs
Generally, when facing an error, the first step to problem solving is to search
the log files of the components relevant to the actions that caused the error.
Reading logs tends to reveal the source of most problems. Further-
more, if the problem isn’t treatable in the operation context, sending the logs
relevant to the problem simplifies and speeds up its resolution when open-
ing an issue. Section 7.1.4 describes how to find the log files.
Eventually, errors happen because of resource shortage. For example, an
error when creating a virtual router may be caused by a shortage of available
IPs. This kind of situation is fixed by increasing the current resources or recycling
other resources that are no longer used.
7.1.2. Debugging via web interface
When an error occurs through the web interface, it’s possible to verify which
API commands are being called; based on this, the search for the
error in the logs is facilitated, as the commands being called are known. The
steps for this kind of debugging are:
1. Open the development tools menu in your browser (F12 is usually the
shortcut);
2. Select the network tab;
3. In parallel, open the Management Server’s log file, using the command
tail -f;
4. Perform the action that causes the error in the web interface. You’ll see
all calls being made in the Network tab;
5. Select the one of interest;
6. The API command will be shown in the GET section of the Network tab;
From that, it’s possible to follow the execution flow in the logs and debug the
problem.
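The steps above can be sketched in the terminal. The fragment below greps a sample log stand-in for the command name seen in the Network tab; the log lines are hypothetical, and in practice you would run tail -f on the real management-server.log and pipe it through grep --line-buffered:

```shell
# Sample stand-in for /var/log/cloudstack/management/management-server.log
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
12:38:40,334 DEBUG (logid:ef4bf833) ===START=== POST command=deployVirtualMachine&response=json
12:38:41,471 DEBUG (logid:ef4bf833) Allocating in the DB for vm
12:39:02,000 DEBUG (logid:aa11bb22) unrelated request
EOF
# Live equivalent: tail -f "$LOG" | grep --line-buffered deployVirtualMachine
grep 'deployVirtualMachine' "$LOG"
```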
7.1.3. Debugging network problems
Sometimes, the routes of some ACS components are changed, which may
prevent proper communication between some of the infrastructure compo-
nents. The command ip route is the main way to find out if there are changes
in the routes, verifying whether the IPs are coherent with the adopted topology. Al-
ternatively, it’s possible to use the command arp to verify whether the packets are
traveling through the right interface. The SC Clouds team provides a document
with the defined topology, which may be consulted to verify whether the IPs are correct.
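As a minimal sketch, the checks above boil down to two read-only commands whose output is environment-specific and should be compared against the topology document (ip neigh is shown as the modern equivalent of arp):

```shell
# Inspect the host's routing table: look for unexpected default routes
# or next hops that diverge from the adopted topology.
ip route
# Inspect the neighbour (ARP) cache: per-IP MAC address and interface.
ip neigh show
```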
7.1.4. Log files path
The log file from the Management Server can be found in the following path:
/var/log/cloudstack/management/management-server.log
If the Usage Server is being used, its log file may be found in the following
path:
/var/log/cloudstack/usage/usage.log
If the hypervisor used is KVM, there’ll exist a CloudStack Agent. Its log file
can be found in the following path:
/var/log/cloudstack/agent/agent.log
The logs for the system VMs can be found in the following path:
/var/log/cloud.log
/var/log/cloud/cloud.out
7.1.5. Log level increase on Management Servers and KVM Agents
This section describes how to configure Log4j [14] in such a way as to prevent
the loss of important ACS logs. Log4j organizes the logs in the following way:
1. FATAL: Errors that cause a system stop (crash). It’s important that this
type of message is reported!
2. ERROR: Errors that interrupt a task, however without causing a complete
system stop. Generally it’s also important to report this type of message;
3. WARN: Warnings that don’t represent errors that have happened, but indicate
events that may cause them in the future;
4. INFO: Information relevant to the performed action. Messages of this
type describe normal system events, not errors;
5. DEBUG: Information that details the performed action;
6. TRACE: Detailed step by step of the performed action, at a finer level
than DEBUG.
It’s known that there are currently some issues with categorization
and clarity in the ACS logs. For example, there are logs that should be registered
at INFO or ERROR level but are registered at DEBUG level. The SC Clouds team
has been working on improvements to these and other aspects of the
ACS logs. However, until this task is completed, it’s necessary that Log4j
is adequately configured (at DEBUG level) to ease troubleshooting processes.
It’s possible to configure Log4j to register only logs of certain types; this way
Log4j will register all messages of that type and those that are more severe.
The hierarchy adopted is:
[14] Tool used by ACS to register logs.
Setting Log levels registered
OFF No log registry.
FATAL FATAL
ERROR FATAL and ERROR
WARN FATAL, ERROR and WARN
INFO FATAL, ERROR, WARN and INFO
DEBUG FATAL, ERROR, WARN, INFO and DEBUG
TRACE FATAL, ERROR, WARN, INFO, DEBUG and TRACE
ALL FATAL, ERROR, WARN, INFO, DEBUG and TRACE
It’s possible to configure the appender [15] to accept a log level from the ones
shown above. The current default setting for the Management Servers is DEBUG;
for agents and system VMs it is INFO. Therefore, the appender will only register
logs from the specified level or with greater severity, which may be insufficient
to detect or comprehend certain problems. If the hypervisor used is KVM, it’s
recommended to change this setting from INFO to DEBUG in all agents.
For that, edit the files shown below in the respective hosts using a text editor:
In the agent:
sudo vim /etc/cloudstack/agent/log4j-cloud.xml
In the Management Server:
sudo vim /etc/cloudstack/management/log4j-cloud.xml
In the system VMs:
sudo vim /usr/local/cloud/systemvm/conf/log4j-cloud.xml
If the hypervisor used is VMware, the process to access the system VMs
and edit the log4j settings file is different. The section Accessing the System
VMs shows this process in more detail.
[15] Part of the tool responsible for delivering the logs to their destination.
Change the following setting:
...
<appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender">
...
<param name="Threshold" value="INFO"/>
...
To:
...
<appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender">
...
<param name="Threshold" value="DEBUG"/>
...
To limit the log sources, Log4j allows the creation of categories, defining
from where the logs will be collected, as well as the accepted level. It’s important to
note that the XML that defines the Log4j settings is read serially; therefore,
the order of the categories matters. If we want to limit the logs from a
package while keeping logs from part of it at a less restrictive level, we need to first define
the less restrictive level for the wanted section and then define the restriction
level for the package as a whole.
With this in mind, we recommend adding the following category in the agents’
XML:
<category name="org.apache.cloudstack">
<priority value="DEBUG"/>
</category>
This category must be placed above org.apache, for the reason previously explained.
If, instead of creating the category, the category org.apache were edited to
DEBUG level, an immense flood of unimportant log messages would be created.
In the Management Servers this category already exists, but it is poorly po-
sitioned; therefore, just move it above org.apache.
It’s also necessary to change the priority value of the category com.cloud to
DEBUG level:
<category name="com.cloud">
<priority value="DEBUG"/>
</category>
Finally, it’s necessary to change the root category setting, because it dictates
the maximum level that the logger accepts; without this change, none of the
previous changes will take effect. The default for this setting is INFO:
<root>
<level value="INFO"/>
<appender-ref ref="CONSOLE"/>
<appender-ref ref="FILE"/>
</root>
Just change the level value from INFO to DEBUG.
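The Threshold and level substitutions above can also be scripted with sed. The sketch below runs against a throwaway copy so it is safe to execute anywhere; in practice, point it at the real log4j-cloud.xml paths listed earlier, after backing them up:

```shell
# Throwaway stand-in containing the two INFO values to be changed
conf=$(mktemp)
printf '%s\n' '<param name="Threshold" value="INFO"/>' '<level value="INFO"/>' > "$conf"
# Switch both the FILE appender threshold and the root level to DEBUG
sed -i 's|value="INFO"|value="DEBUG"|g' "$conf"
cat "$conf"
```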
7.1.6. Troubleshooting process
Analyzing CloudStack logs is extremely useful and can be a key point
during investigations, as they contain information related to all the process steps
in ACS: errors, warnings, and more.
For an efficient log analysis in ACS it’s necessary to filter the entries of inter-
est, and this is achieved by identifying the log id or job id related to them.
If the analysis will be made based on the Management Server logs and there
is more than one Management Server in use, it’s necessary to search through
all of them, as command executions may be shared between them.
To better illustrate the filtering and analysis process, we’ll follow with an
example of the troubleshooting process. In this example we’ll follow along the
creation of a new VM, but the steps used may also be followed to investigate
various occurrences in CloudStack.
To begin the analysis, a VM was created via the UI and the steps below were
followed:
Figure 126: VM created through the UI
1. To identify the log id of the investigated process, it’s first necessary to
find a log entry that contains this information. This can be achieved by follow-
ing, in the browser, the HTTP requests forwarded to the back-end by the
UI when performing certain actions requested by the user. When check-
ing the details of an HTTP request it’s possible to identify the command sent. We’ll
use this command to search for log entries:
Figure 127: Identifying the command sent.
To perform this search, it’s necessary to access the location where the log
files are stored and execute the command [16]:
grep -r "<UI command>" ./
In this example shown, the command used was:
grep -r "command=deployVirtualMachine&response=json" ./
Thus, we can identify some entries containing the searched command. In
them it’s possible to visualize the desired log id. In this example the ID is:
logid:ef4bf833.
Figure 128: Identifying the logid for the desired process.
2. Now, with the log id in hand, we can filter the desired information:
grep -r "<logid>" ./
All log entries containing this log id will be returned.
./management-server.log:2022-03-15 12:38:40,334 DEBUG [c.c.a.ApiServlet]
(qtp1603198149-17:ctx-430e157a) (logid:ef4bf833) ===START=== 172.16.71.7 -- POST
command=deployVirtualMachine&response=json
[...]
./management-server.log:2022-03-15 12:38:41,471 DEBUG [c.c.v.UserVmManagerImpl]
(qtp1603198149-17:ctx-430e157a ctx-91e054dc) (logid:ef4bf833) Allocating in the DB for vm
./management-server.log:2022-03-15 12:38:41,545 INFO
[c.c.v.VirtualMachineManagerImpl] (qtp1603198149-17:ctx-430e157a ctx-91e054dc)
(logid:ef4bf833) allocating virtual machine from
template:b81aab72-2c37-466e-b53f-bdaf398322fa with hostname:i-2-119-VM and 1 networks
./management-server.log:2022-03-15 12:38:41,555 DEBUG
[c.c.v.VirtualMachineManagerImpl] (qtp1603198149-17:ctx-430e157a ctx-91e054dc)
(logid:ef4bf833) Allocating entries for VM: VM instance {id: "119", name:
"i-2-119-VM", uuid: "13fe0bce-a240-48e4-9d5b-081359fcf422", type="User"}
./management-server.log:2022-03-15 12:38:41,560 DEBUG
[c.c.v.VirtualMachineManagerImpl] (qtp1603198149-17:ctx-430e157a ctx-91e054dc)
(logid:ef4bf833) Allocating nics for VM instance {id: "119", name: "i-2-119-VM",
uuid: "13fe0bce-a240-48e4-9d5b-081359fcf422", type="User"}
./management-server.log:2022-03-15 12:38:41,568 DEBUG
[o.a.c.e.o.NetworkOrchestrator] (qtp1603198149-17:ctx-430e157a ctx-91e054dc)
(logid:ef4bf833) Allocating nic for vm VM instance {id: "119", name: "i-2-119-VM",
uuid: "13fe0bce-a240-48e4-9d5b-081359fcf422", type="User"} in network
Ntwk[211|Guest|11] with requested profile NicProfile[0-0-null-null-null]
./management-server.log:2022-03-15 12:38:41,653 DEBUG [c.c.n.NetworkModelImpl]
(qtp1603198149-17:ctx-430e157a ctx-91e054dc) (logid:ef4bf833) Service SecurityGroup
is not supported in the network id=211
./management-server.log:2022-03-15 12:38:41,660 DEBUG
[16] It’s also possible to replace "./" in the command with the log folder’s full path. That way it’s
possible to execute it without accessing the directory.
[c.c.v.VirtualMachineManagerImpl] (qtp1603198149-17:ctx-430e157a ctx-91e054dc)
(logid:ef4bf833) Allocating disks for VM instance {id: "119", name: "i-2-119-VM",
uuid: "13fe0bce-a240-48e4-9d5b-081359fcf422", type="User"}
./management-server.log:2022-03-15 12:38:41,660 INFO [o.a.c.e.o.VolumeOrchestrator]
(qtp1603198149-17:ctx-430e157a ctx-91e054dc) (logid:ef4bf833) adding disk object
ROOT-119 to i-2-119-VM
./management-server.log:2022-03-15 12:38:41,700 DEBUG
[c.c.r.ResourceLimitManagerImpl] (qtp1603198149-17:ctx-430e157a ctx-91e054dc)
(logid:ef4bf833) Updating resource Type = volume count for Account = 2 Operation =
increasing Amount = 1
./management-server.log:2022-03-15 12:38:41,729 DEBUG
[c.c.r.ResourceLimitManagerImpl] (qtp1603198149-17:ctx-430e157a ctx-91e054dc)
(logid:ef4bf833) Updating resource Type = primary_storage count for Account = 2
Operation = increasing Amount = (50.00 MB) 52428800
./management-server.log:2022-03-15 12:38:41,962 DEBUG
[c.c.v.VirtualMachineManagerImpl] (qtp1603198149-17:ctx-430e157a ctx-91e054dc)
(logid:ef4bf833) Allocation completed for VM: VM instance {id: "119", name:
"i-2-119-VM", uuid: "13fe0bce-a240-48e4-9d5b-081359fcf422", type="User"}
./management-server.log:2022-03-15 12:38:41,962 DEBUG [c.c.v.UserVmManagerImpl]
(qtp1603198149-17:ctx-430e157a ctx-91e054dc) (logid:ef4bf833) Successfully
allocated DB entry for VM instance {id: "119", name: "i-2-119-VM", uuid:
"13fe0bce-a240-48e4-9d5b-081359fcf422", type="User"}
[...]
We can see above the time at which the command deployVirtualMachine was
received (2022-03-15 12:38:40,334), and also various entries related to
resource allocation for the VM creation.
./management-server.log:2022-03-15 12:38:42,646 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(qtp1603198149-17:ctx-430e157a ctx-91e054dc) (logid:ef4bf833) submit async job-848, details:
AsyncJobVO {id:848, userId: 2, accountId: 2, instanceType: VirtualMachine, instanceId: 119, cmd
:org.apache.cloudstack.api.command.admin.vm.DeployVMCmdByAdmin, cmdInfo:
{"iptonetworklist[0].networkid":"f8ce6644-6478-4770-84a7-834bd8a717a2","boottype":
"BIOS","httpmethod":"POST","templateid":"b81aab72-2c37-466e-b53f-bdaf398322fa","ctxAccountId":
"2","uuid":"13fe0bce-a240-48e4-9d5b-081359fcf422","cmdEventType":"VM.CREATE","startvm":
"true","bootmode":"LEGACY","serviceofferingid":"ab647165-7a0a-4984-8452-7bfceb036528",
"response":"json","ctxUserId":"2","zoneid":"8b2ceb16-a2f2-40ea-8968-9e08984bdb17",
"ctxStartEventId":"1499","id":"119","ctxDetails":"{\"interface com.cloud.dc.DataCenter\":
\"8b2ceb16-a2f2-40ea-8968-9e08984bdb17\",\"interface com.cloud.template.VirtualMachineTemplate
\": \"b81aab72-2c37-466e-b53f-bdaf398322fa\",\"interface com.cloud.offering.ServiceOffering\"
:\"ab647165-7a0a-4984-8452-7bfceb036528\",
\"interface com.cloud.vm.VirtualMachine\":\"13fe0bce-a240-48e4-9d5b-081359fcf422\"}",
"affinitygroupids":""}, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0,
resultCode: 0, result: null, initMsid: 90520745551922, completeMsid: null,
lastUpdated: null, lastPolled: null, created: null, removed: null}
./management-server.log:2022-03-15 12:38:42,651 DEBUG [c.c.a.ApiServlet]
(qtp1603198149-17:ctx-430e157a ctx-91e054dc) (logid:ef4bf833) ===END===
172.16.71.7 -- POST command=deployVirtualMachine&response=json
We can also see above the time when the command finished
(2022-03-15 12:38:42,651). It’s important to highlight that the request pro-
cessing shown so far generates a job, which can be processed by any
ACS Management Server in the cloud environment. By filtering
entries that contain the id of this job it’s possible to obtain more infor-
mation related to the VM creation (the same applies to other processes
in ACS). In the second-to-last log entry displayed we can see that the job
was sent at 2022-03-15 12:38:42,646 and identify the job id that the re-
quest generated: job-848.
3. Then, search for logs containing the job id found, through the command
grep -r "<job id>" ./ (job-848 in this example):
[...]
./management-server.log:2022-03-15 12:38:42,647 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(API-Job-Executor-5:ctx-c454c9f5 job-848) (logid:2f55511b) Executing AsyncJobVO
{id:848, userId: 2, accountId: 2, instanceType: VirtualMachine, instanceId: 119, cmd:
org.apache.cloudstack.api.command.admin.vm.DeployVMCmdByAdmin, cmdInfo:
{"iptonetworklist[0].networkid":"f8ce6644-6478-4770-84a7-834bd8a717a2","boottype":
"BIOS","httpmethod":"POST","templateid":"b81aab72-2c37-466e-b53f-bdaf398322fa",
"ctxAccountId":"2","uuid":"13fe0bce-a240-48e4-9d5b-081359fcf422","cmdEventType":
"VM.CREATE","startvm":"true","bootmode":"LEGACY","serviceofferingid":
"ab647165-7a0a-4984-8452-7bfceb036528", "response":"json","ctxUserId":"2","zoneid":
"8b2ceb16-a2f2-40ea-8968-9e08984bdb17","ctxStartEventId":"1499","id":"119","ctxDetails":
"{\"interface com.cloud.dc.DataCenter\":\"8b2ceb16-a2f2-40ea-8968-9e08984bdb17\",
\"interface com.cloud.template.VirtualMachineTemplate\":\"b81aab72-2c37-466e-b53f-bdaf398322fa
\", \"interface com.cloud.offering.ServiceOffering\":\"ab647165-7a0a-4984-8452-7bfceb036528\",
\"interface com.cloud.vm.VirtualMachine\":\"13fe0bce-a240-48e4-9d5b-081359fcf422\"}",
"affinitygroupids":""}, cmdVersion: 0, status: IN_PROGRESS, processStatus: 0,
resultCode: 0, result: null, initMsid: 90520745551922, completeMsid: null,
lastUpdated: null, lastPolled: null, created: null, removed: null}
[...]
4. Finally, after identifying it, we search for the thread ID (logid:2f55511b)
found. It’s important to search for this ID because more entries will be
returned compared to searching for the job id, which makes the
analysis process more complete.
The command used is:
grep -r "<thread id>" ./
This way, several pieces of information related to the execution of the VM creation
process are obtained. Below we can see, first, the logs related to the
cluster identification and then to the identification of a host where the
VM may be allocated. The available resources of each host are verified
and compared with those defined during the VM creation. This
same process is repeated for other hosts and, if there’s compatibility, they
are added to a list of potential hosts. The last log entry shown below,
where the host is added to such a list, displays this action.
./management-server.log:2022-03-15 12:38:43,984 DEBUG [c.c.d.FirstFitPlanner]
(API-Job-Executor-5:ctx-c454c9f5 job-848 ctx-20458e85) (logid:2f55511b) Listing
clusters in order of aggregate capacity, that have (at least one host with) enough
CPU and RAM capacity under this Zone: 1
./management-server.log:2022-03-15 12:38:44,003 DEBUG [c.c.d.FirstFitPlanner]
(API-Job-Executor-5:ctx-c454c9f5 job-848 ctx-20458e85) (logid:2f55511b) Removing
from the clusterId list these clusters from avoid set: []
./management-server.log:2022-03-15 12:38:44,054 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-5:ctx-c454c9f5 job-848
ctx-20458e85) (logid:2f55511b) Checking resources in Cluster: 1 under Pod: 1
./management-server.log:2022-03-15 12:38:44,078 DEBUG
[c.c.a.m.a.i.FirstFitAllocator] (API-Job-Executor-5:ctx-c454c9f5 job-848
ctx-20458e85 FirstFitRoutingAllocator) (logid:2f55511b) Looking for hosts in dc: 1
pod:1 cluster:1
./management-server.log:2022-03-15 12:38:44,091 DEBUG
[c.c.a.m.a.i.FirstFitAllocator] (API-Job-Executor-5:ctx-c454c9f5 job-848
ctx-20458e85 FirstFitRoutingAllocator) (logid:2f55511b) FirstFitAllocator has 3
hosts to check for allocation: [Host {"id": "31", "name": "cloudstack-lab-host-3",
"uuid": "2fd584d8-9fc3-4666-98f8-17f3e43e4348", "type"="Routing"}, Host {"id":
"30", "name": "cloudstack-lab-host-2", "uuid":
"3d7d8532-d0cf-476c-a36e-1b936d780abb", "type"="Routing"}, Host {"id": "29",
"name": "cloudstack-lab-host-1", "uuid": "11662552-8221-4081-92b5-a3f2c852754a",
"type"="Routing"}]
./management-server.log:2022-03-15 12:38:44,145 DEBUG
[c.c.a.m.a.i.FirstFitAllocator] (API-Job-Executor-5:ctx-c454c9f5 job-848
ctx-20458e85 FirstFitRoutingAllocator) (logid:2f55511b) Looking for speed=500Mhz, Ram=512 MB
./management-server.log:2022-03-15 12:38:44,146 DEBUG [c.c.c.CapacityManagerImpl]
(API-Job-Executor-5:ctx-c454c9f5 job-848 ctx-20458e85 FirstFitRoutingAllocator)
(logid:2f55511b) Host {id: 31, name: cloudstack-lab-host-3, uuid:
2fd584d8-9fc3-4666-98f8-17f3e43e4348} is KVM hypervisor type, no max guest limit check needed
./management-server.log:2022-03-15 12:38:44,173 DEBUG [c.c.c.CapacityManagerImpl]
(API-Job-Executor-5:ctx-c454c9f5 job-848 ctx-20458e85 FirstFitRoutingAllocator)
(logid:2f55511b) Host: 31 has cpu capability (cpu:6, speed:2394) to support
requested CPU: 1 and requested speed: 500
./management-server.log:2022-03-15 12:38:44,189 DEBUG [c.c.c.CapacityManagerImpl]
(API-Job-Executor-5:ctx-c454c9f5 job-848 ctx-20458e85 FirstFitRoutingAllocator)
(logid:2f55511b) Hosts's actual total CPU: 14364 and CPU after applying
overprovisioning: 14364
./management-server.log:2022-03-15 12:38:44,189 DEBUG [c.c.c.CapacityManagerImpl]
(API-Job-Executor-5:ctx-c454c9f5 job-848 ctx-20458e85 FirstFitRoutingAllocator)
(logid:2f55511b) Free RAM: (14.38 GB) 15444189184 , Requested RAM: (512.00 MB) 536870912
./management-server.log:2022-03-15 12:38:44,190 DEBUG [c.c.c.CapacityManagerImpl]
(API-Job-Executor-5:ctx-c454c9f5 job-848 ctx-20458e85 FirstFitRoutingAllocator)
(logid:2f55511b) Host has enough CPU and RAM available
./management-server.log:2022-03-15 12:38:44,191 DEBUG [c.c.c.CapacityManagerImpl]
(API-Job-Executor-5:ctx-c454c9f5 job-848 ctx-20458e85 FirstFitRoutingAllocator)
(logid:2f55511b) STATS: Can alloc MEM from host: 31, used: (256.00 MB) 268435456,
reserved: (0 bytes) 0, total: (14.63 GB) 15712624640; requested mem: (512.00 MB)
536870912, alloc_from_last_host?: false , considerReservedCapacity?: true
./management-server.log:2022-03-15 12:38:44,191 DEBUG
[c.c.a.m.a.i.FirstFitAllocator] (API-Job-Executor-5:ctx-c454c9f5 job-848
ctx-20458e85 FirstFitRoutingAllocator) (logid:2f55511b) Found a suitable host,
adding to list: 31
In the next entries it’s possible to verify that a search for a compatible
storage is made, along with a verification that the potential
hosts have access to that storage.
./management-server.log:2022-03-15 12:38:45,069 DEBUG [c.c.s.StorageManagerImpl]
(API-Job-Executor-5:ctx-c454c9f5 job-848 ctx-20458e85) (logid:2f55511b) Found
storage pool storage-1-iscsi of type SharedMountPoint
./management-server.log:2022-03-15 12:38:45,069 DEBUG [c.c.s.StorageManagerImpl]
(API-Job-Executor-5:ctx-c454c9f5 job-848 ctx-20458e85) (logid:2f55511b) Total
capacity of the pool storage-1-iscsi with ID 5 is (50.00 GB) 53687091200
./management-server.log:2022-03-15 12:38:45,079 DEBUG [c.c.s.StorageManagerImpl]
(API-Job-Executor-5:ctx-c454c9f5 job-848 ctx-20458e85) (logid:2f55511b) Checking
pool: 5 for storage allocation , maxSize : (50.00 GB) 53687091200,
totalAllocatedSize : (9.20 GB) 9878175856, askingSize : (50.00 MB) 52428800,
allocated disable threshold: 0.85
./management-server.log:2022-03-15 12:38:45,090 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-5:ctx-c454c9f5 job-848
ctx-20458e85) (logid:2f55511b) Trying to find a potenial host and associated
storage pools from the suitable host/pool lists for this VM
./management-server.log:2022-03-15 12:38:45,093 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-5:ctx-c454c9f5 job-848
ctx-20458e85) (logid:2f55511b) Checking if host: 31 can access any suitable storage
pool for volume: ROOT
./management-server.log:2022-03-15 12:38:45,096 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-5:ctx-c454c9f5 job-848
ctx-20458e85) (logid:2f55511b) Host: 31 can access pool: 5
After performing all these verifications, a host and storage compatible
with the VM are identified in the log entries below and then they are uti-
lized for its creation.
./management-server.log:2022-03-15 12:38:45,100 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-5:ctx-c454c9f5 job-848
ctx-20458e85) (logid:2f55511b) Found a potential host id: 31 name:
cloudstack-lab-host-3 and associated storage pools for this VM
./management-server.log:2022-03-15 12:38:45,103 DEBUG
[c.c.d.DeploymentPlanningManagerImpl] (API-Job-Executor-5:ctx-c454c9f5 job-848
ctx-20458e85) (logid:2f55511b) Returning Deployment Destination:
Dest[Zone(Id)-Pod(Id)-Cluster(Id)-Host(Id)-Storage(Volume(Id|Type-->Pool(Id))] :
Dest[Zone(1)-Pod(1)-Cluster(1)-Host(31)-Storage(Volume(117|ROOT-->Pool(5))]
[...]
./management-server.log:2022-03-15 12:38:59,918 DEBUG
[c.c.v.VirtualMachineManagerImpl] (Work-Job-Executor-11:ctx-b5892741
job-848/job-849 ctx-b5ac54d3) (logid:2f55511b) Start completed for VM, VM instance
{id: "119", name: "i-2-119-VM", uuid: "13fe0bce-a240-48e4-9d5b-081359fcf422",
type="User"}
./management-server.log:2022-03-15 12:38:59,923 DEBUG [c.c.v.VmWorkJobHandlerProxy]
(Work-Job-Executor-11:ctx-b5892741 job-848/job-849 ctx-b5ac54d3) (logid:2f55511b)
Done executing VM work job:
com.cloud.vm.VmWorkStart{"dcId":1,"podId":1,"clusterId":1,"hostId":31,"rawParams":
{"VmPassword":"rO0ABXQADnNhdmVkX3Bhc3N3b3Jk"},"userId":2,"accountId":2,"vmId":119,
"handlerName":"VirtualMachineManagerImpl"}
[...]
Finally, we can see the moment when the process was finished by ACS
(2022-03-15 12:39:00,827).
[...]
./management-server.log:2022-03-15 12:39:00,585 DEBUG [o.a.c.f.j.i.AsyncJobManagerImpl]
(API-Job-Executor-5:ctx-c454c9f5 job-848
ctx-20458e85) (logid:2f55511b) Publish async job-848 complete on message bus
./management-server.log:2022-03-15 12:39:00,585 DEBUG
[o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-5:ctx-c454c9f5 job-848
ctx-20458e85) (logid:2f55511b) Wake up jobs related to job-848
./management-server.log:2022-03-15 12:39:00,585 DEBUG
[o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-5:ctx-c454c9f5 job-848
ctx-20458e85) (logid:2f55511b) Update db status for job-848
./management-server.log:2022-03-15 12:39:00,593 DEBUG
[o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-5:ctx-c454c9f5 job-848
ctx-20458e85) (logid:2f55511b) Wake up jobs joined with job-848 and disjoin all
subjobs created from job- 848
./management-server.log:2022-03-15 12:39:00,827 DEBUG
[o.a.c.f.j.i.AsyncJobManagerImpl] (API-Job-Executor-5:ctx-c454c9f5 job-848)
(logid:2f55511b) Done executing
org.apache.cloudstack.api.command.admin.vm.DeployVMCmdByAdmin for job-848
./management-server.log:2022-03-15 12:39:00,827 INFO [o.a.c.f.j.i.AsyncJobMonitor] (API-Job-
Executor-5:ctx-c454c9f5 job-848) (logid:2f55511b) Remove job-848 from job monitoring
Therefore, as shown above, log analysis in ACS was used to investi-
gate and follow the whole VM creation cycle. The same principles applied
here may be adapted and used to explore any event.
In CloudStack it is pretty common to make use of this kind of analysis when
investigating errors, faults, etc. The following command can be used to search
for problematic logs:
grep -i -E 'exception|unable|fail|invalid|leak|warn|error' ./
With such analysis it’s possible to obtain a huge amount of information,
making the troubleshooting process very efficient.
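As a self-contained illustration of this kind of log search, the sketch below runs an equivalent grep (with -r added so the directory is searched recursively) against a throwaway directory holding invented sample log lines; the paths and log content are stand-ins, not real ACS output:

```shell
# Self-contained sketch of the error search; the log directory and its
# contents are invented stand-ins for a real management-server.log.
LOG_DIR=$(mktemp -d)
printf '%s\n' \
  '2022-03-15 12:38:59,923 DEBUG Done executing VM work job' \
  '2022-03-15 12:40:01,101 WARN Unable to contact resource' \
  > "$LOG_DIR/management-server.log"

# -r recurses into the directory, -i ignores case, -E enables alternation:
grep -riE 'exception|unable|fail|invalid|leak|warn|error' "$LOG_DIR"
```

Only the WARN line is printed; the DEBUG line matches none of the patterns and is filtered out.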
7.2. Host failover
In ACS there's a functionality called host HA, whose goal is to ensure that when a host shows faulty behaviour, it will be turned off (via the OOBM interface) and the VMs running on it will be recreated (restarted) on another available host in the environment.
The nomenclature host HA, or host high availability, is used incorrectly in ACS. The concept of high availability dictates that when a node providing services stops working, the user will not notice the fault, because there is a replica ready to use on another node in the environment. However, that is not how the host HA of ACS works: the user VMs will be stopped and recreated on another host, causing interruptions. This behaviour is characterized as failover, not HA.
Caution must be taken when stopping KVM Agents while the host HA setting is enabled, as ACS will consider that a failure has occurred and will shut down the host (via the OOBM interface), causing unwanted behaviour. In these cases, the recommended procedure is:
1. Enable maintenance mode on the host;
2. Migrate all VMs;
3. Disable the HA process for the host in maintenance;
4. Perform the desired operation;
5. Disable maintenance mode;
6. Re-enable the HA process.
7.3. Apache CloudStack services management
In this section, some basic operations related to managing cloudstack-management, cloudstack-agent (KVM only) and cloudstack-usage are presented.
7.3.1. Managing the cloudstack-management
In some workflows, such as updates, it's necessary to stop and start the Management Servers again for changes to be applied. In others, such as settings changes, just restarting them is enough. For such procedures, follow these steps:
Start: after this the Management Server will start loading the modules. It
will only be available after the modules are loaded.
To start the Management Server service, execute the command:
systemctl start cloudstack-management.service
After started, it’s possible to verify the service status through the com-
mand:
systemctl status cloudstack-management.service
It’s possible to track the modules loading through logs:
tail -f /var/log/cloudstack/management/management-server.log
Stop: once performed, if there's only one Management Server instance, it will not be possible to interact with the orchestrator.
To stop the Management Server service, execute the command:
systemctl stop cloudstack-management.service
After the service stop it’s possible to verify its status using the com-
mand:
systemctl status cloudstack-management.service
Restart: stops and then starts the Management Server. Shares the same behaviour as "Start" regarding module loading.
To restart the Management Server service, execute the command:
systemctl restart cloudstack-management.service
After the restart it’s possible to verify the service status through the
command:
systemctl status cloudstack-management.service
It’s possible to track the modules loading through logs:
tail -f /var/log/cloudstack/management/management-server.log
This process should not be executed simultaneously on all Management Servers, because if all of them are stopped it will not be possible to use CloudStack.
7.3.2. Managing cloudstack-agent (for KVM hypervisor)
In clusters that use KVM as the hypervisor, there's a component called the CloudStack Agent. In some workflows, such as updates, it's necessary to stop and start the Agents again to apply changes to them. In other cases, such as settings changes, just restarting them is enough. If the host HA feature is turned on, it's recommended to turn it off before performing the stop or restart. For such procedures, follow these steps:
Start: after this, the CloudStack Agent will try to connect to the Manage-
ment Server.
To start the CloudStack Agent service, execute the command:
systemctl start cloudstack-agent.service
After started, it’s possible to verify the service status through the com-
mand:
systemctl status cloudstack-agent.service
It’s possible to track the CloudStack Agent initialization through logs:
tail -f /var/log/cloudstack/agent/agent.log
Stop: when performed, all virtual machines on the host will keep working normally; however, CloudStack won't be able to manage them, nor create new instances.
To stop the CloudStack Agent service, execute the command:
systemctl stop cloudstack-agent.service
After the service stop, it’s possible to verify its status through the com-
mand:
systemctl status cloudstack-agent.service
Restart: stops and then starts the CloudStack Agent. Shares the same behaviour as "Start" regarding the connection to CloudStack.
To restart the CloudStack Agent service, execute the command:
systemctl restart cloudstack-agent.service
After restarted, it’s possible to verify the service status using the com-
mand:
systemctl status cloudstack-agent.service
It’s possible to track the CloudStack Agent initialization through logs:
tail -f /var/log/cloudstack/agent/agent.log
7.3.3. Managing cloudstack-usage
Sometimes, such as when changing Usage Server settings, it's necessary to restart it for changes to be applied. For such procedures, follow these steps:
Start:
To start the Usage Server service, just execute the command:
systemctl start cloudstack-usage.service
After started, it’s possible to verify the service status through the com-
mand:
systemctl status cloudstack-usage.service
Stop:
To stop the Usage Server service, execute the command:
systemctl stop cloudstack-usage.service
After the service stop it’s possible to verify its status using the com-
mand:
systemctl status cloudstack-usage.service
Restart: stops and then starts the Usage Server.
To restart the Usage Server service, execute the command:
systemctl restart cloudstack-usage.service
After restarted, it’s possible to verify the service status using the com-
mand:
systemctl status cloudstack-usage.service
7.4. System VMs
In the Apache CloudStack architecture, there are instances created and managed by ACS itself to assist in provisioning some cloud services, called system VMs. Each instance has a public IP.
By default, they have only the root user, with "password" as the password. It's possible to configure ACS to define a random password when booting the system VM; the section Randomizing system VMs passwords describes the details. However, it's worth highlighting that it's not possible to access the system VMs via SSH with user and password. For more information, see 7.4.4.
There are three types of system VMs: the Console Proxy Virtual Machine (CPVM), the Secondary Storage Virtual Machine (SSVM) and the Virtual Router (VR).
7.4.1. Console Proxy Virtual Machine
The CPVMs, identified as v-<id>-VM within ACS, grant access to the console of VMs managed by CloudStack. They are based on Virtual Network Computing (VNC), a system for graphical user interface sharing that uses a protocol called Remote Frame Buffer (RFB) to transmit mouse and keyboard events from one computer to another, sending screen updates over the network. The user connects to the CPVM, which then performs a proxy connection to the selected VM, allowing remote access to its console.
Figure 129: Console Proxy Virtual Machine
7.4.2. Secondary Storage Virtual Machine
The SSVMs, identified as s-<id>-VM within ACS, make it possible to register
templates and ISOs, and also download and upload templates and ISOs.
Figure 130: Secondary Storage Virtual Machine
7.4.3. Virtual Router
The VRs, identified within ACS as r-<id>-VM, are interfaces between the VMs and the outside world, managing networks and allowing the implementation of VPNs, firewall rules, port forwarding, among other features. Each isolated network, shared network or VPC created in ACS will have a VR.
Figure 131: Virtual Router
7.4.3.1. Virtual Router health checks
CloudStack offers a structure to verify the virtual routers' integrity. These health checks are divided into basic and advanced, making it possible to set different execution intervals, with the advanced checks being more computationally expensive and, therefore, executed less frequently.
Aside from periodic tests, it's also possible to force a verification round through the API call getRouterHealthCheckResults, as long as the global setting router.health.checks.enabled is enabled.
Figure 132: Health checks display for a certain VR.
The points that are tested are:
Connectivity between the virtual router and Management Servers;
Connectivity to the virtual router interface gateways;
Free disk space;
CPU and memory usage;
Service status for SSH, DNS, HAProxy and HTTP.
Also, for the advanced tests:
Virtual router version;
Correspondence between the DHCP/DNS settings and the ACS metadata;
Correspondence between the HAProxy settings and the ACS metadata;
Correspondence between the iptables rules and the ACS metadata.
There's no need to manually edit the iptables rules, as the API or even the web interface can be used for that; editing and updating the rules is handled by CloudStack.
The health checks can be adjusted through the following global settings:
router.alerts.check.interval: Interval, in seconds, to verify VR alerts. (default: 1800)
router.health.checks.enabled: Enables or disables the VR health checks. (default: true)
router.health.checks.basic.interval: Interval, in minutes, between basic health check executions. If 0, no test will be scheduled. (default: 3)
router.health.checks.advanced.interval: Interval, in minutes, between advanced health check executions. If 0, no test will be scheduled. (default: 10)
router.health.checks.config.refresh.interval: Interval, in minutes, between settings updates fetched from the Management Server for the VR health checks. (default: 10)
router.health.checks.results.fetch.interval: Interval, in minutes, between health check result updates fetched by the Management Server. On each update, the Management Servers evaluate the need to recreate the VR based on the router.health.checks.failures.to.recreate.vr setting. This value must be sufficiently greater than router.health.checks.basic.interval and router.health.checks.advanced.interval to provide enough time to generate results between each update. (default: 10)
router.health.checks.failures.to.recreate.vr: Health checks listed in this setting cause the VR to be recreated (restarted) when they fail. If empty, the recreation will not be performed for any health check failure.
router.health.checks.to.exclude: Determines which health checks must be ignored when executing scheduled verifications. Comma-separated list containing script names from the /root/health_checks/ folder.
router.health.checks.free.disk.space.threshold: Minimum free disk space on the VR, in MB, below which a fault is reported. (default: 100)
router.health.checks.max.cpu.usage.threshold: Maximum CPU usage for the VR, in percent, above which a fault is reported. (default: 100)
router.health.checks.max.memory.usage.threshold: Maximum memory usage for the VR, in percent, above which a fault is reported. (default: 100)
7.4.4. Accessing the system VMs
There are three ways to access the System VMs via SSH, depending on which
hypervisor is used:
1. If the used hypervisor is KVM or XenServer, the access is made through
the host in which the System VM is running:
ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@<systemvm_internal_ip>
2. If the hypervisor is KVM, there's also an alias for the command above, making its use easier:
cloudstack-ssh <systemvm_internal_ip>
3. Finally, if the hypervisor is VMware it’s necessary to make the access through
the Management Server using the command:
ssh root@<systemvm_internal_ip> -p 3922 -i /var/lib/cloudstack/management/.ssh/id_rsa
7.4.5. Randomizing the system VMs passwords
For security's sake, it might be interesting to randomize the system VMs' passwords. It's possible to configure ACS to perform this task whenever a system VM is started. To do so, follow these steps:
1. Change the setting system.vm.random.password to true;
2. Restart the Management Server;
3. For the system VMs to have their passwords changed, it's necessary to destroy them so that ACS recreates them with the new password;
4. The generated password will be located in the system.vm.password parameter, which can be consulted in the Global Settings.
The password recovered from the system.vm.password parameter will be encrypted. To decrypt it, follow these steps:
1. Access the Management Server;
2. Recover the environment key using the command:
cat /etc/cloudstack/management/key
3. Verify the version of the jasypt file found in the /usr/share/cloudstack-common/lib/ folder;
4. Decrypt the password with the command:
java -classpath /usr/share/cloudstack-common/lib/jasypt-<version>.jar \
org.jasypt.intf.cli.JasyptPBEStringDecryptionCLI \
input="<password_to_decrypt>" password=<environment_password> verbose="false"
It's important to note that, once set, the password won't be changed automatically by ACS. The encrypted value shown in the Global Settings will be different on each refresh of the web interface; however, the underlying password remains the same.
7.4.6. URL for CPVM and SSVM consumption
After performing template or volume uploads and downloads, or accessing a VM console, the services will be consumed via a URL. These URLs are defined based on the following global settings:
consoleproxy.url.domain: Domain used by the console proxy VMs.
secstorage.ssl.cert.domain: Domain used by the secondary storage VMs.
Based on these settings, the URL for consuming the system VMs' services can have three formats:
Empty: the public IP of the system VM is used; for example, 172.16.200.100.
Static (demo.com.br): the domain is used; in this example, demo.com.br. This approach works when there is only one CPVM or SSVM. When there is more than one, it's necessary to indicate which VM the access will occur through, using the dynamic format.
Dynamic (*.demo.com.br): the public IP of the VM is combined with the domain; for example, 172-16-200-100.demo.com.br.
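The dynamic format's IP-to-hostname mapping can be sketched in shell, assuming the dashes simply replace the dots of the public IP as in the example above (the domain is the example one, not a real setting):

```shell
# Sketch: deriving the dynamic-format URL from a system VM's public IP.
ip="172.16.200.100"
domain="demo.com.br"
url="$(echo "$ip" | tr '.' '-').$domain"
echo "$url"   # 172-16-200-100.demo.com.br
```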
7.5. Enabling VMs computational resources increase
Although this functionality is supported for the VMware, XenServer and KVM virtualizers, this section will demonstrate only how to set up the environment with the KVM virtualizer to enable changes to computational resources (RAM and CPU) while the VM is running. In the context of the KVM virtualizer, this functionality comes with some caveats:
The VM must be configured as dynamically scalable;
The compute offering of the VM must be of either the custom constrained or custom unconstrained type;
The VM's operating system must support memory and CPU hot (un)plug, i.e., it must be able to allocate and remove those physical resources at runtime (by default, both Linux and Windows kernels support this functionality);
It's not possible to change either the vGPU or the CPU family of the VM in the same operation, as operating systems commonly don't support this kind of substitution while the VM is running;
It's not possible to reduce the VM's resources directly, only increase them. However, if a downgrade is needed, the user may stop the VM and change its compute offering to a smaller offering.
To activate this feature, it's necessary to change the global setting enable.dynamic.scale.vm, which is false by default, to true.
7.6. Overprovisioning
Overprovisioning is the feature of offering more virtual resources than the amount of physical resources that actually exists. It exists due to how CloudStack accounts for used resources and how virtualizers deal with memory and CPU.
Let's consider a situation in which there's 100 GB (100%) of storage and a VM will be created with a disk offering of 20 GB (20%). For CloudStack, the VM disk will be consuming 20 GB (20%) and the storage will remain with 80 GB (80%) free. However, in most cases (depending on the provisioning type of the disk offering) the disks will be allocated dynamically (created with the minimum possible size and then increased on demand until reaching the defined limit). Thus, CloudStack will account for 20 GB (20%), but the disk will have a variable size between 0 GB and 20 GB. In many cases the offered disk is not fully consumed by the user, which leaves open space for other disks to be allocated.
CloudStack accounts 20 GB (20%) for the VM disk and 80 GB (80%) of free storage; however, the real value (in this example situation) is 10 GB (10%) in use for the VM disk and 90 GB (90%) of free storage. In the end, there will be free storage space that won't be used, because CloudStack considers it allocated.
In the situation described above, it's possible to configure CloudStack to consider the existence of more resources than physically exist. Setting the factor to 2x, the total storage would be 200 GB (100%), of which 20 GB (10%) would be used, leaving 180 GB (90%) of free storage.
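The arithmetic of the 2x example can be spelled out step by step (the figures are the ones from the example above):

```shell
# Worked example of the 2x storage overprovisioning figures above.
physical_gb=100
factor=2
advertised_gb=$((physical_gb * factor))   # capacity CloudStack advertises
allocated_gb=20                           # the 20 GB disk offering
free_gb=$((advertised_gb - allocated_gb))
echo "advertised=${advertised_gb}GB allocated=${allocated_gb}GB free=${free_gb}GB"
```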
For memory, KVM possesses the KSM feature, which performs memory block sharing. Thus, the more homogeneous the workloads (VMs) are, the greater the gain in memory optimization.
In a situation where 10 VMs exist with the same disk settings, operating system, CPU and 2 GB of memory each, for example, the memory consumption calculation would result in 20 GB. However, due to KSM, many memory blocks are shared and the final consumption will be lower, which makes it possible to perform memory overprovisioning.
Currently, there are five settings for overprovisioning:
cpu.overprovisioning.factor: Value used as the overprovisioning factor for CPU. The CPU available after calculations will be current * factor. (default: 1.0)
mem.overprovisioning.factor: Value used as the overprovisioning factor for memory. The memory available after calculations will be current * factor. (default: 1.0)
storage.overprovisioning.factor: Value used as the overprovisioning factor for storage. The storage available after calculations will be current * factor. (default: 2.0)
vm.min.cpu.speed.equals.cpu.speed.divided.by.cpu.overprovisioning.factor: When overprovisioning is used, ACS reports a minimum CPU value, calculated as CPU/overprovisioning factor, regardless of whether the service offering is scalable. This setting controls whether this behaviour is performed. (default: true)
vm.min.memory.equals.memory.divided.by.mem.overprovisioning.factor: When overprovisioning is used, ACS reports a minimum RAM value, calculated as RAM/overprovisioning factor, regardless of whether the service offering is scalable. This setting controls whether this behaviour is performed. (default: true)
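To make the minimum-value calculation controlled by the two vm.min.* settings concrete, here is the division they refer to, with invented offering values (2000 MHz CPU, 4096 MB RAM, both factors at 2):

```shell
# Hypothetical offering: 2000 MHz CPU, 4096 MB RAM, overprovisioning factor 2.
cpu_mhz=2000
ram_mb=4096
factor=2
min_cpu_mhz=$((cpu_mhz / factor))   # minimum CPU speed ACS would report
min_ram_mb=$((ram_mb / factor))     # minimum RAM ACS would report
echo "min cpu=${min_cpu_mhz}MHz min ram=${min_ram_mb}MB"
```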
For both the vm.min.cpu.speed.equals.cpu.speed.divided.by.cpu.overprovisioning.factor and vm.min.memory.equals.memory.divided.by.mem.overprovisioning.factor settings, the default value is true, which keeps the existing behaviour. However, we recommend changing it to false, because we consider the calculation done by CloudStack on VM deployment and migration incoherent.
Except for the storage.overprovisioning.factor setting, which operates at the zone level, all other settings operate at the cluster level.
Regarding CPU and memory overprovisioning, after changing the setting values, the VMs must be shut down and started again for CloudStack to calculate the proper allocated values.
7.7. Updating the Apache CloudStack
The LTS version of Apache CloudStack is periodically updated. This section addresses the steps needed to perform updates. The changelog, specific requirements and links to the release packages can be found in the GitLab project, under Deployments > Releases.
7.7.1. Major versions updates
After fulfilling the requirements and downloading the files, these are the
steps for updating the Management Server between major versions:
1. Stop the Management Server service:
systemctl stop cloudstack-management
2. Stop the Usage Server service:
systemctl stop cloudstack-usage
3. Dump the cloud database:
mysqldump -u root -p cloud > cloud-backup-before-update-`date +'%Y-%m-%d-%H:%M:%S.%3N'`.sql
4. Dump the cloud_usage database:
mysqldump -u root -p cloud_usage > \
cloud_usage-backup-before-update-`date +'%Y-%m-%d-%H:%M:%S.%3N'`.sql
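The backtick expression in the dump commands above produces a timestamped file name; the naming can be previewed in isolation (note that %3N, for milliseconds, is a GNU date extension):

```shell
# Sketch: the timestamp suffix used in the backup file names above.
backup_file="cloud-backup-before-update-$(date +'%Y-%m-%d-%H:%M:%S.%3N').sql"
echo "$backup_file"
```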
5. Install the new ACS Management Server packages (download files will have
a version suffix):
For .deb:
apt install cloudstack-common~bionic_all.deb cloudstack-management~bionic_all.deb \
cloudstack-usage~bionic_all.deb cloudstack-ui~bionic_all.deb
For .rpm:
yum install cloudstack-common.1.el7.centos.x86_64.rpm \
cloudstack-management.1.el7.centos.x86_64.rpm \
cloudstack-usage.1.el7.centos.x86_64.rpm cloudstack-ui.1.el7.centos.x86_64.rpm
6. Start the Management Server:
systemctl start cloudstack-management
7. Start the Usage Server:
systemctl start cloudstack-usage
If the hypervisor in use is KVM, follow these steps to update the CloudStack
Agents:
1. Stop the CloudStack Agent service:
systemctl stop cloudstack-agent
2. Install the new CloudStack Agent packages (download files will have a ver-
sion suffix):
For .deb:
apt install cloudstack-common~bionic_all.deb cloudstack-agent~bionic_all.deb
For .rpm:
yum install cloudstack-common.1.el7.centos.x86_64.rpm \
cloudstack-agent.1.el7.centos.x86_64.rpm
3. Start the CloudStack Agent:
systemctl start cloudstack-agent
To finish the Management Server update:
8. Force restart of the system VMs, to apply the new template when deploy-
ing them. Instructions on how to force restart the system VMs are found
in the official documentation;
9. Restart the Management Server service:
systemctl restart cloudstack-management
7.7.2. Updates within the same major version
To install an update within the same major version, it's crucial to carefully read the release notes to verify whether the update is related to the Management Server or to the CloudStack Agent. When the update is related to the Management Server, just follow steps 1 to 7 of the Management Server update. In turn, when the update is related to the CloudStack Agent, follow the steps described for updating the KVM agent.
It's always important to check the GitLab releases because, in addition to important information about requirements and changes, notes about the version may be posted there, which must be carefully read before proceeding with the specific ACS version update process.
7.8. SSL certificate update in the environment (nginx and ACS)
The following steps show the necessary actions to update the SSL certifi-
cates in the nginx and upload them to the ACS.
7.8.1. Root and intermediary certificates extraction
Note: this step is only necessary if the client has only the final certificate.
To perform the certificate extraction, it's necessary to split the final certificate into two files: one containing the certificate itself and another containing only the private key. After this, the intermediary certificate is extracted with the following commands:
user@scclouds:~$ openssl x509 -in <domain>.crt -text -noout | grep -i 'issuer'
Issuer: C = US, ST = TX, L = Houston, O = "cPanel, Inc.", CN = "cPanel, Inc.
Certification Authority"
CA Issuers - URI:http://crt.comodoca.com/cPanelIncCertificationAuthority.crt
user@scclouds:~$ curl -o caIntermediate http://crt.comodoca.com/cPanelIncCertificationAuthority.crt
user@scclouds:~$ openssl x509 -inform DER -in caIntermediate -outform PEM -out caIntermediate.crt
With the intermediary certificate extracted, it’s necessary to extract the root
certificate:
user@scclouds:~$ openssl x509 -in caIntermediate.crt -text -noout | grep -i 'issuer'
Issuer: C = GB, ST = Greater Manchester, L = Salford, O = COMODO CA Limited, CN =
COMODO RSA Certification Authority
CA Issuers - URI:http://crt.comodoca.com/COMODORSAAddTrustCA.crt
user@scclouds:~$ curl -o caRoot http://crt.comodoca.com/COMODORSAAddTrustCA.crt
user@scclouds:~$ openssl x509 -inform DER -in caRoot -outform PEM -out caRoot.crt
7.8.2. Key conversion to PKCS#8:
user@scclouds:~$ openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in <domain>.key -out <domain>.pkcs8.key
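A quick way to sanity-check the conversion is to run it on a freshly generated throwaway key and look at the resulting PEM header (file names are arbitrary examples):

```shell
# Generate a disposable RSA key and convert it to PKCS#8, as in the
# <domain>.key command above; the output header identifies the format.
tmp=$(mktemp -d)
openssl genrsa -out "$tmp/example.key" 2048 2>/dev/null
openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt \
  -in "$tmp/example.key" -out "$tmp/example.pkcs8.key"
head -n 1 "$tmp/example.pkcs8.key"   # -----BEGIN PRIVATE KEY-----
```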
7.8.3. Adding certificates in nginx
It’s necessary to change the files /etc/nginx/certificates/domain.com.br.crt and
/etc/nginx/certificates/domain.com.br.pem with the contents of the certificate and
the desired private key. Then, just restart the nginx service with the command:
systemctl restart nginx.service
7.8.4. Adding certificates in the Apache CloudStack
The certificate addition may be done via UI or via CloudMonkey:
Via UI:
Figure 133: Adding SSL certificate via UI
Via CloudMonkey, the certificates are added with the uploadCustomCertificate API.
Root certificate
In a terminal, call the uploadCustomCertificate API using the parameters id=1, name=root, domainsuffix and certificate. The certificate will be loaded directly from the file through the cat command:
cloudmonkey upload customcertificate id=1 name=root domainsuffix="domain.com.br"
certificate="$(cat <root certificate's absolute path>)"
Intermediary Certificates
In a terminal, make an API call to uploadCustomCertificate using the parameters id, name, domainsuffix and certificate:
cloudmonkey upload customcertificate id=n name="intermediate<n-1>" domainsuffix="domain.com.br"
certificate="$(cat <intermediary certificate's absolute path>)"
This step must be repeated for each intermediary certificate, increasing the n variable by 1 on each operation.
Note: the intermediary certificates are named n-1 because the first intermediary certificate begins with ID 2 (the root certificate is ID 1).
Website certificate
In a terminal, call the uploadCustomCertificate API using the parameters id, domainsuffix, certificate and privatekey:
cloudmonkey upload customcertificate id=n domainsuffix="domain.com.br"
certificate="$(cat <website certificate's absolute path>)" privatekey="$(cat <certificate's private key absolute path>)"
After the request containing the final certificate and its private key is sent, ACS will automatically restart the system VMs; this happens whether the upload is done via API or via UI.
7.9. SSH key pairs
ACS has a functionality that allows injecting pre-defined SSH keys into the VMs. To create them, go to the Compute section, then click on SSH key pairs and finally click on Create a SSH Key Pair.
Figure 134: Accessing the SSH key pairs sections
Figure 135: Creating a SSH key pair
When creating an SSH key pair through the SSH key pairs section, it's necessary to pay attention to the following details:
For this feature to be available, cloud-init must be configured in the VM.
If the operator doesn't specify a public key, CloudStack will automatically generate a key pair. The generated private key will be immediately available and won't be stored by CloudStack. Therefore, it's recommended that the operator download the given file.
Figure 136: SSH key pair automatic creation
If a domain isn't specified during the creation, the key pair will be created within the domain of the logged-in user.
When specifying an account in the account field, it's necessary to inform the domainid of the domain to which the account belongs.
There are two possibilities when creating SSH key pairs via CloudMonkey:
If the operator wants the keys to be generated automatically:
create sshkeypair name=<key_name> domainid=<domain_id> account=<account_name>
projectid=<project_id>
If the operator wants to define their own key pair:
register sshkeypair name=<key_name> publickey=<key_value> domainid=<domain_id>
account=<account_name> projectid=<project_id>
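For the register path, the public key value can come from a pair generated locally with ssh-keygen; a sketch follows, where the file names are arbitrary and the public half is what would be passed as the publickey parameter:

```shell
# Generate a local RSA key pair whose public half could be registered
# with "register sshkeypair" above (file names are examples).
tmp=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -f "$tmp/id_rsa" -q
cat "$tmp/id_rsa.pub"   # the value for the publickey parameter
```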
After creating the key pair, to inject the public key in a VM follow these steps:
Figure 137: Creating the SSH key
Figure 138: Creating instance and implementing the SSH key
8. KVM virtualizer
Kernel-based Virtual Machine (KVM) is an open source virtualization technology, under the GPL license, that turns Linux into a bare-metal hypervisor, making it possible for the host to execute many VMs and handle their guest calls.
KVM is responsible for managing the guests' access to the host's CPU and RAM. To emulate other VM components, such as disks, graphics cards and USB devices, QEMU is used.
Figure 139: KVM virtualization
Designed for cloud solutions, KVM has some advantages over other virtualizers. For example, it gives direct access to each host, with no need for a centralizing element, such as vCenter for VMware or the master node in XenServer. This eases management by cloud orchestrators, such as OpenStack and CloudStack, while also providing more satisfactory performance.
There are many pros and cons among the hypervisors supported by ACS:
Figure 140: Virtualizers comparison
Another tool used with KVM is Libvirt, a set of software tools that eases virtual machine management, including an API, a daemon (libvirtd) and a CLI (virsh). Libvirt is just a facilitator and must not be confused with KVM itself.
8.1. KVM installation and CloudStack Agent
The first step is to install the following software: NFS (the protocol used for storage), NTP (necessary to synchronize server clocks in the cloud), QEMU, KVM and Libvirt:
apt-get update
apt-get install nfs-common
apt-get install openntpd
apt-get install qemu-kvm libvirt-bin python3-libvirt virtinst bridge-utils cpu-checker
Then, download the latest release package of ACS (those are found in sec-
tion Deployments > Releases in the GitLab project). After that, install the Cloud-
Stack Agent (there’ll be a version suffix on the downloaded files):
For .deb:
apt install cloudstack-common_<acs-version>-scclouds~bionic_all.deb \
cloudstack-agent_<acs-version>-scclouds~bionic_all.deb
For .rpm:
yum install cloudstack-common-<acs-version>-scclouds.1.el7.centos.x86_64.rpm \
cloudstack-agent-<acs-version>-scclouds.1.el7.centos.x86_64.rpm
8.2. KVM and CloudStack Agent setup
CloudStack allows configuring the CPU model exposed to KVM instances. For that, edit the file /etc/cloudstack/agent/agent.properties, setting guest.cpu.mode, guest.cpu.model and guest.cpu.features.
There are three possible values for guest.cpu.mode:
1. custom: Allows the operator to customize the CPU model. In this case,
the model must be specified in the guest.cpu.model setting. For example:
guest.cpu.mode=custom
guest.cpu.model=SandyBridge
The available models list for a certain architecture can be obtained through
the command:
virsh cpu-models <architecture>
For example:
$ virsh cpu-models x86_64
pentium
pentium2
pentium3
pentiumpro
coreduo
n270
core2duo
qemu32
kvm32
cpu64-rhel5
cpu64-rhel6
qemu64
(...)
Furthermore, it should be highlighted that the file /usr/share/libvirt/cpu_map.xml contains a list with all supported CPU models and flags.
2. host-model: Libvirt will identify which model in /usr/share/libvirt/cpu_map/ is most similar to the host's CPU model. This option offers good performance while keeping the possibility of migration to hosts with the same CPU architecture.
3. host-passthrough: Libvirt will communicate to KVM the exact characteristics of the host's CPU. The difference from host-model is that, instead of pairing CPU flags, all the CPU details of the host are passed through. This offers better performance, but at a cost related to migration: the guest may only be migrated to a host with the exact same CPU.
The guest.cpu.features setting, on the other hand, lists CPU flags to be applied, separated by spaces. For example:
guest.cpu.mode=custom
guest.cpu.model=SandyBridge
guest.cpu.features=aes mmx avx
To configure Libvirt, edit the file /etc/libvirt/libvirtd.conf. Here's a configuration example using the TCP protocol, without authentication and with default ports:
listen_tls=0
listen_tcp=1
listen_addr = "0.0.0.0"
mdns_adv = 0
auth_tcp="none"
tcp_port="16509"
tls_port="16514"
auth_tls="none"
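Depending on the distribution, the listen_* options above only take effect if the libvirtd daemon itself is started in listening mode. On CentOS 7, for example, this is usually done by setting the following line in /etc/sysconfig/libvirtd (shown here as a reference; newer systemd socket-activated setups may use the libvirtd-tcp.socket unit instead):

```
# /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"
```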
Also, KVM needs to communicate with many other components through the network; because of this, the following TCP ports should be open in the firewall:
1. 22 (SSH)
2. 16509 (libvirt)
3. 16514 (libvirt TLS)
4. 8250 (ACS system VMs)
5. 5900-6100 (VNC)
6. 49152-49216 (migration traffic)
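The list above can be turned into firewall rules. The sketch below only prints the corresponding firewall-cmd invocations (assuming firewalld, as on CentOS 7), so they can be reviewed before being applied, for example by piping the output to sh:

```shell
# Ports required by a CloudStack KVM host; this only PRINTS the
# firewall-cmd calls, it does not run them.
ports="22 16509 16514 8250 5900-6100 49152-49216"
cmds=""
for p in $ports; do
  cmds="${cmds}firewall-cmd --permanent --add-port=${p}/tcp
"
done
cmds="${cmds}firewall-cmd --reload"
printf '%s\n' "$cmds"
```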
8.3. KVM operation
Libvirt offers an array of facilities for KVM operation. For example, using Libvirt it's possible to describe VMs using XML, making manual creation and VM inspection easier. To obtain the XML of a running VM, run the command:
virsh dumpxml <instance> > dump.xml
The resulting file will look something like this:
<domain type='kvm' id='108'>
<name>r-1896-VM</name>
<uuid>c85d71db-3ce9-4c6f-98d3-b22f7e84058d</uuid>
<description>Debian GNU/Linux 5.0 (64-bit)</description>
<memory unit='KiB'>262144</memory>
<currentMemory unit='KiB'>262144</currentMemory>
<vcpu placement='static'>1</vcpu>
(...)
<sysinfo type='smbios'>
<system>
<entry name='manufacturer'>Apache Software Foundation</entry>
<entry name='product'>CloudStack KVM Hypervisor</entry>
<entry name='uuid'>c85d71db-3ce9-4c6f-98d3-b22f7e84058d</entry>
</system>
</sysinfo>
<os>
<type arch='x86_64' machine='pc-i440fx-2.11'>hvm</type>
(...)
If you want to create a new VM from an XML file, run the command:
virsh create <VM_XML>
To edit a VM’s settings, the following command can be used:
virsh edit <instance>
It should be noted, however, that most modifications will only be applied after restarting the VM.
Moreover, for those used to XenServer, the following Libvirt commands are equivalent:
virsh list = xe vm-list
virsh start = xe vm-start
virsh shutdown = xe vm-shutdown
Finally, if desired, it's possible to use the Virtual Machine Manager, a graphical interface for working with Libvirt.
8.4. KVM’s CPU topology
When defining the CPU for a VM, it's possible to define the number of vCPUs the VM will start with (current amount) and also how many more vCPUs can be added to the VM without stopping it (maximum amount). When working with fixed offerings (Section 3.11.1), both values are equal and the VM's XML contains only the definition of the current amount. Example of a fixed offering definition with 4 vCPUs:
<vcpu placement='static'>4</vcpu>
When working with custom offerings (Section 3.11.1), the current amount and the maximum amount differ; they are defined in the XML through the current property and the value of the vcpu tag, respectively. Example of the XML definition for a VM with a custom offering with an initial value of 4 vCPUs and a maximum of 8:
<vcpu placement='static' current='4'>8</vcpu>
When defining the CPU topology for a VM in KVM, it's possible to inform the quantity of sockets, cores and threads. ACS keeps the thread count fixed at 1 and derives the socket and core quantities from the number of vCPUs. The product of the quantities of sockets, cores and threads must equal the maximum number of vCPUs that the VM may have (the value of the vcpu tag).
By default, ACS derives the socket quantity by trying to divide the maximum vCPU count by 6 or by 4.
Example 1: In a scenario with a custom offering with 4 vCPUs initially and a maximum of 24, ACS first tries to divide the maximum amount (24) by 6. If the remainder is 0, it uses the quotient and the divisor to define the topology; in this case, the topology would be defined with 4 sockets and 6 cores each.
Example 2: On the other hand, in a scenario with a custom offering starting with 4 vCPUs and a maximum of 16, ACS first tries to divide the maximum amount (16) by 6. As the remainder is not 0, it tries the division by 4 instead; the quotient and divisor are then used to define the topology, which in this case would be 4 sockets with 4 cores each. If the remainder of both divisions is not 0, the topology isn't defined.
If the topology definition through division by 6 or 4 isn't adequate for the context, it's also possible to override this behaviour using the cpu.corespersocket setting for each VM, which replaces the divisor used in the definition process. Some usage examples of the cpu.corespersocket setting:
Example   cpu.corespersocket   max. vCPUs   Result
1         12                   24           2 sockets with 12 cores each
2         8                    32           4 sockets with 8 cores each
3         1                    12           12 sockets with 1 core each
4         2                    20           10 sockets with 2 cores each
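The derivation above can be sketched as a small shell function; the optional second argument plays the role of the cpu.corespersocket setting, replacing the default divisors:

```shell
# Default ACS topology derivation: divide the maximum vCPU count by 6,
# then by 4; if neither divides evenly, no topology is defined.
topology() {
  max=$1
  cps=${2:-}
  if [ -n "$cps" ]; then
    echo "$((max / cps)) sockets with $cps cores each"
  elif [ $((max % 6)) -eq 0 ]; then
    echo "$((max / 6)) sockets with 6 cores each"
  elif [ $((max % 4)) -eq 0 ]; then
    echo "$((max / 4)) sockets with 4 cores each"
  else
    echo "topology not defined"
  fi
}

topology 24      # Example 1: 4 sockets with 6 cores each
topology 16      # Example 2: 4 sockets with 4 cores each
topology 24 12   # cpu.corespersocket=12: 2 sockets with 12 cores each
```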
8.5. CPU control with KVM
In CloudStack, the CPU control of the VMs in KVM can be done using the
following Libvirt parameters:
shares:
Weight that indicates the priority of each vCPU when there is contention for the host's CPU. For example, if all VMs have shares defined as 1000, all will have the same priority. However, if one VM has a value of 2000 while the others have 1000, the VM with value 2000 will have twice the priority when accessing the CPU. The interval accepted by this parameter depends on the cgroups version in the kernel of the host's operating system: with cgroups v1, the value of shares must be between 2 and 262144, while with cgroups v2 the range goes from 1 to 10000.
period:
Specifies the time interval (in microseconds) in which each vCPU must follow the limit defined by the quota parameter. The value must be in the range of 1000 to 1000000 microseconds.
quota:
Defines a bandwidth limit for each vCPU. period and quota must be used together for a limit to be applied to the vCPUs of the VM. The value must be within the 1000 to 17592186044415 range (in microseconds). A quota value bigger than the period means that the vCPUs are allowed to consume bandwidth from more than one physical core. A negative value means that there is no limitation.
To list the values for these parameters in a certain instance, the following
command can be used:
virsh schedinfo <VM>
Figure 141: Output example for the command virsh schedinfo
The values of these parameters can be altered via ACS during service offering creation by enabling the CPU cap, which allows limiting the amount of CPU usage of a VM regardless of the amount available on the server.
With this parameter enabled, CloudStack calculates cpuQuotaPercentage = Vm.MaxSpeed / Host.MaxSpeed, and then the quota value is calculated by multiplying cpuQuotaPercentage by the constant DEFAULT_PERIOD, which has a fixed value of 10000 in CloudStack. It's worth highlighting that the quota parameter has a minimum value of 1000; if the multiplication mentioned above returns a value lower than this (equivalent to 10% of DEFAULT_PERIOD), CloudStack will replace the result with the minimum value.
The period value is calculated as period = quota / cpuQuotaPercentage. This value is then compared with MAX_PERIOD, which has the value 1000000: if the division result is higher than MAX_PERIOD, period is set to MAX_PERIOD; otherwise, the division result is used.
The value of shares is determined using the values of vm.speed, vm.minSpeed and vm.cpus, which are defined in the service offering. If the value of vm.minSpeed is null, the parameter is defined as shares = vm.cpus × vm.speed. Otherwise, if vm.minSpeed has a defined value, the parameter is calculated as shares = vm.cpus × vm.minSpeed.
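The calculations above can be reproduced with hypothetical speed values, using integer arithmetic scaled by 1000 in place of the floating-point cpuQuotaPercentage, and assuming the percentage is Vm.MaxSpeed divided by Host.MaxSpeed:

```shell
DEFAULT_PERIOD=10000
MIN_QUOTA=1000
MAX_PERIOD=1000000

vm_max_speed=500        # MHz, hypothetical
host_max_speed=2000     # MHz, hypothetical
pct1000=$(( vm_max_speed * 1000 / host_max_speed ))   # 250, i.e. 25%

# quota = cpuQuotaPercentage x DEFAULT_PERIOD, clamped to the minimum
quota=$(( DEFAULT_PERIOD * pct1000 / 1000 ))
period=$DEFAULT_PERIOD
if [ "$quota" -lt "$MIN_QUOTA" ]; then
  quota=$MIN_QUOTA
  period=$(( quota * 1000 / pct1000 ))   # period = quota / percentage
  if [ "$period" -gt "$MAX_PERIOD" ]; then
    period=$MAX_PERIOD
  fi
fi

# shares = vm.cpus x vm.minSpeed when minSpeed is set, else vm.cpus x vm.speed
vm_cpus=4
vm_speed=500
vm_min_speed=""         # null in this sketch
if [ -z "$vm_min_speed" ]; then
  shares=$(( vm_cpus * vm_speed ))
else
  shares=$(( vm_cpus * vm_min_speed ))
fi
echo "quota=$quota period=$period shares=$shares"
```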
9. VMware virtualizer
This section will show how to add the VMware virtualizer to the Apache CloudStack infrastructure.
The installation and setup of the ESXi hosts, as well as of the vCenter, shown here were performed in a local laboratory, using the KVM virtualizer; therefore, if this section is used to guide the installation of real hosts, some procedures may differ.
Furthermore, the installed versions were:
VMWare ESXi 6.5.0 Build 14320405
Ubuntu 20.04.2 LTS
9.1. Creating datastores
Because a local laboratory was used, a VM utilized as storage already existed. This VM uses the NFS protocol, so two new paths were exported to serve as primary datastores for the ESXi hosts and as a datastore for deploying the vCenter.
The new paths created were:
Primary Datastore: /mnt/vmware-primary-storage
Deploy Datastore: /mnt/vmware-deply-vcenter
And they were also added to the settings file /etc/exports:
/mnt/vmware-primary-storage 192.168.31.0/24(rw,sync,no_subtree_check,no_root_squash)
/mnt/vmware-deply-vcenter 192.168.31.0/24(rw,sync,no_subtree_check,no_root_squash)
After that, the NFS service was restarted with the command:
systemctl restart nfs-server.service
9.2. Installing the ESXi hosts
The VMware 6.5 installer was utilized and the created VMs had five NICs, as follows:
Public network: used to access the hosts through the external network. This network was added to the laboratory to help set up the hosts; however, its usage must be carefully considered for production environments, keeping in mind that the ESXi hosts will require external network access to search for and apply VMware updates.
Management network: used for the communication with ACS.
Primary storage network: used for the communication with the primary
datastore.
Secondary storage network: used for the communication with the secondary datastore.
Guest network: used for the guest VMs' communication.
The command used, in KVM, was:
virt-install --name esxi1 --vcpus 2 --memory 11000 --disk size=20 --graphics \
vnc,port=9999,listen=0.0.0.0 --network network:public --network bridge:br-management \
--network bridge:br-pri-storage --network bridge:br-sec-storage --network bridge:br-guests --cdrom \
~/isos/VMware-VMvisor-Installer-201908001-14320405.x86_64.iso
Next, the hosts' setup was done using Remmina, following only the default installation, described in the subsection below.
Note: a VM with 10 GB of RAM was created because VMware vCenter has limitations when adding ESXi hosts, blocking hosts with less than 10 GB of RAM. This limitation, besides being observed in version 6.5 of VMware, may still be present in more recent versions.
9.2.1. Default ESXi hosts installation
1. When starting to install the ESXi host, just press Enter:
Figure 142: Initial ESXi installation screen
2. After reading and agreeing, accept the terms pressing F11:
Figure 143: Accept terms of use
3. A scan will be performed on the existing disks of the host. On the host used in this document, there was only one disk, used for the installation. For environments with more than one disk, attention must be paid to choosing the correct disk:
Figure 144: Choosing the disk to install the system
4. Select the keyboard layout. In this example, the Brazilian layout was used, but the default is the US layout:
Figure 145: Selecting the keyboard layout
5. Create a password for the root user. By default, the ESXi installation doesn’t
allow creating new users:
Figure 146: Creating the root password
6. This warning occurred because the installation was performed inside a VM; if it appears in a production environment, it's still possible to proceed with the installation by pressing Enter:
Figure 147: Potential warning
7. Confirm the installation by pressing F11:
Figure 148: Confirm installation
8. The installation process will then start:
Figure 149: Host installation
9. After the process is finished, it'll be necessary to restart the host by pressing Enter:
Figure 150: Restarting the host
9.2.2. ESXi hosts basic settings
After finishing the installation, it’s recommended to login in the hosts and
configure the following settings:
Logging in the host, by pressing F2:
Figure 151: Initial login
Figure 152: Root user is the default, and its password is the same set during installation
Setting up static IP:
Go to the section Configure Management Network.
Figure 153: Section Configure Management Network
Then, go to the section IPv4 Configuration.
Figure 154: Section IPv4 Configuration
Enable the use of static IPv4. Here it is possible to change the IP used.
Figure 155: Static IPv4
When pressing ESC to go back, you'll be asked to confirm the changes. Accept them.
Figure 156: Applying changes
Enable SSH and shell access:
Go to the section Troubleshooting Options.
Figure 157: Section Troubleshooting Options
Then, enable the shell access on the host by pressing Enter.
Figure 158: Shell access on the host
Also enable the SSH access on the host by pressing Enter.
Figure 159: SSH access on the host
9.2.3. Advanced ESXi hosts settings
By default, ESXi hosts have only one IP, even having multiple NICs, in addi-
tion to the need to have their licenses manually activated, therefore, it’s nec-
essary to perform some advanced settings that are only available in the web
interface provided by the hosts.
The link for accessing those web interfaces can be found on their login screen:
Figure 160: URL for accessing the web interface
Log in using the password created during the setup.
Figure 161: Login screen
9.2.3.1. Adding new IPs
1. Go to Networking > Physical NICs to visualize the available NICs:
Figure 162: Available NICs
2. Go to Networking > Virtual switches and click on Add standard virtual switch:
Figure 163: Virtual switches
3. Configure the new switch:
Figure 164: Configuring the new virtual switch
Repeat steps 2 and 3 if the host has more NICs.
4. Go to Networking > VMkernel NICs and click on Add VMkernel NIC:
Figure 165: VM Kernel NICs
5. Configure the new VMkernel NIC:
Figure 166: Configuring the new VM Kernel NIC
Repeat steps 4 and 5 if the host has more switches.
9.2.3.2. Adding new datastores
For this, the export mount points described in Section 9.1 were used.
1. Go to Storage > Datastores and click on New datastore:
Figure 167: Datastores
2. Select the datastore type that will be added:
Figure 168: Datastore types supported.
3. Configure the new datastore:
Figure 169: Configuring the new datastore
4. Finish configuring the new datastore:
Note: depending on the chosen type, some procedures may differ.
Figure 170: Finish the datastore setup
5. Repeat this process for all datastores that must be added to the current host.
9.2.3.3. Adding the license key
1. Go to Manage > Licensing and click on Assign license:
Figure 171: Accessing the license
2. Type in your license and verify it:
Figure 172: Verifying if the license is valid
3. After validating the license, add it:
Figure 173: Adding the license
Repeat this process for all ESXi hosts in your infrastructure.
9.3. vCenter installation
To install the vCenter, it was necessary to create an Ubuntu VM with a graphical interface. Although the VMware documentation states that it's possible to install the vCenter on Linux hosts using the cli-installer, in practice, whenever an installation attempt was made on Linux, a generic error message was displayed. After much research, the solution found was to use Linux with a graphical interface and, after installing the vCenter, remove the graphical interface to save resources.
The steps to install the vCenter were:
1. Linux graphical interface installation:
Update the Ubuntu repositories:
apt update && apt upgrade
lightdm installation:
apt install lightdm
Desktop environment installation:
apt install ubuntu-desktop
(OPTIONAL) Restart the host:
reboot
2. Copy the vCenter .ISO to the machine:
scp VMware-VCSA-all-6.5.0-18711281.iso user@vcenter-IP:~/
3. Mount the ISO.
4. Access the files on the mounted ISO, browse to the directory ISO-MOUNT-POINT/vcsa-ui-installer/lin64/ and execute the installer script:
5. In the graphical interface, execute the script from the previous step and
the vCenter installation will be started:
Note: the upgrade step is optional, and other desktop environments, such as XFCE or MATE, may be used instead of ubuntu-desktop. Mounting the ISO and executing the installer script can be done through the command line or the graphical interface.
Figure 174: Possible operation types
6. Start the vCenter deploy:
Figure 175: Introduction
Note: in this document, only the installation process is covered.
7. Accept the terms of use:
Figure 176: Terms of use
8. Select the type of structure on which the deploy will be made:
Figure 177: Available deploy types
9. Add at least one ESXi host:
Figure 178: Adding ESXi host
A warning may occur mentioning the host certificate validity. Just ignore
it.
10. Define the root password in vCenter:
Figure 179: Adding the root password
11. Choose the size of the infrastructure on which the deploy will be made:
Figure 180: Infrastructure size
12. Select one of the host datastores:
Figure 181: Available datastores
13. Configure the network used by vCenter:
Figure 182: Configuring the vCenter network
14. Confirm installation info and finish the process:
Figure 183: Finish configuring the vCenter
The vCenter installation process will begin, which may take several minutes to finish. If any error occurs, it's recommended to change the network used by the vCenter so that it uses DHCP instead of static IPv4 and repeat the installation process.
15. Now, start configuring the PSC:
Figure 184: Start configuring the PSC
16. Either enable or disable SSH and define the synchronization mode that will be used, based on your preferences:
Figure 185: Applying appliance basic settings
17. Create and setup the new SSO:
Figure 186: Creating and setting up a new SSO
18. Opt in or out of the VMware improvement program:
Figure 187: VMware improvement program
19. Review the applied settings and, if everything is OK, proceed with the installation:
Figure 188: Finishing the setup
20. This process will take some minutes; after it finishes, it'll be possible to access the vCenter web interface at the address provided at the end of the installation:
Figure 189: Finishing the installation
The default login is administrator@<domain-defined-during-installation>, with the same password defined during the installation.
After logging in, this is the screen shown:
Figure 190: vSphere home screen
9.3.1. Adding license key
1. Go to Menu > Administration > Licensing > Licenses and click on Add New Licenses:
Figure 191: Licenses screen
2. Type in the license:
Figure 192: Adding the license
3. Rename the license:
Figure 193: Changing the license name
4. Finish adding the license:
Figure 194: Finish adding the license
9.3.2. Adding multiple ESXi hosts
In this document, the vCenter deploy was performed in one of the ESXi
hosts, therefore it’ll be only possible to add a second host. In a production
environment, it’s recommend that the vCenter deploy is done in a dedicated
VM.
With that said, to add ESXi hosts to the vCenter, follow these steps:
1. Create a datacenter:
In the sidebar, right-click on photon-machine.public or similar and then select New Datacenter:
Figure 195: Starting to add a new datacenter/folder
Name the new datacenter:
Figure 196: Naming the new datacenter
2. Create a new cluster: right-click on the newly created datacenter and then click on New Cluster. The procedure is the same as adding a datacenter.
3. With the new cluster created, right-click on it and then click on Add Host:
Figure 197: Adding a new host to the datacenter
4. Inform the IP or hostname of the new host:
Figure 198: Adding the IP to the host
5. Inform user and password for this host:
Figure 199: User and password of the host
If any warning about certificate comes up, accept it.
6. Confirm the host details:
Figure 200: Host details overview
7. Choose/confirm the host license:
Figure 201: Host license
8. Choose the host lockdown mode:
Figure 202: Host lockdown mode
9. Confirm the location:
Figure 203: VM location
10. Confirm the final details:
Figure 204: Final details
Then repeat these steps for each desired available host.
9.3.3. Removing the Linux graphical interface
Once the installation process is finished, the graphical interface on Linux is no longer necessary. The steps to remove it are:
Removing lightdm:
apt remove lightdm
Removing the desktop environment:
apt remove ubuntu-desktop
Removing possible orphan packages left over from the graphical interface removal:
apt autoremove
(OPTIONAL) Restart the host:
reboot
9.4. Adding VMware cluster in the Apache CloudStack
To add a VMware cluster to Apache CloudStack, ACS must have access to the vCenter via the management network. Furthermore, depending on the structure implemented in VMware, it may be necessary to create, in VMware, new switches for the guest and storage networks used by ACS. Finally, it may also be necessary to add, in ACS, the datastores used by VMware.
Once the deploy of the VMware structure is finished, it's necessary to add it as a cluster in ACS. For that, follow these steps:
1. Log in to ACS, browse to Infrastructure > Zones and access the zone in which the VMware cluster will be located.
2. In this zone, click on Add VMware datacenter:
Figure 205: Adding a VMware datacenter
3. Configure the datacenter:
Figure 206: Configuring a VMware datacenter in Apache CloudStack
4. Still in the zone, browse to the Physical Network tab:
Figure 207: Accessing the physical networks details in Apache CloudStack
5. Access each of the networks and, under their details, go to the Traffic Types tab and click on Update Traffic Labels:
Figure 208: Updating networks
6. Update the VMware virtualizer label:
The standard syntax for this is:
[[“vSwitch Name/dvSwitch/EthernetPortProfile”][,“VLAN ID”[,“vSwitch Type”]]]
Possible examples:
Empty;
dvSwitch0;
dvSwitch0,200;
dvSwitch1,300,vmwaredvs;
myEthernetPortProfile,,nexusdvs;
dvSwitch0,,vmwaredvs;
vSwitch0,,vmwaresvs
In which each field is:
vSwitch Name/dvSwitch/EthernetPortProfile:
Virtual switch name in the vCenter (distributed or not), whose default value depends on the virtual switch type:
vSwitch0: if the switch is of the VMware vNetwork Standard virtual switch type;
dvSwitch0: if the switch is of the VMware vNetwork Distributed virtual switch type; and
epp0: if the switch is of the Cisco Nexus 1000v Distributed virtual switch type.
VLAN ID: VLAN ID, in case it exists. Must be used only in the public network.
vSwitch type: the possible values are:
vmwaresvs: when using the VMware vNetwork Standard virtual switch.
vmwaredvs: when using the VMware vNetwork Distributed virtual switch.
nexusdvs: when using the Cisco Nexus 1000v Distributed virtual switch.
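The three fields can be recovered from a label by splitting it on commas; the sketch below parses one of the example labels listed above:

```shell
# Split a VMware traffic label into name, VLAN ID and switch type.
label="dvSwitch1,300,vmwaredvs"
oldIFS=$IFS
IFS=,
set -- $label
IFS=$oldIFS
switch_name=$1
vlan_id=$2
switch_type=$3
echo "switch=$switch_name vlan=$vlan_id type=$switch_type"
```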
Figure 209: Adding the VMware network mapping in the Apache CloudStack
7. With the network properly updated, browse to the Clusters section, click on Add Cluster and select the VMware virtualizer. This will enable a new set of options, where the vCenter IP and other data must be supplied, such as the Datacenter name, user and password:
Figure 210: Configuring the VMware cluster in Apache CloudStack
8. Finally, it may be necessary to change the value of some global settings, such as:
(a) vmware.management.portgroup;
(b) vmware.service.console.
It's important to emphasize that the names of the Datacenter, Clusters and Datastores in VMware need to be the same in ACS, since these names are used in the communication between them.
9.5. Problems when adding a VMware cluster
When adding a VMware cluster for the first time in a zone, ACS includes a customizable tag in the vCenter indicating that it's already managed by ACS. However, if any error occurs while adding such vCenter, the tag won't be removed, resulting in an error on subsequent addition attempts, since a vCenter that already has the tag can't be re-added to ACS management.
In the discussed case, the following error message is shown:
Figure 211: Tag error message on the UI when trying to add the vCluster
The process for removing the tag so the vCluster can be added is the following:
1. Access the vCenter, select the zone, click on Summary and, under the section Custom Attributes, click on Edit....
Figure 212: vCenter zone selection
2. Select the attribute cloud.zone, then click on the X button above to delete the attribute and click OK when prompted.
Figure 213: Removing the cloud.zone attribute
3. Remove the VMware zone record from the vmware_data_center table.
Figure 214: Selecting and erasing the vCluster from the database
After the previous steps, it's necessary to recreate the zone or add the clusters and hosts manually.
More detailed information about adding VMware to CloudStack may be found in the official Apache CloudStack documentation.
9.6. Importing VMware VM to the Apache CloudStack
Once the VMware cluster is correctly added to the ACS infrastructure, it'll be possible to import (either via UI or via API) the existing VMware VMs so that they become managed by ACS.
9.6.1. Importing VMs via UI
With an existing VMware cluster in the ACS infrastructure, the new UI will show a view called Tools. By accessing this view, it's possible to import VMs from the vCenter into ACS:
Access the Tools view:
Figure 215: Accessing the Tools view to import the VMs
In this view, select the VMware zone, pod and cluster that were recently added. After this, the existing VMware VMs that aren't managed by ACS will be listed, as well as the VMs already managed by ACS:
Figure 216: Checking VMware cluster VMs
To perform the VMs' import, just select them and then click on the Import Instance button:
Figure 217: Importing a VM to the Apache CloudStack
A pop-up will show up requiring some details; fill them in according to your needs:
Figure 218: Details for VM import
At the end of the procedure, if successful, the VM will be shown in the
Managed Instances column:
Figure 219: VM successfully imported
9.6.2. Importing VMs via API
To perform an import via CloudMonkey, the importUnmanagedInstance API will be used:
This API will import a VM from VMware to CloudStack. In the API call, the ID of the cluster in which the VM will be inserted must be informed, as well as the VM's name under VMware and the service offering it will use. It's also possible to map disk offerings, network offerings, account and domain, and other properties.
Examples:
import unmanagedinstance clusterid=55d19ae9-a26d-43e3-9cb4-83d989cc882c name=test1
serviceofferingid=7fd97738-57b9-4240-9680-3dca279fa9a6
import unmanagedinstance clusterid=55d19ae9-a26d-43e3-9cb4-83d989cc882c name=test2
serviceofferingid=7fd97738-57b9-4240-9680-3dca279fa9a6 datadiskofferinglist[0].disk=disk1
datadiskofferinglist[0].diskOffering=1ef110db-1247-463b-b2ed-14ba1582c075
datadiskofferinglist[1].disk=disk2 datadiskofferinglist[1].diskOffering=07b97faf-740f-4c71-aae5-b8a42e465617
import unmanagedinstance clusterid=55d19ae9-a26d-43e3-9cb4-83d989cc882c name=test3
serviceofferingid=7fd97738-57b9-4240-9680-3dca279fa9a6 datadiskofferinglist[0].disk=disk1
datadiskofferinglist[0].diskOffering=1ef110db-1247-463b-b2ed-14ba1582c075
nicNetworkList[0].nic="nic1" nicNetworkList[0].network=b9683956-8e1d-4128-8063-d3a0d8a4ceae
The disks are informed in the form of a vector in the datadiskofferinglist option. The value of disk must match the disk name in the VM settings, which can be accessed in its details in the vCenter. Only the VM's extra disks must be informed, because the root volume is automatically imported (CloudStack always considers that the first disk retrieved from the VM is the root). The disk option is the name of the disk in the VM and diskOffering is the ID of a compatible disk offering (which can't be smaller than the disk capacity). For importing a VM with multiple disks, it's possible to provide a list of disks to be imported, for example:
import unmanagedinstance name=test-unmanaged ... datadiskofferinglist[0].disk=51-2000
datadiskofferinglist[0].diskOffering=e9eb069f-2946-4e26-8d81-07aebba868d1
datadiskofferinglist[1].disk=52-2000 datadiskofferinglist[1].diskOffering=...
The networks are informed through a vector in the nicNetworkList option, where the nic option is the ethernet port and the network option is the network UUID. If the VM's network in VMware has a VLAN, the network informed in ACS must have the same VLAN. It's also possible to inform the IPs for each network through the nicIpAddressList option, where the nic option is the ethernet port and ip4Address is the IP reserved in CloudStack.
When importing a VM from VMware to ACS, it's important to use a service offering that doesn't surpass the memory limits of the VMware VM. More details about this issue can be found in this link.
10. Conclusion
This document addressed, in general, the main concepts and recurring doubts about Apache CloudStack and auxiliary tools. Both are vast and complex, so it's not possible to address them in their entirety; however, in case of any doubts or suggestions for improvement of this documentation, an issue can be created on GitLab.
Appendices
Appendix A. Terminology
Infrastructure-as-a-Service:
Infrastructure-as-a-Service, also known as IaaS, consists of offering computational resources, such as processing, storage and network access, on demand, usually with pricing tied to the usage of those resources.
Hypervisor:
Software responsible for the creation and execution of virtual machines (VMs). A hypervisor allows a host computer to support several guest VMs, virtually sharing its resources, such as memory, storage and processing, and thus allowing better usage of the available resources.
VM:
Acronym for Virtual Machine. It's software that emulates a real computer. Also called a guest, it's created on another computer, known as the host, and uses part of its resources. The advantages of using VMs are:
They allow using different operating systems on a single computer.
They are easy to manage and maintain.
VMs can be easily created or replicated with an operating system previously installed and configured.
VMs can also be easily migrated from one host to another without data loss.
Virtual network card:
Commonly referred to as NIC, vNIC or VIF. It's software that fulfills the role of a network card in a virtual system.