Server and Network Recommendations¶
DivvyCloud is flexible enough to support different implementation options, but these are our recommendations for server and network settings.
DivvyCloud runs on Ubuntu and CentOS variants. We recommend using the following:
- Ubuntu 16.04+
- CentOS 7+ (see note below)
For evaluation purposes, DivvyCloud can run on a standalone instance, but most enterprise deployments require at least two instances, each with at least:
- 4 cores
- 8 GB of memory
- 30 GB root volume
In addition to a frontend layer, DivvyCloud has a backend that consists of MySQL 5.7 and Redis 3.x. These services can be provided by dedicated virtual machines or by managed cloud services such as AWS RDS and ElastiCache.
The MySQL database instance should have at least:
- 4 cores
- 16 GB of memory
- 100 GB volume
The Redis cache instance should have at least:
- 1 core
- 2 GB of memory
DivvyCloud’s platform needs access to some public Internet services in order to function properly. All of these network connections are HTTPS traffic on outbound TCP port 443. The specific list of network connections varies based upon requirements, but it commonly includes:
| Endpoint | Purpose |
| --- | --- |
| divvycentral.divvycloud.com | DivvyCloud licensing server |
| backoffice.divvycloud.com | DivvyCloud Insight distribution |
| *.amazonaws.com | Amazon Web Services API endpoints |
| management.azure.com | Microsoft Azure API endpoints |
| www.googleapis.com/* | Google Cloud Platform API endpoints |
| *.zopim.com | Zendesk support widget |
| divvycloud.zendesk.com | Zendesk support widget |
| *.sentry.io | Sentry (optional error reporting) |
In addition, DivvyCloud requires customer-defined API endpoints when connecting to VMware vSphere or OpenStack cloud platforms.
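Before installing, it can help to confirm that this outbound access is actually available from the DivvyCloud instances. The following is a minimal sketch (not part of the DivvyCloud installer) that probes the concrete endpoints from the table above; wildcard entries such as *.amazonaws.com should be tested against the specific hosts you use:

```bash
#!/usr/bin/env bash
# Check outbound HTTPS (TCP 443) reachability from a DivvyCloud instance.
for host in divvycentral.divvycloud.com backoffice.divvycloud.com \
            management.azure.com www.googleapis.com divvycloud.zendesk.com; do
  # Any HTTP status code (even 403/404) means the connection itself succeeded;
  # curl reports 000 when the outbound path is blocked.
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "https://$host")
  if [ "$code" = "000" ]; then
    echo "$host: unreachable"
  else
    echo "$host: HTTP $code"
  fi
done
```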
For end-user access, DivvyCloud runs on port 8001 but can be mapped to port 80 or 443 using any number of proxy services including Apache2, Nginx, AWS ELB or others.
Many customers have network security requirements that prohibit all outbound traffic from their VPCs. In these environments, customers have successfully deployed DivvyCloud behind proxy servers. The following describes a typical two-part approach: pre-install, set the system environment variables; post-install, set the DivvyCloud environment variables. To update the system environment variables:
For Ubuntu, log into each instance via SSH and append the following to the system environment file:
http_proxy="http://<PROXYSERVERIP:PORT>"
https_proxy="https://<PROXYSERVERIP:PORT>"
no_proxy="mysql,redis,169.254.169.254"
For CentOS, log into each instance via SSH and append the following to the system-wide shell profile:
export http_proxy="http://<PROXYSERVERIP:PORT>"
export https_proxy="https://<PROXYSERVERIP:PORT>"
export no_proxy="mysql,redis,169.254.169.254"
Replace the PROXYSERVERIP and PORT values with the actual IP and port values of your proxy servers. If your proxy server requires a username and password, you can format the proxy server variable as follows:
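A common convention (not specific to DivvyCloud) is to embed the credentials in the proxy URL; USERNAME and PASSWORD below are placeholders:

```
https_proxy="https://<USERNAME>:<PASSWORD>@<PROXYSERVERIP:PORT>"
```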
If you are installing a Test Drive deployment, update your no_proxy variable further by adding the local and loopback IP addresses of the instance.
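Assuming the standard loopback entries (illustrative, not an exhaustive list), the resulting value would look something like this:

```
no_proxy="mysql,redis,169.254.169.254,localhost,127.0.0.1"
```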
After configuring the proxy, change to the divvy user and verify the change:
sudo su - divvy
env | grep proxy
The proxy configuration variables should be displayed. If not, log out of the system and log back in so that the environment variables take effect.
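With the values from above in place, the output should resemble the following (your proxy address will differ):

```
http_proxy=http://<PROXYSERVERIP:PORT>
https_proxy=https://<PROXYSERVERIP:PORT>
no_proxy=mysql,redis,169.254.169.254
```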
Post-install, after stopping DivvyCloud, update the DivvyCloud environment variables, which are located in /divvycloud/prod.env. You will need to uncomment and update the following lines in the prod.env file on each instance:
# Uncomment and adjust the below values if behind a proxy. Please note that
# 169.254.169.254 is used for AWS Instance/STS AssumeRole.
#http_proxy=http://proxy.acmecorp.com
#https_proxy=http://proxy.acmecorp.com
#no_proxy=mysql,redis,169.254.169.254
As before, replace proxy.acmecorp.com with the actual IP and port values of your proxy servers. And, as before, if you are following the Test Drive deployment, add the local and loopback IPs to your no_proxy entry.
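Assuming a proxy at proxy.example.com:3128 (a placeholder address) and the standard loopback entries, the uncommented block would end up looking something like this:

```
http_proxy=http://proxy.example.com:3128
https_proxy=http://proxy.example.com:3128
no_proxy=mysql,redis,169.254.169.254,localhost,127.0.0.1
```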
CentOS and MySQL¶
Note about CentOS with SELinux: SELinux prevents Docker from writing MySQL data to the host system. The workaround is to run this command from the divvycloud directory on each instance:
chcon -Rt svirt_sandbox_file_t data
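If you are not sure whether SELinux is enforcing on a given instance, you can check first with the standard SELinux tooling (not DivvyCloud-specific):

```bash
# Prints Enforcing, Permissive, or Disabled; the chcon workaround above
# is only needed when the mode is Enforcing.
getenforce
```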