This page provides a high-level overview of the DivvyCloud product architecture and workflow. The diagram below summarizes the components required to deploy and run a DivvyCloud platform installation.
Contact Us Before You Deploy
Since every environment is different, we recommend you reach out to us through Getting Support. This allows us to review the specifics of your requirements and project goals, and to ensure you get the information and recommendations that best meet your needs.
This section includes some basic details about other DivvyCloud platform components. The list is not exhaustive and may vary based on your deployment needs. For specifics, reach out to [email protected]
DivvyCloud runs on Ubuntu and CentOS variants. We recommend Ubuntu 18.04+; CentOS 7+ may also be used.
In DivvyCloud, a "worker" is a process that does something other than serve web requests. A worker may process a queue, run scheduled jobs using process files, or perform any number of other support-type activities; it generally does not interface with users.
In DivvyCloud, the "scheduler" is responsible for delegating tasks to the workers. It maintains a job queue in Redis, and jobs fall into three categories: High (p0), Medium (p1), and Low (p2) priority.
- High priority (p0) jobs include immediate, on-demand actions such as stopping or starting an instance, or a user enqueueing a harvest job.
- Medium priority (p1) jobs include recurring background jobs, such as calculating Insight compliance.
- Low priority (p2) jobs are regularly scheduled harvesting jobs.
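The dispatch order described above can be sketched in a few lines of shell. This is a hypothetical illustration only: the queue names (p0-p2) mirror DivvyCloud's priority tiers, but the real scheduler stores its queues in Redis, which this plain-shell demo merely approximates.

```shell
# Hypothetical sketch of priority-ordered dispatch. Job names below are
# invented examples of the job types listed above, not real identifiers.
p0="stop_instance"        # high priority: immediate, on-demand action
p1="insight_compliance"   # medium priority: recurring background job
p2="scheduled_harvest"    # low priority: regular harvesting job

for queue in p0 p1 p2; do  # higher-priority queues are always drained first
  eval "job=\$$queue"
  echo "dispatching $queue job: $job"
done
```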
In DivvyCloud, the web interface is how end users access DivvyCloud's GUI. The Interface Server is generally accessible over port 8001, but it can be mapped to a different value or fronted by a custom domain (e.g., https://customername.divvycloud.com).
In addition to a frontend layer, DivvyCloud has a backend that consists of MySQL 5.7.x and Redis >= 3.x and <= 5.0.6. These services can be fulfilled by dedicated virtual machines or public, cloud-based services such as AWS RDS and ElastiCache.
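As a quick sanity check, the Redis version constraint above (>= 3.x and <= 5.0.6) can be verified with `sort -V`. The `redis_version` value below is a stand-in for what `redis-cli INFO server` would report in your environment:

```shell
# Sketch: check a Redis version against DivvyCloud's supported range.
redis_version="5.0.5"      # stand-in for the value reported by your server
min="3.0.0"; max="5.0.6"

# sort -V orders version strings; the candidate must sit between min and max.
lowest=$(printf '%s\n%s\n' "$min" "$redis_version" | sort -V | head -n1)
highest=$(printf '%s\n%s\n' "$max" "$redis_version" | sort -V | tail -n1)

if [ "$lowest" = "$min" ] && [ "$highest" = "$max" ]; then
  echo "redis $redis_version is in the supported range"
fi
```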
Note: We recommend you verify the requirements that apply to your individual cloud provider, for example AWS requirements are specified here.
For the best experience, we recommend using the latest version of Google Chrome. While some other browsers may work, Google Chrome is currently the only supported browser.
DivvyCloud’s platform needs access to some public Internet services in order to function properly. All of these network connections are HTTPS traffic on outbound TCP port 443. The specific list of network connections varies based upon requirements, but commonly include:
| Endpoint | Purpose |
| --- | --- |
| divvycentral.divvycloud.com | DivvyCloud licensing server |
| backoffice.divvycloud.com | DivvyCloud Insight distribution |
| *.amazonaws.com | Amazon Web Services API endpoints |
| management.azure.com | Microsoft Azure API endpoints |
| www.googleapis.com/* | Google Cloud Platform API endpoints |
| *.aliyuncs.com | Alibaba Cloud API endpoints |
| *.zopim.com | Zendesk support widget |
| divvycloud.zendesk.com | Zendesk support widget |
| *.sentry.io | Sentry (optional error reporting) |
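To confirm outbound access, each endpoint can be probed on TCP port 443. The sketch below only prints the probe commands, so it is safe to run anywhere; remove the `echo` to actually execute them (assumes curl is installed, and represents wildcard entries such as *.amazonaws.com by a single example host):

```shell
# Print a curl probe for each required outbound endpoint.
endpoints="divvycentral.divvycloud.com backoffice.divvycloud.com \
management.azure.com www.googleapis.com"

for host in $endpoints; do
  # -sS: quiet but show errors; --connect-timeout: fail fast on blocked egress
  echo curl -sS --connect-timeout 5 -o /dev/null "https://$host/"
done
```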
In addition, DivvyCloud requires customer-defined API endpoints when connecting to VMware vSphere or OpenStack (deprecated) cloud platforms.
For end-user access, DivvyCloud runs on port 8001 but can be mapped to port 80 or 443 using any number of proxy services including Apache2, Nginx, AWS ELB, or others.
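As one illustration, a minimal Nginx reverse-proxy configuration for this mapping might look like the following. The hostname and certificate paths are placeholders, and only the upstream port (8001, DivvyCloud's default) comes from the text above:

```nginx
# Sketch: terminate TLS on 443 and forward to the DivvyCloud interface server.
server {
    listen 443 ssl;
    server_name divvycloud.example.com;                  # placeholder hostname

    ssl_certificate     /etc/ssl/certs/divvycloud.crt;   # placeholder paths
    ssl_certificate_key /etc/ssl/private/divvycloud.key;

    location / {
        proxy_pass http://127.0.0.1:8001;                # DivvyCloud default port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```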
Many customers have network security requirements that prohibit all outbound traffic from VPCs. DivvyCloud customers have successfully implemented DivvyCloud using proxy servers. The following describes a typical two-part approach: pre-install and post-install.
- Pre-install, you must set system environment variables.
- Post-install, you must set DivvyCloud environment variables.
To update system environment variables:
For Ubuntu, log into each instance via SSH and append the following to the system environment file:

```
http_proxy="http://<PROXYSERVERIP:PORT>"
https_proxy="https://<PROXYSERVERIP:PORT>"
no_proxy="mysql,redis,169.254.169.254"
```
For CentOS, log into each instance via SSH and append the following to the system environment file:

```
export http_proxy="http://<PROXYSERVERIP:PORT>"
export https_proxy="https://<PROXYSERVERIP:PORT>"
export no_proxy="mysql,redis,169.254.169.254"
```
Replace the PROXYSERVERIP and PORT values with the actual IP and port values of your proxy servers. If your proxy server requires a username and password, you can format the proxy server variable as follows:
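The referenced format example appears to have been dropped from the page. A common convention, assuming the proxy accepts credentials embedded in the URL, is shown below; `user`, `pass`, `PROXYSERVERIP`, and `PORT` are all placeholders:

```shell
# Hypothetical example: embed proxy credentials directly in the URL.
http_proxy="http://user:pass@PROXYSERVERIP:PORT"
https_proxy="https://user:pass@PROXYSERVERIP:PORT"
```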
If you are installing a Test Drive deployment, update your no_proxy variable further by adding these local and loopback IPs:
After configuring the proxy, change to the divvy user and verify the change:

```
sudo su - divvy
env | grep proxy
```
The proxy configuration variables should be displayed. If not, log out of the system and log back in so that the environment variables take effect.
Next, install DivvyCloud using a Test Drive or Regular deployment.
Post-install, after stopping DivvyCloud, update the DivvyCloud environment variables located in /divvycloud/prod.env. You will need to uncomment and update the following lines in the prod.env file on each instance:
```
# Uncomment and adjust the below values if behind a proxy. Please note that
# 169.254.169.254 are used for AWS Instance/STS AssumeRole.
#http_proxy=http://proxy.acmecorp.com
#https_proxy=http://proxy.acmecorp.com
#no_proxy=mysql,redis,169.254.169.254
```
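One way to apply these edits non-interactively is with sed. The snippet below operates on a local copy of the file so it is safe to run as-is; `PROXY_URL` is a placeholder for your actual proxy address, and the in-place `-i` flag assumes GNU sed:

```shell
# Recreate the commented-out proxy block locally
# (a stand-in for /divvycloud/prod.env so this demo is self-contained).
cat > prod.env <<'EOF'
#http_proxy=http://proxy.acmecorp.com
#https_proxy=http://proxy.acmecorp.com
#no_proxy=mysql,redis,169.254.169.254
EOF

PROXY_URL="http://proxy.acmecorp.com"   # placeholder: your proxy address

# Uncomment each line and point http(s)_proxy at PROXY_URL.
sed -i \
  -e "s|^#\(http_proxy\)=.*|\1=$PROXY_URL|" \
  -e "s|^#\(https_proxy\)=.*|\1=$PROXY_URL|" \
  -e "s|^#\(no_proxy\)=|\1=|" prod.env

cat prod.env
```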
As before, replace proxy.acmecorp.com with the actual IP and port values of your proxy servers. And, as before, if you are following the Test Drive deployment, add these local and loopback IPs to your no_proxy variable: