This page provides a high level overview of the DivvyCloud product architecture and workflow. The diagram below offers a summary of the components required to deploy and run a DivvyCloud platform installation.
Notes on Deployment
Since every environment is different, we recommend you reach out to us through Getting Support. This allows us to review the specifics of your requirements and any project goals you have, so that you get the information and recommendations that best meet your needs.
Note: The content on this page applies to self-hosted customers. For hosted customers we recommend that you contact your CSM or [email protected] with any questions or concerns.
This section includes some basic details about other DivvyCloud platform components. This list is not exhaustive and may vary based on your deployment needs. For specifics reach out to [email protected].
DivvyCloud runs on Ubuntu and CentOS variants. We recommend using Ubuntu 18.04+; CentOS 7+ may also be used.
In DivvyCloud, a "worker" is a process that does something other than serve web requests. A worker may process a queue, run scheduled jobs using process files, or perform any number of other support-type activities. It generally does not interface with users.
In DivvyCloud, the "scheduler" is responsible for delegating tasks to the workers. It maintains a job queue in Redis and assigns jobs to workers as they become available. Jobs fall into three priority categories: High (p0), Medium (p1), and Low (p2).
- High priority jobs include any immediate on-demand jobs, such as stopping or starting an instance, or a user enqueuing a harvest job.
- Medium priority jobs include recurring background jobs, such as calculating Insight compliance.
- Low priority jobs are regularly scheduled harvesting jobs.
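The priority ordering above can be sketched as a simple dispatch loop. This is an illustration only, not DivvyCloud's actual scheduler code, and the queue contents below are hypothetical examples:

```shell
#!/usr/bin/env sh
# Illustrative sketch: drain three hypothetical priority queues, always
# exhausting higher-priority jobs before moving to lower-priority ones.
dispatch_all() {
  p0="stop-instance on-demand-harvest"   # high: immediate on-demand jobs
  p1="insight-compliance"                # medium: recurring background jobs
  p2="scheduled-harvest"                 # low: regular harvesting jobs

  for queue in "$p0" "$p1" "$p2"; do
    for job in $queue; do
      echo "dispatching: $job"
    done
  done
}

dispatch_all
```

In the real platform the queues live in Redis, but the delegation model is the same: a p2 harvesting job only runs once no p0 or p1 work is pending.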
CloudWatch and StackDriver Priorities
If you're using AWS CloudWatch or GCP StackDriver, the DivvyCloud job categories will display differently within CloudWatch/StackDriver:
- Immediate (p0)
- High Priority (p1)
- Standard (p2)
In DivvyCloud, the web interface is how end users access DivvyCloud's GUI. The Interface Server is generally accessible over port 8001, but it can be mapped to a different port or placed behind a custom hostname (e.g., https://customername.divvycloud.com).
In addition to a frontend layer, DivvyCloud has a backend that consists of MySQL 5.7.x and Redis 5.x. These services can be fulfilled by dedicated virtual machines or public, cloud-based services such as AWS RDS and ElastiCache.
Note: We recommend you verify the requirements that apply to your individual cloud provider, for example AWS requirements are specified here.
Redis upgrade required
For customers upgrading from a version of DivvyCloud prior to 21.4.x to DivvyCloud 21.4.x (or higher), we recommend upgrading to Redis 5.x. Contact [email protected] with any questions or concerns.
For the best experience, we recommend using the latest version of Google Chrome. While some other browsers may work, Google Chrome is currently the only browser we support.
DivvyCloud's platform needs access to some public Internet services in order to function properly. All of these network connections are HTTPS traffic on outbound TCP port 443. The specific list of network connections varies based on requirements, but commonly includes:
DivvyCloud licensing server
DivvyCloud Insight distribution
Amazon Web Services API endpoints
Microsoft Azure API endpoints
Google Cloud Platform API endpoints
Alibaba Cloud API endpoints
Zendesk support widget
Sentry (optional error reporting)
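A quick way to confirm that outbound HTTPS is open from the DivvyCloud host is to probe each required endpoint. The hostnames below are illustrative examples only; substitute the actual endpoints your deployment requires:

```shell
#!/usr/bin/env sh
# Verify outbound HTTPS (TCP 443) reachability from the DivvyCloud host.
# Hostnames below are examples; replace with your deployment's actual list.
check_endpoint() {
  # Succeeds if an HTTPS connection can be established within 5 seconds.
  curl --silent --output /dev/null --max-time 5 "https://$1"
}

for host in sts.amazonaws.com management.azure.com www.googleapis.com; do
  if check_endpoint "$host"; then
    echo "$host: reachable"
  else
    echo "$host: UNREACHABLE - check proxy/firewall rules"
  fi
done
```

Run this from each DivvyCloud instance; any UNREACHABLE result points to a firewall or proxy rule that needs attention before installation.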
For end-user access, DivvyCloud runs on port 8001 but can be mapped to port 80 or 443 using any number of proxy services including Apache2, Nginx, AWS ELB, or others.
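As one example of such a mapping, a minimal Nginx reverse-proxy stanza forwarding HTTPS on port 443 to the interface server on port 8001 might look like the following. The server name and certificate paths are placeholders, and your TLS and header requirements may differ:

```nginx
server {
    listen 443 ssl;
    server_name customername.divvycloud.example;          # placeholder hostname

    ssl_certificate     /etc/ssl/certs/divvycloud.crt;    # placeholder paths
    ssl_certificate_key /etc/ssl/private/divvycloud.key;

    location / {
        # Forward all traffic to the DivvyCloud Interface Server.
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```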
Many customers have network security requirements that prohibit all outbound traffic from VPCs. DivvyCloud customers have successfully implemented DivvyCloud using proxy servers. The following describes a typical two-part approach: pre-install and post-install.
- Pre-install, you must set system environment variables.
- Post-install, you must set DivvyCloud environment variables.
To update system environment variables:
For Ubuntu, log into each instance via SSH and append the following to the appropriate environment file:

```shell
http_proxy="http://<PROXYSERVERIP:PORT>"
https_proxy="https://<PROXYSERVERIP:PORT>"
no_proxy="mysql,redis,169.254.169.254"
```
For CentOS, log into each instance via SSH and append the following to the appropriate environment file:

```shell
export http_proxy="http://<PROXYSERVERIP:PORT>"
export https_proxy="https://<PROXYSERVERIP:PORT>"
export no_proxy="mysql,redis,169.254.169.254"
```
You should replace the <PROXYSERVERIP:PORT> values with the actual IP and port values of your proxy servers. If your proxy server requires a username and password, you can format the proxy server variable as follows:
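The standard proxy URL syntax for basic authentication embeds the credentials before the host. The values below are placeholders to replace with your own:

```shell
# Placeholder credentials, IP, and port - substitute your own values.
http_proxy="http://<USERNAME>:<PASSWORD>@<PROXYSERVERIP:PORT>"
https_proxy="https://<USERNAME>:<PASSWORD>@<PROXYSERVERIP:PORT>"
```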
If you are installing a Test Drive deployment, then update your no_proxy variable further by adding these local and loopback IPs:
After configuring the proxy, change to the divvy user and verify the change:

```shell
sudo su - divvy
env | grep proxy
```
The proxy configuration variables should be displayed. If not, log out of the system and log back in so that the environment variables take effect.
Next, install DivvyCloud using a Test Drive or Regular deployment.
Post-install, after stopping DivvyCloud, update the DivvyCloud environment variables, which are located in /divvycloud/prod.env. You will need to uncomment and update the following lines in the prod.env file on each instance:
```shell
# Uncomment and adjust the below values if behind a proxy. Please note that
# 169.254.169.254 is used for AWS Instance/STS AssumeRole.
#http_proxy=http://proxy.acmecorp.com
#https_proxy=http://proxy.acmecorp.com
#no_proxy=mysql,redis,169.254.169.254
```
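The uncommenting step can be scripted. The sketch below assumes the default prod.env layout shown above; the proxy URL in the usage comment is a placeholder:

```shell
#!/usr/bin/env sh
# Sketch: uncomment the proxy lines in a prod.env-style file and point
# them at your proxy server. Adjust the no_proxy list if yours differs.
apply_proxy_settings() {
  env_file="$1"
  proxy_url="$2"
  sed -i \
    -e "s|^#http_proxy=.*|http_proxy=${proxy_url}|" \
    -e "s|^#https_proxy=.*|https_proxy=${proxy_url}|" \
    -e "s|^#no_proxy=.*|no_proxy=mysql,redis,169.254.169.254|" \
    "$env_file"
}

# Example (run on each instance after stopping DivvyCloud; URL is a placeholder):
# apply_proxy_settings /divvycloud/prod.env "http://proxy.acmecorp.com:3128"
```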
As before, replace proxy.acmecorp.com with the actual IP and port values of your proxy servers. And, as before, if you are following the Test Drive deployment, add these local and loopback IPs to your no_proxy to have the following: