Installing Cloud Foundry
In previous blogs, we set up a micro BOSH and a BOSH. We are now ready to start installing Cloud Foundry. First things first: we create a resource plan for our deployment.
As of this writing, a complete installation of Cloud Foundry contains about 34 distinct jobs (VMs). Some of the jobs are core components and at least one instance of each must be installed, such as the Cloud Controller, NATS and DEAs. Some jobs should have multiple instances depending on actual need, such as DEAs and routers. Some jobs are optional, such as service gateways and service nodes. Therefore, before we install Cloud Foundry, we should decide which components to include in the deployment. Once we have a list of components to deploy, we can plan the resources needed by each job: typically the IP address, CPU, memory and storage. Below is an example of a deployment plan.
| Job | Instances | IP | Memory | CPU | Disk (GB) | Required? |
|---|---|---|---|---|---|---|
| debian_nfs_server | 1 | xx.xx.xx.xx | 2GB | 2 | 16 | required |
| nats | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | required |
| ccdb_postgres | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | required |
| uaadb | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | required |
| vcap_redis | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | required |
| uaa | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | required |
| acmdb | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | required |
| acm | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | required |
| cloud_controller | 1 | xx.xx.xx.xx | 2GB | 2 | 16 | required |
| stager | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | required |
| router | 2 | xx.xx.xx.xx | 512MB | 1 | 8 | required |
| health_manager | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | required |
| dea | 2 | xx.xx.xx.xx | 2GB | 2 | 16 | required |
| mysql_node(*) | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| mysql_gateway | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| mongodb_node | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| mongodb_gateway | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| redis_node | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| redis_gateway | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| rabbit_node | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| rabbit_gateway | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| postgresql_node | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| postgresql_gateway | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| vblob_node | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| vblob_gateway | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| backup_manager | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| service_utilities | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| serialization_data_server | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| services_nfs | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| syslog_aggregator | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| services_redis | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| opentsdb | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| collector | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| dashboard | 1 | xx.xx.xx.xx | 1GB | 1 | 8 | optional |
| Total | 36 | | 39GB | 40 | 320 | |
From the above table, we can come up with the required resource pools:

| Pool Name | Size | Configuration | Jobs |
|---|---|---|---|
| small | 30 | RAM: 1GB, CPU: 1, DISK: 8GB | nats, ccdb_postgres, uaadb, vcap_redis, uaa, acmdb, acm, stager, health_manager, mysql_node, mysql_gateway, mongodb_node, mongodb_gateway, redis_node, redis_gateway, rabbit_node, rabbit_gateway, postgresql_node, postgresql_gateway, vblob_node, vblob_gateway, backup_manager, service_utilities, serialization_data_server, collector, dashboard, services_nfs, syslog_aggregator, services_redis, opentsdb |
| medium | 4 | RAM: 2GB, CPU: 2, DISK: 16GB | debian_nfs_server, cloud_controller, dea |
| router | 2 | RAM: 512MB, CPU: 1, DISK: 8GB | router |
From the above two tables, we can start to modify the manifest file. We name the manifest file cf.yml. The following sections explain its fields in detail.
name
This is the Cloud Foundry deployment name, which can be chosen arbitrarily.
director_uuid
The director_uuid is the UUID of the BOSH director we deployed in Part III. We can retrieve this value with the command:
$ bosh status
release
The release name should be the same as the name you entered when creating the cf-release. The version is generated automatically when the release is created.
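Putting these first fields together, the top of cf.yml might look like the following sketch (the UUID is a placeholder; use the value from your own bosh status output):
name: cf-dev                # arbitrary deployment name
director_uuid: 12345678-aaaa-bbbb-cccc-1234567890ab   # placeholder; from `bosh status`
release:
  name: cf-def              # the name used when the cf-release was created
  version: 95.10-dev        # generated automatically at release creation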
compilation, update, networks, resource_pools
These fields are similar to those in the bosh.yml file. Refer to the previous part for more information.
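As a rough sketch under our resource plan (the stemcell name/version and network name here are placeholders; on vSphere, cloud_properties take ram and disk in MB), the small pool might be declared like this:
resource_pools:
- name: small
  size: 30
  network: default          # placeholder network name
  stemcell:
    name: bosh-stemcell     # placeholder; use the stemcell uploaded in Part III
    version: 0.6.4          # placeholder version
  cloud_properties:
    ram: 1024               # 1GB of RAM
    cpu: 1
    disk: 8192              # 8GB of disk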
jobs
Jobs are the components of Cloud Foundry. Each job runs on a virtual machine. The jobs are described below, followed by a sketch of typical job entries.
debian_nfs_server, services_nfs: these two jobs serve as NFS servers in Cloud Foundry. Because they act as file servers, we should make sure that the “persistent_disk” property is present.
syslog_aggregator: this job is used to collect system logs and store them in the database.
nats: NATS is the message bus and a core component of Cloud Foundry.
opentsdb: this is a database that stores the log information. Since it is a database, it also requires a “persistent_disk” property.
collector: this job collects system information and stores it in databases.
dashboard: this is a web-based tool for monitoring and reporting on the Cloud Foundry platform.
cloud_controller, ccdb: cloud_controller controls all the Cloud Foundry components. “ccdb” is the database for the cloud controller; the “persistent_disk” property is required for ccdb.
uaa, uaadb: uaa is used for user authentication and authorization. uaadb is the database that stores the user information; the “persistent_disk” property is required for uaadb.
vcap_redis, services_redis: these two jobs are used to store the internal key-value pairs for Cloud Foundry.
acm, acmdb: acm is short for Access Control Manager, a service that allows Cloud Foundry components to implement access control features. “acmdb” is the database for acm; it also requires a “persistent_disk” property.
stager: the stager is the job that packages the source code and all required dependencies of a user's application. When staging is complete, the app is passed to a DEA for execution.
router: the router routes users' requests to the proper destinations in Cloud Foundry.
health_manager, health_manager_next: health_manager is the job that monitors the health status of all users' apps. health_manager_next is the next-generation version of health_manager; they will coexist for some time.
dea: “dea” is short for Droplet Execution Agent. All users' apps are executed in DEAs.
mysql_node, mysql_gateway, mongodb_node, mongodb_gateway, redis_node, redis_gateway, rabbit_node, rabbit_gateway, postgresql_node, postgresql_gateway, vblob_node, vblob_gateway: these jobs make up the services that Cloud Foundry supplies. Each service has a node that provisions resources; the corresponding gateway sits between the cloud_controller and the service node and acts as the gateway for that service.
backup_manager: backs up users' data and databases.
service_utilities: utilities for service management.
serialization_data_server: a server used to serialize data in Cloud Foundry.
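As an illustrative sketch (the template and network names are assumptions; check the job templates in your cf-release for the exact values), a job without and with a persistent disk might be declared like this:
jobs:
- name: nats
  template: nats
  instances: 1
  resource_pool: small
  networks:
  - name: default
    static_ips:
    - xx.xx.xx.xx
- name: ccdb_postgres
  template: postgres        # illustrative template name
  instances: 1
  resource_pool: small
  persistent_disk: 8192     # required because ccdb stores data on disk
  networks:
  - name: default
    static_ips:
    - xx.xx.xx.xx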
properties
This is another important part of the cf.yml file. Note that the IP addresses in this section must be in sync with those in the jobs field, and you should replace the passwords and tokens with your own secure values. A sketch of this section appears after the list below.
domain: this is the domain name for users' access. We should also create a DNS server to resolve the domain to the load balancer's IP address. In our example, we set the domain name to cf.local, so users can run vmc target api.cf.local when pushing apps.
cc.srv_api_uri: this property usually takes the format http://api.<yourdomain>. For example, since we set the domain to cf.local, the srv_api_uri would be http://api.cf.local.
cc.password: this password must have at least 16 characters.
cc.allow_registration: if true, users can register an account using the vmc command. Set this to false to disable this behavior.
cc.admins: a list of admin users. Admin users can register through the vmc command even if allow_registration is set to false.
Most of the 'nfs_server' entries in the properties should be set to the IP address of the 'services_nfs' job.
mysql_node.production: if true, the memory of mysql_node must be at least 4GB. In an experimental environment, we can set it to false so that mysql_node can run with less than 4GB of memory.
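The sketch below illustrates the properties discussed above; the exact key layout may differ between cf-release versions, and all values shown are placeholders:
properties:
  domain: cf.local
  cc:
    srv_api_uri: http://api.cf.local
    password: aaaaaaaaaaaaaaaa      # placeholder; at least 16 characters
    allow_registration: true        # set to false to disable vmc registration
    admins:
    - admin@cf.local                # placeholder admin user
  nfs_server:
    address: xx.xx.xx.xx            # IP address of the services_nfs job
  mysql_node:
    production: false               # lets mysql_node run with less than 4GB RAM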
Because the yml file may evolve with new releases of Cloud Foundry, the bosh command provides an option to validate the yml file. Type “bosh help” and you can see the usage and explanation of “bosh diff”:
$ bosh diff [<template_file>]
This command compares your current deployment manifest against the specified deployment manifest template. It helps you to keep your deployment configuration file up to date. A dev template can be found in deployments repos.
For example, you can run the following command to compare your yml file to the template file. First, cd into the directory where your cf.yml file and the template file reside, then use this command:
$ bosh diff dev-template.erb
This command will help you find any mistakes in the cf.yml file. If some fields are missing, the command helps fill them in automatically. If there is a spelling mistake or another error, the command reports a syntax error.
You can download a sample yml file from here:
https://github.com/vmware-china-se/bosh_doc/blob/master/cf.yml
When the manifest file is completed, we can now start to install Cloud Foundry.
1) In Part III, we cloned the CF repository from Gerrit:
$ gerrit clone ssh://<your username>@reviews.cloudfoundry.org:29418/cf-release.git
2) Go to the directory and create a CF release.
$ cd cf-release
$ bosh create release
This will download all the packages, blob data and other resources needed. It will take several minutes depending on your network speed.
NOTE:
1. If you have edited the code in cf-release, you may have to add the --force option to bosh create release.
2. It is important to have a direct internet connection when running this command.
3. If your network is slow or you do not have a direct connection to the internet, you may want to do this in a better environment. You can create the release on a machine with a good internet connection using the --with-tarball option, then copy the generated tarball back to the system you want (see the example below).
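For example, to combine the two options mentioned in the notes above into a single command:
$ bosh create release --force --with-tarball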
If nothing goes wrong, you can see a summary of this release like this:
Generating manifest...
----------------------
Writing manifest...
Release summary
---------------
Packages
+---------------------------+---------+-------+------------------------------------------+
| Name | Version | Notes | Fingerprint |
+---------------------------+---------+-------+------------------------------------------+
| sqlite | 3 | | e3e9b61f8cdc2610480c2aa841e89dc0bb1dc9c9 |
| ruby | 6 | | b35a5da6c214d9a97fdd664931bf64987053ad4c |
… …
| debian_nfs_server | 3 | | c1dc860ed6ab2bee68172be996f64f9587e9ac0d |
+---------------------------+---------+-------+------------------------------------------+
Jobs
+---------------------------+----------+-------------+------------------------------------------+
| Name | Version | Notes | Fingerprint |
+---------------------------+----------+-------------+------------------------------------------+
| redis_node | 19 | | 61098860eaa8cfb5dc438255ebe28db74a1feabc |
| rabbit_gateway | 13 | | ddcc0901ded1e45fdc4f318ed4ba8d8ccca51f7f |
… …
| debian_nfs_server | 7 | | 234fabc1a2c5dfbfd7c932882d947fed331b8295 |
| memcached_gateway | 4 | | 998623d86733a633c789849721e00d85cc3ebc20 |
Jobs affected by changes in this release
+------------------+----------+
| Name | Version |
+------------------+----------+
… …
| cloud_controller | 45.1-dev |
+------------------+----------+
Release version: 95.10-dev
Release manifest: /home/boshcli/cf-release/dev_releases/cf-def-95.10-dev.yml
As you can see, the dev_releases directory contains the release manifest yml file (and a tarball file, if the --with-tarball option is on).
3) Point the BOSH CLI at the BOSH director. If you don't remember the director's IP, you can find it in your BOSH deployment manifest from Part III.
$ bosh target 10.40.97.117:25555
Target set to `bosh_director (http://10.40.97.117:25555) Ver: 0.5.1 (release:abb3e7a4 bosh:2c69ee8c)
4) Upload the cf-release by referring to the generated manifest file, e.g. cf-def-95.10-dev.yml in our example.
$ bosh upload release cf-def-95.10-dev.yml
This step copies packages and jobs, builds them into a tarball, and then verifies the release to make sure files and dependencies are correct. After verification, it uploads the release and creates new jobs. Finally, you will see information telling you the release was uploaded:
Task 317 done
Started 2012-10-28 05:35:43 UTC
Finished 2012-10-28 05:36:44 UTC
Duration 00:01:01
Release uploaded
You can verify your release by:
$ bosh releases
You can see all the newly uploaded releases in the listing:
+--------+------------------------------------------------------+
| Name | Versions |
+--------+------------------------------------------------------+
| cf-def | 95.1-dev, 95.8-dev, 95.9-dev, 95.10-dev |
| cflab | 92.1-dev |
+--------+------------------------------------------------------+
5) Now that we have uploaded the release and the stemcell (the same stemcell as in Part III), and the manifest is ready, set the deployment to the manifest:
$ bosh deployment cf-dev.yml
Deployment set to `/home/boshcli/cf-dev.yml'
We can deploy Cloud Foundry now:
$ bosh deploy
This will create VMs for the jobs, compile the packages and install dependencies. It will take several minutes, depending on the server's hardware. You will see output like:
Preparing deployment
binding deployment (00:00:00)
binding releases (00:00:01)
… …
Preparing package compilation
… …
Compiling packages
… …
Preparing DNS
binding DNS (00:00:00)
Creating bound missing VMs
… …
Binding instance VMs
… …
Preparing configuration
binding configuration (00:00:03)
… …
Creating job cloud_controller
cloud_controller/0 (canary) (00:02:45)
… …
Done 1/1 00:08:41
Task 318 done
Started 2012-10-28 05:37:52 UTC
Finished 2012-10-28 05:49:43 UTC
Duration 00:11:51
Deployed `cf-dev.yml' to `bosh_director'
To check your deployment, you can use this command:
$ bosh deployments
+----------+
| Name |
+----------+
| cf.local |
+----------+
Deployments total: 1
You can also verify the running status of every VM:
$ bosh vms
+---------------------------+---------+---------------+-------------+
| Job/index | State | Resource Pool | IPs |
+---------------------------+---------+---------------+-------------+
| acm/0 | running | small | 10.40.97.58 |
| acmdb/0 | running | small | 10.40.97.57 |
| cloud_controller/0 | running | medium | 10.40.97.59 |
… …
+---------------------------+---------+---------------+-------------+
VMs total: 40
At this moment, Cloud Foundry has been completely installed. If you cannot wait to verify the installation, you can use the vmc command to target one of the routers' IP addresses and deploy a test web app on it (see the subsequent section). Because there is no DNS available yet, you need at least these two lines in the hosts file of the vmc client machine and of the machine running a browser to test the web app:
<router’s IP address> api.yourdomain.com
<router’s IP address> <yourtestapp>.yourdomain.com
If the above testing works fine, your Cloud Foundry instance is working. The last things to take care of are the load balancer and DNS. These two components are not part of Cloud Foundry; however, they need to be set up properly in a production environment, so we briefly describe how to set them up.
You can deploy either a hardware or software load balancer (LB) to distribute the load evenly to multiple instances of router components. In our sample deployment we have two routers. For a software LB, you can use Stingray Traffic Manager. It can be downloaded from here: https://support.riverbed.com/download.htm?filename=public/software/stingray/trafficmanager/9.0/ZeusTM_90_Linux-x86_64.tgz
A DNS server is needed to resolve the domain of your Cloud Foundry instance. Basically, the DNS server resolves a wildcard name like *.yourdomain.com to the IP address of the load balancer. If you do not have a LB, you can set up DNS rotation to resolve the domain to routers in a round robin fashion.
When the LB and DNS are set up properly, you can start to deploy apps on your instance.
Cloud Foundry has a command-line tool known as VMC. It can perform most operations on Cloud Foundry, such as configuring your applications, deploying them to Cloud Foundry, and monitoring the status of your apps. To install VMC, you must install Ruby and RubyGems (a Ruby package manager) on the computer on which you want to run VMC. Currently Ruby 1.8.7 and 1.9.2 are supported. After that, you can install VMC with the command below (more on VMC installation: http://docs.cloudfoundry.com/tools/vmc/installing-vmc.html):
$ sudo gem install vmc
Now, point vmc at your Cloud Foundry instance. The URL should look like api.yourdomain.com, for example:
$ vmc target api.cf.local
Log in with the admin user's credentials, which are specified in the deployment manifest:
$ vmc login
Initially, you will be asked to set a password for your account. After logging in, you can get information about your Cloud Foundry instance:
$ vmc info
Now, let’s create and deploy a simple hello world Sinatra application to verify the instance.
$ mkdir ~/hello
$ cd ~/hello
Create a Ruby file called hello.rb with the following contents:
require 'sinatra'

get '/' do
  "Hello from Cloud Foundry"
end
Save the file; we are now ready to upload the application:
$ vmc push
Complete the prompts as shown below:
Would you like to deploy from the current directory? [Yn]:
Application Name: hello
Detected a Sinatra Application, is this correct? [Yn]:
Application Deployed URL [hello.cf.local]:
Memory reservation (128M, 256M, 512M, 1G, 2G) [128M]:
How many instances? [1]:
Bind existing services to 'hello'? [yN]:
Create services to bind to 'hello'? [yN]:
Would you like to save this configuration? [yN]:
After a while, you will see the output:
Creating Application: OK
Uploading Application:
Checking for available resources: OK
Packing application: OK
Uploading (0K): OK
Push Status: OK
Staging Application 'hello': OK
Starting Application 'hello': OK
Now visit the application's URL, hello.cf.local, in your browser. If you can see the text, your application has been successfully deployed.
Congratulations, your Cloud Foundry instance has been completely set up. It is functionally identical to cloudfoundry.com.
More on deploying Cloud Foundry on vSphere using BOSH: