Building Cloud Foundry on vSphere using BOSH Part 4

Installing Cloud Foundry

In previous blogs, we set up a micro BOSH and a BOSH. We are now ready to start our installation of Cloud Foundry. First things first: we create a resource plan for our deployment.

As of this writing, a complete installation of Cloud Foundry contains about 34 distinct jobs (VMs). Some jobs are core components and at least one instance must be installed, such as Cloud Controller, NATS and the DEAs. Some jobs should have multiple instances depending on actual need, such as DEAs and routers. Some jobs are optional, such as service gateways and service nodes. Therefore, before we install Cloud Foundry, we should decide which components are included in a deployment. Once we have the list of components we want to deploy, we can plan the resources needed by each job. Typically, this includes IP address, CPU, memory and storage. Below is an example of a deployment plan.

Job Instances IP Memory CPU Disk(GB) Required?
debian_nfs_server 1 xx.xx.xx.xx 2GB 2 16 required
nats 1 xx.xx.xx.xx 1GB 1 8 required
ccdb_postgres 1 xx.xx.xx.xx 1GB 1 8 required
uaadb 1 xx.xx.xx.xx 1GB 1 8 required
vcap_redis 1 xx.xx.xx.xx 1GB 1 8 required
uaa 1 xx.xx.xx.xx 1GB 1 8 required
acmdb 1 xx.xx.xx.xx 1GB 1 8 required
acm 1 xx.xx.xx.xx 1GB 1 8 required
cloud_controller 1 xx.xx.xx.xx 2GB 2 16 required
stager 1 xx.xx.xx.xx 1GB 1 8 required
router 2 xx.xx.xx.xx 512MB 1 8 required
health_manager 1 xx.xx.xx.xx 1GB 1 8 required
dea 2 xx.xx.xx.xx 2GB 2 16 required
mysql_node(*) 1 xx.xx.xx.xx 1GB 1 8 optional
mysql_gateway 1 xx.xx.xx.xx 1GB 1 8 optional
mongodb_node 1 xx.xx.xx.xx 1GB 1 8 optional
mongodb_gateway 1 xx.xx.xx.xx 1GB 1 8 optional
redis_node 1 xx.xx.xx.xx 1GB 1 8 optional
redis_gateway 1 xx.xx.xx.xx 1GB 1 8 optional
rabbit_node 1 xx.xx.xx.xx 1GB 1 8 optional
rabbit_gateway 1 xx.xx.xx.xx 1GB 1 8 optional
postgresql_node 1 xx.xx.xx.xx 1GB 1 8 optional
postgresql_gateway 1 xx.xx.xx.xx 1GB 1 8 optional
vblob_node 1 xx.xx.xx.xx 1GB 1 8 optional
vblob_gateway 1 xx.xx.xx.xx 1GB 1 8 optional
backup_manager 1 xx.xx.xx.xx 1GB 1 8 optional
service_utilities 1 xx.xx.xx.xx 1GB 1 8 optional
serialization_data_server 1 xx.xx.xx.xx 1GB 1 8 optional
services_nfs 1 xx.xx.xx.xx 1GB 1 8 optional
syslog_aggregator 1 xx.xx.xx.xx 1GB 1 8 optional
services_redis 1 xx.xx.xx.xx 1GB 1 8 optional
opentsdb 1 xx.xx.xx.xx 1GB 1 8 optional
collector 1 xx.xx.xx.xx 1GB 1 8 optional
dashboard 1 xx.xx.xx.xx 1GB 1 8 optional
Total: 36 instances, 39GB memory, 40 CPUs, 320GB disk

From the above table, we can come up with required resource pools:

Pool Name Size Configuration Jobs
small 30 RAM: 1GB, CPU: 1, DISK: 8GB nats, ccdb_postgres, uaadb, vcap_redis, uaa, acmdb, acm, stager, health_manager, mysql_node, mysql_gateway, mongodb_node, mongodb_gateway, redis_node, redis_gateway, rabbit_node, rabbit_gateway, postgresql_node, postgresql_gateway, vblob_node, vblob_gateway, backup_manager, service_utilities, collector, dashboard, serialization_data_server, services_nfs, syslog_aggregator, services_redis, opentsdb
medium 4 RAM: 2GB, CPU: 2, DISK: 16GB debian_nfs_server, cloud_controller, dea
router 2 RAM: 512MB, CPU: 1, DISK: 8GB router

From the above two tables, we can start to modify the manifest file. We name the manifest file cf.yml. The following sections explain its fields in detail.

name
This is the Cloud Foundry deployment name. It can be chosen arbitrarily.

director_uuid
The director_uuid is the UUID of the BOSH director we deployed in Part III. We can retrieve this value with the command:

$ bosh status

release
The release name should be the same as the name you entered when creating the cf-release. The version was generated automatically when the release was created.
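
Putting the three fields above together, the top of cf.yml looks roughly like the sketch below. The UUID, release name and version are example values taken from elsewhere in this article; substitute the output of bosh status and your own release:

name: cf.local               # arbitrary deployment name
director_uuid: 7d72eb71-9a98-4081-9857-ad7c7ff4ee33   # example value; use `bosh status`
release:
  name: cf-def               # must match the name used when creating cf-release
  version: 95.10-dev         # generated by `bosh create release`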

compilation, update, networks, resource_pools
These fields are similar to those in the bosh.yml file. Refer to the previous part for more information.

jobs
Jobs are the components of Cloud Foundry. Each job runs on a virtual machine. The jobs are described below.

debian_nfs_server, services_nfs: these two jobs serve as NFS servers in Cloud Foundry. Because they act as file servers, we should make sure the “persistent_disk” property is indeed present (a sketch of a job entry with a persistent disk appears after the job descriptions below).

syslog_aggregator: this job is used to collect system logs and store them in the database.

nats: NATS is the message bus of Cloud Foundry. It’s a core component in Cloud Foundry.

opentsdb: this is a database that stores the log information. Since it is a database, it also requires a “persistent_disk” property.

collector: this job collects system information and stores it in databases.

dashboard: this is a web-based tool for monitoring and reporting on the Cloud Foundry platform.

cloud_controller, ccdb: cloud_controller controls all the Cloud Foundry components. “ccdb” is the database for the cloud controller. The “persistent_disk” property is required for ccdb.

uaa, uaadb: uaa is used for user authentication and authorization. uaadb is the database that stores the user information. “persistent_disk” property is required for uaadb.

vcap_redis, services_redis: these two jobs are used to store the internal key-value pairs for Cloud Foundry.

acm, acmdb: acm is short for Access Control Manager. The ACM is a service that allows cloud foundry components to implement access control features. “acmdb” is the database for acm. “acmdb” also requires a “persistent_disk” property.

stager: the stager packs the source code and all the required packages of a user’s application. When staging is completed, the app is passed to a DEA for execution.

router: routers route users’ requests to the proper destinations within Cloud Foundry.

health_manager, health_manager_next: health_manager is the job that monitors the health status of all users’ apps. health_manager_next is the next-generation version of health_manager. They will coexist for some time.

dea: “dea” is short for droplet execution agent. All users’ apps are executed in dea.

mysql_node, mysql_gateway, mongodb_node, mongodb_gateway, redis_node, redis_gateway, rabbit_node, rabbit_gateway, postgresql_node, postgresql_gateway, vblob_node, vblob_gateway: these jobs are all services that Cloud Foundry supplies. Each service has a node that provisions resources. The corresponding gateway lies between the cloud_controller and a service node and it acts as the gateway for each service.

backup_manager: used to back up users’ data and databases.

service_utilities: utilities for service management.

serialization_data_server: a server used to serialize data in Cloud Foundry.
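
As a rough illustration, a job entry carrying a persistent disk might look like the sketch below. The disk size and IP are placeholders, and the exact fields should follow the sample cf.yml linked at the end of this section:

jobs:
- name: debian_nfs_server
  template: debian_nfs_server
  instances: 1
  resource_pool: medium
  persistent_disk: 16384      # MB; without it the NFS data would not survive VM recreation
  networks:
  - name: default
    static_ips:
    - xx.xx.xx.xx             # IP from the resource plan above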

properties:

This is another important part of the cf.yml file. Note that the IP addresses in this section must be in sync with those in the jobs field. You should also replace the passwords and tokens with your own secure values.

domain: this is the domain name through which users access Cloud Foundry. We should also create a DNS server to resolve the domain to the load balancer’s IP address. In our example, we set the domain name to cf.local, so users can run vmc target api.cf.local when pushing apps.

cc.srv_api_uri: this property usually takes the format http://api.<yourdomain>. For example, since we set the domain to cf.local, the srv_api_uri is http://api.cf.local.

cc.password: this password must have at least 16 characters.

cc.allow_registration: if it is true, users can register an account using the vmc command. Set this to false to disable this behavior.

cc.admins: a list of admin users. Admin users can register through the vmc command even if the allow_registration flag is set to false.

Most of the ‘nfs_server’ properties should be set to the IP address of the ‘services_nfs’ job.

mysql_node.production: if it is true, the memory of mysql_node must be at least 4GB. In an experimental environment, we can set it to false so that mysql_node can run with less than 4GB of memory.
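
A simplified sketch of the properties discussed above is shown below. The nesting and the full set of keys should follow the sample cf.yml linked below; the values here are placeholders only:

properties:
  domain: cf.local
  cc:
    srv_api_uri: http://api.cf.local
    password: aaaaaaaaaaaaaaaa        # at least 16 characters; use your own secret
    allow_registration: false
    admins:
    - admin@cf.local
  nfs_server:
    address: xx.xx.xx.xx              # IP of the services_nfs job
  mysql_node:
    production: false                 # allows mysql_node to run with less than 4GB memory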

Because the manifest file may evolve with new releases of Cloud Foundry, the bosh command provides an option to validate the yml file. Type “bosh help” and you will see the usage and explanation of “bosh diff”:

$ bosh diff [<template_file>]

This command compares your current deployment manifest against the specified deployment manifest template. It helps you to keep your deployment configuration file up to date. A dev template can be found in deployments repos.

For example, you can run the following command to compare your yml file against the template file. First, cd into the directory where your cf.yml file and the template file reside, and then run:

$ bosh diff dev-template.erb

This command helps you find mistakes in the cf.yml file. If some fields are missing, the command fills them in automatically. If there is a spelling mistake or other error, the command reports a syntax error.

You can download a sample yml file from here:
https://github.com/vmware-china-se/bosh_doc/blob/master/cf.yml

When the manifest file is complete, we can start to install Cloud Foundry.
1) In Part III, we cloned the CF repository from Gerrit with:

$ gerrit clone ssh://<your username>@reviews.cloudfoundry.org:29418/cf-release.git

2) Go to the directory and create a CF release.

$ cd cf-release
$ bosh create release

This will download all the packages, blob data and other resources needed. It will take several minutes depending on your network speed.

NOTE:
1. If you have edited the code in cf-release, you may have to add the --force option to bosh create release.
2. It is important to have a direct internet connection when running this command.
3. If your network is slow or you do not have a direct connection to the internet, you may want to do this in a better environment. You can create the release on a machine with a good internet connection using the --with-tarball option, then copy the generated tarball back to the system where you need it.

If nothing goes wrong, you can see a summary of this release like this:

Generating manifest...
----------------------
Writing manifest...
Release summary
---------------
Packages
+---------------------------+---------+-------+------------------------------------------+
| Name                      | Version | Notes | Fingerprint                              |
+---------------------------+---------+-------+------------------------------------------+
| sqlite                    | 3       |       | e3e9b61f8cdc2610480c2aa841e89dc0bb1dc9c9 |
| ruby                      | 6       |       | b35a5da6c214d9a97fdd664931bf64987053ad4c |
… …
| debian_nfs_server         | 3       |       | c1dc860ed6ab2bee68172be996f64f9587e9ac0d |
+---------------------------+---------+-------+------------------------------------------+
Jobs
+---------------------------+----------+-------------+------------------------------------------+
| Name                      | Version  | Notes       | Fingerprint                              |
+---------------------------+----------+-------------+------------------------------------------+
| redis_node                | 19       |             | 61098860eaa8cfb5dc438255ebe28db74a1feabc |
| rabbit_gateway            | 13       |             | ddcc0901ded1e45fdc4f318ed4ba8d8ccca51f7f |
… …
| debian_nfs_server         | 7        |             | 234fabc1a2c5dfbfd7c932882d947fed331b8295 |
| memcached_gateway         | 4        |             | 998623d86733a633c789849721e00d85cc3ebc20 |

Jobs affected by changes in this release
+------------------+----------+
| Name             | Version  |
+------------------+----------+
… …
| cloud_controller | 45.1-dev |
+------------------+----------+

Release version: 95.10-dev
Release manifest: /home/boshcli/cf-release/dev_releases/cf-def-95.10-dev.yml

As you can see, the dev_releases directory contains the release manifest yml file (and a tarball file, if the --with-tarball option is on).

3) Target BOSH CLI to the director of BOSH. If you don’t remember the director’s IP, you can find it in your BOSH deployment manifest in part III.

$ bosh target 10.40.97.117:25555
Target set to `bosh_director (http://10.40.97.117:25555) Ver: 0.5.1 (release:abb3e7a4 bosh:2c69ee8c)

4) Upload the cf-release by referring to the generated manifest file, e.g. cf-def-95.10-dev.yml in our example.

$ bosh upload release cf-def-95.10-dev.yml

This step copies packages and jobs and builds them into a tarball, then verifies the release to make sure the files and dependencies are correct. After verification, it uploads the release and creates new jobs. Finally, you will see output telling you the release has been uploaded:

Task 317 done
Started               2012-10-28 05:35:43 UTC
Finished              2012-10-28 05:36:44 UTC
Duration              00:01:01
Release uploaded

You can verify your release by:

$ bosh releases

You can see all the newly uploaded releases in the listing:

+--------+------------------------------------------------------+
| Name   | Versions                                             |
+--------+------------------------------------------------------+
| cf-def | 95.1-dev, 95.8-dev, 95.9-dev, 95.10-dev              |
| cflab  | 92.1-dev                                             |
+--------+------------------------------------------------------+

5) Now that we have uploaded the release and the stemcell (the same stemcell as in Part III), and the manifest is ready, set the deployment to the manifest:

$ bosh deployment cf-dev.yml
Deployment set to `/home/boshcli/cf-dev.yml'

We can deploy Cloud Foundry now:

$ bosh deploy

This will create VMs for the jobs, compile the packages and install dependencies. It will take several minutes depending on the server’s hardware. You will see output like:

Preparing deployment
binding deployment (00:00:00)
binding releases (00:00:01)
… …
Preparing package compilation
… …
Compiling packages
… …
Preparing DNS
binding DNS (00:00:00)
Creating bound missing VMs
… …
Binding instance VMs
… …
Preparing configuration
binding configuration (00:00:03)
… …
Creating job cloud_controller
cloud_controller/0 (canary) (00:02:45)
… …
Done                    1/1 00:08:41

Task 318 done

Started               2012-10-28 05:37:52 UTC
Finished              2012-10-28 05:49:43 UTC
Duration              00:11:51
Deployed `cf-dev.yml' to `bosh_director'

To check your deployment, you can use this command:

$ bosh deployments
+----------+
| Name     |
+----------+
| cf.local |
+----------+
Deployments total: 1

You can also verify the running status of every VM:

$ bosh vms
+---------------------------+---------+---------------+-------------+
| Job/index                 | State   | Resource Pool | IPs         |
+---------------------------+---------+---------------+-------------+
| acm/0                     | running | small         | 10.40.97.58 |
| acmdb/0                   | running | small         | 10.40.97.57 |
| cloud_controller/0        | running | medium        | 10.40.97.59 |
… …
+---------------------------+---------+---------------+-------------+
VMs total: 40

At this moment, Cloud Foundry has been completely installed. If you cannot wait to verify the installation, you can use the vmc command to target one of the routers’ IP addresses and deploy a test web app on it (see the subsequent section). Because there is no DNS available yet, you need at least these two lines in the hosts file of the vmc client machine and of the machine running the browser used to test the web app:

<router’s IP address>  api.yourdomain.com
<router’s IP address>  <yourtestapp>.yourdomain.com

If the above test works fine, your Cloud Foundry instance is working. The last thing is to take care of the load balancer and DNS. These two components are not part of Cloud Foundry. However, they need to be set up properly in a production environment, so we briefly describe how to set them up.

You can deploy either a hardware or a software load balancer (LB) to distribute the load evenly across multiple instances of the router component. In our sample deployment we have two routers. For a software LB, you can use Stingray Traffic Manager. It can be downloaded from here: https://support.riverbed.com/download.htm?filename=public/software/stingray/trafficmanager/9.0/ZeusTM_90_Linux-x86_64.tgz

A DNS server is needed to resolve the domain of your Cloud Foundry instance. Basically, the DNS server resolves a wildcard name like *.yourdomain.com to the IP address of the load balancer. If you do not have an LB, you can configure round-robin DNS to resolve the domain to the routers.

When the LB and DNS are set up properly, you can start to deploy apps on your instance.

Cloud Foundry has a command-line tool known as VMC. It can perform most operations on Cloud Foundry, such as configuring your applications, deploying them to Cloud Foundry and monitoring the status of your apps. To install VMC, you must install Ruby and RubyGems (a Ruby package manager) on the computer where you want to run VMC. Currently Ruby 1.8.7 and 1.9.2 are supported. After that, you can install VMC with the command below (more on VMC installation: http://docs.cloudfoundry.com/tools/vmc/installing-vmc.html):

$ sudo gem install vmc

Now, point the target at your Cloud Foundry instance. The URL should look like api.yourdomain.com, for example:

$ vmc target api.cf.local

Log in with the admin user’s credentials, which are specified in the deployment manifest:

$ vmc login

Initially, you will be asked to set a password for your account. After logging in, you can get the information of your Cloud Foundry instance:

$ vmc info

Now, let’s create and deploy a simple hello world Sinatra application to verify the instance.

$ mkdir ~/hello
$ cd ~/hello

Create a Ruby file called hello.rb with following contents:

require 'sinatra'

get '/' do
"Hello from Cloud Foundry"
end

Save this file and we are about to upload this application:

$ vmc push

Complete the prompts as below:

Would you like to deploy from the current directory? [Yn]:
Application Name: hello
Detected a Sinatra Application, is this correct? [Yn]:
Application Deployed URL [hello.cf.local]:
Memory reservation (128M, 256M, 512M, 1G, 2G) [128M]:
How many instances? [1]:
Bind existing services to 'hello'? [yN]:
Create services to bind to 'hello'? [yN]:
Would you like to save this configuration? [yN]:

After a while, you will see the output:

Creating Application: OK
Uploading Application:
Checking for available resources: OK
Packing application: OK
Uploading (0K): OK
Push Status: OK
Staging Application 'hello': OK
Starting Application 'hello': OK

Now, visit the application’s URL, hello.cf.local, in your browser. If you can see the text, your application has been successfully deployed.

Congratulations, your Cloud Foundry instance has been completely set up. It is functionally identical to cloudfoundry.com.

More on deploying Cloud Foundry on vSphere using BOSH:

Building Cloud Foundry on vSphere using BOSH Part 3

Installing micro BOSH and BOSH

With the installation of the BOSH CLI completed, we can now start the installation of a micro BOSH. As mentioned before, a micro BOSH can be considered a miniature BOSH. While a standard BOSH has its components spread across 6 VMs, a micro BOSH contains all the components in a single VM. It can be set up easily and is usually used to deploy small releases, such as BOSH itself. In this sense, BOSH is deployed by itself; as the BOSH team puts it, this is referred to as “Inception”.

The steps below are based on the official BOSH documentation, with more implementation details added.

1) In the BOSH CLI VM, install the BOSH Deployer ruby gem.

$ gem install bosh_deployer

Once you have installed the deployer, you will see some extra commands appear after typing bosh on your command line.

$ bosh help
  ...
Micro
micro deployment [<name>]      Choose micro deployment to work with
micro status                   Display micro BOSH deployment status
micro deployments              Show the list of deployments
micro deploy <stemcell>        Deploy a micro BOSH instance to the currently selected deployment
--update                       update existing instance
micro delete                   Delete micro BOSH instance (including persistent disk)
micro agent <args>             Send agent messages
micro apply <spec>             Apply spec

NOTE: The bosh micro commands must be run within a micro BOSH deployment directory

2) In vCenter, under the view Home->Inventory->VMs and Templates, make sure the folders for virtual machines and templates are already created (see part II). These folders are used in the deployment configuration.

3) From the view Home->Inventory->Datastores, choose the NFSdatastore datastore we created and browse it.

Right click on the root folder and create a subfolder for storing virtual machines. In this example, we name it “boshdeployer”. This folder name will be the value of the “disk_path” parameter in our deployment manifest.
NOTE: If you do not have shared NFS storage, you may use the local disks of the hypervisors as datastores. (However, be aware that local disks are only recommended for an experimental system.) You can name the datastores “localstore1” for host 1, “localstore2” for host 2, and so on. Later, in the manifest file, you can use a wildcard pattern like “localstore*” to specify the datastores of all hosts. The “boshdeployer” folder should be created on all local datastores.

4) Download public stemcell

$ mkdir -p ~/stemcells
$ cd stemcells
$ bosh public stemcells

The output looks like this:

+---------------------------------+-------------------------------------------------------+
| Name                            | Url                                                   |
+---------------------------------+-------------------------------------------------------+
| bosh-stemcell-0.5.2.tgz         | https://blob.cfblob.com/rest/objects/4e4e78bca31e1... |
| bosh-stemcell-aws-0.5.1.tgz     | https://blob.cfblob.com/rest/objects/4e4e78bca21e1... |
| bosh-stemcell-vsphere-0.6.4.tgz | https://blob.cfblob.com/rest/objects/4e4e78bca31e1... |
| micro-bosh-stemcell-0.1.0.tgz   | https://blob.cfblob.com/rest/objects/4e4e78bca51e1... |
+---------------------------------+-------------------------------------------------------+
To download use 'bosh download public stemcell <stemcell_name>'. For full url use --full.

Download the stemcell of micro BOSH using below command:

$ bosh download public stemcell micro-bosh-stemcell-0.1.0.tgz

NOTE: The stemcell is 400-500MB in size, so it may take a long time to download over a slow network. In this case, you can download it using any tool (e.g. the Firefox browser) that can resume a transfer after a failure. Use the --full argument to display the full URL for the download.

5) Configure your deployment (.yml) file, and save it under a folder with the same name as that defined in your .yml file. In our example, it is “micro01”.

$ cd ~
$ mkdir deployments
$ cd deployments
$ mkdir micro01

In the yml file, there is a section about vCenter. Enter the names of the folders we created in Part II. The “disk_path” should be the folder we just created in the datastore (NFSdatastore). The value of datastore_pattern and persistent_datastore_pattern is the shared datastore name (NFSdatastore). If you use local disks, this can be a wildcard string like “localstore*”.

datacenters:
- name: vDataCenter
  vm_folder: vm_folder
  template_folder: template
  disk_path: boshdeployer
  datastore_pattern: NFSdatastore
  persistent_datastore_pattern: NFSdatastore
  allow_mixed_datastores: true

Here is a link of a sample yml file of micro BOSH:
https://github.com/vmware-china-se/bosh_doc/blob/master/micro.yml

6) Set the micro BOSH Deployment using:

$ cd deployments
$ bosh micro deployment micro01

Deployment set to '~/deployments/micro01/micro_bosh.yml'

$ bosh micro deploy ~/stemcells/micro-bosh-stemcell-0.1.0.tgz

If everything goes well, micro BOSH will be deployed within a few minutes. You can check the deployment status by this command:

$ bosh micro deployments

You will see your micro BOSH deployment listed:

+---------+-----------------------------------------+-----------------------------------------+
| Name    | VM name                                 | Stemcell name                           |
+---------+-----------------------------------------+-----------------------------------------+
| micro01 | vm-a51a9ba4-8e8f-4b69-ace2-8f2d190cb5c3 | sc-689a8c4e-63a6-421b-ba1a-400287d8d805 |
+---------+-----------------------------------------+-----------------------------------------+

Installing BOSH

When the micro BOSH is ready, we can use it to deploy BOSH, which is a distributed system with 6 VMs. As mentioned in the previous section, we need three items: a stemcell as the VM template, a BOSH release as the software to be deployed, and a deployment manifest file for the deployment-specific definitions. Let’s work on them one by one.

1) First, we target our BOSH CLI at the director of the micro BOSH. The BOSH director can be thought of as the controller or orchestrator of BOSH. All BOSH CLI commands are sent to the director for execution. The IP address of the director is defined in the yml file we used to create the micro BOSH. The default credentials of the BOSH director are admin/admin. In our example, we use the commands below to target the micro BOSH and authenticate:

$ bosh target 10.40.97.124:25555
$ bosh login

2) Next, we download the BOSH stemcell and upload it to the micro BOSH. This step is similar to downloading the stemcell of the micro BOSH. The only difference is that we choose the stemcell for BOSH instead of micro BOSH.

$ cd ~/stemcells
$ bosh public stemcells

A list of stemcells is displayed; choose the latest stemcell to download:

$ bosh download public stemcell bosh-stemcell-vsphere-0.6.4.tgz
$ bosh upload stemcell bosh-stemcell-vsphere-0.6.4.tgz

If you created a Gerrit account in Part II, skip steps 3-7.

3) Sign up for the Cloud Foundry Gerrit server at http://reviews.cloudfoundry.org
4) Set up your ssh public key (accept all defaults)

$ ssh-keygen -t rsa

Copy your key from ~/.ssh/id_rsa.pub into your Gerrit account
5) Create and upload your public SSH key in your Gerrit account profile
6) Set your name and email

$ git config --global user.name "Firstname Lastname"
$ git config --global user.email your_email@youremail.com

7) Install the gerrit-cli gem
8) Clone the release code from the Cloud Foundry repositories using Gerrit. The commands below get the code of BOSH and Cloud Foundry, respectively.

$ gerrit clone ssh://<yourusername>@reviews.cloudfoundry.org:29418/bosh.git
$ gerrit clone ssh://<yourusername>@reviews.cloudfoundry.org:29418/cf-release.git

We then create our own BOSH release:

$ cd bosh-release
$ ./update
$ bosh create release  --with-tarball

If there are local code conflicts, you can add the --force option:

$ bosh create release  --with-tarball --force

This step may take some time to complete depending on the speed of your network. It first downloads binaries from a blob server, then builds the packages and generates manifest files. The command’s output looks like this:

Syncing blobs…
...
Building DEV release
Please enter development release name: bosh-dev1
---------------------------------
Building packages
…
Generating manifest...
…
Copying jobs...

At last, when the release is created, you will see something like below. Notice that the last two lines indicate the manifest file and the release tarball.

Generated /home/boshcli/bosh-release/dev_releases/bosh-dev1-6.1-dev.tgz

Release summary
---------------
Packages
+----------------+---------+-------+------------------------------------------+
| Name           | Version | Notes | Fingerprint                              |
+----------------+---------+-------+------------------------------------------+
| nginx          | 1       |       | 1f01e008099b9baa06d9a4ad1fcccc55840cf56e |
……
| ruby           | 1       |       | c79b76fcb9bdda122ad2c52c3604ba950b482681 |
+----------------+---------+-------+------------------------------------------+

Jobs
+----------------+---------+-------------+------------------------------------------+
| Name           | Version | Notes       | Fingerprint                              |
+----------------+---------+-------------+------------------------------------------+
| micro_aws      | 1.1-dev | new version | fadbedb276f918beba514c69c9e77978eadb65ac |
……
| redis          | 2       |             | 3d5767e5deec579dee61f2c131982a97975d405e |
+----------------+---------+-------------+------------------------------------------+

Release version: 6.1-dev
Release manifest: /home/boshcli/bosh-release/dev_releases/bosh-dev1-6.1-dev.yml
Release tarball (88.8M): /home/boshcli/bosh-release/dev_releases/bosh-dev1-6.1-dev.tgz

9) Upload the created release to micro BOSH’s director.

$ bosh upload release dev_releases/bosh-dev1-6.1-dev.tgz

10) Configure the BOSH deployment manifest. First, we get the director’s UUID by running:

$ bosh status
Updating director data... done
Target         micro01 (http://10.40.97.124:25555) Ver: 0.4 (00000000)
UUID           7d72eb71-9a98-4081-9857-ad7c7ff4ee33
User           admin
Deployment     /home/boshcli/bosh-dev1.yml

Now we move on to the trickiest part of this installation: modifying the deployment manifest file. Since most BOSH deployment errors are caused by improper settings in the manifest file, we explain this in more detail.

To get started, let’s get the manifest template from here: https://github.com/cloudfoundry/oss-docs/blob/master/bosh/tutorial/examples/bosh_manifest.yml

Since the official BOSH document provides the specification of the manifest file, we assume you have gone through it before reading this article. We won’t go into every detail of this file; instead, we discuss some important items in the manifest file.

Network

Below is an example of the network section.

networks:                   # define networks
- name: default
  subnets:
  - reserved:               # IPs you don't want to allocate
    - 10.40.97.121 - 10.40.97.254
    static:                 # IPs you will use
    - 10.40.97.115 - 10.40.97.120
    range: 10.40.97.0/24
    gateway: 10.40.97.1
    dns:
    - 10.40.62.11
    - 10.135.12.101
    cloud_properties:       # the same network as all other VMs
      name: VM Network

static: contains the IP addresses of the BOSH VMs.
reserved: IP addresses BOSH should not use. It is very important to exclude any IP addresses that have already been assigned to other devices on the same network, for example storage devices, network devices, the micro BOSH and the vCenter host. During the installation, micro BOSH may spin up some temporary VMs (worker VMs) for compilation. If we do not specify the reserved addresses, these temporary VMs may run into IP address conflicts with existing devices or hosts.
cloud_properties: name is the network name we defined in vSphere (see Part II).

Resource Pool

This section defines the configuration (CPU, memory, disk and network) of the VMs used by jobs. Usually, the jobs of an application vary in resource consumption. For example, some jobs require more memory than others, while some jobs need more vCPUs for computing-intensive tasks. Based on the actual requirements, we should create one or more resource pools. One thing to note is that the sizes of all pools should add up to the total number of job instances defined in the manifest file. When deploying BOSH, since there are 6 VMs (6 jobs) altogether, the sizes of all pools should add up to 6. A sketch of such a section is shown after the table below.

In our manifest file, we have 3 resource pools:

Pool Name Size Configuration Jobs
small 3 RAM:512MB, CPU:1, DISK:2GB nats,  redis, health_monitor
medium 2 RAM:1GB, CPU: 1, DISK: 8GB postgres, blobstore
director 1 RAM:2GB, CPU: 2, DISK: 8GB director
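
For illustration, the table above might translate into a resource_pools section like the sketch below. The stemcell name and version are placeholders for the stemcell you uploaded earlier; RAM and disk sizes are in MB:

resource_pools:
- name: small
  network: default
  size: 3
  stemcell:
    name: bosh-stemcell       # placeholder; use the stemcell you uploaded
    version: 0.6.4            # placeholder version
  cloud_properties:
    ram: 512
    cpu: 1
    disk: 2048
- name: medium
  network: default
  size: 2
  stemcell:
    name: bosh-stemcell
    version: 0.6.4
  cloud_properties:
    ram: 1024
    cpu: 1
    disk: 8192
- name: director
  network: default
  size: 1
  stemcell:
    name: bosh-stemcell
    version: 0.6.4
  cloud_properties:
    ram: 2048
    cpu: 2
    disk: 8192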

Compilation

This section defines the worker VMs created for package compilation. In a system with limited resources, we should reduce the number of concurrent worker VMs to ensure a successful compilation. In our example, we define 4 worker VMs.
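
A minimal sketch of this section, assuming 4 workers on the default network and placeholder VM sizes, could look like:

compilation:
  workers: 4                  # reduce this on a resource-constrained system
  network: default
  cloud_properties:
    ram: 1024
    cpu: 1
    disk: 4096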

Update

This section contains a very useful parameter: max_in_flight. It tells BOSH the maximum number of jobs that can be updated in parallel. In a slow system, try reducing this number. Setting it to 1 means jobs are deployed sequentially. For the BOSH deployment, we recommend setting this number to 1 to ensure BOSH can be installed successfully.
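
A conservative update section along these lines might look like the sketch below (the canary and watch-time values are illustrative):

update:
  canaries: 1
  canary_watch_time: 3000-90000
  update_watch_time: 3000-90000
  max_in_flight: 1            # deploy jobs one at a time
  max_errors: 1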

Jobs

There are six jobs in the BOSH release. Each job occupies a VM. Depending on the nature of a job and its resource consumption, we allocate jobs to the various resource pools. One thing to note is that we need to assign persistent disks to three jobs: postgres, director and blobstore. Without persistent disks, these jobs will not work properly, as their local disks fill up very quickly.

It is a good idea to fill in a spreadsheet like below to plan your deployment. Based on the spreadsheet, you can modify the deployment manifest.

Job Resource_pool IP
nats small 10.40.97.120
postgres medium 10.40.97.119
redis small 10.40.97.118
director director 10.40.97.117
blob_store medium 10.40.97.116
health_monitor small 10.40.97.115

Based on the above table, we created a sample deployment manifest; you can download it from here (a fragment of the jobs section is also sketched after the link):

https://github.com/vmware-china-se/bosh_doc/blob/master/bosh.yml
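
For reference, a fragment of the jobs section corresponding to two rows of the table might look like the sketch below. The disk sizes are placeholders, and the sample manifest above is authoritative:

jobs:
- name: postgres
  template: postgres
  instances: 1
  resource_pool: medium
  persistent_disk: 8192       # postgres, director and blobstore need persistent disks
  networks:
  - name: default
    static_ips:
    - 10.40.97.119
- name: director
  template: director
  instances: 1
  resource_pool: director
  persistent_disk: 8192
  networks:
  - name: default
    static_ips:
    - 10.40.97.117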

11) After updating the deployment manifest file, we can start the actual deployment with the commands below:

$ bosh deployment bosh-dev1.yml
$ bosh deploy

This may take some time to finish depending on your network conditions and available hardware resources. You can also watch the vCenter console to see VMs being created, configured and destroyed.

Preparing deployment
…..
Compiling packages
……
Binding instance VMs
postgres/0 (00:00:01)
director/0 (00:00:01)
redis/0 (00:00:01)
blobstore/0 (00:00:01)
nats/0 (00:00:01)
health_monitor/0 (00:00:01)
Done                    6/6 00:00:01

Updating job nats
nats/0 (canary) (00:01:14)
Done                    1/1 00:01:14
……
Updating job director

director/0 (canary) (00:01:10)
Done                    1/1 00:01:10
……

If everything goes well, you will eventually see something like this:

Task 14 done
Started                         2012-08-12 03:32:24 UTC
Finished       2012-08-12 03:52:24 UTC
Duration      00:20:00
Deployed `bosh-dev1.yml' to `micro01'

This means you have successfully deployed BOSH. You can see your deployment by doing:

$ bosh deployments
+-------+
| Name  |
+-------+
| bosh1 |
+-------+

You can check the status of all the virtual machines with:

$ bosh vms

If nothing goes wrong, you will see status of VMs like:

+------------------+---------+---------------+--------------+
| Job/index        | State   | Resource Pool | IPs          |
+------------------+---------+---------------+--------------+
| blobstore/0      | running | medium        | 10.40.97.116 |
| director/0       | running | director      | 10.40.97.117 |
| health_monitor/0 | running | small         | 10.40.97.115 |
| nats/0           | running | small         | 10.40.97.120 |
| postgres/0       | running | medium        | 10.40.97.119 |
| redis/0          | running | small         | 10.40.97.118 |
+------------------+---------+---------------+--------------+
VMs total: 6

Building Cloud Foundry by BOSH:

Building Cloud Foundry on vSphere using BOSH Part 2

Install BOSH CLI

Important Prerequisites:

Throughout the installation process of BOSH and Cloud Foundry, a direct internet connection is required. This is very important because some of the software bits are downloaded directly from internet sources, such as Ruby gems and some open source software. Setting up a web proxy server between your VMs and the internet will not work.

The other prerequisite is a stable internet connection. If your network is slow or unreliable when downloading files from internet, your installation may fail with timeout or connection error.

We have seen many unsuccessful setups due to internet connection issues. It is highly recommended you check with your network administrator before starting the BOSH installation.

Create a Cluster in vCenter

Assuming all nodes are virtual machines, we first install vSphere (we use v5.x in this article) on all the bare-metal servers. The vSphere servers are connected by the Management Network VLAN. Once that is done, we create a VM on one of the hypervisors and install 64-bit Windows 2008 R2 on it. We then install vCenter on this Windows 2008 VM. The next step is to connect to vCenter using the vSphere client so that we can manage the servers.

We can install the vSphere client on any Windows machine (even a virtual machine). After that, we can connect to vCenter remotely through the vSphere client. First, let’s create a datacenter in vCenter. Right click on the vCenter node in the left pane and choose “New Datacenter” to add a new data center.

Next, right click the newly created data center node and select “New Cluster…”.

In the “New Cluster Wizard”, if you turn on the vSphere DRS feature, you will be asked to configure VMware DRS. Make sure the “automation level” is set to “Partially automated” or “Fully automated” as shown below. If you choose “Manual”, you will be prompted by a popup to enter your choice, which may block the automated BOSH installation.

Then, select the “Enable Host Monitoring” checkbox and select “Disable: Power on VMs that violate availability constraints”:

Then, go to the sub-part of “VM Monitoring”, choose “Disabled”:

Click Next, choose as follows:

Adding vSphere Hosts

The next step is to place the hypervisors into the cluster we just created. Right click on the cluster node and choose “Add Host…”. For each vSphere server, enter its IP address and the administrator’s username and password, then confirm your configuration:

After adding all hosts, they will be listed in the Datacenter->Hosts tab:

Attach Datastore to Hosts

All hosts in the cluster should share the same NFS storage. For each host, we add the storage as a datastore. In vCenter, click on the vSphere host and choose the “Configuration” tab. Select “Hardware”->“Storage” and click “Add Storage…” in the upper right.

In the “Select Storage Type” dialogue, choose “Network File System” .

Enter the IP address of your NFS storage, the folder name, and the datastore name. It is very important to use exactly the same datastore name on all hosts in the cluster. In this example, we use the name “NFSdatastore”.

Create Folders for VMs and Templates

From vCenter’s navigation bar, choose the view Home->Inventory->VMs and Templates, create folders as below:

These folders are used later for grouping BOSH and Cloud Foundry VMs. In the above example, “template_folder_bosh” holds BOSH stemcells, “vm_folder_bosh” is used for BOSH nodes, “template_folder” keeps Cloud Foundry stemcells, and “vm_folder” stores Cloud Foundry nodes. These names will be used in the deployment manifest files later.

Network Configuration

The VMs of Cloud Foundry will be deployed onto one or more networks. Before a deployment, we need to create the networks in vSphere. The diagram below illustrates the network connections required by Cloud Foundry.

On each vSphere host, we create two networks:

1)      CF Network: mapped to CF VLAN

2)      Service Network: mapped to Service VLAN.

Most of the VMs reside on the CF Network. Only the router VMs are dual-homed, attached to both the Service Network and the CF Network.

NOTE: In an experimental environment, you can put all VMs on the same network to simplify the installation process. Hence a single network on a hypervisor may suffice.

To create a network, choose the “Hosts and Clusters” view. Select a host and switch to the “configuration” tab. Then select “Networking” and click “Add Networking”:

The connection type should be “Virtual Machine”:

Use an existing vSwitch:

In the next step, rename the network label to “CF Network”. If your network administrator has assigned a VLAN ID, enter the CF VLAN ID accordingly.

Then click “Finish” to complete the network creation. Repeat the steps to create the “Service Network”, naming the network label “Service Network”.

It is important to keep the network names exactly the same on all hosts in the same cluster. The figure below shows the two networks that have been created on a host. We name them “CF Network” and “Service Network”. These names will be used in the BOSH and Cloud Foundry yml files later.

Also, if you check from the Datacenter->Inventory->Networking view, it shows the following:

Create a VM for BOSH CLI

In vCenter, we select one of the hosts in the cluster and create a VM. Click “Create a new virtual machine”. On this VM we will install a 64-bit Ubuntu 10.04 OS. Assign the VM 2 vCPUs, 2GB of memory and 20GB of disk space (or more). During the installation, make sure to configure the network manually.

We now start to install BOSH CLI on the VM. Log on to Ubuntu and follow the below steps. (Note, these steps are mostly from the official BOSH document. They are listed here for your convenience.)

1)      Install ruby via rbenv:
1. Bosh is written in Ruby. Let’s install Ruby’s dependencies:

$ sudo apt-get install git-core build-essential libsqlite3-dev curl \
libmysqlclient-dev libxml2-dev libxslt-dev libpq-dev genisoimage

2. Get the latest version of rbenv

$ git clone git://github.com/sstephenson/rbenv.git .rbenv

3. Add ~/.rbenv/bin to your $PATH for access to the rbenv command-line utility

$ echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bash_profile

4. Add rbenv init to your shell to enable shims and autocompletion

$ echo 'eval "$(rbenv init -)"' >> ~/.bash_profile

5. Download Ruby 1.9.2
Note: You can also build ruby using ruby-build plugin for rbenv. See https://github.com/sstephenson/ruby-build

$ wget http://ftp.ruby-lang.org/pub/ruby/1.9/ruby-1.9.2-p290.tar.gz

6. Unpack and install Ruby

$ tar xvfz ruby-1.9.2-p290.tar.gz
$ cd ruby-1.9.2-p290
$ ./configure --prefix=$HOME/.rbenv/versions/1.9.2-p290
$ make
$ make install

7. Restart your shell so the path changes take effect

$ source ~/.bash_profile

8. Set your default Ruby to be version 1.9.2

$ rbenv global 1.9.2-p290

Note: The rake 0.8.7 gem may need to be reinstalled when using this method

$ gem pristine rake

9. Update rubygems and install bundler.
Note: After installing gems (gem install or bundle install) run rbenv rehash to add new shims

$ rbenv rehash
$ gem update --system
$ gem install bundler
$ rbenv rehash

2) Install BOSH CLI:
1. Sign up for the cloud foundry gerrit server at http://reviews.cloudfoundry.org
2. Set up your ssh public key (accept all default):

$ ssh-keygen -t rsa

3. Copy your key from ~/.ssh/id_rsa.pub into your Gerrit account.
4. Create and upload your public SSH key in your Gerrit account profile.
5. Set your name and email:

$ git config --global user.name "Firstname Lastname"
$ git config --global user.email your_email@youremail.com

6. Install the gerrit-cli gem:

$ gem install gerrit-cli

7. Run some rake tasks to install the BOSH CLI:

$ gem install bosh_cli
$ rbenv rehash
$ bosh --version

If everything works well, the last command shows the BOSH version you just installed, which indicates that the BOSH CLI has been successfully installed.

How to deploy Cloud Foundry using BOSH: