Building Cloud Foundry on vSphere using BOSH Part 3

Installing micro BOSH and BOSH

Once the BOSH CLI installation is complete, we can start installing a micro BOSH. As mentioned before, micro BOSH can be considered a miniature BOSH. While a standard BOSH has its components spread across six VMs, a micro BOSH contains all the components in a single VM. It is easy to set up and is usually used to deploy small releases such as BOSH itself. In this sense, BOSH is deployed by itself, which the BOSH team refers to as “Inception”.

The steps below are based on the official BOSH documentation, with more implementation details added.

1) In the BOSH CLI VM, install the BOSH Deployer ruby gem.

$ gem install bosh_deployer

Once you have installed the deployer, you will see some extra micro commands when you run bosh help:

$ bosh help
  ...
Micro
micro deployment [<name>]      Choose micro deployment to work with
micro status                   Display micro BOSH deployment status
micro deployments              Show the list of deployments
micro deploy <stemcell>        Deploy a micro BOSH instance to the currently selected deployment
--update                       update existing instance
micro delete                   Delete micro BOSH instance (including persistent disk)
micro agent <args>             Send agent messages
micro apply <spec>             Apply spec

NOTE: The bosh micro commands must be run within a micro BOSH deployment directory

2) In vCenter, under the view Home->Inventory->VMs and Templates, make sure the folders for virtual machines and templates have already been created (see Part II). These folders are used in the deployment configuration.

3) From the view Home->Inventory->Datastores, choose the NFSdatastore datastore we created and browse it.

Right-click the root folder and create a subfolder for storing virtual machines. In this example, we name it “boshdeployer”. This folder name will be the value of the “disk_path” parameter in our deployment manifest.
NOTE: If you do not have shared NFS storage, you may use the local disks of the hypervisors as datastores. (Be aware, however, that local disks are recommended only for an experimental system.) You can name the datastores “localstore1” for host 1, “localstore2” for host 2, and so on. Later, in the manifest file, you can use a wildcard pattern like “localstore*” to match the datastores of all hosts. The “boshdeployer” folder should be created on every local datastore.

4) Download public stemcell

$ mkdir -p ~/stemcells
$ cd ~/stemcells
$ bosh public stemcells

The output looks like this:

+---------------------------------+-------------------------------------------------------+
| Name                            | Url                                                   |
+---------------------------------+-------------------------------------------------------+
| bosh-stemcell-0.5.2.tgz         | https://blob.cfblob.com/rest/objects/4e4e78bca31e1... |
| bosh-stemcell-aws-0.5.1.tgz     | https://blob.cfblob.com/rest/objects/4e4e78bca21e1... |
| bosh-stemcell-vsphere-0.6.4.tgz | https://blob.cfblob.com/rest/objects/4e4e78bca31e1... |
| micro-bosh-stemcell-0.1.0.tgz   | https://blob.cfblob.com/rest/objects/4e4e78bca51e1... |
+---------------------------------+-------------------------------------------------------+
To download use 'bosh download public stemcell <stemcell_name>'. For full url use --full.

Download the micro BOSH stemcell using the command below:

$ bosh download public stemcell micro-bosh-stemcell-0.1.0.tgz

NOTE: The stemcell is 400-500MB in size, so it may take a long time to download over a slow network. In that case, you can download it with any tool (e.g. the Firefox browser) that can resume an interrupted transfer; use the --full argument to display the full download URL.
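For instance, assuming wget is available on your CLI VM, its -c flag resumes an interrupted download (the URL below is the truncated listing entry, standing in for the full URL printed by --full):

$ bosh public stemcells --full
$ wget -c 'https://blob.cfblob.com/rest/objects/4e4e78bca51e1...'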

5) Configure your deployment (.yml) file and save it as micro_bosh.yml under a folder whose name matches the deployment name defined in the file. In our example, it is “micro01”.

$ cd ~
$ mkdir deployments
$ cd deployments
$ mkdir micro01

In the yml file, there is a section for vCenter settings. Enter the names of the folders we created in Part II. The “disk_path” should be the folder we just created in the datastore (NFSdatastore). The values of “datastore_pattern” and “persistent_datastore_pattern” are the shared datastore name (NFSdatastore). If you use local disks, these can be a wildcard pattern like “localstore*”.

datacenters:
- name: vDataCenter
  vm_folder: vm_folder
  template_folder: template
  disk_path: boshdeployer
  datastore_pattern: NFSdatastore
  persistent_datastore_pattern: NFSdatastore
  allow_mixed_datastores: true

Here is a link of a sample yml file of micro BOSH:
https://github.com/vmware-china-se/bosh_doc/blob/master/micro.yml

6) Set the micro BOSH Deployment using:

$ cd deployments
$ bosh micro deployment micro01

Deployment set to '~/deployments/micro01/micro_bosh.yml'

Then deploy micro BOSH from the downloaded stemcell:

$ bosh micro deploy ~/stemcells/micro-bosh-stemcell-0.1.0.tgz

If everything goes well, micro BOSH will be deployed within a few minutes. You can check the deployment status with this command:

$ bosh micro deployments

You will see your micro BOSH deployment listed:

+---------+-----------------------------------------+-----------------------------------------+
| Name    | VM name                                 | Stemcell name                           |
+---------+-----------------------------------------+-----------------------------------------+
| micro01 | vm-a51a9ba4-8e8f-4b69-ace2-8f2d190cb5c3 | sc-689a8c4e-63a6-421b-ba1a-400287d8d805 |
+---------+-----------------------------------------+-----------------------------------------+

Installing BOSH

When the micro BOSH is ready, we can use it to deploy BOSH, which is a distributed system with six VMs. As mentioned in the previous section, we need three items: a stemcell as the VM template, a BOSH release as the software to be deployed, and a deployment manifest with deployment-specific definitions. Let’s work on them one by one.

1) First, we target our BOSH CLI at the director of the micro BOSH. The BOSH director can be thought of as the controller or orchestrator of BOSH: all BOSH CLI commands are sent to the director for execution. The IP address of the director is defined in the yml file we used to create micro BOSH. The default credentials of the BOSH director are admin/admin. In our example, we use the commands below to target micro BOSH and authenticate:

$ bosh target 10.40.97.124:25555
$ bosh login

2) Next, we download the BOSH stemcell and upload it to micro BOSH. This step is similar to downloading the micro BOSH stemcell; the only difference is that we choose the stemcell for BOSH instead of micro BOSH.

$ cd ~/stemcells
$ bosh public stemcells

A list of stemcells is displayed; choose the latest vSphere stemcell to download:

$ bosh download public stemcell bosh-stemcell-vsphere-0.6.4.tgz
$ bosh upload stemcell bosh-stemcell-vsphere-0.6.4.tgz

If you have created a Gerrit account in Part II, skip steps 3-7.

3) Sign up for the Cloud Foundry Gerrit server at http://reviews.cloudfoundry.org
4) Set up your SSH key pair (accept all defaults):

$ ssh-keygen -t rsa

5) Copy your public key from ~/.ssh/id_rsa.pub and upload it in your Gerrit account profile.
6) Set your name and email

$ git config --global user.name "Firstname Lastname"
$ git config --global user.email your_email@youremail.com

7) Install the gerrit-cli gem.
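Assuming the gem is published under that name, the installation is a one-liner:

$ gem install gerrit-cli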
8) Clone the release code from the Cloud Foundry repositories using Gerrit. The commands below fetch the code of BOSH and Cloud Foundry, respectively.

$ gerrit clone ssh://<yourusername>@reviews.cloudfoundry.org:29418/bosh.git
$ gerrit clone ssh://<yourusername>@reviews.cloudfoundry.org:29418/cf-release.git

We then create our own BOSH release:

$ cd bosh-release
$ ./update
$ bosh create release  --with-tarball

If there are local code conflicts, you can add the “--force” option:

$ bosh create release  --with-tarball --force

This step may take some time to complete, depending on the speed of your network. It first downloads binaries from a blob server, then builds the packages and generates manifest files. The command’s output looks like this:

Syncing blobs…
...
Building DEV release
Please enter development release name: bosh-dev1
---------------------------------
Building packages
…
Generating manifest...
…
Copying jobs...

Finally, when the release is created, you will see something like the output below. Note that the last two lines show the release manifest and the release tarball.

Generated /home/boshcli/bosh-release/dev_releases/bosh-dev1-6.1-dev.tgz

Release summary
---------------
Packages
+----------------+---------+-------+------------------------------------------+
| Name           | Version | Notes | Fingerprint                              |
+----------------+---------+-------+------------------------------------------+
| nginx          | 1       |       | 1f01e008099b9baa06d9a4ad1fcccc55840cf56e |
……
| ruby           | 1       |       | c79b76fcb9bdda122ad2c52c3604ba950b482681 |
+----------------+---------+-------+------------------------------------------+

Jobs
+----------------+---------+-------------+------------------------------------------+
| Name           | Version | Notes       | Fingerprint                              |
+----------------+---------+-------------+------------------------------------------+
| micro_aws      | 1.1-dev | new version | fadbedb276f918beba514c69c9e77978eadb65ac |
……
| redis          | 2       |             | 3d5767e5deec579dee61f2c131982a97975d405e |
+----------------+---------+-------------+------------------------------------------+

Release version: 6.1-dev
Release manifest: /home/boshcli/bosh-release/dev_releases/bosh-dev1-6.1-dev.yml
Release tarball (88.8M): /home/boshcli/bosh-release/dev_releases/bosh-dev1-6.1-dev.tgz

9) Upload the created release to micro BOSH’s director.

$ bosh upload release dev_releases/bosh-dev1-6.1-dev.tgz

10) Configure the BOSH deployment manifest. First, get the director’s UUID by running:

$ bosh status
Updating director data... done
Target         micro01 (http://10.40.97.124:25555) Ver: 0.4 (00000000)
UUID           7d72eb71-9a98-4081-9857-ad7c7ff4ee33
User           admin
Deployment     /home/boshcli/bosh-dev1.yml
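This UUID goes into the director_uuid field of the deployment manifest, alongside the deployment name, e.g.:

name: bosh1
director_uuid: 7d72eb71-9a98-4081-9857-ad7c7ff4ee33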

Now we come to the trickiest part of the installation: modifying the deployment manifest. Since most BOSH deployment errors are caused by improper settings in the manifest, we explain it in more detail.

To get started, let’s get the manifest template from here: https://github.com/cloudfoundry/oss-docs/blob/master/bosh/tutorial/examples/bosh_manifest.yml

Since the official BOSH documentation provides the specification of the manifest file, we assume you have gone through it before reading this article. We won’t go into every detail of the file; instead, we discuss the important sections one by one.

Network

Below is an example of the network section.

networks:                # define networks
- name: default
  subnets:
  - reserved:            # IPs you don’t want BOSH to allocate
    - 10.40.97.121 - 10.40.97.254
    static:              # IPs BOSH will use for jobs
    - 10.40.97.115 - 10.40.97.120
    range: 10.40.97.0/24
    gateway: 10.40.97.1
    dns:
    - 10.40.62.11
    - 10.135.12.101
    cloud_properties:    # the same network as all other VMs
      name: VM Network

static: the IP addresses assigned to BOSH VMs.
reserved: IP addresses BOSH should not use. It is very important to exclude any IP addresses that have already been assigned to other devices on the same network, for example storage devices, network devices, micro BOSH, and the vCenter host. During installation, micro BOSH spins up some temporary VMs (worker VMs) for compilation. If we do not specify the reserved addresses, these temporary VMs may claim IP addresses that conflict with existing devices or hosts.
cloud_properties: name is the network name we defined in vSphere (see Part II).

Resource Pool

This section defines the configuration (CPU, memory, disk, and network) of the VMs that run jobs. Jobs usually vary in resource consumption: some require more memory, while others need more vCPUs for computing-intensive tasks. Based on the actual requirements, we should create one or more resource pools. One thing to note is that the pool sizes should add up to the total number of job instances defined in the manifest. Since a BOSH deployment has six VMs (six jobs) altogether, the pool sizes should add up to six (3 + 2 + 1 in our example).

In our manifest file, we have 3 resource pools:

+-----------+------+-------------------------------+-----------------------------+
| Pool name | Size | Configuration                 | Jobs                        |
+-----------+------+-------------------------------+-----------------------------+
| small     | 3    | RAM: 512MB, CPU: 1, DISK: 2GB | nats, redis, health_monitor |
| medium    | 2    | RAM: 1GB, CPU: 1, DISK: 8GB   | postgres, blobstore         |
| director  | 1    | RAM: 2GB, CPU: 2, DISK: 8GB   | director                    |
+-----------+------+-------------------------------+-----------------------------+
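As a sketch of how one row of this table translates into manifest syntax (the stemcell name and version must match the stemcell you uploaded, and the ram/disk values under cloud_properties are in MB for the vSphere CPI):

resource_pools:
- name: small
  network: default
  size: 3
  stemcell:
    name: bosh-stemcell   # must match the uploaded stemcell (check with 'bosh stemcells')
    version: 0.6.4
  cloud_properties:       # VM sizing for this pool; ram and disk in MB
    cpu: 1
    ram: 512
    disk: 2048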

Compilation

This section defines the worker VMs created for compiling packages. In a system with limited resources, we should reduce the number of concurrent worker VMs to ensure the compilation succeeds. In our example, we define 4 worker VMs, as sketched below.
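A minimal compilation section along these lines might look like the following sketch (the worker VM sizing under cloud_properties is illustrative):

compilation:
  workers: 4              # number of concurrent compilation worker VMs
  network: default
  cloud_properties:       # sizing of each worker VM (illustrative values)
    cpu: 2
    ram: 1024
    disk: 4096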

Update

This section contains a very useful parameter: max_in_flight. It tells BOSH the maximum number of jobs that can be updated in parallel. On a slow system, try reducing this number. Setting it to 1 means jobs are deployed sequentially. For a BOSH deployment, we recommend setting it to 1 to ensure the installation succeeds.
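For example, a conservative update section might look like this sketch (the watch times, in milliseconds, and the max_errors value are illustrative):

update:
  canaries: 1               # update one canary instance first
  canary_watch_time: 30000  # how long to monitor the canary (ms)
  update_watch_time: 30000  # how long to monitor each updated job (ms)
  max_in_flight: 1          # deploy jobs one at a time
  max_errors: 1             # stop at the first error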

Jobs

There are six jobs in the BOSH release, and each job occupies one VM. Depending on the nature of a job and its resource consumption, we allocate jobs to the various resource pools. One thing to note is that we need to assign persistent disks to three jobs: postgres, director, and blobstore. Without persistent disks, these jobs will not work properly, as their local disks fill up very quickly.

It is a good idea to fill in a spreadsheet like the one below to plan your deployment; based on it, you can modify the deployment manifest (a sketch of one job entry follows the table).

+----------------+---------------+--------------+
| Job            | Resource pool | IP           |
+----------------+---------------+--------------+
| nats           | small         | 10.40.97.120 |
| postgres       | medium        | 10.40.97.119 |
| redis          | small         | 10.40.97.118 |
| director       | director      | 10.40.97.117 |
| blobstore      | medium        | 10.40.97.116 |
| health_monitor | small         | 10.40.97.115 |
+----------------+---------------+--------------+
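For illustration, the postgres row of this table might translate into a job entry like the sketch below (persistent_disk is in MB, and the template name comes from the jobs shipped in the release):

jobs:
- name: postgres
  template: postgres
  instances: 1
  resource_pool: medium
  persistent_disk: 8192   # persistent disk in MB; needed by postgres, director and blobstore
  networks:
  - name: default
    static_ips:
    - 10.40.97.119        # the IP planned for postgres in the table above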

Based on the table above, we created a sample deployment manifest; you can download it from here:

https://github.com/vmware-china-se/bosh_doc/blob/master/bosh.yml

11) After updating the deployment manifest, we can start the actual deployment with the commands below:

$ bosh deployment bosh-dev1.yml
$ bosh deploy

This may take some time to finish, depending on your network conditions and available hardware resources. You can also watch the vCenter console to see VMs being created, configured, and destroyed.

Preparing deployment
…..
Compiling packages
……
Binding instance VMs
postgres/0 (00:00:01)
director/0 (00:00:01)
redis/0 (00:00:01)
blobstore/0 (00:00:01)
nats/0 (00:00:01)
health_monitor/0 (00:00:01)
Done                    6/6 00:00:01

Updating job nats
nats/0 (canary) (00:01:14)
Done                    1/1 00:01:14
……
Updating job director

director/0 (canary) (00:01:10)
Done                    1/1 00:01:10
……

If everything goes well, you will eventually see something like this:

Task 14 done
Started                         2012-08-12 03:32:24 UTC
Finished       2012-08-12 03:52:24 UTC
Duration      00:20:00
Deployed `bosh-dev1.yml' to `micro01'

This means you have successfully deployed BOSH. You can list your deployments by running:

$ bosh deployments
+-------+
| Name  |
+-------+
| bosh1 |
+-------+

You can check the status of all virtual machines by running:

$ bosh vms

If nothing goes wrong, you will see the status of the VMs:

+------------------+---------+---------------+--------------+
| Job/index        | State   | Resource Pool | IPs          |
+------------------+---------+---------------+--------------+
| blobstore/0      | running | medium        | 10.40.97.116 |
| director/0       | running | director      | 10.40.97.117 |
| health_monitor/0 | running | small         | 10.40.97.115 |
| nats/0           | running | small         | 10.40.97.120 |
| postgres/0       | running | medium        | 10.40.97.119 |
| redis/0          | running | small         | 10.40.97.118 |
+------------------+---------+---------------+--------------+
VMs total: 6

Next up: Building Cloud Foundry with BOSH.