Hyperledger Fabric Deployment Using Helm Chart

Henry Zhang, Jiahao Chen, Hui Hu, Wenkai Yin

This blog post is to share the personal technical experience of the authors. It does not represent in any way the opinions of the authors’ employers.

* We assume readers are familiar with Helm, Docker and Kubernetes, and have knowledge of Fabric's architecture.

This post introduces an approach to deploy Fabric via Helm Chart. The Chart allows a user to flexibly configure the consensus algorithm (solo/Kafka) and the number of organizations and nodes of the Fabric network.

Our previous blogs describe how to deploy Fabric on Kubernetes (K8s), and they were widely read and followed by community members. Many readers were inspired by our posts and were able to use Kubernetes to manage their Fabric clusters. If you are new to this domain, you can review our previous posts to understand the steps involved in deploying Fabric on Kubernetes.

Helm Charts have become the preferred way for more and more developers to deploy their applications on Kubernetes, and they are emerging as a de facto standard. The advantage of Helm is highly automated and repeatable deployment: a single "helm" command can perform a series of actions that would take a number of "kubectl" commands to achieve.

Introduction of Helm

Helm is a package manager for Kubernetes. It simplifies the deployment and management of Kubernetes applications. Helm has three important concepts:

  • Chart: defines a package format that can be deployed on Kubernetes. A Chart contains a set of files that describe Kubernetes related resources.
  • Config: used to store the configuration information of the software, used together with Chart to create a Release.
  • Release: a running instance of a Chart.

Helm consists of two important functional components: the Helm Client and the Tiller Server.

  • Helm Client is a command-line tool for end users. It is used primarily for developing local charts, managing Chart repositories, and interacting with Tiller Server.
  • Tiller Server is installed in the Kubernetes cluster. It accepts requests from the Helm Client and interacts with the Kubernetes API Server: it combines a Chart and its Config to create a Release, installs Charts into Kubernetes, and keeps track of the Release status.

Deploying Fabric by Helm Chart

  1. Helm installation

Please refer to the official document: https://docs.helm.sh/using_helm/#installing-helm

  2. Preparing the NFS server

In our deployment model, an NFS server is a place for sharing the configuration information among the Fabric nodes, so an available NFS server is required. For the details of how NFS server works in deployment, please refer to this post: How to Deploy Hyperledger Fabric on Kubernetes (1)

  3. Generating Certificates

In Fabric's GitHub repo (https://github.com/hyperledger/fabric), you can find a tool called "cryptogen", which allows users to generate certificate files. The certificates are used by entities of the Fabric network such as users and nodes.

In general, a user can define the organization structure in a YAML file, such as the number of organizations, the name and domain of organizations. The “cryptogen” tool can load the configuration file and generate corresponding certificate files.

A sample configuration file called “cluster-config.yaml” is as follows:

OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Template:
      Count: 1

PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 1
  - Name: Org2
    Domain: org2.example.com
    Template:
      Count: 1

The keywords OrdererOrgs and PeerOrgs are used to distinguish the two types of organizations, which are described as follows:

  • OrdererOrgs defines the information of the orderer organization. The above sample defines an organization named “Orderer” whose domain name is “example.com”, and it specifies the value of “Template.Count” as 1, thus only one orderer certificate will be generated, identified by “orderer0”.
  • PeerOrgs defines the information of the peer organizations. The above sample defines two organizations, “Org1” and “Org2”, whose domain names are “org1.example.com” and “org2.example.com” respectively. As with the orderer, one peer is provisioned for each organization. The first peer in “Org1” and the first peer in “Org2” are both identified as “peer0”, but they can easily be distinguished by their domain names.

For more configuration information about cryptogen, please refer to the description in the source code.
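As a reference, a typical invocation looks like the following (assuming the cryptogen binary from a Fabric release is on the PATH and cluster-config.yaml is in the current directory; adjust paths to your environment):

$ cryptogen generate --config=./cluster-config.yaml --output=./crypto-config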

  4. Downloading the Chart

Download the chart via git clone:

https://github.com/LordGoodman/fabric-chart

or

https://github.com/hainingzhang/articles

After downloading, go to the fabric-chart/ directory.

  5. Editing “values.yaml” of the chart
  • Select the consensus algorithm: “consensusType” is used to configure consensus algorithm: solo or Kafka. Helm will render the deployment templates accordingly.
  • Modify the definition of organizations: since the “crypto-config” directory generated in step 3 contains a hierarchy of subdirectories based on the file “cluster-config.yaml”, the same information needs to be included in “values.yaml” to ensure that the Fabric nodes can find their certificate files correctly. The easiest way is to copy the content of the above “cluster-config.yaml” into “values.yaml”.
  • Configure the NFS server: add the NFS server information so that Fabric nodes can load their configuration files in the Kubernetes environment.

A sample “values.yaml” is as follows:

clusterName: mycluster
consensusType: solo
nfs:
  ip: 10.192.10.10
  basePath: /opt/share
ordererOrgs:
  - name: orderer
    domain: example.com
    template:
      count: 1
peerOrgs:
  - name: org1
    domain: org1.example.com
    template:
      count: 1
  - name: org2
    domain: org2.example.com
    template:
      count: 1

  6. Moving the certificates to the NFS server

The certificate files generated in step 3 are stored in a directory called “crypto-config/”. To allow Fabric nodes to obtain the certificate information, the certificate files should be copied onto the NFS server so that the nodes can access the certificate files through PV (persistent volume).

For example, assuming that the directory exported by the NFS server is “/opt/share”, “crypto-config” should be placed in the following path:

“/opt/share/<clusterName>/resources/crypto-config”, where clusterName is used to distinguish different instances of Fabric cluster. In our sample values.yaml in step 5, the clusterName is mycluster.
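For example, assuming the NFS export /opt/share is mounted locally and clusterName is mycluster, the copy could look like this:

$ mkdir -p /opt/share/mycluster/resources
$ cp -r crypto-config /opt/share/mycluster/resources/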

  7. Deploying the Chart to Kubernetes

After the above steps, use the following command to deploy Fabric to the Kubernetes cluster:

$ helm install .
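To verify that the release was created and the Fabric pods are starting up, something like the following can be used (release names and pod names will differ in your environment):

$ helm ls
$ kubectl get pods --all-namespaces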

The architecture of Fabric on Kubernetes is as follows:

  8. Installation principles

Basically, the Fabric Chart contains a set of dynamic deployment templates. Before the actual deployment of Fabric, Helm renders the Fabric Chart based on “values.yaml” and pushes the rendered deployment files to the Tiller server. Tiller then calls the Kubernetes APIs to finish the deployment.

We take the template file (fabric-chart/templates/peer.yaml) of the Peer node as an example to briefly explain this process:

1)

{{- range $peerOrg := $root.Values.peerOrgs}}

In the definition of the template, each Org corresponds to a namespace in Kubernetes, and each peer is mapped to a Pod under its org's namespace; therefore, the rendering iterates through all the Orgs.

2)

{{- $namespace := printf "%s-%s" $orgName $clusterName }}

{{- $scope := dict "name" $namespace }}

{{- template "namespace" $scope }}

In each iteration, render the namespace for the Org. The namespace takes the format “<orgName>-<clusterName>”, e.g. org1-mycluster.

3)

{{- $name := printf "%s-shared" $namespace }}

{{- $scope := dict "name" $name "nfsServer" $root.Values.nfs.ip "nfsPath" $nfsPath "pvcNamespace" $namespace "pvcName" $sharedPVCName }}

{{- template "persistentVolume" $scope }}

In the same iteration, render the Persistent Volume and Persistent Volume Claim for the Pods within that namespace.

4)

{{- range $index := until ($peerOrg.template.count | int) }}

{{- $name := printf "peer%d" $index }}

{{- $scope := dict "name" $name "namespace" $namespace "orgName" $orgName "orgDomainName" $orgDomainName "pvc" $sharedPVCName }}

{{- template "peer.deployment" $scope }}

{{- $name := printf "peer%d" $index }}

{{- $scope := dict "name" $name "namespace" $namespace }}

{{- template "peer.service" $scope }}

Finally, the deployment file of the Peer node is rendered according to the organization's information. The deployment file consists of Kubernetes resources such as Deployment, Service and Ingress.

The templates also define other Fabric components like Orderer, CA and CLI. Readers can find them in fabric-chart/templates.
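To preview the manifests that Helm would generate from “values.yaml” without deploying anything, the chart can also be rendered locally (assuming the Helm client is recent enough to support the template command):

$ helm template .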

Related blogs:

Deploy Hyperledger Fabric on Kubernetes Part 1

Deploy Hyperledger Fabric on Kubernetes Part 2

Related Information:

A tool (including Blockchain Explorer) to automate the deployment of Fabric on Kubernetes can be found here.

Deploy Hyperledger Fabric on Kubernetes Part 2

In the last post, we introduced the mechanism of running Fabric on Kubernetes. This post continues with the details.

Download automation scripts

Let’s assume there is an instance of Kubernetes ready for running Fabric. There are a few scripts we use for the following steps. Download them here:

https://github.com/hainingzhang/articles/tree/master/fabric_on_kubernetes/Fabric-on-K8S/setupCluster

The downloaded scripts in the Fabric-on-K8S/ folder are as follows:

Fabric-on-K8S
 |--README.md
 |--setupCluster
 |--generateAll.sh             // generates the K8S deployment files
 |--transform                  // scripts that run kubectl-related commands
 |--templates                  // deployment templates
 |--cluster-config.yaml        // Fabric cluster configuration
 |--configtx.yaml              // channel configuration

Configuration files of Fabric

We need to edit two configuration files in the Fabric-on-k8s/ folder to define the Fabric cluster to be deployed:

A. cluster-config.yaml

The cryptogen tool generates certificates for Fabric members based on cluster-config.yaml. An example is as follows:

OrdererOrgs:
  - Name: Orderer
    Domain: orgorderer1
    Template:
      Count: 1

PeerOrgs:
  - Name: Org1
    Domain: org1
    Template:
      Count: 2
  - Name: Org2
    Domain: org2
    Template:
      Count: 2

Two keywords OrdererOrgs and PeerOrgs specify the type of organization:

1) OrdererOrgs defines an organization with the name Orderer and the domain name orgorderer1. The value of Count under Template is 1, which means there is only one orderer under the organization; its ID is orderer0.

2) PeerOrgs defines two organizations: Org1 and Org2. The corresponding domain names are org1 and org2, respectively. As specified by the value of Count, each organization has two peers, namely peer0 and peer1. Peer0 of org1 and peer0 of org2 have the same ID, but they could be distinguished easily by domain name, e.g. peer0.org1 and peer0.org2.

Note: Since Kubernetes namespaces do not allow ‘.’ or uppercase letters, the domain names of the organizations must follow the same restriction.

For more information on how to customize cluster-config.yaml, please refer to the source code of cryptogen (fabric/common/tools/cryptogen/main.go) .

From the file cluster-config.yaml, cryptogen will generate crypto-config/ directory as follows:

crypto-config
|--- ordererOrganizations
|    |--- orgorderer1
|           |--- msp
|           |--- ca
|           |--- tlsca
|           |--- users
|           |--- orderers
|           |--- orderer0.orgorderer1
|                 |--- msp
|                 |--- tls
|
|--- peerOrganizations
      |--- org1
      |     |--- msp
      |     |--- ca
      |     |--- tlsca
      |     |--- users
      |     |--- peers
      |           |--- peer0.org1
      |           |     |--- msp
      |           |     |--- tls
      |           |--- peer1.org1
      |                 |--- msp
      |                 |--- tls
      |--- org2
            |--- msp
            |--- ca
            |--- tlsca
            |--- users
            |--- peers
                   |--- peer0.org2
                   |     |--- msp
                   |     |--- tls
                   |--- peer1.org2
                         |--- msp
                         |--- tls

 B. configtx.yaml

The tool configtxgen generates the genesis block according to configtx.yaml. The genesis block is used to boot up the orderer and to restrict the permission of channel creation. We need to modify configtx.yaml so that the generated genesis block matches the definition of organizations in cluster-config.yaml.
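For reference, generating the genesis block with configtxgen typically looks like the command below; the profile name TwoOrgsOrdererGenesis is an assumption following the common Fabric sample convention and must match a profile defined in configtx.yaml:

$ ../bin/configtxgen -profile TwoOrgsOrdererGenesis \
  -outputBlock ./channel-artifacts/genesis.block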

For example, if we add an Org3 to cluster-config.yaml and plan to create a channel that contains Org1, Org2 and Org3, we should modify configtx.yaml in the following two steps:

  1. Add an Org3 definition to the Organizations section of configtx.yaml.

  2. Add Org3 to the channel profile (and, if needed, to the consortium definition used for the orderer genesis block) so that the genesis block and channel configuration include all three organizations; see the sketch below.
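A hedged sketch of what these additions might look like follows. The MSP directory path, anchor peer and profile name are assumptions that mirror the Org1/Org2 conventions above, and it assumes Org1 and Org2 are already defined with the YAML anchors &Org1 and &Org2:

Organizations:
  # ... existing Org1 and Org2 definitions ...
  - &Org3
    Name: Org3MSP
    ID: Org3MSP
    MSPDir: crypto-config/peerOrganizations/org3/msp
    AnchorPeers:
      - Host: peer0.org3
        Port: 7051

Profiles:
  ThreeOrgsChannel:
    Consortium: SampleConsortium
    Application:
      Organizations:
        - *Org1
        - *Org2
        - *Org3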

Template files of Kubernetes

While deploying Fabric on Kubernetes, we need to create appropriate configuration files for the namespaces and pods to which Fabric components are mapped. Considering there could be many nodes in a Fabric cluster, it is complicated and error-prone to create these files manually. Fortunately, we can automate this process with scripts. Five templates serve as the cookie cutters to generate the configuration files. They can be found in the templates directory of the downloaded sample code. The functions of the templates are as follows:

a) fabric_1_0_template_namespace.yaml
Define the namespace of the Fabric cluster in Kubernetes, which corresponds to the domain name of the organization.

b) fabric_1_0_template_cli.yaml
CLI pod deployment template for each namespace. The CLI pod provides a command line interface for managing all the peers within the organization, e.g. the creation of the channel and chaincode installation.

c) fabric_1_0_template_ca.yaml
The deployment template for the CA service of the Fabric. It is used for certificate management in the organization.

d) fabric_1_0_template_orderer.yaml
Deployment template of the orderer. It should be noted that cryptogen does not generate the genesis block, which the orderer relies on to start. Thus it is necessary to prepare the genesis block before booting up the orderer.

e) fabric_1_0_template_peer.yaml
Deployment template of the peers. When instantiating a chaincode, the peer needs to connect to the Docker endpoint to create the chaincode container. For this reason, /var/run/docker.sock of the host is mapped into the peer container.

Deploying Fabric onto Kubernetes

The following operations are performed on the CMD client in Figure 1. The shared directory of NFS is /opt/share. The directory on NFS should be owned by nobody:nogroup.

  • Generating startup files 

a)  Mount the shared directory of NFS to /opt/share of local host.
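For example (the NFS server address below is a placeholder; substitute your own):

$ sudo mount -t nfs <nfs-server-ip>:/opt/share /opt/share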

b) Change to the Fabric-on-K8S/ directory. Use the following command to download Fabric cryptogen and other tools:

$ curl https://nexus.hyperledger.org/content/repositories/releases/org/hyperledger/fabric/hyperledger-fabric/linux-amd64-1.0.0/hyperledger-fabric-linux-amd64-1.0.0.tar.gz | tar xz

c) Change the NFS address of templates/fabric_1_0_template_pod_cli.yaml to your own NFS server, as shown in Figure 7:

Figure 7

d) Edit templates/fabric_1_0_template_pod_namespace.yaml and modify the address of NFS as shown in Figure 8:

Figure 8

e) By default, cluster-config.yaml defines two peer organizations and one orderer organization. Each peer organization has two peers, and the orderer organization has only one orderer. We can add more organizations by modifying cluster-config.yaml.

f) Generate startup files by the following command:

$ sudo bash generateAll.sh

 

  • Running startup script 

Start the Fabric cluster with the following command (PyYAML-3.5 is required): 

$ python3.5 transform/run.py

For each PeerOrganization orgN (N=1,2,3…), the workflow of run.py runs as follows:

  • Create orgN namespace in Kubernetes;
  • Create CA pod under orgN namespace;
  • Create CLI pod under orgN namespace;
  • Traverse the subdirectory of orgN/peers to find the corresponding yaml files and start all pods of the peers.

For each OrdererOrganization ordererorgN (N=1,2,3…), the workflow of run.py runs as follows:

  • Create ordererorgN namespace in Kubernetes;
  • Traverse the subdirectory of ordererorgN/orderers to find the corresponding yaml files and start all orderers.

 

  • Checking cluster status 

After starting the cluster, use the following command to check the status of all pods:

$ kubectl get pods --all-namespaces

When all pods are displayed as running, all components are working properly. Figure 9 is an example of the command output:

Figure 9

Testing the cluster

Assume we have deployed and started a Fabric cluster with the default cluster-config.yaml. Next, we will run some tests to verify that the Fabric cluster is working properly.

First, we use the configtxgen tool to generate the channel-related files needed to create and join a channel:

[1]  Go to Fabric-on-K8S/setupCluster/ directory of the CMD client:

$ cd Fabric-on-K8S/setupCluster/

[2]  Create the file channel.tx, where the channel ID is mychannel:

$ ../bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx \
  ./channel-artifacts/channel.tx -channelID mychannel

[3]  Create an upgrade file for the channel which updates the anchor of Org1 in mychannel:

$ ../bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate \
./channel-artifacts/Org1MSPanchors.tx -channelID mychannel -asOrg Org1MSP

[4]  Create an upgrade file for the channel which updates the anchor of Org2 in mychannel:

$ ../bin/configtxgen -profile TwoOrgsChannel -outputAnchorPeersUpdate \
./channel-artifacts/Org2MSPanchors.tx -channelID mychannel -asOrg Org2MSP

[5]  Since each Org’s CLI Pod needs to use the files created in the above steps, we can share these files with CLI Pod via NFS:

$ sudo cp -r ./channel-artifacts /opt/share/

 After completing the above work, we enter the command prompt of CLI Pod of org1. From there we run commands as follows:

A)   List all pods under org1 namespace:

$ kubectl get pods --namespace org1

As shown in Figure 10, we found out the CLI Pod of org1 is cli-2586364563-vclmr.

Figure 10

B) Enter the cli container’s command prompt:

$ kubectl exec -it cli-2586364563-vclmr bash --namespace=org1

C) Create a channel named mychannel:

$ peer channel create -o orderer0.orgorderer1:7050 \
  -c mychannel -f ./channel-artifacts/channel.tx

D)   Copy mychannel.block to channel-artifacts/ directory:

$ cp mychannel.block ./channel-artifacts

E)    The peer joins mychannel:

$ peer channel join -b ./channel-artifacts/mychannel.block

F)    Update the anchor peer. Each organization only needs to execute this command once:

$ peer channel update -o orderer0.orgorderer1:7050 \
  -c mychannel -f ./channel-artifacts/Org1MSPanchors.tx

G)   Download the chaincode_example02/ directory of the Fabric project from Github. Place it in the /opt/share/channel-artifacts directory of the CMD client.

H)   Install the chaincode named mycc:

$ peer chaincode install -n mycc -v 1.0 -p \
  github.com/hyperledger/fabric/peer/channel-artifacts/chaincode_example02

I)  Instantiate the chaincode mycc:

$ peer chaincode instantiate -o orderer0.orgorderer1:7050 \
  -C mychannel -n mycc -v 1.0 \
  -c '{"Args":["init","a","100","b","200"]}' \
  -P "OR ('Org1MSP.member','Org2MSP.member')"

After mycc is instantiated, we can switch to the other organization's CLI pod, join the same channel and repeat similar steps to verify whether the ledger has been synchronized; a quick check is sketched below.
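For example, after the other organization's CLI pod has joined mychannel and installed mycc, a query such as the following should return the same value of "a" as on the first organization:

$ peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'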

Externally invoking Fabric

The Fabric cluster is now up and running. However, it is only accessible from within the Kubernetes network. To expose its services to an external network, we configure Kubernetes to accept external requests by defining the service type as NodePort. The port mapping rules are described as follows (N = 1, 2, 3… and M = 0, 1, 2, 3…):

1. The port range of orgN is 30000 + (N-1)*100 to 30000 + N*100 - 1. Each organization can be assigned up to 100 port numbers. For example, org1's port range is 30000 to 30099.

2. The mapping of CA’s port 7054 is as follows:

ca.orgN:7054 -> worker: 30000 + (N-1)*100

3. Because each peer needs to expose both port 7051 and 7052, the port mapping of peerM in orgN is as follows:

peerM.orgN:7051 -> worker: 30000 + (N-1)*100 + 2*M + 1

peerM.orgN:7052 -> worker: 30000 + (N-1)*100 + 2*M + 2

For example, if a Kubernetes worker node has the IP address 192.168.0.7, we can access peer0.org1 through 192.168.0.7:30001 (the peer's port 7051).

4. The port mapping of ordererN is as follows:

ordererN:7050 -> worker: 33700 + N
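As an illustration only, a NodePort service implementing the peer0.org1 mapping above might look like the sketch below (the selector label and service names are assumptions; the actual definitions are generated by the templates):

apiVersion: v1
kind: Service
metadata:
  name: peer0
  namespace: org1
spec:
  type: NodePort
  selector:
    app: peer0            # assumed pod label
  ports:
    - name: grpc
      port: 7051
      targetPort: 7051
      nodePort: 30001     # 30000 + (1-1)*100 + 2*0 + 1
    - name: events
      port: 7052
      targetPort: 7052
      nodePort: 30002     # 30000 + (1-1)*100 + 2*0 + 2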

Note: NodePort is one way to expose a service in Kubernetes. However, it reserves the same port on every node of the Kubernetes cluster, which may not work well in some situations. Refer to the Kubernetes documentation for other approaches.

Deleting Fabric cluster

When we want to delete the Fabric cluster, run the transform/delete.py script to clean up the environment. The script traverses the crypto-config directory to find all the YAML files and then removes the resources one by one with the command “kubectl delete -f xxx.yaml”.

Conclusion

In this two-part blog series, we walked through the mechanism and process of deploying Fabric on Kubernetes. While there is room for improvement in our approach, we hope it shows a way to leverage Kubernetes for blockchain applications. Our approach has been used in a tool called “Blockchain on vSphere”, an automation tool that allows you to deploy Fabric on Kubernetes (and vSphere) with minimal configuration. You are welcome to try it out and give us feedback.

About the Authors:

Henry Zhang: Chief Architect of VMware China R&D and creator of the open source Project Harbor (https://github.com/vmware/harbor), an enterprise-class container registry server. Henry is a co-author of the book “Blockchain Technical Guide”. He is also a contributor to the Hyperledger Cello project.

Luke Chen: MTS intern at VMware China R&D. He is a master's student at Guangzhou University and a contributor to the Hyperledger Cello project.

Deploy Hyperledger Fabric on Kubernetes Part 1

Overview

Fabric is one of the Hyperledger projects hosted by the Linux Foundation. It provides a framework to develop blockchain applications. Since the release of Fabric 1.0 this July, people have been eager to build applications using Fabric to solve their business problems. However, many encounter difficulties in deploying and managing the Fabric system due to the complexity of its configuration.

To simplify the operation of Fabric, we need some tools to help us better manage the distributed system of Fabric. Kubernetes seems ideal for this purpose for several reasons. (It is interesting to note that Kubernetes is a flagship project under CNCF, which is also a Linux Foundation project.)

First, Fabric's bits are built into container images. Its chaincode (smart contract) also leverages containers to run in a sandbox. The Fabric system consists of components running in multiple containers. On the other hand, Kubernetes is becoming the dominant platform for automating the deployment, scaling and management of containerized applications. There is a natural fit between the two.

Second, Fabric components can achieve high availability by being deployed on Kubernetes. Kubernetes has replication controllers, which monitor running pods and bring up crashed ones automatically.

Third, Kubernetes supports multi-tenancy. We can run multiple isolated Fabric instances on the same Kubernetes platform. This facilitates the development and testing of blockchain applications.

In the following sections, we introduce a way to deploy Fabric on Kubernetes. We assume readers have basic knowledge of Fabric, Docker container and Kubernetes.

Network Topology

Our network topology is shown in Figure 1. The physical network is represented by blue lines. Kubernetes has one or more master and worker nodes. Besides that, we have a CMD machine as a client to issue the deployment commands. An NFS server is used as a shared file system for configuration files and other data. All these nodes are connected by a physical network (e.g. 192.168.0.1/24).

Kubernetes' network model enables all pods to connect to each other directly, regardless of which node they are on. By using Kubernetes CNI add-ons such as Flannel, it is easy to create an overlay network for this purpose. As indicated by the red lines in Figure 1 (some details of the Flannel components are omitted), Kubernetes connects all pods to the Flannel network, allowing the containers of those pods to communicate with each other properly.

The IP address range of the Flannel network, as well as the IP address of kube-dns, can be specified in the add-on configuration file. We need to make sure the IP address of kube-dns falls within the specified address range. In Figure 1, for example, the Flannel network is 10.0.0.1/16, and the kube-dns address is 10.0.0.10.

Figure 1

Mapping Fabric Components to Kubernetes Pods

Figure 2

Fabric is a distributed system which contains multiple nodes. The nodes could belong to different entities. As shown in Figure 2, each organization has its own set of nodes (For simplicity, not all nodes are shown). There is also a public consensus service formed by Orderers. To deploy Fabric onto Kubernetes, we need to convert all components into pods for deployment and use namespace to segregate organizations.

In Kubernetes, namespace is an important concept. It is used to divide cluster resources between multiple users. In the case of Fabric, organizations can be mapped into namespaces so that they have their own dedicated resources. After this mapping, the peers of each organization can be distinguished by domain name. Furthermore, we could isolate different organizations by setting network policies (not covered in this blog).

As shown in Figure 2, suppose there are N peer organizations and M orderer organizations in the Fabric network. Here’s how we divide them on Kubernetes:

A) In Fabric, we name the N-th peer organization orgN. Its corresponding namespace in Kubernetes is also called orgN, and all components of Fabric's orgN are placed into namespace orgN in Kubernetes. There are multiple pods under each organization's namespace. A pod is a deployment unit in Kubernetes; it consists of one or more containers. We can bundle the Fabric containers of each organization into several pods. These pod types are as follows:

  • Peer Pod: including Fabric peer, couchDB (optional), representing the organization’s peer node. Each organization could have one or more peer pods.
  • CA Server Pod: Fabric CA Server node of the organization. Usually one pod is needed in an organization.
  • CLI Pod: (optional) Provides an environment for command-line tools to manipulate the nodes of the organization. Fabric’s peer environment variables are configured in this pod.

Figure 3

B) There could be one or more orderers in Fabric. We set the name of the M-th orderer organization to orgordererM. Its corresponding namespace on Kubernetes is orgordererM. It has one or more pods to run the orderer node(s).

Figure 4

C) If Kafka is used for the consensus process, we can put Kafka into a separate namespace, which is only used to run and manage the ZooKeeper and Kafka containers.

Putting it all together, the overall deployment looks like the figure below:

Figure 5

Shared storage

Before we deploy Fabric, we need to prepare configuration files of its components, such as peers and orderers. This is a very complicated process and tends to be error prone. Fortunately, we created a tool to automate the generation of these configuration files. The generated files are stored in a shared file system like NFS.

When we later launch the pods of Fabric, we mount different subsets of configuration files into pods so that they have configuration specific to the organization they belong to.

In Kubernetes, we can mount files or directories into a pod by using Persistent Volume (PV) and Persistent Volume Claim (PVC). We create PVs and PVCs for each organization in Fabric for resource isolation. Each organization should only see its own directory in NFS server.

After the creation of PV, we define PVC so that Fabric nodes can consume PV to access the corresponding directories and files.

Take the peer organization org1 as an example. First, we create a namespace org1 and its PV. The PV is mapped to the directory /opt/share/crypto-config/peerOrganizations/org1 on NFS. Second, we create a PVC to consume the PV. All pods under the namespace org1 use the same PVC. However, we only map the necessary files into each pod by specifying the mount path in the pod configuration file.
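A simplified sketch of such a PV/PVC pair for org1 is shown below; the capacity, object names and NFS address are illustrative assumptions:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: org1-pv
spec:
  capacity:
    storage: 500Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: <nfs-server-ip>     # your NFS server address
    path: /opt/share/crypto-config/peerOrganizations/org1
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: org1-pvc
  namespace: org1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 500Mi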

Figure 6 shows the relationship between pods and their shared directory of NFS. Variable $PVC represents the PVC mount point, which is /opt/share/crypto-config/peerOrganizations/org1 in this example.

Figure 6

Communication between Fabric components

When all of Fabric's components are placed into Kubernetes pods, we need to consider the network connectivity between these pods. Each pod in Kubernetes has an internal IP address, but it is hard to use IP and port for communication between pods because the address is ephemeral: when a pod is restarted, its IP address changes. Therefore, it is necessary to create Kubernetes services for the pods so that they can talk to each other through service names. The naming of a service should follow these principles so that it reflects the pod it binds to:

1) The namespace of the service and the pod should be consistent.
2) The name of the service should be consistent with the id of the container within the pod.

Fabric’s peer0 of organization org1, for example, is mapped to a pod named peer0 under namespace org1. The service binding to it should be named peer0.org1, where peer0 is the name of the service and org1 is the namespace of the service. Other pods can connect to the peer0 of org1 by service name peer0.org1, which appears as peer0’s hostname.

Work around the chaincode sandbox

When a peer in Fabric instantiates a chaincode, it creates a Docker container in which the chaincode runs. The Docker API endpoint it invokes to create the container is unix:///var/run/docker.sock. This mechanism works well as long as the peer container and the chaincode container are managed by the same Docker engine. However, in Kubernetes, the chaincode container is created by the peer without notifying Kubernetes. Hence the chaincode and peer containers cannot connect to each other, which results in a failure when instantiating the chaincode.

To work around this problem, we need to add the kube-dns IP address to each worker node's Docker engine. Add the options below to the Docker engine's configuration file. In this example, 10.0.0.10 is the IP address of the kube-dns pod; replace it with the right value in your environment.

"--dns=10.0.0.10 --dns=192.168.0.1 --dns-search \
default.svc.cluster.local --dns-search \
svc.cluster.local --dns-opt ndots:2 --dns-opt \
timeout:2 --dns-opt attempts:2 "
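One common way to apply these options, assuming a distribution that reads /etc/default/docker (systemd-based hosts configure the daemon differently), is roughly:

# /etc/default/docker  (assumed location)
DOCKER_OPTS="--dns=10.0.0.10 --dns=192.168.0.1 --dns-search default.svc.cluster.local --dns-search svc.cluster.local --dns-opt ndots:2 --dns-opt timeout:2 --dns-opt attempts:2"

$ sudo service docker restart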

Up to now, we have illustrated the key points of deploying Fabric onto Kubernetes. In the next post, we will describe the detailed steps of the deployment. For people who cannot wait, please download our Fling “Blockchain on vSphere” to get a feel for how it works. It is an automation tool that allows you to deploy Fabric on Kubernetes with minimal configuration. If you do not use vSphere to run Kubernetes, you can choose whatever underlying infrastructure you like for your Kubernetes instance; just skip the part about deploying Kubernetes on vSphere.

Continue with Part 2:

Deploy Hyperledger Fabric on Kubernetes Part 2

 
