July 23, 2019 By Sugandha Agrawal and Martin Henke
6 min read

Recently, the Node.js runtime of Apache OpenWhisk was adapted so that it can also run as a Knative service.

In her article, “OpenWhisk Actions on Managed Knative on IBM Standard Kubernetes Cluster,” Priti Desai described how to inject Node.js action code into the Apache OpenWhisk Node.js runtime image using Knative Build and how to execute the action as a Knative service.

Since then, the Knative project has decided to evolve Knative Build into a separate project named Tekton, which aims to provide generic CI/CD capabilities based on Kubernetes.

Using Tekton instead of Knative Build

In this post, we will demonstrate how to use Tekton instead of Knative Build to inject the action code into the Node.js runtime and to run it as a Knative service. This allows you to run your OpenWhisk actions on Knative with virtually no change.

To learn more about Apache OpenWhisk and Knative, see the documentation on the respective project websites.

More on Tekton

“Tekton is a powerful yet flexible Kubernetes-native open-source framework for creating continuous integration and delivery (CI/CD) systems. It lets you build, test, and deploy across multiple cloud providers or on-premises systems by abstracting away the underlying implementation details.” — Tekton project

To learn more about Tekton, check out “Tekton: A Modern Approach to Continuous Delivery.”

How to create your Knative service using Tekton 

We are using the IBM Cloud Kubernetes Service as the basis and run Knative on top of it, installed as the managed Knative add-on.

Following the steps below will give you insight into Tekton and how to use it with Knative. But first, get your cluster with Knative up and running.

Installing Tekton

After you have set up Knative on your cluster, install Tekton with this command:

$ kubectl apply --filename https://storage.googleapis.com/tekton-releases/latest/release.yaml

This creates the tekton-pipelines namespace and deploys the Tekton pipeline components.

Check whether all the components are up and running:

$ kubectl get pods --namespace tekton-pipelines

Proceed only when the components show a STATUS of Running:

NAME                                       READY                 STATUS 
tekton-pipelines-controller-<some uuid>    1/1                   Running 
tekton-pipelines-webhook-<some uuid>       1/1                   Running 

Generating service account and secrets

1. We need to create a Kubernetes secret to allow the image to be pushed to a registry. It can be Dockerhub or any other registry. Create a docker-secret.yaml using the schema below (which can be found here):
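A minimal sketch of docker-secret.yaml, assuming Dockerhub and the secret name dockerhub-user-pass shown in the outputs below; the tekton.dev/docker-0 annotation tells Tekton which registry the basic-auth credentials belong to:

apiVersion: v1
kind: Secret
metadata:
  name: dockerhub-user-pass
  annotations:
    # Registry these basic-auth credentials are valid for
    tekton.dev/docker-0: https://index.docker.io/v1/
type: kubernetes.io/basic-auth
stringData:
  username: <your Dockerhub username>
  password: <your Dockerhub password>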

You can also use any of the methods provided by Kubernetes.

Note: The registry used in this blog is Dockerhub, so you should provide your Dockerhub info in the secret created. If you wish to use any other registry, provide the necessary credentials in the stringData section.

2. Apply the secret created in Step 1 using the following command:

$ kubectl apply -f docker-secret.yaml 

Output:

secret/dockerhub-user-pass created

3. Verify the created secret by issuing:

$ kubectl get secret dockerhub-user-pass

Output: 

NAME                   TYPE 
dockerhub-user-pass    kubernetes.io/basic-auth

4. Create a service account to link the build process with the registry secret created in Step 1 so that Tekton can push container images to the registry using those credentials (source here):
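A minimal sketch of serviceaccount-tekton.yaml, assuming the secret and service account names shown in the outputs in this post:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: serviceaccount-tekton
secrets:
  # Registry secret created in Step 1; Tekton uses it to push the image
  - name: dockerhub-user-pass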

5. Apply the service account definition:

$ kubectl apply -f serviceaccount-tekton.yaml

Output:

serviceaccount/serviceaccount-tekton created

Building Tekton pipelines

1. Now we will create our first Tekton PipelineResource, which fetches the Node.js runtime code from the Apache OpenWhisk repo (source here):
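A sketch of pipeline-git-resource.yaml, assuming the Tekton v1alpha1 API and the upstream Apache OpenWhisk Node.js runtime repository; adjust the url and revision to the repository and branch you actually want to build:

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: ow-git
spec:
  type: git
  params:
    # Branch, tag, or commit to check out
    - name: revision
      value: master
    # Apache OpenWhisk Node.js runtime repository
    - name: url
      value: https://github.com/apache/openwhisk-runtime-nodejs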

A PipelineResource of type Git defines parameters, including which revision and git url to use.

2. Use the following command to create the git resource on your Kubernetes cluster:

$ kubectl apply -f pipeline-git-resource.yaml

Output:

pipelineresource.tekton.dev/ow-git created

This is what the output should look like when issuing the following command:

$ kubectl get pipelineresources 

Output:

NAME     AGE 
ow-git   35s

3. Our second PipelineResource describes the registry to which the built image will be pushed. In our case, we use Dockerhub as our registry and push our image there. The secret defined in Step 1 of Generating service account and secrets corresponds to this registry and allows us to push the image (source here):
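A sketch of pipeline-registry-resource.yaml, assuming a Dockerhub repository and the image name used later in the knctl deploy command; replace the url with your own registry target:

apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: ow-registry
spec:
  type: image
  params:
    # Target image the task will build and push
    - name: url
      value: docker.io/<your Dockerhub username>/nodejs-10-helloworld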

Tekton expects a type to be specified for the PipelineResource. In this case, type: image indicates that the image has to be created and pushed to the registry url provided in the parameters.

4. Issue the following command to create the registry resource on your Kubernetes cluster:

$ kubectl apply -f pipeline-registry-resource.yaml 

Output:

pipelineresource.tekton.dev/ow-registry created

This is what the output should look like when issuing the command:

$ kubectl get pipelineresources 

Output:

NAME           AGE 
ow-git         1m 
ow-registry    5s

5. A Tekton Task defines the work that needs to be executed. We define the input resource as the git PipelineResource, the output resource as the image in the Docker registry, and the steps the task executes sequentially.

We also define the parameters required by those steps. To build the Node.js function, we reuse the cool work from Priti Desai and team, which defines the build template for Knative Build. We extract the parameters and steps from that build template and turn them into our very own Tekton task.

Let's define our task (source here):
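The following is a rough sketch of task-build.yaml. The parameter names, the Dockerfile path, and the kaniko-based build-and-push step are assumptions modeled on the Knative Build template mentioned above, so check the linked source for the exact definition; depending on your Tekton release, parameter references use the $(...) syntax shown here or the older ${...} form:

apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: task-build
spec:
  inputs:
    resources:
      # The git PipelineResource defined above
      - name: ow-git
        type: git
    params:
      # Illustrative parameters carried over from the Knative Build template
      - name: DOCKERFILE
        description: Path to the Dockerfile of the Node.js runtime
        default: core/nodejs10Action/knative/Dockerfile
      - name: OW_ACTION_NAME
        description: Name of the OpenWhisk action
        default: ""
      - name: OW_ACTION_CODE
        description: JavaScript source code of the action
        default: ""
  outputs:
    resources:
      # The image PipelineResource pointing at the registry
      - name: ow-registry
        type: image
  steps:
    # Build the runtime image with the action baked in and push it to the registry
    - name: build-and-push
      image: gcr.io/kaniko-project/executor
      command: ["/kaniko/executor"]
      env:
        # Registry credentials injected by Tekton; the path differs between releases
        - name: DOCKER_CONFIG
          value: /tekton/home/.docker
      args:
        - --context=/workspace/ow-git
        - --dockerfile=$(inputs.params.DOCKERFILE)
        - --destination=$(outputs.resources.ow-registry.url)
        - --build-arg=OW_ACTION_NAME=$(inputs.params.OW_ACTION_NAME)
        - --build-arg=OW_ACTION_CODE=$(inputs.params.OW_ACTION_CODE)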

6. Use the following command to create the task resource on your Kubernetes cluster:

$ kubectl apply -f task-build.yaml

Output:

task.tekton.dev/task-build created

The output should look like this when you issue the following command:

$ kubectl get task

Output:

NAME          AGE
task-build    4s

7. A task is run by a TaskRun Tekton resource. A TaskRun binds the inputs and outputs to the already defined PipelineResources, sets values for the templated parameters, and executes the task steps. It is also the resource where we provide our function code and function name (source here).
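A sketch of task-run-helloworld.yaml, assuming the resource, task, and service account names defined above; the parameter names and the inline hello-world code are illustrative:

apiVersion: tekton.dev/v1alpha1
kind: TaskRun
metadata:
  name: task-run-helloworld
spec:
  # Service account carrying the registry credentials
  # (named serviceAccountName in newer Tekton releases)
  serviceAccount: serviceaccount-tekton
  taskRef:
    name: task-build
  inputs:
    resources:
      - name: ow-git
        resourceRef:
          name: ow-git
    params:
      - name: OW_ACTION_NAME
        value: nodejs-helloworld
      - name: OW_ACTION_CODE
        value: |
          function main() { return { payload: 'Hello World!' }; }
  outputs:
    resources:
      - name: ow-registry
        resourceRef:
          name: ow-registry

If the build fails to push the image, double-check that the service account and registry secret from the previous section are referenced correctly.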

8. Now to the final fun part—to run the deployment pipeline, use the following command:

$ kubectl apply -f task-run-helloworld.yaml 

Output:

taskrun.tekton.dev/task-run-helloworld created

The output should look like this when you issue the following command:

$ kubectl get taskruns.tekton.dev

Output:

NAME                  SUCCEEDED   REASON     STARTTIME   COMPLETIONTIME
task-run-helloworld   Unknown     Building   36s

9. To check the steps being processed within your task run resource, use the following command:

$ kubectl get taskruns.tekton.dev task-run-helloworld -o=yaml

Look out for the status property in the generated output. When the task run is complete, its conditions should show that the build succeeded.

Voila! You are all set to test your first Knative action using Tekton.

Note: If you make changes to any of the above Tekton resources, make sure to recreate the TaskRun resource:

$ kubectl delete taskrun task-run-helloworld

Then run the following command again:

$ kubectl apply -f task-run-helloworld.yaml

Creating a Knative service

After all the setup, it is time to invoke your very first Knative service. All you need to do is the following: 

1. Install the knctl command line tool.

2. To deploy your Knative service, issue the following command:

$ knctl deploy --image <your docker repo name>/nodejs-10-helloworld --service <provide a name for your service> --watch-revision-ready 

Output:

Name  helloworld-nodejs

Waiting for new revision to be created...
Tagging new revision 'helloworld-nodejs-x2625' as 'latest'
Tagging new revision 'helloworld-nodejs-x2625' as 'previous'
Annotating new revision 'helloworld-nodejs-x2625'
Waiting for new revision 'helloworld-nodejs-x2625' to be ready for up to 5m0s (logs below)...
Revision 'helloworld-nodejs-x2625' became ready
Continuing to watch logs for 5s before exiting

This deploys the image built by Tekton in the steps above, and because we are using knctl, you don't need to write a deployment YAML file for your service.

Another cool CLI tool for Knative is kn. Feel free to check it out and/or use it in place of knctl.

3. Check if the service is ready to be used:

$ knctl service list 

Output:

Services in namespace 'default' 
Name                   Domain                              
<your service name>    <your service name>.default.<your ingress sub-domain>

4. Call your service using the following command:

curl -X POST http://<your service name>.default.<your ingress sub domain>

It is not a GET but a POST request, as defined by the work from Priti Desai.

The very first call also initializes the action inside the runtime, so its response can differ from the responses of subsequent invocations, which return the result of your Hello World function.

To deploy and call more Knative services, feel free to refer to these blogs from Priti Desai. (Note: Those examples use Knative Build and not Tekton.)

Questions or comments

If you have any questions, comments, or suggestions, please reach out to us by email.
