The year 2021 feels like 2020, but more mature and secure.

You can catch up with our 2018, 2019 and 2020 wrap-ups and then move on to this review of 2021. We hope you enjoy it!

At first sight, 2021 looks a lot like 2020 in terms of events and technology. Dig deeper, though, and you'll see that many of the technologies have matured and become more secure. The same holds for many of the IBM Cloud offerings and projects we care for. Here is our look back at 2021.

Vidya

While exploring emerging technologies and tools, we often learn a lot. It's easy to forget things unless you write them down somewhere. This year, I got a chance to work with some of the latest cloud technologies and tools. Along with my team, I drafted and published several of my learnings as solution tutorials or blog posts, starting with "Scale workloads in shared and dedicated VPC environments."

Following the step-by-step instructions in the solution tutorial, you will provision an IBM Cloud® Virtual Private Cloud (VPC) with subnets spanning multiple availability zones (AZs) and virtual server instances (VSIs) that can scale according to your requirements to ensure the high availability of your application. You will also isolate workloads by provisioning a dedicated host, attaching an encrypted data volume to a VSI, expanding the attached data volume and resizing the VSI after the fact. 
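
To give a flavor of one of these operations, here is a minimal sketch of the resize step using the ibm-vpc Python SDK. The API key, endpoint, instance ID and target profile below are placeholders; the tutorial itself works through the console, CLI and Terraform, so treat this only as an SDK-based illustration:

```python
# Minimal sketch: resize an existing VPC virtual server instance (VSI)
# with the ibm-vpc Python SDK (pip install ibm-vpc ibm-cloud-sdk-core).
# Instance ID, region endpoint and target profile are placeholders.
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from ibm_vpc import VpcV1

authenticator = IAMAuthenticator("<IBM_CLOUD_API_KEY>")
vpc = VpcV1(authenticator=authenticator)
vpc.set_service_url("https://us-south.iaas.cloud.ibm.com/v1")

instance_id = "<instance-id>"

# A VSI must be stopped before its profile can be changed.
vpc.create_instance_action(instance_id=instance_id, type="stop")

# Patch the instance with a larger profile.
vpc.update_instance(
    id=instance_id,
    instance_patch={"profile": {"name": "cx2-4x8"}},
)

# Start the instance again with the new size.
vpc.create_instance_action(instance_id=instance_id, type="start")
```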

All of these services and VPC resources are provisioned with IBM Cloud Schematics, which provides Terraform-as-a-Service capabilities:

VPC scaling and dedicated host.

Along with drafting new tutorials, we monitor, update and enhance our existing tutorials. All the OpenShift solution tutorials have been updated to the latest version of Red Hat OpenShift on IBM Cloud.

As part of exploring new use cases, I published a post called "Run and Scale an Apache Spark Application on IBM Cloud Kubernetes Service." In it, you will learn how to set up Apache Spark on IBM Cloud Kubernetes Service by pushing the Spark container images to IBM Cloud Container Registry.
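
As a hedged illustration of what such a setup enables (not the exact application from the post), here is a tiny PySpark job that estimates pi, along with the kind of spark-submit invocation used to run it against a Kubernetes cluster. The cluster endpoint, registry namespace and image tag are placeholders:

```python
# pi.py: a minimal PySpark job to verify the cluster setup. It would be
# submitted against the IBM Cloud Kubernetes Service API server using a
# Spark image pushed to IBM Cloud Container Registry, for example:
#
#   spark-submit \
#     --master k8s://https://<cluster-api-server>:<port> \
#     --deploy-mode cluster \
#     --name spark-pi \
#     --conf spark.kubernetes.container.image=us.icr.io/<namespace>/spark-py:latest \
#     local:///opt/spark/work-dir/pi.py
import random

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-pi").getOrCreate()

def inside(_):
    # Sample a random point in the unit square; count it if it falls
    # inside the quarter circle of radius 1.
    x, y = random.random(), random.random()
    return x * x + y * y <= 1.0

samples = 1_000_000
count = spark.sparkContext.parallelize(range(samples)).filter(inside).count()
print(f"Pi is roughly {4.0 * count / samples}")

spark.stop()
```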

I also published the following:

It didn't stop there. With the release of GPU-enabled instance profiles that provide on-demand access to NVIDIA V100 GPUs to accelerate AI, high-performance computing, data science and graphics workloads, I published a blog post on "Deploying RAPIDS on GPU-Enabled Virtual Servers on IBM Cloud Virtual Private Cloud."
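
To give a flavor of what RAPIDS offers on such a GPU instance, here is a tiny cuDF sketch (not taken from the post); the file and column names are made up:

```python
# Minimal RAPIDS sketch for a GPU-enabled VSI: load a CSV into GPU
# memory with cuDF and aggregate it, pandas-style. File and column
# names are placeholders.
import cudf

# read_csv parses the file directly into GPU memory.
gdf = cudf.read_csv("trips.csv")

# The groupby/aggregation runs on the GPU.
summary = gdf.groupby("vehicle_type")["trip_distance"].mean()
print(summary)
```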

This is just the tip of the iceberg. There’s more coming next year around IBM Cloud Satellite and Quantum Serverless (for starters, check the text analysis with Code Engine tutorial).

Dimitri

I am always looking for techniques that I can use to configure cloud resources and applications. I find it satisfying to configure my resources and applications during the initial deployment, but it is even more important to consider how to modify the deployment based on updated inputs. For example, the location of a database may change post-deployment, or there may be a change in the underlying host running the virtual server instance.

As is typical these days, I use Terraform for the initial configuration of my resources and applications, as shown in the "Terraform Template for Activity Tracker, Monitoring, and Log Analysis with Team Controls" post. However, once the compute resources and applications are running, updates may be needed that are not feasible using Terraform. This need is most apparent when the updates impact a third-party application I am running on the virtual server instance (VSI).

I demonstrated one of these techniques in "Automate the Configuration of Instance Storage on a Linux-Based VSI." In that post, I discussed how a custom service automatically reconfigures the instance storage (which is ephemeral) used by an application running inside a VSI. As a result, every time an instance is cycled off and on or moved to another physical host, the custom service ensures the storage is ready to be used before the application itself starts.
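
The post has the full details. As a rough, hypothetical sketch of the idea, a preparation script like the following could be run by a systemd unit ordered before the application service; the device path and mount point are placeholders and may differ from the post:

```python
#!/usr/bin/env python3
# Hypothetical sketch: prepare ephemeral instance storage before the
# application starts. A systemd unit would run this script with
# Before=myapp.service so the mount is ready in time. Device path and
# mount point are placeholders, not taken from the post.
import subprocess

DEVICE = "/dev/vdb"        # instance storage device (placeholder)
MOUNT_POINT = "/mnt/data"  # where the application expects its storage

def has_filesystem(device: str) -> bool:
    # blkid exits non-zero when the device carries no filesystem.
    return subprocess.run(["blkid", device], capture_output=True).returncode == 0

if not has_filesystem(DEVICE):
    # Instance storage is ephemeral: after a stop/start or a move to
    # another host it comes back empty, so recreate the filesystem.
    subprocess.run(["mkfs.ext4", "-F", DEVICE], check=True)

subprocess.run(["mkdir", "-p", MOUNT_POINT], check=True)
subprocess.run(["mount", DEVICE, MOUNT_POINT], check=True)
```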

In an upcoming post, I will explore how a custom service can be used to update which SSH keys are allowed to authenticate to the operating system in a VSI. That service leverages two recently introduced capabilities: IAM Trusted Profiles and the Instance Metadata Service (discussed below by Henrik).

Powell

I have been interested in infrastructure throughout my career, and cloud computing has been a playground that I could not have imagined just a decade ago. It has been exciting to be in on the ground floor of the IBM hybrid cloud, and I'm fascinated by the capabilities of a fundamental technology like virtual private cloud (VPC). I'm also a huge fan of automation and typically use a combination of Terraform and scripting. The following are a few examples:

Henrik

We all love (and even expect) security. And we all enjoy comfort and don't want to give it up. Thus, life, especially digital life, is often a battle between aiming for higher security standards and giving in to simplicity. That's why I am always happy when security features are invisible, happening in the background, or can be easily automated.

Throughout the year, I worked with and evaluated many new security-related features, but also covered some existing technologies. In my blog post "JSON Web Tokens as Building Blocks for Cloud Security," I looked at JWTs (JSON Web Tokens), something we all use day in, day out but are often not aware of. In another blog post, "Your Key to Cloud Security," I discussed encryption options for cloud environments and explained the difference between provider-controlled encryption keys and the concepts of BYOK (bring your own key) and KYOK (keep your own key). Understanding your options and their impact is the first step toward more secure solutions.
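
For readers who want to see what is inside a JWT, here is a small inspection sketch using the PyJWT library. Note that it deliberately skips signature verification: it is only for looking at claims, never for trusting a token:

```python
# Peek inside a JWT (e.g., an IBM Cloud IAM access token) with PyJWT
# (pip install pyjwt). Signature verification is skipped on purpose:
# this is for inspecting claims only, not for validating tokens.
import jwt

token = "<paste a JWT here>"

claims = jwt.decode(token, options={"verify_signature": False})
header = jwt.get_unverified_header(token)

print(header)             # algorithm and key id
print(claims.get("iss"))  # issuer
print(claims.get("exp"))  # expiration timestamp
```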

Resource isolation combined with a zero-trust approach is key to building secure solutions. Virtual private clouds (VPCs) have become a common foundation for establishing isolated cloud environments at the enterprise level. This year, VPC on IBM Cloud saw many improvements and new features, some of which tie directly into IAM (Identity and Access Management) capabilities. The VPC Instance Metadata service lets you retrieve information about VPC resources and generate special identity tokens. These tokens can be exchanged for IAM tokens tied to a Trusted Profile, which grants access to resources based on its access policies. Overall, this allows you to move privileges from users or service IDs to trusted compute resources, cutting complexity and improving security.
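
Sketched in Python with the requests library, the exchange looks roughly like the following when run inside a VPC VSI. The trusted profile ID is a placeholder, and you should check the IBM Cloud documentation for the current metadata API version string:

```python
# Sketch of the token exchange from inside a VPC VSI: fetch an instance
# identity token from the metadata service, then trade it for an IAM
# token bound to a trusted profile. Profile ID and version date are
# placeholders; consult the docs for current values.
import requests

# Step 1: instance identity token from the link-local metadata endpoint.
resp = requests.put(
    "http://169.254.169.254/instance_identity/v1/token",
    params={"version": "2022-01-01"},
    headers={"Metadata-Flavor": "ibm"},
    json={"expires_in": 3600},
)
identity_token = resp.json()["access_token"]

# Step 2: exchange it for an IAM token via the trusted profile.
resp = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    data={
        "grant_type": "urn:ibm:params:oauth:grant-type:cr-token",
        "cr_token": identity_token,
        "profile_id": "<trusted-profile-id>",
    },
)
iam_token = resp.json()["access_token"]
```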

Many of the IBM Cloud security features and related services have gained broader Terraform support, so I spent some time automating scenarios and simplifying project onboarding. Maintaining security requires regular assessment and checks against established standards. The IBM Cloud Security and Compliance Center assists in that process and provides valuable tools. During a lighter summer workload, I utilized the new IBM-managed collector to establish daily security checkpoints, incrementally hardened my cloud account and moved toward meeting my cloud security goals. All of this combined is named Posture Management:

How the Posture Management components work together.

I want to conclude my look back at 2021 with a fun side project. I lecture about data security at a Cooperative State University in Germany. To discuss API security and data privacy issues with my IT security students, I built a data lake with rideshare and mobility data. My blog post "Data Scraping Made Easy Thanks to IBM Cloud Code Engine" details a serverless approach to running cron-based jobs that collect and process data. Thanks to containerization, simple setup and easy integration, I was able to build my cloud-based data lake and then analyze and visualize the data, as shown below:
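
A hypothetical mini-version of such a scraping job might look like the following: fetch a public feed and drop a timestamped snapshot into IBM Cloud Object Storage. The feed URL, bucket and credentials are placeholders and not taken from the post:

```python
# Hypothetical sketch of a scraping job: fetch vehicle positions from a
# public GBFS-style feed and store the raw snapshot in IBM Cloud Object
# Storage (pip install requests ibm-cos-sdk). Code Engine would run this
# container on a cron schedule. All names below are placeholders.
import datetime

import requests
import ibm_boto3
from ibm_botocore.client import Config

FEED_URL = "https://example.com/gbfs/free_bike_status.json"

cos = ibm_boto3.client(
    "s3",
    ibm_api_key_id="<API_KEY>",
    ibm_service_instance_id="<COS_INSTANCE_CRN>",
    config=Config(signature_version="oauth"),
    endpoint_url="https://s3.eu-de.cloud-object-storage.appdomain.cloud",
)

# One timestamped object per run keeps the raw data immutable.
snapshot = requests.get(FEED_URL, timeout=30).content
key = f"positions/{datetime.datetime.utcnow():%Y-%m-%dT%H-%M-%S}.json"
cos.put_object(Bucket="mobility-data-lake", Key=key, Body=snapshot)
```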

Heatmap from collected vehicle positions (map data © OpenStreetMap contributors).

2022 will certainly bring exciting new features, the convergence of (public, private, dedicated, specialized) cloud environments and, thus, new challenges.

Frederic

In my daily work, I'm a big fan of automation. When asked to provide weekly or monthly reports on our achievements or to analyze user requirements, I try to end up with an automated solution that extracts the right data and produces the right report (e.g., web pages, PowerPoint, Excel, whatever works for the target audience). I did that a lot this year with tools like Python, Node.js, JSON, Markdown, docsify, GitHub Pages and Travis.
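
As a toy example of that pattern (made up for illustration, not one of my actual reports), a few lines of Python are enough to turn exported JSON data into a Markdown report:

```python
# Hypothetical mini-pipeline: read exported data (JSON) and render a
# Markdown status report that could be published, e.g., via GitHub
# Pages. File names and fields are made up.
import json
from pathlib import Path

records = json.loads(Path("achievements.json").read_text())

lines = ["# Weekly report", "", "| Item | Status |", "| --- | --- |"]
for record in records:
    lines.append(f"| {record['title']} | {record['status']} |")

Path("report.md").write_text("\n".join(lines) + "\n")
```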

The other space where automation plays a big role is the cloud. I cannot think of a client project that does not use automation to provision all its resources and deploy its applications across multiple environments. Automation is a key enabler of agile development, and if you want to increase velocity, you want to reduce the number of manual steps in your processes so that you can run them more often.

This shows in most of my posts this year:

Next year will undoubtedly bring more of that: more hybrid cloud projects, more automation, more tooling to simplify the deployment of end-to-end patterns and (my own wish) more tools to visualize your infrastructure.

Engage with us

If you have feedback, suggestions, or questions about this post, please reach out to us on Twitter (@data_henrik, @l2fprod, @powellquiring, @VidyasagarMSC) or LinkedIn (Dimitri, Frederic, Henrik, Powell, Vidya). 

Use the feedback button on individual tutorials to provide suggestions. Moreover, you can open GitHub issues on our code samples for clarifications. We would love to hear from you.
