As we look back at 2019, these are our favorite solution tutorials and technologies.
With 2019 drawing to a close, it is time again to look back at the new and updated IBM Cloud solution tutorials and technology topics we worked on. As agile software developers, we're fond of the retrospective. So, once again, we are looking back and asking the team: Which were your favorite tutorials or interesting topics you worked on? What are your personal technology highlights? Any recommendations (or even predictions) for the future?
It's been a fruitful 2019 as I got a chance to draft and contribute to a variety of solution tutorials, ranging from VPC to Red Hat OpenShift/Kubernetes to machine learning. One of my favorite tutorials is "Securely access remote instances with a bastion host" because it walks you through the deployment of a bastion host to securely access remote instances within a virtual private cloud. A bastion host lets you securely connect to your instances and install the required software to get up and running.
Next on my list is "Scalable web application on OpenShift," which, in a way, is my first encounter with Red Hat OpenShift. The tutorial walks you through how to scaffold a web application, run it locally in a container, push the scaffolded code to a private Git repository, and then deploy it to a Red Hat OpenShift on IBM Cloud cluster. These are only two of the many awesome tutorials I drafted, co-contributed, and reviewed along with my wonderful colleagues.
As mentioned in last year's review post, AutoML picked up its pace and got included in one of our tutorials: "Build, deploy, test, retrain, and monitor a predictive machine learning model."
In the quantum computing world, there is Qiskit. Qiskit is an open-source quantum computing software development framework for leveraging today's quantum processors in research, education, and business.
Looking back at 2019, security topics dominated my work. Virtual Private Clouds (VPCs) provide secure computing environments by isolating tenant-specific resources. Our team created several tutorials discussing almost all aspects of using VPCs.
To securely access the virtual machines in a VPC environment, you can implement the concept of jump hosts or bastion hosts. To combine resources from on-premises or other cloud environments with those in a VPC, you can establish Virtual Private Network (VPN) connections between those sites.
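The jump-host pattern is easy to wire up with plain OpenSSH. The fragment below is a hypothetical `~/.ssh/config` sketch; the host names, IP addresses, and key path are placeholders, not values from the tutorial.

```
# ~/.ssh/config — illustrative values only
Host bastion
    HostName 169.61.0.10         # bastion's public (floating) IP
    User root
    IdentityFile ~/.ssh/vpc_key

Host private-vm
    HostName 10.240.0.4          # private IP inside the VPC
    User root
    ProxyJump bastion            # tunnel through the bastion host
    IdentityFile ~/.ssh/vpc_key
```

With this in place, `ssh private-vm` transparently hops through the bastion, so the instance never needs a public IP of its own.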
In 2019, we reworked some of our tutorials to add more security-related information. All of IBM Cloud uses IAM (Identity and Access Management) with fine-grained privileges. Key Protect and Hyper Protect Crypto Services are available to manage your encryption (root) keys. The two services are the foundation for you to take control of data encryption across many of the offered services.
In a series of blog posts and with newly added instructions in the tutorials, I covered how to rotate service credentials for your deployed solutions, regardless of chosen runtimes like Kubernetes, Cloud Foundry, or Cloud Functions. We also have a new tutorial discussing how to enhance the security of your deployed application which complements the introductory tutorial on applying end-to-end security to a cloud application.
Standards for security keys continue to evolve. FIDO2 is the latest effort to better protect accounts by either using USB security keys as a second factor or implementing passwordless logins. Using the IBM services App ID and Cloud Identity, I discussed how to easily enable existing applications for passwordless login. As a consequence of that work, I switched on 2FA for many of my private Internet accounts, using a mix of TOTP (Time-based One-time Password) and security keys and avoiding insecure SMS/text messages.
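For the curious, TOTP is a small algorithm: an HMAC over a time-based counter (RFC 6238, building on RFC 4226's HOTP). The sketch below is my own stdlib-only illustration of the idea, not code from any of the tutorials.

```python
# Sketch of HOTP (RFC 4226) and TOTP (RFC 6238) using only the stdlib.
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password over a moving counter."""
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """TOTP is just HOTP with the counter derived from the clock."""
    return hotp(secret, int(time.time()) // period, digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # 755224
```

This also shows why TOTP codes are phishing-resistant only in a limited sense: anyone holding the shared secret can generate codes, which is where FIDO2's public-key approach improves things.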
Next year, I will also continue to watch the evolution of Artificial Intelligence and how ethics, trust, fairness and robustness are applied to AI-enabled business solutions.
I'd like to highlight not a tutorial per se, but a blog post about Cloud Object Storage and its integration with Cloud Functions. Being able to trigger code whenever a file is added to or removed from a storage bucket opens the door to event-driven workflows. Retraining an AI model or kicking off a file conversion when a file lands in a bucket is now only a matter of connecting the bucket to a function through a trigger. You no longer need to poll the bucket—just connect the pieces together as described in the Cloud Functions documentation.
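As a rough sketch, a Cloud Functions Python action bound to such a trigger could look like the following. The event field names (`bucket`, `key`, `status`) are assumptions for illustration; check the actual trigger payload in your environment.

```python
# Sketch of a Cloud Functions Python action reacting to an Object Storage event.
# Field names in the incoming event are assumed for illustration.
def main(params: dict) -> dict:
    bucket = params.get("bucket", "")
    key = params.get("key", "")
    status = params.get("status", "")  # e.g. "added" or "deleted"

    if status == "added" and key.endswith(".csv"):
        # Hypothetical hook: start retraining or a file conversion here.
        return {"action": "process", "object": f"{bucket}/{key}"}
    return {"action": "ignore", "object": f"{bucket}/{key}"}
```

The function stays tiny because the platform handles the plumbing: the trigger fires on bucket changes, and the action only decides what to do with each object.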
Manual configuration is not something I will do more than twice; I gravitate toward automation and scripting. I most enjoyed working on a deployment scenario that included: creating virtual instances and load balancers in a multi-zone Virtual Private Cloud; configuring block storage with bring-your-own encryption keys; and deploying and configuring an open source database and a small application.
I initially scripted the deployment with a shell script leveraging the IBM Cloud CLI, which offered an opportunity to get in deep with the various available plugins. I eventually switched to Terraform and the IBM Cloud Provider for Terraform because the provider expanded its coverage of the cloud services I required for this scenario.
Second on my list was exploring a DevOps Toolchain to automate the build, deployment, and update of a web app on top of WebSphere Liberty, taking me back to my old stomping grounds. I am looking forward to continuous improvements in IBM Cloud Schematics and opportunities to improve upon these automation workflows.
Terraform with the IBM Cloud Provider has been a great way to create cloud infrastructure.
The IBM Cloud VPC is a particularly good fit with Terraform. The tutorial "Install software on virtual server instances in VPC" was a place I could investigate cloud automation using both shell scripts and Terraform. I have found that the declarative representation of cloud infrastructure that Terraform provides matches my mental model of the cloud.
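To illustrate that declarative model, here is a minimal Terraform sketch of a VPC with one subnet and one instance. The resource types come from the IBM Cloud Provider, but the values (zone, profile, IDs) are placeholders, and exact argument names can vary between provider versions.

```
# Minimal sketch of VPC infrastructure with the IBM Cloud Provider for Terraform.
resource "ibm_is_vpc" "demo" {
  name = "demo-vpc"
}

resource "ibm_is_subnet" "demo" {
  name                     = "demo-subnet"
  vpc                      = ibm_is_vpc.demo.id
  zone                     = "us-south-1"
  total_ipv4_address_count = 256
}

resource "ibm_is_instance" "demo" {
  name    = "demo-vsi"
  vpc     = ibm_is_vpc.demo.id
  zone    = "us-south-1"
  profile = "bx2-2x8"                    # placeholder instance profile
  image   = "REPLACE_WITH_IMAGE_ID"      # placeholder image ID
  keys    = ["REPLACE_WITH_SSH_KEY_ID"]  # placeholder SSH key ID

  primary_network_interface {
    subnet = ibm_is_subnet.demo.id
  }
}
```

A `terraform plan` then shows exactly what would change before anything is touched, which is the part that matches my mental model of the cloud.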
Terraform execution and state can now be centralized with the fully managed IBM Schematics service. This is a great way for teams to centrally locate their architecture and iterate the deployment. The Terraform configuration is kept in source code control, where collaboration by all contributing teams takes place before applying changes to the cloud resources. For example, network, performance, and security experts can discuss pull requests created during development. I have found this to be a great way to manage cloud resources.
Finally, the Observability changes in the IBM Cloud have been extremely helpful. The IBM Log Analysis with LogDNA and Cloud Monitoring with Sysdig services are great tools to monitor all my cloud resources. While writing "Account Auditing Using Activity Tracker with LogDNA," I learned it is easy to set up, use, and integrate into my normal workflow (I'm a Slack user). Getting notified as unusual account activity happens eliminates unwanted surprises.
Engage with us
If you have feedback, suggestions, or questions about this post, please reach out to us on Twitter (@data_henrik, @l2fprod, @powellquiring, @VidyasagarMSC) or LinkedIn (Dimitri, Frederic, Henrik, Powell, Vidya).
Use the feedback button on individual tutorials to provide suggestions.
Moreover, you can open GitHub issues on our code samples for clarifications.
We would love to hear from you.