Articles

HashiCorp Vault + Vault Secrets Operator + GCP for imagePullSecrets

Summary

The need for this mix of buzzwords comes from a very specific use case. For all of my production hosting I use Google Cloud. For my local environment, it's podman+kind provisioned by Terraform.

Usually, to load container images, I build them locally and push them into kind. I do this to alleviate the requirement of an internet connection to do my work. But it got me thinking: if I wanted to, couldn't I just pull from my us.gcr.io private repository?

Sure – I could load a static key in place, but I'd likely forget about it, and that could become an attack vector for compromise. I decided to play with Vault to see if I could accomplish this. Spoiler: you can, but there aren't great instructions for it!

Why Vault?

There are a great many articles on why Vault or a secret manager is a great idea. What it comes down to is minimizing the window in which a credential is valid by using short-lived credentials, so that if one is compromised, the longevity of that compromise is minimized.

Vault Setup

I will not go into full details on the setup, but Vault was deployed via Helm chart into the K8s cluster, using this guide from HashiCorp to enable the GCP secrets engine.

Your gcpbindings.hcl will need to look something like this at a minimum. You likely don't need roles/viewer.

 resource "//cloudresourcemanager.googleapis.com/projects/woohoo-blog-2414" {
        roles = ["roles/viewer", "roles/artifactregistry.reader"]
      }

For the roleset, I called mine "app-token", which you will see later.
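For reference, creating the roleset follows the HashiCorp guide; a rough sketch of the command (the scope is the one the guide uses, and the file name matches the bindings file above) looks like this:

vault write gcp/roleset/app-token \
    project="woohoo-blog-2414" \
    secret_type="access_token" \
    token_scopes="https://www.googleapis.com/auth/cloud-platform" \
    bindings=@gcpbindings.hcl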

The values I used for Vault's Helm chart were simply as follows, because I don't need the injector, and I don't think it would even work for what we're trying to do.

#vault values.yaml
injector:
  enabled: "false"

For the Vault Secrets Operator, it was simply these values, since Vault was installed in the default namespace. I did this for simplicity, just to get it up and running. A lot of the steps I will share ARE NOT BEST PRACTICES, but they will help you get it up quickly so you can then learn the best practices. This includes leaving client caching disabled and storage encryption off (which is the default, BUT NOT BEST PRACTICE). Ideally, client caching is enabled to allow near-zero-downtime upgrades, and the cache is therefore encrypted in transit and at rest.

defaultVaultConnection:
  enabled: true
  address: "http://vault.default.svc.cluster.local:8200"
  skipTLSVerify: false

Vault Operator CRDs

First we will start with a VaultConnection and a VaultAuth. This is how the operator will connect to Vault.

apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultConnection
metadata:
  name: vault-connection
  namespace: default
spec:
  # required configuration
  # address to the Vault server.
  address: http://vault.default.svc.cluster.local:8200
---
apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultAuth
metadata:
  name: static-auth
  namespace: default
spec:
  vaultConnectionRef: vault-connection
  method: kubernetes
  mount: kubernetes
  kubernetes:
    role: test
    serviceAccount: default

The test role attaches to a policy called "test" that looks like this:

path "gcp/roleset/*" {
    capabilities = ["read"]
}

This allows us to read the "gcp/roleset/app-token/token" path. The policy above should likely be more specific, such as "gcp/roleset/app-token/+", to lock it down to the specific tokens that need to be read.
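For completeness, a hedged sketch of creating that policy and the Kubernetes auth role on the Vault side (the policy file name is illustrative; the service account and namespace match the VaultAuth above):

vault policy write test test-policy.hcl

vault write auth/kubernetes/role/test \
    bound_service_account_names=default \
    bound_service_account_namespaces=default \
    policies=test \
    ttl=24h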

All of this to get us to the VaultStaticSecret CRD.

apiVersion: secrets.hashicorp.com/v1beta1
kind: VaultStaticSecret
metadata:
  annotations:
    imageRepository: us.gcr.io
  name: vso-gcr-imagepullref
spec:
  # This is important, otherwise it will try to pull from gcp/data/roleset
  type: kv-v1

  # mount path
  mount: gcp

  # path of the secret
  path: roleset/app-token/token

  # dest k8s secret
  destination:
    name: gcr-imagepullref
    create: true
    type: kubernetes.io/dockerconfigjson
    #type: Opaque
    transformation:
      excludeRaw: true
      excludes:
        - .*
      templates:
        ".dockerconfigjson":
          text: |
            {{- $hostname := .Annotations.imageRepository -}}
            {{- $token := .Secrets.token -}}
            {{- $login := printf "oauth2accesstoken:%s" $token | b64enc -}}
            {{- $auth := dict "auth" $login -}}
            {{- dict "auths" (dict $hostname $auth) | mustToJson -}}

  # static secret refresh interval
  refreshAfter: 30s

  # Name of the CRD to authenticate to Vault
  vaultAuthRef: static-auth

The bulk of this is in the transformation.templates section. This is the magic. We can easily pull the token, but it's not in a format that Kubernetes would understand and use. Most of the template is there to correctly mirror the dockerconfigjson format.

To make it clearer, we use an annotation to store the repository hostname.

In case the template text is a little confusing, a more readable version of it would be as follows.

{{- $hostname := "us.gcr.io" -}}
{{- $token := .Secrets.token -}}
{{- $login := printf "oauth2accesstoken:%s" $token | b64enc -}}
{
  "auths": {
    "{{ $hostname}}": {
      "auth": "{{ $login }}"
    }
  }
}

Apply the manifest, and if all went well, you should have a secret named "gcr-imagepullref" that you can use in the "imagePullSecrets" section of a manifest, as illustrated below.
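As an illustration (the image path is hypothetical), referencing the secret from a pod looks something like this:

apiVersion: v1
kind: Pod
metadata:
  name: demo-private-image
spec:
  imagePullSecrets:
    - name: gcr-imagepullref
  containers:
    - name: app
      # hypothetical image in the private us.gcr.io repository
      image: us.gcr.io/woohoo-blog-2414/demo:latest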

In Closing

In closing, we leveraged the GCP secrets engine and Kubernetes auth to obtain time-limited OAuth tokens and inject them into a secret used for pulling images from a private repository. There are a number of scenarios where you may want to do something like this, such as when you're multi-cloud but want to use one repository, or when you have on-premises clusters but want to use your cloud repository. Compared to just pulling a long-lived key, this is more secure and minimizes the attack vector.

Following the best practices also helps, such as limiting the scope of roles and ACLs and enabling encryption for the storage and transmission of the data.

For more on the transformation templating, you can go here.

Terraform For Local Environments (podman+kind)

Summary

I do most of my containerization work locally using podman & kind. It’s an easy way to spin up a local environment. From time to time I want to upgrade the K8s version or just completely blow it away.

With kind it is pretty simple…

kind delete cluster --name=<cluster_name>

kind create cluster --name=<cluster_name>

I then load in my Mozilla SOPS key. Then I run my bootstrap script for FluxCD.

But Then I Got Lazy

Over the weekend, there was an interesting Podman Desktop bug that caused my kube-apiserver to peg the CPU. It took a bit of fiddling and recreating the cluster a few times.

So I got lazy and wrote some terraform to do it for me.

Providers

For this I used a few Terraform providers, namely tehcyx/kind, alekc/kubectl, integrations/github, and hashicorp/kubernetes; the corresponding required_providers block is sketched below.
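A sketch of the required_providers block wiring those in (version constraints omitted as an assumption; pin them in practice):

terraform {
  required_providers {
    kind = {
      source = "tehcyx/kind"
    }
    kubectl = {
      source = "alekc/kubectl"
    }
    github = {
      source = "integrations/github"
    }
    kubernetes = {
      source = "hashicorp/kubernetes"
    }
  }
}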

TF Resources

For everything, we need a kind cluster. This is pretty simple. The key is that we want wait_for_ready, because we'll be performing further actions. The node_image is optional; if omitted, it will just pick the latest.

resource "kind_cluster" "this" {
  name = var.kind_cluster_name
  node_image = var.kind_node_image
  wait_for_ready = true
}

We then want to apply two manifests, since Flux has already been bootstrapped and set up.

These two data sources pull the appropriate manifests from the repository. The components manifest contains just that: the Flux components and base dependencies. The sync manifest is the actual sync configuration (what to sync, where to sync from, etc.).

data "github_repository_file" "gotk-components" {
  repository          = "${var.github_org}/${var.github_repository}"
  branch              = "main"
  file                = var.gotk-components_path
}

data "github_repository_file" "gotk-sync" {
  repository          = "${var.github_org}/${var.github_repository}"
  branch              = "main"
  file                = var.gotk-sync_path
}

Because these manifests have multiple documents, we need to use another data source since kubectl_manifest can only apply a single document at a time.

data "kubectl_file_documents" "gotk-components" {
    content = data.github_repository_file.gotk-components.content
}

data "kubectl_file_documents" "gotk-sync" {
    content = data.github_repository_file.gotk-sync.content
}

We then loop through the components with a for_each:

resource "kubectl_manifest" "gotk-components" {
  depends_on = [ kind_cluster.this ]
  for_each  = data.kubectl_file_documents.gotk-components.manifests
  yaml_body = each.value
}

Before we can apply the sync section, we need to ensure the Mozilla SOPS age.key is applied. We have sensitive data in this environment, and the key allows us to decrypt it. In other environments, this might be a key vault or KMS.

resource "kubernetes_secret" "sops" {
  depends_on = [kubectl_manifest.gotk-components]
  metadata {
    name = "sops-age"
    namespace = "flux-system"
  }
  data = {
    "age.agekey" = file(var.sops_age_key_path)
  }
}

Finally, we apply the sync configuration, and we're done!

resource "kubectl_manifest" "gotk-sync" {
  depends_on = [ kind_cluster.this, kubernetes_secret.sops ]
  for_each  = data.kubectl_file_documents.gotk-sync.manifests
  yaml_body = each.value
}
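For completeness, a minimal variables.tf backing the references above might look like this (the defaults are illustrative assumptions, not the project's actual values):

variable "kind_cluster_name" {
  type    = string
  default = "lab" # illustrative
}

variable "kind_node_image" {
  type    = string
  default = null # let kind pick the latest
}

variable "github_org" {
  type = string
}

variable "github_repository" {
  type = string
}

variable "gotk-components_path" {
  type    = string
  default = "clusters/cluster3/flux-system/gotk-components.yaml" # illustrative path
}

variable "gotk-sync_path" {
  type    = string
  default = "clusters/cluster3/flux-system/gotk-sync.yaml" # illustrative path
}

variable "sops_age_key_path" {
  type = string
}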

Finale!

From here it's terraform apply and we're off!

OpenTelemetry In Golang

Summary

I was recently working on a project that involved VueJS, Golang (Go), and Mongo. For the API layer in Go, it was time to instrument it with metrics, logs, and traces. I was using Gin due to its ease of setup and ability to handle JSON data.

Parts of the instrumentation were easy. For example, traces worked out of the box with the otelgin middleware. Metrics had some examples going around but needed some work, and logs were a pain.

The beauty of OpenTelemetry (OTEL) is that you can instrument your application with it, and it does not matter where you send the telemetry on the back end; most of the big-name vendors support OTLP directly.

Go + Gin + Middleware

Go has the concept of middleware in its web frameworks, which makes it really easy to monitor or adjust a request in flight. Gin is no exception. By default, Gin applies two middlewares: gin.Logger() and gin.Recovery(). Logger implements a simple logger to the console. Recovery recovers from any panics and returns a 5xx error.
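To make the pattern concrete, here is a minimal sketch of a custom Gin middleware (not part of the project; the timing log is purely illustrative):

import (
	"log"
	"time"

	"github.com/gin-gonic/gin"
)

// timing is a minimal custom middleware: code before c.Next() runs on the
// way in, code after it runs once the downstream handlers have finished.
func timing() gin.HandlerFunc {
	return func(c *gin.Context) {
		start := time.Now()
		c.Next() // hand off to the next middleware/handler in the chain
		log.Printf("%s %s -> %d in %s",
			c.Request.Method, c.Request.URL.Path, c.Writer.Status(), time.Since(start))
	}
}

It would be registered like any other middleware: router.Use(timing()).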

The otelgin middleware mentioned in the summary simply takes the context of the HTTP request, and with a properly set-up OpenTelemetry tracer and internal propagation of context, it will export to your tracing tool that supports OpenTelemetry.

Initializing and Using OTEL Tracing

Initializing the tracer is pretty simple but rather lengthy.

I have a “func InitTracer() func(context.Context) error” function that handles this. For those not terribly familiar with Go, this is a function that returns another function that takes a context and returns an error.

// Imports assumed for this snippet:
import (
	"context"
	"os"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	"go.opentelemetry.io/otel/propagation"
	"go.opentelemetry.io/otel/sdk/resource"
	tracesdk "go.opentelemetry.io/otel/sdk/trace"
)

func InitTracer() func(context.Context) error {
	//TODO: Only do cleanup if we're using OTLP
	if os.Getenv("OTEL_EXPORTER_OTLP_ENDPOINT") == "" {
		return func(ctx context.Context) error {
			//log.Print("nil cleanup function - success if this is without OTEL!")
			return nil
		}
	}
	exporter, err := otlptrace.New(
		context.Background(),
		otlptracegrpc.NewClient(),
	)

	if err != nil {
		panic(err)
	}

	resources, err := resource.New(
		context.Background(),
		resource.WithAttributes(
			attribute.String("library.language", "go"),
		),
	)
	if err != nil {
		//log.Print("Could not set resources: ", err)
	}

	otel.SetTracerProvider(
		tracesdk.NewTracerProvider(
			tracesdk.WithSampler(tracesdk.AlwaysSample()),
			tracesdk.WithBatcher(exporter),
			tracesdk.WithResource(resources),
		),
	)
	// Baggage may submit too much sensitive data for production
	otel.SetTextMapPropagator(propagation.NewCompositeTextMapPropagator(propagation.TraceContext{}, propagation.Baggage{}))

	return exporter.Shutdown
}

The actual usage of this in func main() might look something like this

tracerCleanup := InitTracer()
//TODO: I don't think this defer ever runs
defer tracerCleanup(context.Background())

If you use multiple packages, this initialization will persist, as it's configured as the global tracer for the instance.

From there, it's just a matter of using the middleware from the otelgin package:

router.Use(otelgin.Middleware(os.Getenv("OTEL_SERVICE_NAME")))

That is really it. It mostly works out of the box.

Initializing and Using OTEL Metrics

Metrics were a little more difficult. I couldn't find a suitable example online, so I ended up writing my own. It initializes the same way, calling:

meterCleanup := otelmetricsgin.InitMeter()
defer meterCleanup(context.Background())

router.Use(otelmetricsgin.Middleware())

You want this higher up in the middleware chain, because we're starting a timer to capture latency.

Key Notes About My otelmetricsgin

The first thing to note is that this is the quickest and dirtiest middleware I could possibly put together. There are much better and more elegant ways of doing it, but I needed something that worked.

It exports two metrics: http_server_requests_total, the total number of requests, and http_server_request_duration_seconds, the duration in seconds of each request. The latter is a histogram with quite a few tags, making it possible to split by HTTP method, HTTP status code, URI, and the hostname of the node serving the HTTP.

Prometheus-style histograms are out of scope for this article, but perhaps another. In short, they are time-series metrics that are slotted into buckets; in our case, we're slotting them into buckets of response time. Because the default OTEL buckets are poor for latency in seconds (which should almost always be less than 1), I opted to adjust the buckets on this metric to 0.005, 0.01, 0.05, 0.5, 1, and 5, as sketched below.
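My package is quick and dirty, but for reference, overriding histogram buckets in the OTEL Go SDK is done with a View. A hedged sketch of the idea (this mirrors my approach, not my exact code):

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func InitMeter() func(context.Context) error {
	exporter, err := otlpmetricgrpc.New(context.Background())
	if err != nil {
		panic(err)
	}
	// Replace the default buckets on the latency histogram with
	// boundaries suited to sub-second response times.
	view := sdkmetric.NewView(
		sdkmetric.Instrument{Name: "http_server_request_duration_seconds"},
		sdkmetric.Stream{Aggregation: sdkmetric.AggregationExplicitBucketHistogram{
			Boundaries: []float64{0.005, 0.01, 0.05, 0.5, 1, 5},
		}},
	)
	mp := sdkmetric.NewMeterProvider(
		sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exporter)),
		sdkmetric.WithView(view),
	)
	otel.SetMeterProvider(mp)
	return mp.Shutdown
}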

Initializing and Using OTEL Logs

Both the metrics and traces APIs for OTEL in Go are considered stable. Logs, however, are beta, and it shows. It was a bit more complicated to get through, but it is possible!

The first issue is that the default log provider in Go does not have any middleware that supports it. As of Go 1.21, slog (Structured Logging) became available, using JSON format to output rich logging. OTEL doesn't let you call the logging API directly; it provides what it calls bridges so other providers can call it. For this I used the otelslog API bridge. It initializes similarly.

// Imports assumed for this snippet:
import (
	"context"
	"os"

	"go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploggrpc"
	"go.opentelemetry.io/otel/log/global"
	log "go.opentelemetry.io/otel/sdk/log"
)

func InitLog() func(context.Context) error {

	//TODO: Only do cleanup if we're using OTLP
	if os.Getenv("OTEL_EXPORTER_OTLP_ENDPOINT") == "" {
		return func(ctx context.Context) error {
			//log.Print("nil cleanup function - success if this is without OTEL!")
			return nil
		}
	}
	ctx := context.Background()

	exporter, err := otlploggrpc.New(ctx)
	if err != nil {
		panic("failed to initialize exporter")
	}

	// Create the logger provider
	lp := log.NewLoggerProvider(
		log.WithProcessor(
			log.NewBatchProcessor(exporter),
		),
	)

	global.SetLoggerProvider(lp)

	return lp.Shutdown
}

And then the usage:

logger := otelslog.NewLogger(os.Getenv("OTEL_SERVICE_NAME"))

// Health Checks will spam logs, we don't need this
filter := sloggin.IgnorePath("/")

config := sloggin.Config{
	WithRequestID: true,
	Filters:       []sloggin.Filter{filter},
}

router.Use(sloggin.NewWithConfig(logger, config))

From here, we can use the sloggin middleware for Gin to instrument logging on every request with request and response information. An example might look something like this.

DataDog Log & Trace Correlation

In the above screenshot, you see an otel.trace_id and otel.span_id. Unfortunately, DataDog cannot use these directly; it needs a conversion to dd.trace_id and dd.span_id. We needed to override the logger to somehow inject this. That expertise was way beyond my skill set, but I did find someone who could do it and had documented it on their blog. The code did not compile as-is and required some adjusting, along with DD's conversion.

To save people some trouble I published my updated version.

To use it, we import it under a different name to avoid a conflict:

import (
   newslogin "github.com/dchapman992000/otelslog"
)

func main() {
    ....
    // This was our first slog logger
    logger := otelslog.NewLogger(os.Getenv("OTEL_SERVICE_NAME"))
    //This is the new one where we inject our new one into it using the embedded structs and promotions in Go
    logger = newslogin.InitialiseLogging(logger.Handler())
}

You can then see in the screenshot, pulling up the logs, that we have the ability to see the related traces, and it all works!

ChatGPT Features For Beginners

Introduction

Artificial Intelligence (AI) is revolutionizing the way we interact with technology. One of its most user-friendly manifestations is ChatGPT. This AI-driven tool is designed to understand and generate human-like text, providing responses in a conversational manner. Commonly, ChatGPT is employed in customer service, education, and content creation, helping to automate responses and assist with information retrieval.

However, the potential applications of ChatGPT extend far beyond these well-known paths. In this article, we will delve into some of the lesser-known yet fascinating ways that ChatGPT can be utilized across different sectors. Whether you’re completely new to AI or just curious about the capabilities of ChatGPT, join us as we explore its innovative uses that are quietly transforming industries.

Creative Writing and Artistic Assistance

ChatGPT is proving to be a transformative tool in the realm of creative writing and music. It offers innovative ways for artists to push the boundaries of their creativity. Writers facing writer’s block can find in ChatGPT a valuable collaborator. It can suggest plot twists, character developments, and even specific dialogues, serving as a source of inspiration and helping writers navigate through creative stalls.

For musicians, ChatGPT’s potential is equally fascinating. Artists can use it to help compose lyrics or brainstorm musical concepts that align with their personal styles or desired themes. By suggesting rhymes, rhythms, and thematic elements, ChatGPT can introduce musicians to ideas that may not have occurred to them, enriching their compositions and providing a fresh perspective on their work.

Moreover, there have been notable instances where entire novels and musicals have been co-created by humans and AI. These collaborative projects highlight the capability of ChatGPT as a co-creator, not merely a tool, facilitating new forms of expression and innovation in creative fields. This emerging partnership between human artists and artificial intelligence is reshaping the landscape of creative production, promising a new era of artistic expression.

Digital Image Creation with ChatGPT

In the expanding world of digital art, ChatGPT emerges as an innovative tool that assists artists in the creation of digital images. While ChatGPT itself does not generate visual content directly, it aids in the conceptualization and planning stages of digital art creation. Artists can engage with ChatGPT to generate ideas, themes, and detailed descriptions of scenes or characters that they envision for their projects.

By inputting desired emotions, themes, or even abstract concepts, artists can use ChatGPT to flesh out complex ideas that can then be visually interpreted through their preferred digital mediums. For instance, an artist might describe a futuristic cityscape or a serene landscape, and ChatGPT can help expand on these ideas with suggestions for elements that could enhance the scene, such as atmospheric effects, lighting conditions, or architectural styles.

Moreover, ChatGPT can be used to write descriptions for artworks, helping artists articulate the vision and intent behind their creations, which can be particularly useful for presentations or gallery exhibitions. This collaborative process between AI and artists not only streamlines the creative workflow but also opens up new avenues for artistic exploration and expression in the digital age.

Code Review, Analysis and Generation With ChatGPT

ChatGPT is increasingly being utilized by software developers for code review and analysis. This AI tool can be programmed to understand coding patterns and flag potential issues, such as syntax errors, logical mistakes, or inefficient code blocks. By integrating ChatGPT into the code review process, developers can automate preliminary checks, saving time and reducing the cognitive load during manual reviews.

Moreover, ChatGPT can suggest alternative coding methods or optimizations, providing educational insights that can improve a developer’s coding skills. This AI-driven approach not only speeds up the development cycle but also enhances code quality, making it a valuable asset in any developer’s toolkit.

When learning a new language, scouring the internet for boilerplate templates for your needs can be a difficult task. With ChatGPT, you can ask it to generate that for your specific language and use case! Tools like GitHub Copilot integrate nicely with it for autocomplete assistance.

Personalized Learning and Education

ChatGPT stands out as a transformative tool in personalized education, functioning effectively as a tutor tailored to individual learning styles and paces. This adaptability is crucial for educational technologies, particularly those emphasizing customized learning experiences. ChatGPT can dynamically adjust lesson plans based on a student’s responses, ensuring a truly personalized educational journey.

Particularly valuable for language learners, ChatGPT facilitates real-world conversation practice in various languages, helping users grasp complex grammatical structures and idiomatic expressions typically challenging through traditional study methods. This function is not just limited to language acquisition but extends to cultural learning, making the tool an integral part of holistic language education.

Moreover, ChatGPT can help mitigate educational disparities by reaching students in remote areas with limited access to quality tutors or educational resources. Accessible via simple technologies like mobile phones and requiring minimal internet bandwidth, ChatGPT can deliver high-quality, adaptive educational content globally, democratizing learning and potentially transforming educational outcomes in underserved communities.

Conclusion

There are many use cases for AI and ChatGPT, and we are just starting to scratch the surface as it evolves. One common theme is that it is a great resource for creativity. It can help plow through writer's block on one hand; on the other, it can create imagery to go along with content.

This blog was written as a test with ChatGPT's help to see what's possible!

Sprucing up a Site with AI – DALL-E

Summary

I like to tinker. It is the way I learn best. Along with that, I love sharing information and documenting it through various mediums like this blog. One of my shortcomings, however, is that I have no artistic sense.

As I started to dive into generative AI, one of the areas that intrigued me the most was text-to-image. Most of my sites have been extremely bland because I lack any sort of graphic design capability. My recent reading of Artificial Intelligence & Generative AI for Beginners helped me work through an area where AI could help me.

Image Generators

For this project of sprucing up my https://tools.woohoosvcs.com site I used DALL-E, but it is not the only one. There are many, and they are easy to find. It does require the Plus subscription of ChatGPT. The Bing Image Creator is a free version of that, with limits. It uses the same engine, as I understand it.

Prompting

Prompting is key. We want to ensure that we're guiding the AI in the right direction. Perhaps this will not be as necessary in the future, but for now it really helps. Give it a simple prompt like "Generate an image logo to depict an SSL certificate" and DALL-E will get going. The more specific the guidance you can give it, the better, though. This helps ensure uniqueness as well as specificity.

Site

For this, I updated https://tools.woohoosvcs.com, and if you want to see what it used to look like, you can go to https://tools.woohoosvcs.com/old.

For a side-by-side comparison, see the two links above.

Further Down AI Powered Chatbot Rabbit Hole

Summary

In my previous article, Chatbots, AI and Docker!, I talked a little about the theory behind this, but for this article I wanted to go fully down the rabbit hole and produce my own chatbot. To do this, I had to find an updated chatterbot fork, learn a little more Python to handle dependencies better, create my own fork of the corpus/training material, and learn Google Cloud Run. Ultimately, you can skip straight to the source if you like. That's the great part of GitOps/IaC!

And Then Some

Previously, I had a workable local instance that I was able to host in podman/kind, but I wanted to put this in my hosting environment on Google. In order to personalize this, I wanted to be able to add some training data and use some better practices. Having previously used Google App Engine, I assumed that would naturally be the landing place for this. I then ran into some hiccups and came across Cloud Run, which was not originally available and seemed like a suitable fit, as it is built for containerized workflows. It provided me a way to use my existing Dockerfile to unify the build and deploy. For tools, I have a separate build and test workflow in my cloudbuild.yaml.

Get on with Chatbots!

In my last article, I mentioned I had to find a fork of chatterbot because it has not been recently maintained. In reality, though, it only allowed command-line prompting, which is not terribly useful for a wider audience to test. I came across this amazing Medium post, which I have to give full credit to (and do in the HTML as well). The skin is pretty amazing. It also provides a wealth of in-depth details.

For the web framework, I opted to use Flask and gunicorn, which was fairly trivial to get going after finding that great Medium post above.

Training Data

Without any training data, AI/ML does not really exist. It needs to be pre-trained and/or to train "on the job". For this, chatterbot-corpus comes into play. This is a pre-built training data set for the chatterbot library. It has some decent basic training. I wanted to be able to add my own, and based on the input of casmith: it's in Python, so shouldn't it be able to converse in Monty Python quotes? So I did, and created my own section for that.

categories:
- Monty
- humor
conversations:
- - What is your name?
  - My name is Sir Lancelot of Camelot.
- - What is your quest?
  - To seek the Holy Grail.
- - What is the air speed velocity of an unladen swallow?
  - What do you mean? An African or European swallow?
- - How do you know so much about swallows?
  - Well, you have to know these things when you're a king, you know.

I have the real-time training disabled, or rather, my chatterbot is in read-only mode, because the internet can be a cruel place and I don't need my creation coming home with a foul mouth! For my lab, the training is loaded at image creation time. This is primarily because it's using the default SQLite back end. I could easily use a database for this and load the training out of band, so it doesn't require a deploy.
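For illustration, loading the stock corpus plus the custom section follows the standard chatterbot training flow; a sketch (the corpus path is hypothetical, and chatbot is the ChatBot instance shown in the next section):

from chatterbot.trainers import ChatterBotCorpusTrainer

# Train on the stock English corpus plus the custom Monty Python section
trainer = ChatterBotCorpusTrainer(chatbot)
trainer.train(
    "chatterbot.corpus.english",   # stock corpus
    "./corpus/monty.yml",          # hypothetical path to the custom YAML above
)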

Logic Adapters

You may be thinking this is a simple bot that's just doing string matching to figure out how to respond. For the most part, you're correct. This is not deep learning, and it doesn't fully understand what you are asking. With that said, it's very extensible, with multiple logic adapters. The default is a "BestMatch" based on text input. Others allow it to report the time and do math. It will weigh the confidence of the response from each adapter and let the highest-scoring response win. Pretty neat!

from chatterbot import ChatBot

chatbot = ChatBot(
    "Sure, Not",
    logic_adapters=[
        'chatterbot.logic.BestMatch',
        'chatterbot.logic.MathematicalEvaluation',
        'chatterbot.logic.TimeLogicAdapter'
    ],
    read_only=True
)

Over To The Infrastructure

For all of this, it starts with a Dockerfile. I already had this, but it was a little bloated with build dependencies. Therefore, I created a multi-stage image using a virtual Python environment, as guided by https://pythonspeed.com/articles/multi-stage-docker-python. I am not new to multi-stage images; my Go images use them. I was, however, new to doing this with Python. Not only did it reduce my image size by 100MB, it also removed 30 vulnerabilities from the image, because of a dependency on git for some of the Python libraries.

Cloud Run

Getting deployed to Cloud Run was pretty simple, although there was a bit of trial and error due to permissions. The service account needed Cloud Run Admin access. Aside from that, this pumped everything through and let me keep my singular Dockerfile.

steps:
  # Docker Build
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 
           'us.gcr.io/${PROJECT_ID}/chatbot:${SHORT_SHA}', '.']

  # Docker push to Google Artifact Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push',  'us.gcr.io/${PROJECT_ID}/chatbot:${SHORT_SHA}']

  # Deploy to Cloud Run
  - name: google/cloud-sdk
    args: ['gcloud', 'run', 'deploy', 'chatbot', 
           '--image=us.gcr.io/${PROJECT_ID}/chatbot:${SHORT_SHA}', 
           '--region', 'us-central1', '--platform', 'managed', 
           '--allow-unauthenticated', '--port', '5000', '--memory', '256Mi',
           '--no-cpu-boost']

# Store images in Google Artifact Registry 
images:
  - us.gcr.io/${PROJECT_ID}/chatbot:${SHORT_SHA}

It really was this simple since I had a working local environment and working Dockerfile. Just don’t look at my commit history 🙂 Quite a few silly mistakes were made if you look deep enough.

Caveat

Google App Engine lets you use custom domain mapping and bring your own certificates. I use Cloudflare to protect my entire environment, and for GAE I installed a Cloudflare Origin certificate to help prevent it from being accessed by the outside world, as no browser would trust that certificate when bypassing Cloudflare.

Google Cloud Run has a preview feature for custom domain mapping. The easiest of the options doesn't support custom certificates and instead wants to issue you a certificate. The temporary workaround is to not proxy through Cloudflare until the certificate is issued, and then turn on proxying. Rinse and repeat yearly when the cert needs to be renewed.

I have to imagine this will be rectified once it is out of preview, reaching feature parity with Google App Engine, since it seems Cloud Run intends to replace GAE.

Credits

For Multi-stage help with Python Docker Images – https://pythonspeed.com/articles/multi-stage-docker-python

For the entire UI of this demo/test – https://medium.com/@kumaramanjha2901/building-a-chatbot-in-python-using-chatterbot-and-deploying-it-on-web-7a66871e1d9b

Chatbots, AI and Docker!

Summary

I have started my learning journey about AI. With that I started reading Artificial Intelligence & Generative AI for Beginners. One of the use cases it went through for NLP (Natural Language Processing) was Chatbots.

To the internet I went, ready to go down a rabbit hole, and came across a Python library called ChatterBot. I knew I did not want to bloat and taint my local environment, so I started using a Docker instance in Podman.

Down the Rabbit Hole

I quickly realized the project has not been actively maintained in a number of years and had some specific, dated dependencies. For example, it seemed to do best with Python 3.6, whereas the latest at the time of this writing is 3.12.

This is where Docker shines, though. It is really easy to find older images and declare which versions you want. The syntax of a Dockerfile is such that you can specify the image and layer the commands you want to run on it. It will work every time, no matter where it is deployed.

I eventually found a somewhat updated fork of it here, which simplified things, but it still had its nuances. chatterbot-corpus (the training data) required PyYAML 3.13, but to get this to work it needed 5.x.

Dockerfile

FROM python:3.6-slim

WORKDIR /usr/src/app

#COPY requirements.txt ./
#RUN pip install --no-cache-dir -r requirements.txt
RUN pip install spacy==2.2.4
RUN pip install pytz pyyaml chatterbot_corpus
RUN python -m spacy download en

RUN pip install --no-cache-dir chatterbot==1.0.8

COPY ./chatter.py .

CMD [ "python", "./chatter.py" ]

Here we can see I needed a specific version of Python (3.6), whereas at the time of writing the latest is 3.12. It also required a specific spacy package version. With this, I have a repeatable environment that I can reproduce and share (with peers or even production!).

Dockerfile2

Just for grins, when I was able to use the updated fork, it did not take much!

FROM python:3.12-slim

WORKDIR /usr/src/app

#COPY requirements.txt ./
#RUN pip install --no-cache-dir -r requirements.txt
RUN pip install spacy==3.7.4 --only-binary=:all:
RUN python -m spacy download en_core_web_sm

RUN apt-get update && apt-get install -y git
RUN pip install git+https://github.com/ShoneGK/ChatterPy

RUN pip install chatterbot-corpus

RUN pip uninstall -y PyYaml
RUN pip install --upgrade PyYaml

COPY ./chatter.py .

CMD [ "python", "./chatter.py" ]

Without Docker

Without Docker (Podman), I would have tainted my local environment with many different dependencies. At the point of getting it all working, I couldn't be sure it would work properly on another machine. Even if it did, was their environment tainted as well? With Docker, I knew I could easily repeat the process from a fresh image to validate.

Previous Python-related projects I worked on could have also tainted my local environment, causing unexpected results on other machines or excessive hours troubleshooting something unique to my machine. All of that is avoided with a container image!

Declarative Version Management

When it becomes time to update to the next version of Python, it will be a really easy feat. Many tools will even parse these types of files and do dependency management, like Dependabot or Snyk.

Mozilla SOPS To Protect My cloudflared Secrets In Kubernetes

Summary

Aren't these titles getting ridiculous? When talking about some of these stacks, you need a laundry list of names to drop. In this case, I was working on publishing my CloudFlare Tunnels FTW work, which houses my kind lab, into my public GitHub repository. I wanted to tie FluxCD into it and essentially be able to easily blow away the cluster and recreate it, with secrets, all through FluxCD.

I was able to successfully achieve that with everything but the private key, which needs to be manually loaded into the cluster so it can decrypt the sensitive information.

Why Do We Care About This?

While trying to go fully GitOps for Kubernetes, everything is stored in a Git repository. This makes change management extremely simple and reduces the complexities of compliance. Things like policy bots can automate and document change approval processes. But generally, everything in Git is clear text.

Sure, there are private repositories, but do all the developers that work on the project need to read sensitive records like passwords for that project? It's best that they don't, and as a developer you really don't want that responsibility!

Mozilla SOPS To The Rescue!

Mozilla SOPS is very well documented. In my case, I'm using Flux, which also has great documentation. For my lab, this work focuses on "cluster3", which simply deploys my https://www.woohoosvcs.com and https://tools.woohoosvcs.com in my kind lab for local testing before pushing out to production.

Create Key with Age

Age appears to be the preferred encryption tool to use right now. It is pretty simple to use, and going by the Flux documentation, we simply need to run:

age-keygen -o age.agekey

This will create a file that contains both the public and private key. The public key appears in a comment in the file, and the command also outputs it to the console. We will need the private key later, to add manually as a secret for decryption. I'm sure there are ways of getting this into the cluster securely, but for this blog article, this is the only thing done outside of GitOps.
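For reference, the generated age.agekey file looks roughly like this (the keys shown are placeholders, not real keys):

# created: 2024-01-01T00:00:00-06:00
# public key: age1examplepublickeyplaceholder
AGE-SECRET-KEY-1EXAMPLEPRIVATEKEYPLACEHOLDER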

Let’s Get To the Details!

With Flux, I have a bootstrap script to load Flux into the environment. I also have a generate_cluster3.sh script that creates the YAML.

The pertinent lines to add, beyond the standard ones, are the following. The first line indicates that SOPS is a decryption provider. The second indicates the name of the secret where the key is stored. Flux requires this to be in the flux-system namespace.

    --decryption-provider=sops \
    --decryption-secret=sops-age \

From there, you simply need to run bootstrap_cluster3.sh, which just loads the YAML manifests for Flux. With Flux you can do this on the command line, but I preferred to have this generation and bootstrapping in Git. When you want to upgrade Flux, there's also an upgrade_cluster3.sh script that is really a one-liner.

flux install --export > ./clusters/cluster3/flux-system/gotk-components.yaml

This will update the components. If you're already bootstrapped and running Flux, you can run this and commit the change to push out the upgrades, using Flux to upgrade itself!

In the root of the cluster3 folder I have .sops.yaml. This tells the kustomization module in Flux what to decrypt and which public key to use. A sketch of such a file follows.
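A sketch of that .sops.yaml, following the Flux documentation (the age recipient is a placeholder for your own public key):

creation_rules:
  - path_regex: .*.yaml
    encrypted_regex: ^(data|stringData)$
    age: age1examplepublickeyplaceholder # your age public key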

Loading Private Key Via Secret

Once you have run the bootstrap_cluster3.sh you can then load the private key via

cat age.agekey | kubectl create secret generic sops-age \
  --namespace=flux-system --from-file=age.agekey=/dev/stdin

Caveat

This lab won't work for you out of the box, because it requires a few confidential details:

  1. My cloudflared secret is encrypted with my public key. You do not have my private key, so you cannot load it into your cluster to decrypt it.
  2. I have some private applications I am pushing into my kind cluster. You will have to clone and modify them for your needs.

CloudFlare Tunnels FTW

Summary

CloudFlare provides secure tunnels between your web application and the CloudFlare network for private and direct access. There are a multitude of use cases for this. The nice part is that tunnels are available in all tiers (including free).

Use Cases

The main use case for this is least-privilege security. Without tunnels, the common pattern with CloudFlare is to add ACLs at your edge allowing inbound connections from CloudFlare. With Tunnels, you run an appliance or daemon/service internally that creates an outbound tunnel to CloudFlare for your web applications. Worst case, this means only allowing egress traffic; best case, only opening FQDN-based whitelists on specific ports to CloudFlare's network so the tunnel can negotiate. In essence, you allow only the specific outbound connections needed to support the applications.

An interesting secondary use case is self-hosting your web application. Years ago, if you wanted to self-host something at home, you would have to either ask your ISP for a static IP or use a Dynamic DNS provider that would constantly update your DNS with your IP. With CloudFlare Tunnels, once configured, the tunnel will come up regardless of your location or IP address. This is great for self-hosting at home (when you can't afford a cloud provider and want to reuse some equipment) or even having a local lab that you want to share out to friends for testing.

Technical Setup

There are other articles that walk through the setup, and it really depends on your implementation, but I will share a few links covering what I did to set up a Kubernetes lab with CloudFlare Tunnels, exposing my local lab (running podman + kind + Kubernetes with a custom app I wrote) onto the internet.

This is a great tutorial by CloudFlare on the steps – https://developers.cloudflare.com/cloudflare-one/tutorials/many-cfd-one-tunnel/

Some of the dependencies for it are to:

  1. Have a working Kubernetes cluster. For a quick lab, I highly recommend kind+podman, but many use minikube.
  2. Have a local cloudflared that you can run to set up the tunnel and access token.

One of the tweaks I had to make, though, is that the cloudflared manifest is a bit dated. As of the time of this writing, I made the following changes:

#Image was set to 2022.3.0 which did not even start
image: cloudflare/cloudflared:2024.3.0

# Reduce replicas - this was a lab with a single node!
replicas: 1

# Update the ingress section.
# This maps the CloudFlare proxied address to a kubernetes service address.
# Since cloudflared runs in the cluster, it will use K8s DNS to resolve it
    - hostname: k8s-fcos-macos.woohoosvcs.com
      service: http://tools-service:80

Don’t forget to import the secret from the CloudFlare instructions!

If set up properly, you'll see success!

More Than Tunnels

This is just the beginning, and just a piece of the full Zero Trust offering by CloudFlare. It is a bit out of scope for this article, but the nice part about CloudFlare is that a lot of it is set-it-and-forget-it: once it's configured properly, you let them manage it.

Conclusion

Whether you are a large enterprise needing full Zero Trust or just a startup hosting a few servers out of your garage off your home internet, CloudFlare has tiers and offerings that can meet your budget. It's a great tool that I have used for this site and my https://tools.woohoosvcs.com/ for a number of years.

Local Kubernetes Lab – The Easy Way With podman and kind

Summary

In a few articles, I’ve shared how to provision your own local Kubernetes (K8s) lab using VMs. Over the years this has simplified from Intro To Kubernetes to Spinning Up Kubernetes Easily with kubeadm init!. With modern tools, many of us do not need to manage our own VMs.

These days there are many options but one of my favorites is kind. To use kind, you need either Docker Desktop or Podman Desktop.

Podman is quickly becoming a favorite because, instead of the monolithic architecture of Docker Desktop, Podman is broken out a bit, and its licensing may be more agreeable to people.

Installation

Installing Podman is relatively easy. Just navigate to https://podman.io and download the installer for your platform. Once installed, you will need to provision a Podman machine. The back end depends on your platform: Windows will use WSL2 if available; macOS will use a qemu VM. This is nicely abstracted.

Installing kind is very easy. Depending on your platform, you may opt for a package manager. I use brew, so it's a simple:

brew install kind

The site https://kind.sigs.k8s.io/docs/user/quick-start/#installation goes through all the options.

Provisioning the Cluster

Once these two dependencies are in play, it's a simple case of:

kind create cluster

The defaults are enough to get you a control plane that is able to schedule workloads. There are custom options, such as a specific K8s version, and multiple control planes and worker nodes to more properly lab up a production environment; a sketch of such a config follows.
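As a sketch (the node counts and cluster name are illustrative), a multi-node config and its usage might look like:

# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker

kind create cluster --name=lab --config kind-config.yaml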


From here we should be good to go. Kind will provision the node and set up your KUBECONFIG to select the cluster.

As a quick validation, we'll run the recommended command (for a default cluster, that's kubectl cluster-info --context kind-kind).

We can see success, and we're able to apply manifests as we would expect. We have a working K8s instance for local testing.

Extending Functionality

In IaC & GitOps – Better Together we talked about GitOps. From here we can apply that GitOps repo if we wanted to.

We can clone the specific tag for this post and run the bootstrap

git clone https://github.com/dchapman992000/k8-fcos-gitops.git --branch postid_838 --single-branch

cd k8-fcos-gitops

./bootstrap_cluster1.sh

# We can check the status with the following until ingress-nginx is ready
kubectl get pods -A

# From there, check helm
helm ls -A

And here we are: a local testing cluster that we stood up, controlled by GitOps. Once Flux was bootstrapped, it pulled down and installed the nginx ingress controller.