Geeky Blinder: Another Tech Person With A Blog

Google Cloud Scheduler in Terraform

(and in the console :D)

Geeky Blinder 2024-12-09


Cloud Scheduler with Terraform offers a powerful way to automate and manage scheduled tasks in Google Cloud, allowing developers to create, monitor, and maintain cron jobs using infrastructure-as-code principles.

Setting Up Cloud Scheduler

To set up Cloud Scheduler using Terraform, begin by creating a Google Cloud project and enabling the necessary APIs, such as Cloud Scheduler and Pub/Sub. Install Terraform and the Google Cloud CLI on your local machine. Then, create a Terraform configuration file that defines your Cloud Scheduler job, specifying details like the job name, description, schedule, and target. For example:

resource "google_pubsub_topic" "default" {
  name = "test-topic"
}

resource "google_cloud_scheduler_job" "default" {
  name        = "test-job"
  description = "Automated scheduled task"
  schedule    = "30 16 * * 0"  # Runs at 16:30 every Sunday (day-of-week 0 = Sunday)
  region      = "us-central1"

  pubsub_target {
    topic_name = google_pubsub_topic.default.id
    data       = base64encode("Hello world!")
  }
}

This configuration creates a job that runs weekly on Sundays at 16:30, publishing a message to a specified Pub/Sub topic. Ensure that you’ve properly set up your Google Cloud credentials and project details in your Terraform configuration to enable smooth deployment and management of your scheduled jobs.
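Enabling the required APIs can also live in Terraform rather than being clicked through the console. A sketch using the google_project_service resource (service names as Google publishes them; the project comes from the provider configuration):

```hcl
# Enable the APIs the scheduler setup depends on
resource "google_project_service" "scheduler" {
  service = "cloudscheduler.googleapis.com"
}

resource "google_project_service" "pubsub" {
  service = "pubsub.googleapis.com"
}
```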

Monitoring Scheduled Jobs

Effective monitoring of Cloud Scheduler jobs is crucial for ensuring their reliability and performance. Google Cloud’s native logging capabilities provide insights into job execution status and performance metrics. To track job outcomes, use the following command to pull messages from the Pub/Sub subscription:

gcloud pubsub subscriptions pull pubsub_subscription --limit 5
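The data you pull back corresponds to the payload the Terraform example base64-encoded. Terraform's base64encode produces standard base64, the same as Python's base64 module — a quick illustration, not anything Google-specific:

```python
import base64

# Standard base64, as produced by Terraform's base64encode("Hello world!")
payload = base64.b64encode(b"Hello world!").decode()
print(payload)  # SGVsbG8gd29ybGQh

# A subscriber (or you, inspecting raw message data) decodes it back
decoded = base64.b64decode(payload).decode()
print(decoded)  # Hello world!
```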

For more comprehensive monitoring:

  • Implement Cloud Monitoring alerts for job failures
  • Set up notification channels (Stackdriver, now part of Cloud Operations) for real-time updates
  • Create custom dashboards to visualize job execution metrics
  • Use detailed logging in scheduled tasks for easier troubleshooting
  • Consider implementing error handling and retry mechanisms for improved resilience

These strategies enable proactive management of scheduled jobs, allowing for quick identification and resolution of any issues that may arise.
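As a sketch of the first bullet, the Cloud Monitoring alert policy can itself be managed in Terraform. The google_monitoring_alert_policy resource is real, but the filter and threshold below are assumptions you would tune to your own jobs:

```hcl
# Sketch: alert when a Cloud Scheduler job emits log entries
# (filter and threshold are illustrative, not a known-good recipe)
resource "google_monitoring_alert_policy" "scheduler_failures" {
  display_name = "Cloud Scheduler job failures"
  combiner     = "OR"

  conditions {
    display_name = "Failed job attempts"
    condition_threshold {
      filter          = "resource.type = \"cloud_scheduler_job\" AND metric.type = \"logging.googleapis.com/log_entry_count\""
      comparison      = "COMPARISON_GT"
      threshold_value = 0
      duration        = "300s"
      aggregations {
        alignment_period   = "300s"
        per_series_aligner = "ALIGN_SUM"
      }
    }
  }
}
```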

Terraform Execution Workflow

To implement Cloud Scheduler jobs using Terraform, follow these key steps:

  • Initialize Terraform with terraform init to set up the working directory
  • Preview changes with terraform plan to ensure the configuration is correct
  • Apply the configuration using terraform apply to create or update resources

Once deployed, you can manually trigger a job if needed using the command:

gcloud scheduler jobs run test-job --location=us-central1

This workflow allows for version-controlled, reproducible infrastructure management, ensuring consistency across environments and simplifying the deployment process for scheduled tasks in Google Cloud.

Best Practices and Challenges

When implementing Cloud Scheduler with Terraform, it’s crucial to follow best practices such as using base64 encoding for message payloads, implementing comprehensive error logging, and adhering to the least privilege principle for service accounts. Regularly auditing and rotating credentials is also essential for maintaining security. However, challenges may arise, including initial configuration complexity, proper error handling, and ensuring idempotent job designs. To mitigate these issues, consider implementing detailed logging in scheduled tasks and setting up error handling and retry mechanisms for improved resilience.
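For the least-privilege point: with a Pub/Sub target, it is Cloud Scheduler's service agent that publishes, so you can grant the publisher role on just that one topic rather than project-wide. A hedged sketch — the member address follows Google's service-PROJECT_NUMBER@gcp-sa-cloudscheduler.iam.gserviceaccount.com pattern, and var.project_number is assumed to be defined elsewhere:

```hcl
# Grant the Cloud Scheduler service agent publish rights on one topic only
resource "google_pubsub_topic_iam_member" "scheduler_publisher" {
  topic  = google_pubsub_topic.default.id
  role   = "roles/pubsub.publisher"
  member = "serviceAccount:service-${var.project_number}@gcp-sa-cloudscheduler.iam.gserviceaccount.com"
}
```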

Error Handling and Retry Strategies

Error handling and retry strategies are crucial for ensuring the reliability and resilience of Cloud Scheduler jobs. When configuring error handling for tasks in Google Cloud Application Integration, you can specify different strategies for synchronous and asynchronous executions. For asynchronous executions, a common approach is to implement a retry mechanism with linear backoff.

To enhance error handling in Cloud Scheduler jobs:

  • Configure retry conditions based on specific error codes (e.g., $ErrorInfo.code$ = 404) (https://cloud.google.com/application-integration/docs/error-handling-strategy)
  • Implement exponential backoff for retries to avoid overwhelming systems (https://www.googlecloudcommunity.com/gc/Data-Analytics/How-to-Set-Dataform-Retry-Mechanism-with-Native-Workflow/m-p/756881)
  • Set appropriate retry limits and maximum retry durations (https://cloud.google.com/application-integration/docs/error-handling-strategy)
  • Use Cloud Functions in conjunction with Cloud Scheduler to implement custom retry logic and error handling (https://www.googlecloudcommunity.com/gc/Data-Analytics/How-to-Set-Dataform-Retry-Mechanism-with-Native-Workflow/m-p/756881)
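The exponential backoff idea in the bullets above can be sketched in a few lines. This is a minimal Python illustration — the function name and parameters are my own, not any Google API:

```python
import random


def backoff_delays(max_retries=5, base=1.0, cap=60.0, jitter=False):
    """Exponential backoff: base * 2**attempt, capped at `cap` seconds.

    With jitter=True, each delay is randomized in [0, delay] to avoid
    synchronized retry storms across many clients.
    """
    delays = []
    for attempt in range(max_retries):
        delay = min(cap, base * (2 ** attempt))
        if jitter:
            delay = random.uniform(0, delay)
        delays.append(delay)
    return delays


print(backoff_delays(4))  # [1.0, 2.0, 4.0, 8.0]
```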

Creating a Cloud Scheduler Job in Google Cloud Console

Prerequisites

  • Ensure you have a Google Cloud project created
  • Enable the Cloud Scheduler API
  • Have a target endpoint or Pub/Sub topic ready

Step-by-Step Process

  1. Access Cloud Scheduler
  • Navigate to the Google Cloud Console
  • Go to the Cloud Scheduler section
  • Click “Create Job”
  2. Configure Job Basics
  • Enter a unique job name
  • Select a region for deployment
  • Optionally add a description
  • Define the schedule using a cron expression (e.g., 0 1 * * * for daily at 1 AM)
  3. Select Target Type
  Choose from three primary target types:
  • HTTP/HTTPS endpoint
  • Pub/Sub topic
  • App Engine
  4. Configure Execution Details
  • For HTTP: Specify URL, method, and optional payload
  • For Pub/Sub: Select existing topic
  • Set optional retry and error handling parameters
  5. Review and Create
  • Verify all configuration details
  • Click “Create” to deploy the scheduled job

Pro Tips

  • Use precise cron expressions
  • Implement robust error handling
  • Leverage Cloud Monitoring for job tracking

Explanation


Diagram 1: Cloud Scheduler Components

This diagram illustrates the components involved in a Cloud Scheduler setup and their interactions:

  • Cloud Scheduler: The core service that triggers tasks at specified intervals based on a cron schedule.
  • Target Service: The endpoint or service that receives the trigger from Cloud Scheduler.
  • Optional Components:
    • Pub/Sub: Messages can be published to Pub/Sub topics as part of the job.
    • HTTP Endpoint: HTTP/S endpoints can be targeted to execute specific tasks.
    • App Engine: App Engine services can also be triggered.

How It Works:

  1. Cloud Scheduler initiates tasks according to the defined schedule.
  2. It sends requests to the target service, which could be:
    • A Pub/Sub topic for messaging.
    • An HTTP endpoint for triggering APIs.
    • An App Engine service for running applications.

Diagram 2: Cloud Scheduler Interaction Flowchart

This flowchart explains the workflow of a Cloud Scheduler job with error handling and monitoring:

  • Cloud Scheduler triggers a task based on the schedule.
  • The task is sent to the Target Service, which processes it.
  • If an error occurs, it is sent to an Error Handling mechanism (e.g., retries, logging).
  • Job execution and errors are tracked in Monitoring, providing visibility into performance and failures.

How It Works:

  1. Cloud Scheduler triggers the task and sends it to the target service.
  2. If errors occur, they are logged and retried based on configured policies (e.g., retry intervals or limits).
  3. Monitoring systems (e.g., Cloud Monitoring) collect logs and metrics for tracking job health and performance.

Key Takeaways

  1. Cloud Scheduler acts as the orchestrator, sending triggers to various services.
  2. Optional components like Pub/Sub or HTTP endpoints allow flexibility in task execution.
  3. Error handling ensures reliability, while monitoring provides visibility into job performance.

Service Mesh DevOps Training!

Here's one I prepared earlier

Geeky Blinder 2024-11-25

A 5-Week Training Plan I wrote for learning Service Mesh, Kubernetes, and related technologies. I hope you find it of use!! It's a bit ugly, but here's a PDF Download File

Content

Week 1: Fundamentals and Kubernetes

Day 1-2: Kubernetes Basics and Local Development Environments

Day 3-4: Advanced Kubernetes

Day 5: Working with local K8s options

Week 2: Service Mesh Concepts and Python

Day 1-2: Service Mesh

Day 3-4: Python for Kubernetes

Day 5: Helm Basics

Week 3: Istio Deep Dive

Day 1: Istio Basics

Day 2: Istio Traffic Management

Day 3: Istio Security and Observability

Day 4-5: Deploying a Sample Application with Istio

Week 4: Linkerd and Practical Applications

Day 1: Linkerd Basics

Day 2-4: Hands-on Exercise

Day 5: Service Mesh Comparison

Week 5: Practical Project

Designing and implementing a microservices application

Deploying the application using Helm

Implementing service mesh features

Creating Python scripts for automation

Additional Resources and Best Practices

Tips for Successful Service Mesh Adoption

Tools

This document is meant to be a central springboard: it outlines the points to cover, but expects the reader to use external resources to dig deeper into each topic and subject.

Week 1: Fundamentals and Kubernetes

Day 1-2: Kubernetes Basics and Local Development Environments

Kubernetes Architecture and Core Concepts

Kubernetes is a powerful container orchestration platform that manages containerized applications across multiple hosts. Its architecture consists of two main components: the control plane and worker nodes (source: kubernetes.io).

+------------------------+  +---------------------+
|      Control Plane     |  |    Worker Nodes     |
|                        |  |                     |
| +--------------------+ |  | +-----------------+ |
| |   kube-apiserver   | |  | |     kubelet     | |
| +--------------------+ |  | +-----------------+ |
| |        etcd        | |  | |   kube-proxy    | |
| +--------------------+ |  | +-----------------+ |
| |      scheduler     | |  | |   Container     | |
| +--------------------+ |  | |    Runtime      | |
| | controller manager | |  | +-----------------+ |
| +--------------------+ |  |                     |
|                        |  | (Multiple nodes)    |
+------------------------+  +---------------------+
Control Plane Components

kube-apiserver: The API server is the front-end for the Kubernetes control plane. It exposes the Kubernetes API and handles all administrative operations.

etcd: A consistent and highly-available key-value store used as Kubernetes' backing store for all cluster data.

kube-scheduler: Responsible for assigning newly created pods to nodes based on resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, and more.

kube-controller-manager: Runs controller processes that regulate the state of the system. These controllers include the node controller, replication controller, endpoints controller, and service account & token controllers.

cloud-controller-manager: (Optional) Integrates with underlying cloud providers.

+----------------------------------------------------+
|                     Control Plane                  |
|                                                    |
|  +-----------------+  +-------------------------+  |
|  | kube-apiserver  |  |        scheduler        |  |
|  | (API Gateway)   |  | (Assigns Pods to Nodes) |  |
|  +-----------------+  +-------------------------+  |
|                                                    |
|  +-----------------+  +-------------------------+  |
|  |      etcd       |  |    controller manager   |  |
|  | (Cluster State  |  |(Maintains Desired State)|  |
|  |   Database)     |  |                         |  |
|  +-----------------+  +-------------------------+  |
+----------------------------------------------------+
Node Components

kubelet: An agent that runs on each node, ensuring containers are running in a Pod.

kube-proxy: Maintains network rules on nodes, implementing part of the Kubernetes Service concept.

Container runtime: Software responsible for running containers (e.g., Docker, containerd, CRI-O).

Pods: The smallest deployable units in Kubernetes, consisting of one or more containers

+-----------------------------------------------+
|                  Worker Node                  |
|  +-----------------+   +-------------------+  |
|  |      kubelet    |   |     kube-proxy    |  |
|  |   (Node Agent)  |   |  (Network Proxy)  |  |
|  +-----------------+   +-------------------+  |
|                                               |
|  +-----------------------------------------+  |
|  |           Container Runtime             |  |
|  |       (e.g., Docker, containerd)        |  |
|  +-----------------------------------------+  |
|                                               |
|  +-----------------------------------------+  |
|  |                   Pods                  |  |
|  |  +---------+  +---------+  +---------+  |  |
|  |  |Container|  |Container|  |Container|  |  |
|  |  +---------+  +---------+  +---------+  |  |
|  +-----------------------------------------+  |
+-----------------------------------------------+
Core Concepts

Pods: The smallest deployable units in Kubernetes, consisting of one or more containers.

Services: An abstraction that defines a logical set of Pods and a policy by which to access them.

Deployments: Provide declarative updates for Pods and ReplicaSets.

Namespaces: Virtual clusters backed by the same physical cluster, providing a way to divide cluster resources between multiple users.

Additional Components

These components include the Dashboard (a web-based UI), cluster-level logging, container resource monitoring, and network plugins.

+--------------------------------------------------+
|                Additional Components             |
|                                                  |
|  +-----------------+  +-----------------------+  |
|  |    Dashboard    |  | Cluster-level Logging |  |
|  |    (Web UI)     |  |     (Centralized      |  |
|  +-----------------+  |      Log Storage)     |  |
|                       +-----------------------+  |
|                                                  |
|  +-----------------------+  +-----------------+  |
|  |       Monitoring      |  | Network Plugins |  |
|  | (Resource Monitoring) |  | (Implement CNI) |  |
|  +-----------------------+  +-----------------+  |
+--------------------------------------------------+

Local Kubernetes Development Options

kind (Kubernetes in Docker)

kind is a tool for running local Kubernetes clusters using Docker container "nodes". It's designed for testing Kubernetes itself, but can be used for local development or CI.

Installation
`go install sigs.k8s.io/kind@v0.24.0`

# Or for macOS users
`brew install kind`

Creating a cluster
`kind create cluster`

Advantages of kind:

  • Lightweight and fast to start up, making it ideal for rapid development cycles.
  • Supports multi-node clusters, allowing you to simulate more complex environments.
  • Runs Kubernetes inside Docker containers, which is efficient and consistent across different host systems.
  • Ideal for testing and CI/CD pipelines due to its speed and reproducibility
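The multi-node bullet above deserves a concrete example: kind takes a small cluster config file. A sketch (the file name is arbitrary):

```yaml
# kind-multinode.yaml -- one control-plane node plus two workers
# Create with: kind create cluster --config kind-multinode.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```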
Minikube

Minikube is a tool that makes it easy to run Kubernetes locally. It runs a single-node Kubernetes cluster inside a VM on your laptop.

Installation

# For macOS
brew install minikube

For other systems, refer to the official documentation.

Starting a cluster

minikube start

Advantages of Minikube:

  • More established and feature-rich, with a large community and extensive documentation.
  • Supports multiple hypervisors (VirtualBox, HyperKit, etc.), allowing flexibility in your local setup.
  • Provides built-in addons for common services, making it easy to enable additional functionality.
  • Offers a dashboard for visual management of your cluster.

Practice with Basic Kubernetes Resources

To solidify your understanding, practice creating and managing these basic Kubernetes resources in both kind and Minikube environments:

  • Pods: The smallest deployable units in Kubernetes.
  • Deployments: Manage the deployment and scaling of a set of Pods.
  • Services: Expose your application to network traffic.

Example commands:

# Create a deployment
kubectl create deployment nginx --image=nginx

# Expose the deployment as a service
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# List pods
kubectl get pods

# List services
kubectl get services

By thoroughly understanding these concepts and practicing with both kind and Minikube, you'll build a solid foundation for working with Kubernetes in various environments.

You will need to look up how to view the nginx page on your localhost (eg: search "minikube external ip expose command"; running `minikube service nginx` is one way to open it).

You will ultimately see the nginx default banner


Day 3-4: Advanced Kubernetes

ConfigMaps, Secrets, and Volumes

+----------------------------------------------------+
|                       Pod                          |
|                                                    |
|  +---------------+   +--------------------------+  |
|  |   Container   |   |     Volume Mounts        |  |
|  | (Application) |   | /etc/config -> ConfigMap |  |
|  |               |   | /etc/secrets -> Secret   |  |
|  +---------------+   +--------------------------+  |
|                                                    |
|  +---------------------+     +------------------+  |
|  |     Environment     |     |     ConfigMap    |  |
|  |      Variables      |     |                  |  |
|  |   (from ConfigMap   |     |   key1: value1   |  |
|  |     and Secret)     |     |   key2: value2   |  |
|  +---------------------+     +------------------+  |
|                                                    |
|         +----------------------------------+       |
|         |             Secret               |       |
|         |      username: base64(user)      |       |
|         |      password: base64(pass)      |       |
|         +----------------------------------+       |
+----------------------------------------------------+
ConfigMaps https://kubernetes.io/docs/concepts/configuration/configmap/
  • Used to store non-confidential data in key-value pairs.
  • Can be consumed as environment variables, command-line arguments, or configuration files in a volume.
  • Example creation:
    kubectl create configmap name --from-literal=name='{"first":"John", "second": "Doe"}'
  • Example extract
    kubectl get configmap name -o jsonpath='{.data.name}' or kubectl get configmap name -o json | jq -r '.data.name' | jq -r .first
Secrets https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/
  • Similar to ConfigMaps but intended for confidential data.
  • Base64 encoded by default (not encrypted).
  • Can be mounted as files or exposed as environment variables.
  • Example creation:
    kubectl create secret generic user-pass --from-literal=username=john --from-literal=password=s3cr3t
  • Example extract:
    kubectl get secrets user-pass -o json | jq -r .data.password | base64 -D (use base64 -d on Linux)
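As the bullets above note, Secret values are base64-encoded, not encrypted — anyone who can read the Secret can recover the plaintext. A quick Python illustration of exactly what that base64 decode step undoes:

```python
import base64

# Kubernetes stores Secret values base64-encoded: an encoding, not encryption
encoded = base64.b64encode(b"s3cr3t").decode()
print(encoded)  # czNjcjN0

# Anyone with read access to the Secret can trivially reverse it
plaintext = base64.b64decode(encoded).decode()
print(plaintext)  # s3cr3t
```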
Volumes https://kubernetes.io/docs/concepts/storage/volumes/
  • Provide persistent storage for pods.
  • Types include emptyDir, hostPath, nfs, and cloud provider-specific options.
  • PersistentVolumes (PV) and PersistentVolumeClaims (PVC) provide a way to use storage resources in a pod-independent manner.
  • Example
    Create a configmap to hold your var
    kubectl create configmap config-vol --from-literal=log_level=debug
    Now create a pod with a running container that mounts the configmap as a var
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
    - name: test
      image: busybox:1.28
      command: ['sh', '-c', 'echo "The app is running!" && tail -f /dev/null']
      volumeMounts:
        - name: config-vol
          mountPath: /etc/config
  volumes:
    - name: config-vol
      configMap:
        name: config-vol # matches the ConfigMap created above
        items:
          - key: log_level
            path: log_level
EOF

Run a command to extract the var held at this point:
kubectl exec -it configmap-pod -- cat /etc/config/log_level
OR
Exec into the container:
kubectl exec -it configmap-pod -- sh

Here you can navigate to the location:
cd /etc/config
ls    # here you should see log_level
cat log_level
The value prints without a trailing newline, so it runs straight into the prompt (e.g. debug/etc/config $).
To give a clean output:
cat log_level ; echo

This could easily be a static volume location as opposed to a configmap
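To make the "static volume" remark concrete, here is a minimal PersistentVolume/PersistentVolumeClaim pair — the names, size, and hostPath are illustrative:

```yaml
# A hostPath PersistentVolume and a claim that can bind to it
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /tmp/demo-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```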

+----------------------------------------------------+
|                       Node                         |
|                                                    |
|  +-------------------+  +-----------------------+  |
|  |         Pod       |  | Persistent Volume     |  |
|  |                   |  |                       |  |
|  |  +-------------+  |  | (Network File System, |  |
|  |  |  Container  |  |  | /Volume Mount,        |  |
|  |  +-------------+  |  | Cloud Storage, etc.)  |  |
|  +-------------------+  +-----------------------+  |
|                                                    |
|  +---------------------+  +---------------------+  |
|  |   Empty Dir Volume  |  |   Host Path Volume  |  |
|  | (Temporary Storage) |  | (Nodes file system) |  |
|  +---------------------+  +---------------------+  |
|                                                    |
+----------------------------------------------------+

Kubernetes Networking and Ingress

Networking is a large area of K8s and is the largest challenge or concept to learn.

+----------------------------------------------------+
|                +------------------+                |
|                | External Traffic |                |
|                +------------------+                |
|                         |                          |
|                         v                          |
|            +-------------------------+             |
|            |      Load Balancer      |             |
|            +-------------------------+             |
|                         |                          |
|                         v                          |
|          +-----------------------------+           |
|          |     Ingress Controller      |           |
|          |   (e.g., NGINX, Traefik)    |           |
|          +-----------------------------+           |
|           |                           |            |
|           v                           v            |
|   +-------------------+     +-------------------+  |
|   |   Ingress Rule 1  |     |   Ingress Rule 2  |  |
|   |   host: foo.com   |     |   host: bar.com   |  |
|   |   path: /app1     |     |   path: /app2     |  |
|   +-------------------+     +-------------------+  |
|           |                           |            |
|           v                           v            |
|  +---------------------+  +---------------------+  |
|  |      Service 1      |  |      Service 2      |  |
|  | (ClusterIP/NodePort)|  | (ClusterIP/NodePort)|  |
|  +---------------------+  +---------------------+  |
|      |             |          |            |       |
|      v             v          v            v       |
|  +-------+     +-------+  +-------+     +-------+  |
|  | Pod 1A|     | Pod 1B|  | Pod 2A|     | Pod 2B|  |
|  +-------+     +-------+  +-------+     +-------+  |
|      |             |          |             |      |
|      v             v          v             v      |
|  +-----------------------------------------------+ |
|  |               Container Network               | |
|  |    (e.g., Flannel, Calico, Weave, Cilium)     | |
|  +-----------------------------------------------+ |
|                          |                         |
|                          v                         |
|               +-------------------+                |
|               |    Node Network   |                |
|               +-------------------+                |
+----------------------------------------------------+

This diagram illustrates:

  • External traffic enters through a Load Balancer.
  • The Ingress Controller (e.g., NGINX or Traefik) receives the traffic and processes it based on Ingress Rules.
  • Ingress Rules define how traffic should be routed based on hostnames and paths.
  • Services (ClusterIP or NodePort) receive traffic from the Ingress Controller and distribute it to Pods.
  • Pods contain the application containers and are distributed across nodes.
  • The Container Network (implemented by CNI plugins like Flannel, Calico, Weave, or Cilium) enables communication between Pods across nodes.
  • The Node Network connects all nodes in the cluster.
Networking Model
+-------------------------++-------------------------+
|          Node 1         ||          Node 2         |
| +---------+ +---------+ || +---------+ +---------+ |
| |   Pod1  | |   Pod2  | || |   Pod3  | |   Pod4  | |
| |IP:10.1.1| |IP:10.1.2| || |IP:10.2.1| |IP:10.2.2| |
| +---------+ +---------+ || +---------+ +---------+ |
|            |            ||            |            |
| Virtual Ethernet Bridge || Virtual Ethernet Bridge |
|            |            ||            |            |
+----------- |------------++------------|------------+
             |                          |
             |  Cluster Network Fabric  |
             +--------------------------+
  • Pod IP Addressing: Each pod is assigned a unique IP address from the cluster-wide CIDR range. This ensures that every pod has a distinct identity within the cluster.
  • Direct Communication: Pods can communicate directly with each other using their assigned IP addresses, without the need for Network Address Translation (NAT) or port mapping.
  • Intra-Node Communication: For pods on the same node, communication occurs through a virtual ethernet bridge. This allows for efficient local traffic routing.
  • Inter-Node Communication: When pods on different nodes need to communicate, the cluster-level network layer handles routing based on the pod IP ranges assigned to each node.
  • CNI Plugins: Container Network Interface (CNI) plugins implement the actual networking, ensuring proper routing and connectivity across the cluster. Popular CNI plugins include Calico, Flannel, and Weave.

This architecture simplifies application design and deployment, as pods can be treated similarly to VMs or physical hosts from a networking perspective.

Services

        +------------------------+
        |        Service         |
        |  (ClusterIP/NodePort)  |
        |      IP: 10.0.0.1      |
        +------------------------+
                    |
              Load Balancing       
                    |              
        +-----------+-----------+
        |           |           |
+---------------+   |   +---------------+
|     Pod 1     |   |   |     Pod 2     |
|    IP:10.1    |   |   |    IP:10.2    |
+---------------+   |   +---------------+
                    |
            +--------------+
            |     Pod 3    |
            |    IP:10.3   |
            +--------------+

Kubernetes Services provide a stable network endpoint for a set of Pods, enabling reliable communication within the cluster. Services abstract the underlying Pod network, offering a consistent way to access applications regardless of Pod lifecycle changes. Key aspects of Kubernetes Services include:

  • Service Types:
    • ClusterIP (default): Exposes the service on an internal IP in the cluster
    • NodePort: Exposes the service on each node's IP at a static port
    • LoadBalancer: Exposes the service externally using a cloud provider's load balancer
    • ExternalName: Maps the service to the contents of the externalName field
    • Headless: Allows direct access to individual pod IPs
  • Service Discovery: Services can be discovered through DNS or environment variables, making it easy for applications to find and communicate with each other.
  • Load Balancing: Services automatically distribute incoming traffic across all backend pods, ensuring even load distribution.
  • Stable Endpoints: Services provide stable IP addresses and DNS names for groups of pods, abstracting away the dynamic nature of pod lifecycles.
  • Cloud Integration: Services can integrate with cloud provider load balancers for external access, simplifying the process of exposing applications to the internet.

Services play a crucial role in microservices architectures, facilitating seamless communication between application components and enabling scalability and resilience in Kubernetes environments
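The Headless type from the list above is worth a concrete sketch: setting clusterIP: None makes the cluster DNS return the individual pod IPs rather than a single virtual IP, which is useful for stateful workloads and client-side load balancing. Names here are illustrative:

```yaml
# Headless Service: DNS resolves to the matching pod IPs directly
apiVersion: v1
kind: Service
metadata:
  name: web-app-headless
spec:
  clusterIP: None
  selector:
    app: web-app
  ports:
    - port: 80
```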

Ingress
External Traffic
       |
+------v------+
|   Ingress   |
|  Controller |
+------+------+
       |
+------v------+
|   Ingress   |
|    Rules    |
+------+------+
       |
+------v------+
|  Services   |
+------+------+
       |
+------v------+
|     Pods    |
+-------------+

Kubernetes Ingress is an API object that manages external access to services within a cluster, providing HTTP and HTTPS routing rules. It acts as a single entry point for incoming traffic, simplifying the exposure of multiple services through a unified interface. Key features of Ingress include:

  • Traffic Routing: Ingress can route traffic based on URL paths, hostnames, or other criteria, allowing for complex routing scenarios.
  • SSL/TLS Termination: Ingress can handle SSL/TLS termination, offloading this responsibility from individual services.
  • Load Balancing: Ingress can distribute traffic across multiple backend services, acting as a load balancer.
  • Name-based Virtual Hosting: Ingress supports routing to different services based on the hostname, enabling multiple applications to share a single IP address.
  • Ingress Controller: Ingress requires an Ingress Controller to function, which implements the actual routing and load balancing logic. Popular Ingress Controllers include NGINX, Traefik, and Istio.

By consolidating routing rules into a single resource, Ingress simplifies network management and reduces the need for multiple load balancers, making it an essential component for production-ready Kubernetes deployments.

Examples:
Create a simple web application

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-service
spec:
  selector:
    app: web-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
EOF

This will create an app named web-app with a port 80 exposure to the pod.
It will also create a service directing calls to the deployment named web-app on port 80 to port 80 of one of the containers.
kubectl get deployments
kubectl get pods
kubectl get services

giving something like

NAME      READY   UP-TO-DATE   AVAILABLE   AGE
web-app   2/2     2            2           46s

NAME                       READY   STATUS    RESTARTS   AGE
web-app-6fdf6bcdd6-cfkjk   1/1     Running   0          42s
web-app-6fdf6bcdd6-nxv7f   1/1     Running   0          42s

NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes        ClusterIP   10.96.0.1       <none>        443/TCP   57s
web-app-service   ClusterIP   10.110.70.144   <none>        80/TCP    46s

Now create an ingress to create access

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /\$1
spec:
  rules:
    - host: web-app.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app-service
                port:
                  number: 80
EOF

This creates an Ingress that exposes the application outside the cluster under the host name web-app.info. Requests to that host are routed to port 80 of the web-app-service Service, which in turn forwards them to port 80 of one of the Deployment's replicas.

kubectl get ingress

NAME              CLASS    HOSTS          ADDRESS   PORTS   AGE
web-app-ingress   <none>   web-app.info             80      2m28s

Ensure that the Ingress addon is enabled in Minikube.
minikube addons enable ingress

This command enables the NGINX Ingress Controller in your Minikube cluster.

Obtain the IP address of your Minikube cluster.
minikube ip

This will return the IP address of your Minikube cluster.

Add an entry to your hosts file for web-app.info to the Minikube IP.

echo "$(minikube ip) web-app.info" | sudo tee -a /etc/hosts

This step is necessary because you’ve specified web-app.info as the host in your Ingress resource.

Now you should be able to access your application by opening a web browser and navigating to: http://web-app.info

If everything is set up correctly, you should see the NGINX welcome page.

If you’re unable to access the application, try the following:
Check the Ingress status with kubectl get ingress and ensure that the ADDRESS field is populated with an IP address.

Verify the Ingress Controller with kubectl get pods -n ingress-nginx and make sure its pod is running.

Check the controller logs for errors: kubectl logs -n ingress-nginx $(kubectl get pods -n ingress-nginx -o name) (this can be run in separate steps: run kubectl get pods -n ingress-nginx -o name first, then pass the pod name to kubectl logs -n ingress-nginx).

As a last resort, try port forwarding: kubectl port-forward svc/web-app-service 8080:80, then access the application at http://localhost:8080.

Remember that Minikube is running inside a VM, so network access can sometimes be tricky depending on your setup. The methods described above should work in most cases, but you might need to adjust based on your specific environment.

Kubernetes RBAC and Security Concepts

+----------------------------------------------------+
|                 Kubernetes Cluster                 |
|                                                    |
| +--------------------+  +------------------------+ |
| |    RBAC Objects    |  |   Security Contexts    | |
| |  +---------------+ |  | +--------------------+ | |
| |  |     Roles     | |  | |     Pod Security   | | |
| |  | (Namespaced)  | |  | |      Context       | | |
| |  +---------------+ |  | |    - User/Group    | | |
| |          |         |  | |    - SELinux       | | |
| |          v         |  | |    - RunAsUser     | | |
| |  +---------------+ |  | |    - Capabilities  | | |
| | |  RoleBindings  | |  | +--------------------+ | |
| | |  (Namespaced)  | |  |            |           | |
| | +----------------+ |  |            v           | |
| |                    |  | +--------------------+ | |
| | +----------------+ |  | | Container Security | | |
| | |  ClusterRoles  | |  | |      Context       | | |
| | | (Cluster- Wide)| |  | |  - RunAsNonRoot    | | |
| | +----------------+ |  | |  - ReadOnlyRootFS  | | |
| |         |          |  | |  - Privileged      | | |
| |         v          |  | +--------------------+ | |
| | +----------------+ |  |                        | |
| | |  ClusterRole-  | |  +------------------------+ |
| | |    Bindings    | |                             |
| | | (Cluster-wide) | |                             |
| | +----------------+ |                             |
| |                    |                             |
| +--------------------+                             |
|                                                    |
| +------------------------------------------------+ |
| |               Network Policies                 | |
| | +-------------------+ +----------------------+ | |
| | |   Ingress Rules   | |     Egress Rules     | | |
| | |                   | |                      | | |
| | | - From: (sources) | | - To: (destinations) | | |
| | | - Ports           | | - Ports              | | |
| | +-------------------+ +----------------------+ | |
| |                                                | |
| +------------------------------------------------+ |
|                                                    |
+----------------------------------------------------+
  • RBAC Objects:
    • Roles and RoleBindings (namespaced)
    • ClusterRoles and ClusterRoleBindings (cluster-wide)
      These objects define who can access what resources and perform what actions.
  • Security Contexts:
    • Pod Security Context: Applies to all containers in a pod
    • Container Security Context: Specific to individual containers
      These define privilege and access control settings for pods and containers.
  • Network Policies:
    • Ingress Rules: Control incoming traffic to pods
    • Egress Rules: Control outgoing traffic from pods
      These act as a virtual firewall for your Kubernetes cluster.

The diagram shows how these components interact within the Kubernetes cluster to provide a comprehensive security model. RBAC controls access to Kubernetes API resources, Security Contexts manage the runtime security settings for pods and containers, and Network Policies control the network traffic between pods and external sources

Role-Based Access Control (RBAC)
  • Regulates access to resources based on the roles of individual users.
  • Key objects: Role, ClusterRole, RoleBinding, ClusterRoleBinding.

Example: Creating a role that allows reading pods:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
```
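A Role has no effect until it is bound to a subject. As a sketch, a RoleBinding granting the pod-reader Role to a user (the user name jane is illustrative, not from this setup):

```yaml
# Hypothetical binding: grants the pod-reader Role to user "jane" in "default".
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane                 # illustrative subject
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because both objects are namespaced, jane can read pods only in the default namespace; a ClusterRole plus ClusterRoleBinding would be needed for cluster-wide access.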
Security Contexts
  • Define privilege and access control settings for Pods or Containers.
  • Can set UID, GID, capabilities, and other security parameters.
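A minimal sketch of how the two levels combine (the names and IDs are illustrative): pod-level settings apply to every container, and container-level settings override them.

```yaml
# Sketch only: pod-level and container-level security settings combined.
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:             # applies to all containers in the pod
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
    - name: app
      image: nginx:alpine
      securityContext:         # container-specific, overrides pod settings
        runAsNonRoot: true
        readOnlyRootFilesystem: true
        allowPrivilegeEscalation: false
```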
Network Policies
  • Specify how groups of pods are allowed to communicate with each other and other network endpoints.
  • Act as a virtual firewall for your Kubernetes cluster.
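As a hedged sketch of an ingress rule (the role=frontend label is illustrative): once a NetworkPolicy selects a pod, all traffic not explicitly allowed is denied.

```yaml
# Sketch: allow ingress to pods labelled app=web-app only from pods
# labelled role=frontend, on TCP port 80; all other ingress is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: web-app
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 80
```

Note that NetworkPolicies are only enforced if the cluster's network plugin supports them.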

Exercise:

Deploying a Configurable Web Application

In this exercise, we'll create a simple web application that reads its configuration from a ConfigMap. We'll then deploy it to Kubernetes and expose it using a Service and Ingress.
This exercise demonstrates:

  1. Creating and using ConfigMaps
  2. Deploying a web application with Kubernetes
  3. Exposing the application using a Service and Ingress
  4. Injecting configuration into a container using environment variables
  5. Mounting ConfigMap data as a volume
  6. Updating configuration and seeing the changes reflected in the application
  • Step 1: Create a ConfigMap
    First, let's create a ConfigMap with some configuration data:
    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: webapp-config
    data:
      BACKGROUND_COLOR: "#f0f0f0"
      MESSAGE: "Welcome to our configurable web app!"
    EOF
    
    +-------------------------------------------------+
    |                Kubernetes Cluster               |
    |                                                 |
    | +---------------------------------------------+ |
    | |                  ConfigMap                  | |
    | | Name: webapp-config                         | |
    | | Data:                                       | |
    | | +-----------------------------------------+ | |
    | | |         Key      |        Value         | | |
    | | +------------------+----------------------+ | |
    | | | BACKGROUND_COLOR |       "#f0f0f0"      | | |
    | | +------------------+----------------------+ | |
    | | | MESSAGE          | "Welcome to our      | | |
    | | |                  | configurable web app"| | |
    | | +------------------+----------------------+ | |
    | |                                             | |
    | +---------------------------------------------+ |
    |                                                 |
    +-------------------------------------------------+
    

    This diagram shows:

  1. The overall Kubernetes cluster environment.
  2. Within the cluster, a ConfigMap named "webapp-config" is created.
  3. The ConfigMap contains two key-value pairs:
    • BACKGROUND_COLOR: "#f0f0f0"
    • MESSAGE: "Welcome to our configurable web app!"

The diagram illustrates how the ConfigMap stores configuration data as key-value pairs, which can be used by applications running in the cluster. This ConfigMap could be mounted as a volume or used as environment variables in a Pod, allowing the application to access these configuration values at runtime.

  • Step 2: Create a Deployment
    Now, let's create a Deployment for our web application. We'll use a simple Nginx image and inject our configuration as environment variables:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: nginx:alpine
          ports:
            - containerPort: 80
          envFrom:
            - configMapRef:
                name: webapp-config
          volumeMounts:
            - name: config
              mountPath: /usr/share/nginx/html
      volumes:
        - name: config
          configMap:
            name: webapp-content
            items:
              - key: index.html
                path: index.html
EOF
+----------------------------------------------------+
|                   Kubernetes Cluster               |
|                                                    |
| +------------------------------------------------+ |
| |                Deployment: webapp              | |
| |                                                | |
| | +--------------------------------------------+ | |
| | |           ReplicaSet (2 replicas)          | | |
| | |                                            | | |
| | | +----------------------------------------+ | | |
| | | |                 Pod 1                  | | | |
| | | |                                        | | | |
| | | | +------------------------------------+ | | | |
| | | | |         Container: webapp          | | | | |
| | | | |                                    | | | | |
| | | | | Image: nginx:alpine                | | | | |
| | | | | Port: 80                           | | | | |
| | | | |                                    | | | | |
| | | | | EnvFrom:                           | | | | |
| | | | | ConfigMap: webapp-config           | | | | |
| | | | |                                    | | | | |
| | | | | VolumeMount:                       | | | | |
| | | | |   Name: config                     | | | | |
| | | | |   MountPath: /usr/share/nginx/html | | | | |
| | | | +------------------------------------+ | | | |
| | | |                                        | | | |
| | | | +------------------------------------+ | | | |
| | | | |           Volume: config           | | | | |
| | | | |  ConfigMap: webapp-content         | | | | |
| | | | |    Key: index.html                 | | | | |
| | | | |    Path: index.html                | | | | |
| | | | +------------------------------------+ | | | |
| | | |                                        | | | |
| | | +----------------------------------------+ | | | 
| | |                                            | | |
| | |   +------------------------------------+   | | |
| | |   |                Pod 2               |   | | |
| | |   |      (Same structure as Pod 1)     |   | | |
| | |   +------------------------------------+   | | |
| | |                                            | | |
| | +--------------------------------------------+ | |
| |                                                | |
| +------------------------------------------------+ |
|                                                    |
+----------------------------------------------------+

This diagram illustrates:

  1. The overall Kubernetes Deployment named “webapp”.
  2. The ReplicaSet managing 2 replicas (Pods).
  3. The structure of each Pod, including:
    - The container named “webapp” using the nginx:alpine image.
    - The container port 80 exposed.
    - Environment variables loaded from the ConfigMap “webapp-config”
    - A volume mount for the “/usr/share/nginx/html” path.
  4. The volume configuration, which mounts the “index.html” key from the “webapp-content” ConfigMap.

The diagram shows how the Deployment manages multiple identical Pods, each containing a container with the specified configuration. It also illustrates the use of ConfigMaps for both environment variables and file mounting, demonstrating how Kubernetes can inject configuration data into containers.

  • Step 3: Create a ConfigMap for the HTML content
    Let's create another ConfigMap to hold our HTML content:
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-content
data:
  index.html: |
    <!DOCTYPE html>
    <html>
      <head>
        <title>Configurable Web App</title>
        <style>
          body { background-color: ${BACKGROUND_COLOR}; font-family: Arial, sans-serif; }
        </style>
      </head>
      <body>
        <h1>${MESSAGE}</h1>
        <p>This page is served by Nginx and configured using Kubernetes ConfigMaps.</p>
      </body>
    </html>
EOF
+----------------------------------------------------+
|                  Kubernetes Cluster                |
|                                                    |
|  +----------------------------------------------+  |
|  |           ConfigMap: webapp-config           |  |
|  |                                              |  |
|  | Data:                                        |  |
|  | BACKGROUND_COLOR: "#f0f0f0"                  |  |
|  | MESSAGE: "Welcome to our configurable..."    |  |
|  +----------------------------------------------+  |
|                                                    |
|  +----------------------------------------------+  |
|  | ConfigMap: webapp-content                    |  |
|  |                                              |  |
|  | Data:                                        |  |
|  | index.html: (HTML content)                   |  |
|  | - Uses ${BACKGROUND_COLOR}                   |  |
|  | - Uses ${MESSAGE}                            |  |
|  +----------------------------------------------+  |
|                                                    |
|  +----------------------------------------------+  |
|  | Deployment: webapp                           |  |
|  |                                              |  |
|  |  +----------------------------------------+  |  |
|  |  | Pod                                    |  |  |
|  |  |   +-------------------------------+    |  |  |
|  |  |   | Container: webapp             |    |  |  |
|  |  |   |                               |    |  |  |
|  |  |   | - Image: nginx:alpine         |    |  |  |
|  |  |   | - Port: 80                    |    |  |  |
|  |  |   |                               |    |  |  |
|  |  |   | EnvFrom:                      |    |  |  |
|  |  |   | ConfigMap: webapp-config      |    |  |  |
|  |  |   |                               |    |  |  |
|  |  |   | VolumeMount:                  |    |  |  |
|  |  |   | Name: config                  |    |  |  |
|  |  |   | MountPath: /usr/share/...      |    |  |  |
|  |  |   +-------------------------------+    |  |  |
|  |  |                                        |  |  |
|  |  |   +-------------------------------+    |  |  |
|  |  |   | Volume: config                |    |  |  |
|  |  |   | ConfigMap: webapp-content     |    |  |  |
|  |  |   | Key: index.html               |    |  |  |
|  |  |   | Path: index.html              |    |  |  |
|  |  |   +-------------------------------+    |  |  |
|  |  +----------------------------------------+  |  |
|  +----------------------------------------------+  |
+----------------------------------------------------+

This updated diagram now includes:

  1. The original webapp-config ConfigMap with BACKGROUND_COLOR and MESSAGE.
  2. The new webapp-content ConfigMap containing the index.html template.
  3. The Deployment and Pod structure, showing how these ConfigMaps are used:
    • webapp-config is used as environment variables (EnvFrom).
    • webapp-content is mounted as a volume, providing the index.html file.

The new webapp-content ConfigMap contains an HTML template that references the ${BACKGROUND_COLOR} and ${MESSAGE} variables. Note that Nginx serves static files verbatim, so a substitution step (for example, envsubst run by an init container) is needed to replace these placeholders with the values from the webapp-config ConfigMap before the page is served. This setup allows for a dynamic, configurable web application where:

  • The content of the page (HTML structure) is defined in one ConfigMap (webapp-content).
  • The configuration values (background color and message) are defined in another ConfigMap (webapp-config).
  • The Nginx container serves the HTML content, with the variables replaced by the actual configuration values.

This separation of concerns makes it easy to update either the content template or the configuration values independently, providing flexibility in managing your web application's appearance and content.
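Because Nginx serves the mounted index.html verbatim, the ${...} placeholders are not substituted automatically; a rendering step has to run before the file is served. A minimal local sketch of that substitution using sed (envsubst would work similarly; the init-container wiring itself is not shown here):

```shell
# Render the template locally the way an init container could before
# Nginx starts; sed stands in here for a tool like envsubst.
BACKGROUND_COLOR="#f0f0f0"
MESSAGE="Welcome to our configurable web app!"
TEMPLATE='<h1>${MESSAGE}</h1>'
echo "$TEMPLATE" | sed -e "s|\${MESSAGE}|$MESSAGE|" -e "s|\${BACKGROUND_COLOR}|$BACKGROUND_COLOR|"
# prints: <h1>Welcome to our configurable web app!</h1>
```

In a pod, the init container would read the template from the ConfigMap volume, write the rendered file to a shared emptyDir volume, and Nginx would serve from that volume instead.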

  • Step 4: Create a Service
    Now, let's create a Service to expose our Deployment:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
EOF
+---------------------------------------------------+
|                Kubernetes Cluster                 |
|                                                   |
|  +---------------------------------------------+  |
|  | ConfigMap: webapp-config                    |  |
|  |                                             |  |
|  | Data:                                       |  |
|  | BACKGROUND_COLOR: "#f0f0f0"                 |  |
|  | MESSAGE: "Welcome to our configurable..."   |  |
|  +---------------------------------------------+  |
|                                                   |
|  +---------------------------------------------+  |
|  | ConfigMap: webapp-content                   |  |
|  |                                             |  |
|  | Data:                                       |  |
|  | index.html: (HTML content)                  |  |
|  | - Uses ${BACKGROUND_COLOR}                  |  |
|  | - Uses ${MESSAGE}                           |  |
|  +---------------------------------------------+  |
|                                                   |
|  +---------------------------------------------+  |
|  | Deployment: webapp                          |  |
|  |                                             |  |
|  |  +---------------------------------------+  |  |
|  |  | Pod                                   |  |  | 
|  |  |   +-------------------------------+   |  |  |
|  |  |   | Container: webapp             |   |  |  |
|  |  |   |                               |   |  |  |
|  |  |   | - Image: nginx:alpine         |   |  |  |
|  |  |   | - Port: 80                    |   |  |  |
|  |  |   |                               |   |  |  |
|  |  |   | EnvFrom:                      |   |  |  |
|  |  |   | ConfigMap: webapp-config      |   |  |  |
|  |  |   |                               |   |  |  |
|  |  |   | VolumeMount:                  |   |  |  |
|  |  |   | Name: config                  |   |  |  |
|  |  |   | MountPath: /usr/share/...     |   |  |  |
|  |  |   +-------------------------------+   |  |  |
|  |  |                                       |  |  |
|  |  |   +-------------------------------+   |  |  |
|  |  |   | Volume: config                |   |  |  |
|  |  |   | ConfigMap: webapp-content     |   |  |  |
|  |  |   | Key: index.html               |   |  |  |
|  |  |   | Path: index.html              |   |  |  |
|  |  |   +-------------------------------+   |  |  |
|  |  +---------------------------------------+  |  |
|  +---------------------------------------------+  |
|                                                   |
|  +---------------------------------------------+  |
|  | Service: webapp-service                     |  |
|  |                                             |  |
|  | Selector: app: webapp                       |  |
|  | Port: 80 -> targetPort: 80                  |  |
|  +---------------------------------------------+  |
+---------------------------------------------------+

This updated diagram now includes:

  1. The original webapp-config ConfigMap with BACKGROUND_COLOR and MESSAGE.
  2. The webapp-content ConfigMap containing the index.html template.
  3. The Deployment and Pod structure, showing how these ConfigMaps are used.
  4. The new webapp-service Service, which:
    • Selects Pods with the label app: webapp
    • Exposes port 80 and forwards traffic to the Pods' port 80

The Service acts as a stable network endpoint for the Pods created by the Deployment. It provides:

  • Load balancing: Distributes incoming traffic across all Pods matching the selector.
  • Service discovery: Provides a stable IP address and DNS name for the set of Pods.
  • Port mapping: Maps the Service port (80) to the target port on the Pods (also 80 in this case).

This Service allows other components within the cluster (or external to the cluster, depending on the Service type) to access the webapp Pods without needing to know the individual Pod IP addresses. It adds a layer of abstraction that enhances the scalability and flexibility of your application. The flow of traffic would typically be: External Request -> Service (webapp-service) -> Pod (webapp) -> Container (nginx:alpine). This setup allows you to scale your Deployment (adding or removing Pods) without changing how other components interact with your webapp, as they will always communicate through the Service.

  • Step 5: Create an Ingress
    If your cluster has an Ingress controller, you can create an Ingress resource:
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: webapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp-service
                port:
                  number: 80
EOF
+-------------------------------------------------+
|              Kubernetes Cluster                 |
|                                                 |
|  +-------------------------------------------+  |
|  |       ConfigMap: webapp-config            |  |
|  |                                           |  |
|  | Data:                                     |  |
|  | BACKGROUND_COLOR:  "#f0f0f0"              |  |
|  | MESSAGE: "Welcome to our configurable..." |  |
|  +-------------------------------------------+  |
|                                                 |
|  +-------------------------------------------+  |
|  |        ConfigMap: webapp-content          |  |
|  |                                           |  |
|  |   Data:                                   |  |
|  |   index.html: (HTML content)              |  |
|  |     - Uses ${BACKGROUND_COLOR}            |  |
|  |     - Uses ${MESSAGE}                     |  |
|  +-------------------------------------------+  |
|                                                 |
|  +-------------------------------------------+  |
|  |           Deployment: webapp              |  |
|  |                                           |  |
|  |  +-------------------------------------+  |  |
|  |  |                 Pod                 |  |  |
|  |  |  +-------------------------------+  |  |  |
|  |  |  |   Container: webapp           |  |  |  |
|  |  |  |                               |  |  |  |
|  |  |  |   - Image: nginx:alpine       |  |  |  |
|  |  |  |   - Port: 80                  |  |  |  |
|  |  |  |                               |  |  |  |
|  |  |  |   EnvFrom:                    |  |  |  |
|  |  |  |   ConfigMap: webapp-config    |  |  |  |
|  |  |  |                               |  |  |  |
|  |  |  |   VolumeMount:                |  |  |  |
|  |  |  |   Name: config                |  |  |  |
|  |  |  |   MountPath: /usr/share/...   |  |  |  |
|  |  |  +-------------------------------+  |  |  |
|  |  |                                     |  |  |
|  |  |  +-------------------------------+  |  |  |
|  |  |  |        Volume: config         |  |  |  |
|  |  |  |  ConfigMap: webapp-content    |  |  |  |
|  |  |  |  Key: index.html              |  |  |  |
|  |  |  |  Path: index.html             |  |  |  |
|  |  |  +-------------------------------+  |  |  |
|  |  +-------------------------------------+  |  |
|  +-------------------------------------------+  |
|                                                 |
|  +-------------------------------------------+  |
|  |        Service: webapp-service            |  |
|  |                                           |  |
|  |   Selector: app: webapp                   |  |
|  |   Port: 80 -> targetPort: 80              |  |
|  +-------------------------------------------+  |
|                                                 |
|  +-------------------------------------------+  |
|  |         Ingress: webapp-ingress           |  |
|  |                                           |  |
|  |   Host: webapp.example.com                |  |
|  |   Path: /                                 |  |
|  |   Backend: webapp-service:80              |  |
|  +-------------------------------------------+  |
+-------------------------------------------------+

This updated diagram now includes:

  1. The original webapp-config ConfigMap with BACKGROUND_COLOR and MESSAGE.
  2. The webapp-content ConfigMap containing the index.html template.
  3. The Deployment and Pod structure, showing how these ConfigMaps are used.
  4. The webapp-service Service that exposes the Pods.
  5. The new webapp-ingress Ingress resource, which:
    • Routes traffic for the host webapp.example.com
    • Directs all paths (/) to the webapp-service on port 80

The Ingress resource acts as an entry point for external traffic into the cluster. It provides:

  • Host-based routing: It routes traffic based on the webapp.example.com hostname.
  • Path-based routing: In this case, all paths (/) are routed to the backend service.
  • Integration with the Ingress Controller: The nginx.ingress.kubernetes.io/rewrite-target: / annotation is specific to the NGINX Ingress Controller, indicating that the path should be rewritten to / when forwarding to the backend.

The flow of traffic would now be: External Request -> Ingress Controller -> Ingress (webapp-ingress) -> Service (webapp-service) -> Pod (webapp) -> Container (nginx:alpine). This setup allows you to:

  1. Access your application from outside the cluster using a domain name (webapp.example.com).
  2. Potentially host multiple applications on the same IP address using different hostnames.
  3. Implement more complex routing rules if needed (e.g., routing different paths to different services).

Remember to ensure that:

  • The Ingress Controller is installed in your cluster.
  • The DNS for webapp.example.com is configured to point to your cluster's external IP.
  • Any necessary TLS certificates are configured if you want to enable HTTPS.

This Ingress resource completes the basic setup of a web application in Kubernetes, providing a full path for external traffic to reach your containerized application.
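For the HTTPS point above, TLS can be enabled on the same Ingress by referencing a certificate stored in a Kubernetes Secret; a hedged sketch of the spec (the secret name webapp-tls is illustrative and must already exist in the namespace):

```yaml
# Sketch: the tls section added to webapp-ingress; assumes a TLS Secret
# named "webapp-tls" (illustrative) containing tls.crt and tls.key.
spec:
  tls:
    - hosts:
        - webapp.example.com
      secretName: webapp-tls
  rules:
    - host: webapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp-service
                port:
                  number: 80
```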

  • Step 6: Verify the deployment
    Check if all resources are created and running:
    kubectl get configmaps
    kubectl get deployments
    kubectl get pods
    kubectl get services
    kubectl get ingress

kubectl get configmaps

NAME                   DATA           AGE
kube-root-ca.crt        1             21m
webapp-config           2             18m
webapp-content          1             13m

kubectl get deployments

NAME      READY   UP-TO-DATE AVAILABLE   AGE
webapp     2/2       2           2       11m

kubectl get pods

NAME               READY   STATUS   RESTARTS   AGE
webapp-756448-8hz   1/1    Running     0      7m26s
webapp-756448-b6r   1/1    Running     0      7m33s

kubectl get services

NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP   22m
webapp-service   ClusterIP   10.107.192.80   <none>        80/TCP    5m20s

kubectl get ingress

NAME             CLASS    HOSTS                ADDRESS   PORTS   AGE
webapp-ingress   <none>   webapp.example.com             80      3m52s
  • Step 7: Access the application
    If you're using Minikube, you can use port-forwarding to access the application:
    kubectl port-forward service/webapp-service 8080:80

Then open a web browser and go to http://localhost:8080. If you're using an Ingress, add the following to your /etc/hosts file:
echo "$(minikube ip) webapp.example.com" | sudo tee -a /etc/hosts Then access the application at http://webapp.example.com.

  • Step 8: Modify the configuration
    Let's change the background color and message:
    kubectl edit configmap webapp-config

Change the BACKGROUND_COLOR to "#e0e0e0" and the MESSAGE to "Updated configuration!".

  • Step 9: Restart the Deployment to pick up the new configuration
    kubectl rollout restart deployment webapp
  • Step 10: Access the application again to see the changes

Day 5: Working with local K8s options

Docker Images in kind

Building a custom Docker image
  • Create a Dockerfile for your application.
  • Build the image: docker build -t your-image:tag .
Loading the image into kind cluster
  • Use the command: kind load docker-image your-image:tag
  • This copies the image from your local Docker daemon into the kind cluster.
Limitations and workarounds for Docker-in-Docker scenarios
  • kind runs Kubernetes inside Docker, which can complicate building images inside the cluster.
  • Workaround: Use kaniko or buildkit for in-cluster builds.
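As a sketch of the kaniko workaround, a one-off pod can build and push an image from inside the cluster. The context repository and destination registry below are placeholders, and pushing to a real registry would additionally require credentials (not shown):

```yaml
# Hypothetical in-cluster build with kaniko; the --context and
# --destination values are placeholders for your repo and registry.
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/example/your-app.git    # placeholder repo
        - --destination=registry.example.com/your-app:tag    # placeholder registry
```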
Creating deployments with custom images
  • Create a deployment YAML file (e.g., deployment.yaml) referencing your custom image:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: your-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: your-app
      template:
        metadata:
          labels:
            app: your-app
        spec:
          containers:
            - name: your-app
              image: your-image:tag
              imagePullPolicy: Never
    
  • Apply the deployment: kubectl apply -f deployment.yaml

Working with Images in Minikube

Minikube provides several options for working with Docker images:

Using the Host Docker Daemon
  • Configure your terminal to use Minikube's Docker daemon:
    eval $(minikube docker-env)
  • Build your image. It will now be available to Minikube without additional steps.
Loading Images into Minikube
  • If you've built the image using your host's Docker daemon:
    minikube image load your-image:tag
  • This copies the image from your local Docker daemon into Minikube.
Creating Deployments with Custom Images
  • Create a deployment YAML file (e.g., deployment.yaml) referencing your custom image:
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: your-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: your-app
      template:
        metadata:
          labels:
            app: your-app
        spec:
          containers:
            - name: your-app
              image: your-image:tag
              imagePullPolicy: IfNotPresent
    
  • Apply the deployment: kubectl apply -f deployment.yaml
Minikube-Specific Features
Built-in Docker Registry

Minikube includes a built-in Docker registry. To use it:

  • Enable the registry addon:
    minikube addons enable registry
  • Push your image to the Minikube registry:
    docker push $(minikube ip):5000/your-image:tag
  • Update your deployment to use the registry image:
    image: localhost:5000/your-image:tag
Direct Image Building
  • Minikube can build images directly using its Docker daemon:
    minikube image build -t your-image:tag .
  • This builds the image inside Minikube, making it immediately available for use.
Monitoring and Troubleshooting
  • Check if your pods are running:
    kubectl get pods
  • If pods are not in the "Running" state, check the logs:
    kubectl logs <pod-name>
  • For more detailed troubleshooting, use:
    kubectl describe pod <pod-name>
  • To access the Minikube Docker daemon logs:
    minikube logs
Cleaning Up

To remove unused images and free up space:
minikube image rm your-image:tag

By following these steps, you can effectively work with custom Docker images in your Minikube cluster, allowing you to develop and test your Kubernetes deployments locally. Minikube offers more flexibility in terms of image handling compared to kind, making it a popular choice for local Kubernetes development.

Best Practices

  1. Use meaningful tags for your images, preferably based on git commit hashes or semantic versioning.
  2. When updating your application, build a new image with a new tag, then update your deployment to use the new image tag.
  3. For production-like setups, consider using a private Docker registry. Minikube can be configured to pull from private registries.

Week 2: Service Mesh Concepts and Python

Day 1-2: Service Mesh

Fundamentals

Core concepts of service mesh
  • A dedicated infrastructure layer for handling service-to-service communication.
  • Provides features like service discovery, load balancing, encryption, observability, traceability, authentication, and authorization.
Problems service meshes solve
  • Complexity in microservices communication
  • Lack of observability in distributed systems
  • Inconsistent security policies across services
  • Difficulty in implementing resilience patterns (circuit breaking, retries)
Evolution of ingress
  • From simple L7 load balancers to advanced API gateways
  • Integration with service mesh for consistent policy enforcement

Service Mesh Architecture

A service mesh consists of two primary components: the data plane and the control plane.

Data Plane

The data plane is composed of a network of lightweight proxies, typically deployed as sidecars alongside each service instance. These proxies intercept and manage all network traffic to and from the service.

Example:
Let's consider a simple e-commerce application with three microservices: Product, Order, and Payment. In a service mesh, each instance of these services would have a sidecar proxy deployed alongside it:

Product Service + Sidecar Proxy
Order Service + Sidecar Proxy
Payment Service + Sidecar Proxy

When the Order service needs to communicate with the Payment service, the request goes through the following path:

  1. Order service -> Order's sidecar proxy
  2. Order's sidecar proxy -> Payment's sidecar proxy
  3. Payment's sidecar proxy -> Payment service

This allows the mesh to control and observe all inter-service communication.

Control Plane

The control plane manages and configures the proxies to enforce policies, collect telemetry, and handle service discovery.

Example:
Using Istio as an example, the control plane consists of several components:

  • Pilot: Handles service discovery and traffic management
  • Citadel: Manages security and access policies
  • Galley: Validates configuration and distributes it to other components

The control plane would configure the sidecar proxies to implement specific routing rules, such as:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: payment-route
spec:
  hosts:
    - payment
  http:
    - route:
        - destination:
            host: payment
            subset: v1
          weight: 90
        - destination:
            host: payment
            subset: v2
          weight: 10

This configuration would route 90% of traffic to version 1 of the Payment service and 10% to version 2, enabling canary deployments or A/B testing.
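The weight mechanics can be sketched in a few lines of Python (a simulation for intuition, not Istio's actual implementation): each request independently picks a subset with probability proportional to its weight.

```python
import random
from collections import Counter

def pick_subset(weights):
    """Pick a subset name with probability proportional to its weight, once per request."""
    names = list(weights)
    return random.choices(names, weights=[weights[n] for n in names])[0]

# Simulate 10,000 requests against the 90/10 split above
counts = Counter(pick_subset({"v1": 90, "v2": 10}) for _ in range(10_000))
print(counts)  # roughly 9,000 v1 and 1,000 v2
```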

Here is a comparison with Linkerd's control plane, which is simpler and consists of fewer components than Istio's. The main components are:

  1. Destination: Handles service discovery and provides configuration to proxies
  2. Identity: Manages security and certificate issuance for mTLS
  3. Proxy Injector: Injects the Linkerd proxy as a sidecar

For traffic splitting in Linkerd, you would use either a TrafficSplit resource (if using the SMI extension) or an HTTPRoute resource (which is the preferred method going forward).
Here's an example using HTTPRoute:

apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: payment-route
  namespace: your-namespace
spec:
  parentRefs:
    - name: payment
      kind: Service
      group: core
      port: 8080
  rules:
    - backendRefs:
        - name: payment-v1
          port: 8080
          weight: 90
        - name: payment-v2
          port: 8080
          weight: 10

This configuration would achieve the same result as the Istio example, routing 90% of traffic to version 1 of the Payment service and 10% to version 2.

Key Features and Use Cases

Service Discovery and Load Balancing

Service meshes provide dynamic service discovery and intelligent load balancing.

Example:
In our e-commerce application, if we scale the Payment service to three instances, the service mesh would automatically discover these instances and distribute traffic among them. It could use advanced load balancing algorithms like least connections or weighted round-robin.
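As a rough illustration (not the mesh's actual implementation), a weighted round-robin balancer can be sketched in Python; the instance names and weights here are invented:

```python
from itertools import cycle, islice

def weighted_round_robin(instances):
    """Cycle through instances in proportion to their integer weights."""
    expanded = [name for name, weight in instances for _ in range(weight)]
    return cycle(expanded)

# Hypothetical Payment instances; payment-1 gets twice the share of traffic
balancer = weighted_round_robin([("payment-1", 2), ("payment-2", 1), ("payment-3", 1)])
print(list(islice(balancer, 8)))
# ['payment-1', 'payment-1', 'payment-2', 'payment-3',
#  'payment-1', 'payment-1', 'payment-2', 'payment-3']
```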

Traffic Management

Service meshes offer fine-grained control over traffic routing.

Example: Implementing a canary release for the Product service:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: product-canary
spec:
  hosts:
    - product
  http:
    - match:
        - headers:
            user-agent:
              regex: ".*Chrome.*"
      route:
        - destination:
            host: product
            subset: v2
    - route:
        - destination:
            host: product
            subset: v1

This configuration routes all traffic from Chrome browsers to version 2 of the Product service, while all other traffic goes to version 1.
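The match-then-fallback logic the proxy applies can be modelled in a few lines of Python (a toy model — the real matching runs inside Envoy):

```python
import re

def route(headers, rules, default_subset):
    """Return the subset of the first rule whose header regex matches, else the default."""
    for header_name, pattern, subset in rules:
        if re.search(pattern, headers.get(header_name, "")):
            return subset
    return default_subset

rules = [("user-agent", r".*Chrome.*", "v2")]
print(route({"user-agent": "Mozilla/5.0 (X11) Chrome/120.0"}, rules, "v1"))  # v2
print(route({"user-agent": "curl/8.4.0"}, rules, "v1"))                      # v1
```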

With Linkerd, use an HTTPRoute resource to define the traffic splitting:

apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: product-canary
  namespace: your-namespace
spec:
  parentRefs:
  - name: product
    kind: Service
    group: core
    port: 8080
  rules:
    - matches:
        - headers:
            - name: user-agent
              type: RegularExpression
              value: ".*Chrome.*"
      backendRefs:
        - name: product-v2
          port: 8080
    - backendRefs:
        - name: product-v1
          port: 8080

This configuration routes all traffic from Chrome browsers to version 2 of the Product service, while all other traffic goes to version 1.
For more advanced canary deployments, you can use tools like Flagger with Linkerd. Flagger automates the process of creating new Kubernetes resources, watching metrics, and incrementally sending users to the new version.
Here's an example of how you might set up a Flagger canary for the Product service:

apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: product
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: product
  service:
    port: 8080
  analysis:
    interval: 30s
    threshold: 5
    maxWeight: 50
    stepWeight: 5
  metrics:
    - name: success-rate
      threshold: 99
      interval: 1m
    - name: latency
      threshold: 500
      interval: 1m

This configuration sets up a canary deployment that gradually increases traffic to the new version while monitoring success rate and latency.
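Given stepWeight: 5 and maxWeight: 50, the sequence of canary weights Flagger would step through (assuming each analysis interval passes its checks) can be computed directly:

```python
def canary_schedule(step_weight, max_weight):
    """Weights a canary steps through when every analysis interval passes."""
    return list(range(step_weight, max_weight + 1, step_weight))

print(canary_schedule(5, 50))
# [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
```

With the 30s interval above, a fully successful rollout would take ten steps, about five minutes, before Flagger promotes the new version.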

Observability

Service meshes provide detailed insights into service-to-service communication.

Example:
Using Istio with Prometheus and Grafana, you can visualize request volume, latency, and error rates for each service. You might see a dashboard showing:

  • Request rate for Product service: 100 requests/second
  • 95th percentile latency for Order service: 250ms
  • Error rate for Payment service: 0.1%

This level of observability helps quickly identify and troubleshoot issues in the distributed system.
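Metrics like the 95th percentile above are just order statistics over latency samples. A minimal nearest-rank implementation makes the definition concrete (the sample values below are invented):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: smallest sample with at least pct% of values at or below it."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Invented latency samples for the Order service, in milliseconds
latencies_ms = [120, 180, 200, 210, 220, 230, 240, 250, 400, 900]
print(percentile(latencies_ms, 50))  # 220
print(percentile(latencies_ms, 95))  # 900
```

Note how a single slow outlier dominates the tail: the p95 is far above the median, which is exactly why dashboards track high percentiles rather than averages.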

Linkerd provides observability capabilities similar to Istio's, though there are some differences in how it implements and presents these features.

  1. Using the Linkerd CLI:
    linkerd viz stat deploy -n your-namespace

This command would show you a table with metrics for each deployment, including:

  • Success rate
  • Request per second (RPS)
  • Latency (P50, P95, P99)
  2. Using the Linkerd dashboard:

You can access it by running:
linkerd viz dashboard

In the dashboard, you would see:

  • Request rate for Product service: 100 req/sec
  • 95th percentile latency for Order service: 250ms
  • Success rate for Payment service: 99.9% (which is equivalent to a 0.1% error rate)
Security

Service meshes can enforce mutual TLS (mTLS) encryption and fine-grained access policies.

Example:
Enforcing mTLS between all services:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT

This configuration ensures all inter-service communication is encrypted and authenticated.

Linkerd automatically enables mTLS for all meshed services by default, so you don't need to explicitly configure it. However, if you want to ensure that only mTLS traffic is allowed, you can use Linkerd's authorization policies.
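At the TLS layer, STRICT mTLS boils down to both sides presenting and verifying certificates. A minimal sketch with Python's ssl module shows the two halves; the certificate paths in the comments are hypothetical stand-ins for the certs a mesh CA would issue to each sidecar:

```python
import ssl

# Server side: require clients to present a certificate (the "mutual" in mTLS).
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.verify_mode = ssl.CERT_REQUIRED  # reject unauthenticated peers
# server_ctx.load_cert_chain("server.crt", "server.key")  # hypothetical paths
# server_ctx.load_verify_locations("mesh-ca.crt")         # the mesh's CA bundle

# Client side: present our own certificate and verify the server against the same CA.
client_ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
# client_ctx.load_cert_chain("client.crt", "client.key")  # hypothetical paths
# client_ctx.load_verify_locations("mesh-ca.crt")

print(server_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```

In a mesh, the sidecars do all of this transparently, including rotating the short-lived certificates, so application code never touches TLS configuration.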

Challenges and Best Practices

While service meshes offer numerous benefits, they also introduce complexity and potential performance overhead.

Performance Considerations

The additional network hops introduced by sidecar proxies can increase latency. It's crucial to benchmark your application with and without the service mesh to understand the performance impact.

Best Practice: Start with a subset of your services in the mesh and gradually expand as you become more comfortable with the technology and its impact on your system.

Complexity Management

Service meshes add another layer to your infrastructure, which can increase operational complexity.

Best Practice: Invest time in training your team on the mesh's concepts and tooling before expanding its footprint.

Monitoring and Troubleshooting

While service meshes provide extensive observability, the volume of data can be overwhelming.

Best Practice: Define clear Service Level Objectives (SLOs) and set up alerts based on these. Use distributed tracing to debug complex issues across services.

In conclusion, service meshes offer powerful capabilities for managing microservices architectures, but they require careful planning and implementation. By understanding the core concepts and following best practices, organizations can leverage service meshes to build more resilient, observable, and secure distributed systems.

Day 3-4: Python for Kubernetes

Python basics review (if needed)
Data Types

Python has several built-in data types:

  • Numeric: int, float, complex
  • Sequence: list, tuple, range
  • Text: str
  • Mapping: dict
  • Set: set, frozenset
  • Boolean: bool

Example:

Numeric Types

int (Integer)

age = 30
year = 2024
temperature = -5
x = 5

float (Floating-point)

pi = 3.14159
weight = 68.5
temperature = -2.8
y = 3.14

complex

z = 3 + 4j
w = complex(2, -3)

Sequence Types

list

fruits = ["apple", "banana", "cherry"]
numbers = [1, 2, 3, 4, 5]
mixed = [1, "two", 3.0, [4, 5]]

tuple

coordinates = (10, 20)
rgb = (255, 0, 128)
person = ("John", 30, "London")

range

numbers = range(5) # 0, 1, 2, 3, 4
even_numbers = range(0, 10, 2) # 0, 2, 4, 6, 8

Text Type

str (String)
name = "Alice"
message = 'Hello, World!'
multiline = """This is a
multiline string."""

Mapping Type

dict (Dictionary)

person = {"name": "Bob", "age": 25, "city": "Manchester"}
scores = {
    "Alice": 95,
    "Bob": 87,
    "Charlie": 92
}

Set Types

set

unique_numbers = {1, 2, 3, 4, 5}
fruits = {"apple", "banana", "cherry"}

frozenset

immutable_set = frozenset([1, 2, 3, 4, 5])

Boolean Type

bool

is_raining = True
has_licence = False
is_adult = age >= 18

Here are some examples of how these data types can be used in practice:

# Calculating area of a circle
radius = 5.0
area = pi * radius**2
print(f"The area of the circle is {area:.2f} square units")

# Working with lists
fruits.append("orange")
print(f"The second fruit is {fruits[1]}")

# Using a dictionary
print(f"{person['name']} is {person['age']} years old and lives in {person['city']}")

# Set operations
a = {1, 2, 3, 4}
b = {3, 4, 5, 6}
print(f"Union: {a | b}")
print(f"Intersection: {a & b}")

# Boolean logic
if is_adult and not is_raining:
  print("Let's go for a walk!")

These examples demonstrate the basic usage of each data type. Remember that Python is dynamically typed, meaning you don't need to declare the type of a variable explicitly. The interpreter infers the type based on the value assigned to it.
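A quick demonstration of dynamic typing: the same name can be rebound to values of different types, but types are still enforced at runtime.

```python
x = 42
print(type(x).__name__)  # int

x = "forty-two"          # the same name rebound to a different type
print(type(x).__name__)  # str

# Dynamic does not mean weakly typed: mixing types still fails at runtime
try:
    "2" + 2
except TypeError as e:
    print("TypeError:", e)
```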

Control Structures

If-else statements:

if x > 0:
  print("Positive")
elif x < 0:
  print("Negative")
else:
  print("Zero")

For loops:

for i in range(5):
  print(i)

While loops:

count = 0
while count < 5:
  print(count)
  count += 1

Functions

def greet(name):
  return f"Hello, {name}!"

message = greet("Alice")
print(message)

Classes

class Dog:
  def __init__(self, name):
    self.name = name

  def bark(self):
    return f"{self.name} says Woof!"

my_dog = Dog("Buddy")
print(my_dog.bark())
Python Package Management

pip

pip is the standard package manager for Python. It allows you to install and manage additional packages that are not part of the Python standard library.

Installing a package:

python3 -m pip install requests

Upgrading a package:

python3 -m pip install --upgrade requests

Python Virtual Environments

Virtual environments are isolated Python environments that allow you to install packages for specific projects without affecting your system-wide Python installation.

Creating a virtual environment:

python3 -m venv .venv

Here's a breakdown of what each part of the command does:

  • python3: This specifies that you are using Python 3 to execute the command. It ensures that the virtual environment is created using Python 3.
  • -m venv: The -m flag tells Python to run a module as a script. In this case, it runs the venv module, which is included in the standard library from Python 3.3 onwards, for creating virtual environments.
  • .venv: This is the name of the directory where the virtual environment will be created. The dot (.) at the beginning makes it a hidden directory on Unix-like systems, which is a common convention to keep your project directory tidy.
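From inside Python you can check whether a virtual environment is active: in a venv, sys.prefix points at the environment while sys.base_prefix still points at the system installation.

```python
import sys

def in_virtualenv():
    """True when running inside a venv created by the venv module."""
    return sys.prefix != sys.base_prefix

print(f"prefix:      {sys.prefix}")
print(f"base_prefix: {sys.base_prefix}")
print("virtualenv active:", in_virtualenv())
```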
Activating a virtual environment:

On Unix or macOS:

source .venv/bin/activate

On Windows: .venv\Scripts\activate

Installing packages in a virtual environment:

Once activated, you can use pip to install packages, and they will be isolated to this environment.
pip install requests

Deactivating a virtual environment:

deactivate

Creating a requirements file:

To share your project's dependencies, you can create a requirements.txt file:
pip freeze > requirements.txt

Installing from a requirements file:

pip install -r requirements.txt

Remember, it's good practice to use a virtual environment for each of your Python projects to avoid conflicts between package versions. As a next step, explore pyenv for managing multiple Python versions.

Kubernetes Python client library
  • Installation: pip install kubernetes
    The client handles authentication and configuration, and lets you create, read, update, and delete Kubernetes resources.
Simple Python scripts for Kubernetes interaction

Common tasks include:

  • Listing pods in a namespace
  • Creating and managing deployments
  • Watching for changes in resources

Example script to list pods:

Create a virtual env

  • python3 -m venv .venv
  • source .venv/bin/activate
  • pip install kubernetes
  • Create testscript.py
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_pod_for_all_namespaces(watch=False)
for pod in pods.items:
  print(f"{pod.metadata.namespace}\t{pod.metadata.name}")

python3 testscript.py

If running Minikube, the output may look like this:

default debug-env
default webapp-6988595754-qnkqp
default webapp-6d989cd746-8wgzs
default webapp-cf544bc7c-24zpb
kube-system coredns-7db6d8ff4d-t46mv
kube-system etcd-minikube
kube-system kube-apiserver-minikube
kube-system kube-controller-manager-minikube
kube-system kube-proxy-jkgd5
kube-system kube-scheduler-minikube
kube-system storage-provisioner

You now have the basics to interact with a Kubernetes cluster via Python.
Link: https://github.com/kubernetes-client/python

Day 5: Helm Basics

Helm's Purpose and Architecture

Helm is a package manager for Kubernetes that simplifies the deployment and management of applications. It allows you to define, install, and upgrade even the most complex Kubernetes applications.
A good video tutorial: https://youtu.be/-Bq2BVdzydc

Key Components:

  1. Helm Client: The command-line tool used to create, package, and manage charts.
  2. Charts: Packages of pre-configured Kubernetes resources.
  3. Releases: Instances of a chart running in a Kubernetes cluster.
Creating a Helm Chart and Examining Its Structure

Let's create a chart and examine its structure:
You will need to have Helm installed.

helm create mychart
cd mychart

The chart structure:

mychart/
  Chart.yaml       # Metadata about the chart
  values.yaml      # Default configuration values
  charts/          # Directory for chart dependencies
  templates/       # Directory for template files
    deployment.yaml
    service.yaml
    ingress.yaml
    _helpers.tpl   # Template helpers
  .helmignore      # Patterns to ignore when packaging

Chart.yaml Example:

apiVersion: v2
name: mychart
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"

values.yaml Example:

replicaCount: 1

image:
  repository: nginx
  pullPolicy: IfNotPresent
  tag: ""
service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
Deploying Applications with Helm

To install a chart:
helm install myrelease ./mychart

To customize values during installation:
helm install myrelease ./mychart --set service.type=LoadBalancer

Or using a custom values file:
helm install myrelease ./mychart -f custom-values.yaml

Advanced Helm Concepts
Hooks

Hooks allow you to intervene at certain points in a release's lifecycle. Here's an example of a pre-install hook:

apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-pre-install-job"
  annotations:
    "helm.sh/hook": pre-install
spec:
  template:
    spec:
      containers:
      - name: pre-install-job
        image: busybox
        command: ['sh', '-c', 'echo Pre-install job running']
      restartPolicy: Never
Dependencies

You can define dependencies in the Chart.yaml file:

dependencies:
  - name: apache
    version: 1.2.3
    repository: https://charts.bitnami.com/bitnami

Then, update dependencies:

helm dependency update

Templating

Helm uses Go templates. Here's an example of a template using conditionals and loops:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-deployment
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          ports:
            - containerPort: 80
          {{- if .Values.env }}
          env:
            {{- range .Values.env }}
            - name: {{ .name }}
              value: {{ .value | quote }}
            {{- end }}
          {{- end }}
Creating Helm Charts with Python Templates

While Helm natively uses Go templates, you can use Python to generate Helm charts dynamically.

Using Jinja2 for Templating

Here's an example of using Jinja2 to generate a Kubernetes manifest:

from jinja2 import Template

template = Template("""
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ name }}-deployment
spec:
  replicas: {{ replicas }}
  selector:
    matchLabels:
      app: {{ name }}
  template:
    metadata:
      labels:
        app: {{ name }}
    spec:
      containers:
        - name: {{ name }}
          image: {{ image }}
          ports:
            - containerPort: {{ port }}
""")

rendered = template.render(
    name="myapp",
    replicas=3,
    image="nginx:latest",
    port=80,
)

print(rendered)
Generating Kubernetes Manifests Dynamically

You can use Python to read configuration from various sources and generate Helm charts:

import yaml
from jinja2 import Template

def generate_chart(config):
    # Load templates
    deployment_template = Template(open('templates/deployment.yaml').read())
    service_template = Template(open('templates/service.yaml').read())

    # Render templates
    deployment = deployment_template.render(config)
    service = service_template.render(config)

    # Combine rendered templates
    chart = f"{deployment}\n---\n{service}"
    
    return chart

# Read configuration
with open('app_config.yaml', 'r') as f:
    config = yaml.safe_load(f)

# Generate chart
chart = generate_chart(config)

# Write chart to file
with open('generated_chart.yaml', 'w') as f:
    f.write(chart)
Integrating with CI/CD Pipelines

You can incorporate this Python-based chart generation into your CI/CD pipeline:

# Example GitLab CI job
generate_helm_chart:
    stage: build
    script:
        - pip install pyyaml jinja2
        - python generate_chart.py
    artifacts:
        paths:
            - generated_chart.yaml

This job would generate the Helm chart as part of your CI/CD process, allowing for dynamic chart creation based on your application's needs.These examples demonstrate how to create more complex Helm charts, use advanced features, and even integrate Python for dynamic chart generation.

Week 3: Istio Deep Dive

Day 1: Istio Basics

Installing Istio on your Kubernetes cluster
Download Istio

https://istio.io/latest/docs/setup/getting-started/#download
Mac users can install it with Homebrew: brew install istioctl

Install Istio

Istio provides a demo profile for testing and learning:

  • It installs more components than the default profile, including:
    • Istiod (the Istio control plane)
    • Ingress gateway
    • Egress gateway
  • It enables a set of features that are suitable for demonstrating Istio's capabilities.
  • It has higher resource requirements than the minimal or default profiles.
  • It's not recommended for production use due to its expanded feature set and resource usage.

istioctl install --set profile=demo -y

Enable automatic sidecar injection

kubectl label namespace default istio-injection=enabled

Istio's architecture and core components
Control Plane

istiod: Combines Pilot, Citadel, and Galley into a single binary

Pilot

Pilot is a crucial module within Istiod that focuses on service discovery and traffic management. It is responsible for:

  • Service Discovery: Registers services and manages their information, such as versions, IP addresses, and ports.
  • Traffic Management: Directs traffic to different service versions or instances based on defined rules.
  • Routing and Load Balancing: Routes traffic according to rules and balances load across services.

Pilot interacts with the data plane by configuring service proxies (like Envoy) to manage ingress and egress traffic effectively.

Citadel

Citadel is another component integrated into Istiod, primarily handling security aspects. It manages:

  • Certificate Management: Provides certificate-based authentication and authorization.
  • Security Policies: Enforces security policies based on service identity.

Galley

Galley was responsible for configuration management in Istio. It handled:

  • Configuration Verification and Distribution: Ensured the validity of configuration rules and distributed them to other Istio components.
  • Configuration Storage: Maintained properties and configuration information for Istio components.
Data Plane

Envoy proxy: Sidecar container deployed alongside each service

Addons
  • Prometheus: An open-source system for metrics collection and monitoring, storing data as time series with flexible querying capabilities.
  • Grafana: A platform for metrics visualization, providing a variety of visual representations to analyse time-series data from sources like Prometheus.
  • Jaeger or Zipkin: Tools for distributed tracing that help monitor and troubleshoot microservices by collecting and analysing trace data.
  • Kiali: A service mesh observability tool that visualizes the structure and health of an Istio service mesh, aiding in monitoring and troubleshooting.

Day 2: Istio Traffic Management

Exploring Istio's traffic management features

Virtual Services: Define routing rules for traffic

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 75
        - destination:
            host: reviews
            subset: v2
          weight: 25

This configuration defines a VirtualService for managing HTTP traffic routing to different versions (subsets) of the reviews service. It splits traffic between two subsets, v1 and v2, with 75% going to v1 and 25% going to v2.

Destination Rules: Define policies that apply after routing

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-destination
spec:
  host: reviews
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

This configuration defines a DestinationRule for the reviews service, specifying two subsets, v1 and v2. Each subset is identified by labels that correspond to versions of the service. These subsets are referenced in the Istio configuration of the VirtualService, to route traffic to specific versions of a service. This is useful for scenarios like canary deployments or A/B testing.

Gateways: Manage inbound and outbound traffic for the mesh

Implementing canary deployments and A/B testing
  1. Use VirtualService (as above) to split traffic between versions
  2. Gradually adjust weights to increase traffic to new version
  3. Monitor metrics to ensure new version performs as expected
Istio's load balancing and circuit breaking capabilities

Load Balancing: Configure in DestinationRule

spec:
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN

Circuit Breaking: Define in DestinationRule

spec:
  trafficPolicy:
    outlierDetection:
      consecutiveErrors: 5
      interval: 5s
      baseEjectionTime: 30s
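The outlierDetection fields map naturally onto a small state machine: eject a host after N consecutive errors, then readmit it after the ejection time. Here is a toy Python sketch of that idea (not Envoy's actual algorithm, which also scales ejection time by how often a host has been ejected):

```python
import time

class CircuitBreaker:
    """Toy outlier-detection model mirroring consecutiveErrors / baseEjectionTime."""

    def __init__(self, consecutive_errors=5, base_ejection_time=30.0):
        self.threshold = consecutive_errors
        self.ejection_time = base_ejection_time
        self.errors = 0
        self.ejected_until = 0.0

    def allow_request(self, now=None):
        now = time.monotonic() if now is None else now
        return now >= self.ejected_until

    def record(self, success, now=None):
        now = time.monotonic() if now is None else now
        if success:
            self.errors = 0  # any success resets the consecutive-error count
        else:
            self.errors += 1
            if self.errors >= self.threshold:
                self.ejected_until = now + self.ejection_time
                self.errors = 0

cb = CircuitBreaker(consecutive_errors=5, base_ejection_time=30.0)
for _ in range(5):
    cb.record(success=False, now=0.0)
print(cb.allow_request(now=1.0))   # False: host is ejected
print(cb.allow_request(now=31.0))  # True: ejection period elapsed
```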

Day 3: Istio Security and Observability

Istio's security features
mTLS (Mutual TLS)
  • Enable cluster-wide: kubectl apply -f istio-1.x.x/samples/security/strict-mtls.yaml
  • Verify: istioctl x authz check <pod-name>
Authorization Policies
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-read
spec:
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET"]
Exploring Istio's observability stack
Prometheus
  • Access dashboard: istioctl dashboard prometheus
  • Query metrics using PromQL
Grafana
  • Access dashboard: istioctl dashboard grafana
  • Explore pre-configured Istio dashboards
Kiali
  • Access dashboard: istioctl dashboard kiali
  • Visualize service mesh topology and health
Jaeger/Zipkin
  • Access Jaeger UI: istioctl dashboard jaeger
  • Analyze distributed traces

Day 4-5: Deploying a Sample Application with Istio

Objective

Deploy a simple web application with Istio sidecar injection and implement basic traffic routing.

Prerequisites
  • Kubernetes cluster set up
  • Istio installed with demo profile
  • kubectl and istioctl configured
Enable Istio Sidecar Injection

First, let's enable Istio sidecar injection for the default namespace:
kubectl label namespace default istio-injection=enabled

(This can be verified with kubectl get namespace default --show-labels)

The command is used to enable automatic Istio sidecar injection for the default namespace in a Kubernetes cluster.

Key points about this command:

  1. Namespace-level control: By labeling a namespace, you're enabling Istio sidecar injection for all pods created in that namespace, unless overridden at the pod level.
  2. Automatic injection: When a namespace has this label, the Istio sidecar (Envoy proxy) will be automatically injected into all new pods deployed in that namespace.
  3. Existing workloads: This label only affects new pods. Existing workloads will need to be redeployed to get the sidecar injected.
  4. Override option: Even with this namespace-level setting, individual pods can opt out of injection using the sidecar.istio.io/inject: "false" annotation.
  5. Verification: After applying this label, you can verify it worked by deploying a new pod in the namespace and checking for the presence of the istio-proxy container.
  6. Reversibility: You can disable injection for the namespace by changing the label value to disabled or removing the label entirely.
Deploy a Sample Application

Create a file named sample-app.yaml with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
      - name: myapp
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 80

Or, to apply it in one step with a heredoc:

cat <<EOF | kubectl apply -f -
  <yaml>
EOF

Deploy the application:

kubectl apply -f sample-app.yaml

Verify the deployment:

kubectl get pods

You should see two containers per pod (app + istio-proxy), indicating successful sidecar injection.

e.g. kubectl describe pod/<pod name>

You will see something like

Events:
Type    Reason    Age    From               Message
----    ------    ----   ----               -------
Normal  Scheduled  5m    default-scheduler  Successfully assigned default/myapp-7d4cbc4c78-mhdmd to minikube
Normal  Pulled     5m    kubelet            Container image
"docker.io/istio/proxyv2:1.23.2" already present on machine
Normal  Created    5m    kubelet            Created container istio-init
Normal  Started    5m    kubelet            Started container istio-init
Normal  Pulling    5m    kubelet            Pulling image "nginx:1.14.2"
Normal  Pulled     4m54s kubelet            Successfully pulled image "nginx:1.14.2"
in 885ms (5.074s including waiting). Image size: 102757429 bytes.
Normal  Created    4m54s kubelet            Created container myapp
Normal  Started    4m54s kubelet            Started container myapp
Normal  Pulled     4m54s kubelet            Container image
"docker.io/istio/proxyv2:1.23.2" already present on machine
Normal  Created    4m54s kubelet            Created container istio-proxy
Normal  Started    4m54s kubelet            Started container istio-proxy
Create a Virtual Service

Create a file named virtual-service.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp-route
spec:
  hosts:
    - myapp
  http:
    - route:
        - destination:
            host: myapp
            subset: v1

Apply the Virtual Service:

kubectl apply -f virtual-service.yaml

View it with:
kubectl get virtualservices

A VirtualService in Istio is a custom resource definition (CRD) that allows you to configure how requests are routed to services within the Istio service mesh. It acts as a flexible and powerful tool for traffic management, enabling you to define routing rules that dictate how traffic should be directed to different service versions or destinations based on specified criteria.

Key Features of VirtualService

  • Traffic Routing.
  • Decoupling Requests and Destinations.
  • Advanced Traffic Management.
  • Integration with Other Istio Resources.
  • Internal and External Traffic Control.
Create a Destination Rule

Create a file named destination-rule.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp-destination
spec:
  host: myapp
  subsets:
  - name: v1
    labels:
      version: v1

Apply the Destination Rule:

kubectl apply -f destination-rule.yaml

Verify with:

kubectl get destinationrules

Test the Routing

To test the routing, we'll need to access the application. For simplicity, let's use port-forwarding:

kubectl port-forward service/myapp 8080:80

Now, in another terminal, you can access the application:

curl http://localhost:8080

You should see the nginx welcome page.
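If you want to script that check, here is a small Python sketch (illustrative only, not part of the lesson's tooling). The live fetch assumes the port-forward above is still running, so the demo below runs against a canned response instead:

```python
from urllib.request import urlopen  # only needed for the live check

def looks_like_nginx_welcome(html: str) -> bool:
    """Heuristic check for the default nginx landing page."""
    return "Welcome to nginx!" in html

# Live check (requires the kubectl port-forward from above):
# html = urlopen("http://localhost:8080").read().decode()

# Offline demo against a canned response:
sample = "<html><head><title>Welcome to nginx!</title></head></html>"
print(looks_like_nginx_welcome(sample))  # True
```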

Implement Canary Deployment

Let's update our application to version 2. Create a file named sample-app-v2.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v2
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
      - name: myapp
        image: nginx:1.16.0
        ports:
        - containerPort: 80

Deploy version 2:

kubectl apply -f sample-app-v2.yaml

Update the virtual-service.yaml to split traffic:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp-route
spec:
  hosts:
  - myapp
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 75
    - destination:
        host: myapp
        subset: v2
      weight: 25

Update the destination-rule.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp-destination
spec:
  host: myapp
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2

Apply the updated configurations:

kubectl apply -f virtual-service.yaml

kubectl apply -f destination-rule.yaml

Now, when you access the application, 75% of the traffic will go to v1 and 25% to v2.

Testing can be run with a loop. The default nginx body does not include the version, so check the Server response header instead:

for i in {1..200}; do curl -sI http://localhost:8080 | grep -i "^server"; sleep .5; done
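To see why roughly three quarters of those requests should land on v1, here is a quick Python sketch (purely illustrative, independent of Istio) that simulates weighted routing using the same 75/25 weights the VirtualService declares:

```python
import random

def pick_subset(weights, rng):
    """Pick a subset name according to VirtualService-style weights."""
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

def simulate(weights, requests=10000, seed=42):
    """Send simulated requests and count how many hit each subset."""
    rng = random.Random(seed)
    counts = {name: 0 for name in weights}
    for _ in range(requests):
        counts[pick_subset(weights, rng)] += 1
    return counts

counts = simulate({"v1": 75, "v2": 25})
print(counts)  # roughly a 75/25 split across 10000 requests
```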

Observability

Apply Prometheus:

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/addons/prometheus.yaml

Apply Kiali:

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/addons/kiali.yaml

Access the dashboard:

istioctl dashboard kiali

Conclusion

In this lesson, we've deployed a sample application with Istio, implemented basic traffic routing, and set up a canary deployment. This demonstrates some of Istio's core traffic management capabilities. In a real-world scenario, you would monitor the performance of both versions and gradually adjust the traffic split until you're confident in the new version's performance. Remember to clean up your resources after the lesson:

kubectl delete -f sample-app.yaml
kubectl delete -f sample-app-v2.yaml
kubectl delete -f virtual-service.yaml
kubectl delete -f destination-rule.yaml

This lesson provides a practical introduction to Istio's traffic management features. For more advanced scenarios, you could explore features like fault injection, circuit breaking, and more complex routing rules.

If you are using Minikube, a simple minikube delete will remove all trace of the cluster.

Week 4: Linkerd and Practical Applications

Day 1: Linkerd Basics

Installing Linkerd on your Kubernetes cluster
Install CLI

https://linkerd.io/2.16/tasks/install/
On macOS you can also use Homebrew. Otherwise:

curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh

export PATH=$PATH:$HOME/.linkerd2/bin

linkerd version

Alternatively, you can download the binary directly from the Linkerd releases page.

Install Linkerd on Your Minikube Cluster

Validate the cluster first:

linkerd check --pre

Install the CRDs, then the control plane:

linkerd install --crds | kubectl apply -f -

linkerd install --set proxyInit.runAsRoot=true | kubectl apply -f -

Install viz

linkerd viz install \| kubectl apply -f -

linkerd viz check

linkerd viz dashboard

Linkerd's architecture and core components
Control Plane
  • controller: Manages and configures proxy instances
  • destination: Service discovery and load balancing
  • identity: Certificate management for mTLS
Data Plane

linkerd-proxy: Ultra-lightweight proxy (written in Rust)

Add-ons
  • Grafana: Metrics visualization
  • Prometheus: Metrics collection
Linkerd Features
Traffic management capabilities

Traffic Split:

apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-split
spec:
  service: web-svc
  backends:
  - service: web-v1
    weight: 500m
  - service: web-v2
    weight: 500m

Retries and Timeouts: Configured via annotations

Linkerd's observability and security features
  • Automatic mTLS:
    Enabled by default for all meshed services.
  • Metrics:
    Access via CLI or Grafana dashboards

linkerd viz stat deployment

  • Live Traffic View:

linkerd viz top

  • Traffic Inspection:

linkerd tap deployment/your-deployment

Day 2-4: Hands-on Exercise

Deploying and Managing emojivoto with Linkerd

A companion walkthrough, "Minikube Linkerd", appears later in this post.

Deploy the emojivoto sample application

curl -sL https://run.linkerd.io/emojivoto.yml | kubectl apply -f -

This command downloads the emojivoto application manifest and applies it to your Kubernetes cluster. Verify the deployment:

kubectl get pods -n emojivoto

Inject Linkerd into the application

kubectl get -n emojivoto deploy -o yaml | linkerd inject - | kubectl apply -f -

This command retrieves all deployments in the emojivoto namespace, injects the Linkerd sidecar, and reapplies the configuration. Verify the injection:

kubectl get pods -n emojivoto

You should now see two containers per pod (the application container and the Linkerd proxy).

Observe traffic
Install smi

helm repo add linkerd-smi https://linkerd.github.io/linkerd-smi

helm install smi linkerd-smi/linkerd-smi

The Service Mesh Interface (SMI) is a standard specification for service meshes on Kubernetes, providing a set of common APIs to enable interoperability between different service mesh implementations, allowing users to manage microservices communication without being tied to a specific provider.

linkerd viz stat -n emojivoto deploy

This command shows real-time metrics for your deployments, including success rate, requests per second, and latency.

Visualize the service mesh

linkerd viz dashboard

This opens the Linkerd dashboard in your default browser. Explore the various sections to see detailed metrics, topology, and live calls.

In a separate terminal, create a port forward:

kubectl -n emojivoto port-forward svc/web-svc 8080:80

Generate traffic:

for i in {1..20000}; do curl -s http://localhost:8080 > /dev/null; done

Implement a traffic split for canary deployment

First, let's create a new version of the voting service:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-v2
  namespace: emojivoto
spec:
  replicas: 1
  selector:
    matchLabels:
      app: voting-svc
      version: v2
  template:
    metadata:
      labels:
        app: voting-svc
        version: v2
    spec:
      containers:
        - name: voting-svc
          image: buoyantio/emojivoto-voting-svc:v11
          env:
            - name: GRPC_PORT
              value: "8080"
          ports:
            - containerPort: 8080
EOF

Or clone an existing deployment (note this clones the web service; GNU sed syntax, on macOS use sed -i ''):

(kubectl get deployments web -n emojivoto -o yaml > web-deployment.yaml ; sed -i 's/name: web/name: web-v2/' web-deployment.yaml ; sed -i 's/image: emojivoto-web:v1/image: emojivoto-web:v2/' web-deployment.yaml ; kubectl apply -f web-deployment.yaml ; rm web-deployment.yaml)

Now, create a TrafficSplit to gradually shift traffic:

cat <<EOF | kubectl apply -f -
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: voting-split
  namespace: emojivoto
spec:
  service: voting-svc
  backends:
    - service: voting
      weight: 900
    - service: voting-v2
      weight: 100
EOF

This configuration sends 90% of traffic to the original version and 10% to the new version.

Inject Linkerd into the new deployment:

kubectl get -n emojivoto deploy -o yaml | linkerd inject - | kubectl apply -f -

Observe the traffic split

linkerd viz stat -n emojivoto deploy voting voting-v2

You should see traffic being split between the two versions according to the weights specified in the TrafficSplit resource.

Gradually increase traffic to the new version

As you gain confidence in the new version, you can update the TrafficSplit to increase traffic to v2:

cat <<EOF | kubectl apply -f -
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: voting-split
  namespace: emojivoto
spec:
  service: voting-svc
  backends:
    - service: voting
      weight: 500
    - service: voting-v2
      weight: 500
EOF

This updates the split to 50/50 between the two versions.

Monitor the canary deployment

Use the Linkerd dashboard or CLI to monitor the performance of both versions:

linkerd viz stat -n emojivoto deploy voting voting-v2

Keep an eye on success rates, latency, and request volumes to ensure the new version is performing as expected.
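If you want to automate that check, here is a hedged Python sketch that parses linkerd viz stat output and flags any deployment whose success rate drops below a threshold. The column layout (a NAME column first and a success-rate column ending in "%") is an assumption; verify it against the output of your Linkerd version:

```python
def parse_success_rates(stat_output):
    """Parse deployment success rates from linkerd viz stat style output.

    Assumes the first column is the deployment name and the success rate
    is the first whitespace-separated field ending in '%', e.g. '98.50%'.
    """
    rates = {}
    for line in stat_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        name = fields[0]
        pct = next(f for f in fields if f.endswith("%"))
        rates[name] = float(pct.rstrip("%"))
    return rates

def unhealthy(rates, threshold=95.0):
    """Return deployments whose success rate is below the threshold."""
    return [name for name, rate in rates.items() if rate < threshold]

# Demo against a canned (hypothetical) stat snapshot:
sample = """NAME       MESHED   SUCCESS   RPS      LATENCY_P50
voting     1/1      98.50%    2.0rps   1ms
voting-v2  1/1      71.40%    0.5rps   2ms"""
rates = parse_success_rates(sample)
print(unhealthy(rates))  # ['voting-v2']
```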

(In dashboard services → voting-svc will show the split and successes)

Conclusion

In this hands-on exercise, you've:

  1. Deployed the emojivoto sample application
  2. Injected Linkerd into the application
  3. Observed traffic using Linkerd's CLI and dashboard
  4. Implemented a canary deployment using TrafficSplit
  5. Monitored the performance of both versions during the canary rollout

This exercise demonstrates Linkerd's key features for traffic management and observability, providing a practical introduction to service mesh concepts and canary deployments.

Day 5: Service Mesh Comparison

Comparing Istio, Linkerd, and other service mesh solutions
Istio
  • Pros: Feature-rich, powerful traffic management
  • Cons: Complex, resource-intensive
Linkerd
  • Pros: Lightweight, simple, fast
  • Cons: Fewer advanced features
Consul Connect
  • Pros: Integrates well with HashiCorp ecosystem
  • Cons: Less mature as a full service mesh
NGINX Service Mesh
  • Pros: Builds on familiar NGINX technology
  • Cons: Relatively new, smaller community
When to choose one service mesh over another
  • Choose Istio for complex, feature-rich requirements
  • Choose Linkerd for simplicity and performance
  • Consider Consul Connect if already using HashiCorp tools
  • NGINX Service Mesh if familiar with NGINX and need basic mesh features

Week 5: Practical Project

Designing and implementing a microservices application
  1. Create 3-4 simple microservices (e.g., frontend, backend, database)
  2. Containerize each service with Docker
  3. Create Kubernetes manifests for each service
Deploying the application using Helm
  1. Create a Helm chart for the entire application
  2. Use subchart for each microservice
  3. Define configurable values in values.yaml
Implementing service mesh features
  1. Choose either Istio or Linkerd based on your preference
  2. Implement traffic routing between service versions
  3. Set up mTLS between services
  4. Configure observability (metrics, tracing)
Creating Python scripts for automation
  1. Script to deploy/update the Helm release
  2. Script to check service health and metrics
  3. Script to perform canary deployments
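As a starting point for the canary script, here is a minimal Python sketch. All names (voting-split, emojivoto, the backend services) are carried over from the earlier exercise and are otherwise assumptions; it generates a ramp of SMI TrafficSplit weights and renders a manifest for each step, with the actual kubectl apply left commented out since it needs a live cluster:

```python
def ramp_weights(steps=5, total=1000):
    """Yield (stable, canary) weight pairs ramping the canary 0% -> 100%."""
    for i in range(steps + 1):
        canary = total * i // steps
        yield total - canary, canary

def render_trafficsplit(stable, canary, name="voting-split",
                        namespace="emojivoto", service="voting-svc"):
    """Render an SMI v1alpha2 TrafficSplit manifest with the given weights."""
    return f"""apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: {name}
  namespace: {namespace}
spec:
  service: {service}
  backends:
    - service: voting
      weight: {stable}
    - service: voting-v2
      weight: {canary}
"""

schedule = list(ramp_weights())
print(schedule)  # starts at (1000, 0), ends at (0, 1000)
# For each (stable, canary) step, pipe render_trafficsplit(...) to kubectl,
# e.g. subprocess.run(["kubectl", "apply", "-f", "-"], input=manifest, text=True),
# pausing between steps to check success rates before ramping further.
```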

This comprehensive deep dive covers the entire 5-week training plan, providing a solid foundation in Kubernetes, service mesh technologies, and related tools. Remember to practice hands-on with each concept and refer to official documentation for the most up-to-date information.

Additional Resources and Best Practices

  • Throughout the training, refer to official documentation for each technology
  • Join community forums or discussion groups for each technology
  • Consider working on a personal project that incorporates all these technologies
  • Explore real-world use cases and examples
  • Practice hands-on exercises daily

Tips for Successful Service Mesh Adoption

  1. Start your service mesh journey early to allow your knowledge to grow organically as your microservices landscape evolves.
  2. Avoid common design and implementation pitfalls by thoroughly understanding each technology.
  3. Leverage your service mesh as the mission control of your multi-cloud microservices landscape.
  4. Consider starting with a sample project to evaluate which service mesh solution you prefer before standardizing across all services.
  5. Use service mesh as a ‘bridge’ while decomposing monolithic applications into microservices.
  6. Implement service mesh incrementally, starting with the components you need most.

By following this training plan, you'll gain a solid foundation in service mesh concepts, Kubernetes, Helm, and Python, with practical experience in both Istio and Linkerd. Remember to adapt the pace and depth of each topic based on your prior knowledge and learning speed.

Tools

k9s : https://enix.io/en/blog/k9s/

jq : https://jqlang.github.io/jq/

kubectl : https://kubernetes.io/docs/tasks/tools/

docker: https://docs.docker.com/engine/install/

Minikube Linkerd

This is a tutorial start

Geeky Blinder 2024-10-11

Here is part of a tutorial I wrote on using Linkerd, together with a 5-week DevOps training plan.

Contents

Minikube
Install Linkerd
Linkerd on Your Minikube Cluster
Linkerd Features
- Dashboard
- Traffic Management
- Security
- Monitoring
- Debugging (Tap, Top)
Cleaning Up

Prerequisites

Set Up Minikube

Start Minikube

minikube start

Verify Minikube Status

minikube status

Deploy a Sample Application

Deploy Emojivoto

To see Linkerd in action, use the emojivoto application provided by Linkerd.
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/emojivoto.yml | kubectl apply -f -

Expose the pod

kubectl -n emojivoto port-forward svc/web-svc 8080:80

You should now be able to view the website at localhost:8080.

Verify the Application

kubectl get pods -n emojivoto

Install Linkerd

Install the Linkerd CLI

You can install the Linkerd CLI using the following command:
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh

export PATH=$PATH:$HOME/.linkerd2/bin

linkerd version

Alternatively, you can download the binary directly from the Linkerd releases page.

Pre-Installation Check

linkerd check --pre

Install Linkerd on Your Minikube Cluster

linkerd install --crds | kubectl apply -f -

linkerd install --set proxyInit.runAsRoot=true | kubectl apply -f -

Verify Linkerd Installation

linkerd check

Inject Linkerd Proxies

To enable Linkerd for your application, inject the Linkerd proxies into your pods.
kubectl get deployments -n emojivoto -o yaml | linkerd inject - | kubectl apply -f -

This command retrieves the deployments in YAML format, injects the Linkerd sidecar, and applies the modified configuration to the Kubernetes cluster (see the Kubernetes and Linkerd documentation).

The linkerd inject - step scans the manifest and injects a linkerd-proxy sidecar into each deployment, leaving the rest of the configuration untouched. The final kubectl apply -f - re-applies the emojivoto configuration to the cluster with the sidecars in place.

Explore Linkerd Features

Dashboard

For additional observability features, install the Linkerd Viz extension:

linkerd viz install | kubectl apply -f -

linkerd viz check

linkerd viz dashboard

This sets up the visualization tools, including Prometheus, and launches the Linkerd dashboard.

Traffic Management

Linkerd allows you to manage traffic between services. Here’s an example of how to split traffic between two versions of a service:

Using bash
# Create a new deployment for the v2 version of the web service
kubectl get deployments web -n emojivoto -o yaml > web-deployment.yaml ; sed -i 's/name: web/name: web-v2/' web-deployment.yaml ; sed -i 's/image: emojivoto-web:v1/image: emojivoto-web:v2/' web-deployment.yaml ; kubectl apply -f web-deployment.yaml ; rm web-deployment.yaml

# Inject Linkerd proxies into the new deployment
kubectl get deployments web-v2 -n emojivoto -o yaml | linkerd inject - | kubectl apply -f -

# Split traffic between v1 and v2

cat <<EOF | kubectl apply -f -
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: web-split
  namespace: emojivoto
spec:
  parentRefs:
    - name: web-svc
      kind: Service
      group: core
      port: 80
  rules:
    - backendRefs:
        - name: web
          port: 80
          weight: 50
        - name: web-v2
          port: 80
          weight: 50
EOF

This may look complicated, but it simply cats the manifest and pipes it to kubectl apply.

Security

Linkerd provides mTLS encryption out of the box. You can verify this by checking the Linkerd dashboard or using linkerd tap.

linkerd viz tap -n emojivoto deploy/web

This will start listening for traffic. If you click one of the emojis on the website you will see requests, and you will notice tls=true.

e.g.: rsp id=31:9 proxy=out src=10.244.0.60:59620 dst=10.244.0.58:8080 tls=true :status=200 latency=959µs
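A tap line like the one above is easy to check programmatically. Here is a small Python sketch (the key=value layout is taken from the sample line above; verify it against your Linkerd version) that confirms every response was served over mTLS:

```python
def parse_tap_line(line):
    """Parse key=value pairs from a linkerd viz tap output line."""
    return dict(part.split("=", 1) for part in line.split() if "=" in part)

def all_mtls(lines):
    """True if every tap line reports tls=true."""
    return all(parse_tap_line(l).get("tls") == "true" for l in lines)

# Demo against the sample tap line shown above:
sample = ["rsp id=31:9 proxy=out src=10.244.0.60:59620 "
          "dst=10.244.0.58:8080 tls=true :status=200 latency=959µs"]
print(all_mtls(sample))  # True
```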

Monitoring and Debugging

Linkerd Tap

We have just used this option linkerd viz tap to see the traffic flowing through your services in real-time.
linkerd viz tap -n emojivoto deploy/web

Linkerd Top

Use linkerd top to see the top-level metrics for your services.
linkerd viz top -n emojivoto deploy/web
This is a great way to watch traffic and spot issues. Here I selected VoteStuckOutTongueWinkingEye, VoteDoughnut and VoteSunglasses.

You will also see that the doughnut vote was not successful (it is intentionally broken in emojivoto).

Cleaning Up

When you’re done, clean up the resources created:

# Delete the emojivoto application
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/emojivoto.yml | kubectl delete -f -

# Uninstall Linkerd
linkerd viz uninstall | kubectl delete -f -
linkerd uninstall | kubectl delete -f -

Troubleshooting Common Errors

ERROR: "no objects passed to apply": this usually means the Linkerd CRDs were not installed first.

SOLUTION: linkerd install --crds | kubectl apply -f -
Then proceed with the control plane installation.

Additional Resources

  • Linkerd Documentation: the official Linkerd documentation is a comprehensive resource.
  • Linkerd Tutorials: hands-on guides for various scenarios.
  • Minikube Documentation: helps you manage your local Kubernetes cluster.
  • 5-Week Training Plan: Service Mesh, Kubernetes, and Related Technologies.

All Change!

I've changed direction, kinda

Geeky Blinder 2024-10-11

Well, it's been a while.

Things have changed, and I hope for the better. I’ve taken a step in a different direction. Don’t get me wrong, I still wish to pursue a career in security, but I had to accept that the offers were not going to just come to me without experience or another avenue.

For background, I have worked in a number of start-ups, and they are a different environment to work in. They are fickle, risky, challenging but offer close-knit teams and a sense of adventure you don’t get elsewhere. Joining early will give an opportunity to grow, but you have to be comfortable with change. You need to be happy to fill any space that is required. If you are a dev and that is all you do, then a start-up may not be the place for you. When the server breaks, if you aren’t prepared to roll up your sleeves and get your hands dirty, you’re not right for it. This can be the attractiveness of a start-up: not being a one-trick pony.

Well, I was in a start-up, but it had grown and was no longer a real start-up. I won't go deeply into it, but it had lost its way in many places. Successful? Yeah. It was almost making money and was on the path to greatness, but this was at the expense of the people. It may catch this and try to turn it around, but it had made many of the mistakes start-ups make and was way too top-heavy and no longer nimble. There was a push-down mentality, and a lot of unhappiness and anger. After threats of leaving, union action and lots of letters, emails, meetings and surveys (oh, the constant surveys, so skewed to get the answers they want, or stats that are skewed), they were trying to change, but in completely the wrong way. Just to cover the type of skew they used in an update:

  • Latest “We have a happiness rating of 45%” : that is as good as the rest of the industry and so we are happy.
  • 1yr earlier “We have a happiness rating of 85%” : We are very proud with this and lead the industry.

A trick employed (an old one) is to have Q1 just before this, to be:

  • Highlight a recent moment that has made you happy at work
    then
  • Q2: On a scale of 1 to 4, how would you describe your overall sense of happiness?
    • 1: Unhappy - I never experience joy and satisfaction in my daily life.
    • 2: Happy - I generally feel positive and appreciate the good moments, even when faced with challenges.
    • 3: Slightly Happy - I often find reasons to smile and enjoy life’s little pleasures, contributing to my overall well-being.
    • 4: Very Happy - I experience a deep sense of joy and fulfillment, embracing each day with enthusiasm and positivity.

Appraisal: you are scored out of 4:

  • 1 = Below expectation
  • 2 = Meeting or exceeding expectation
  • 3 = Working well beyond expectation
  • 4 = You lead the way.

This was so skewed that only those scoring a 4 were considered for pay rises or promotion, and there was a cap on the number of 4s, promotions and raises that could be given.

Everything became about being as good as others in the industry; there used to be a desire to be the best, to lead the way and exceed. To many, this shift to being merely as good as the industry, after so long leading it, was a failure and a degradation of the values and drive.

They made changes and did new stuff, but then blamed the workforce for not joining in; people were sick of the place and the people. Imagine telling your partner that they never take you out, and they then send YOU a ticket to the cinema; it's not the same. So when a company has lost its care for the people, to then offer opportunities to spend time with the execs, yet time-restricted and scheduled, it's not the same as hanging out together. Just a huge disconnect: the heart was gone. It needed a change of leadership, genuine honesty and care, and to regain trust.

So, as I say this in the past tense, I have left there. I was exceedingly loyal, and gave chance after chance for the promises I was given to come to fruition, but all were, it appears, lies. There were too many career managers riding a gravy train. If you are unsure what I mean, these are people that manage for the money and are only interested in career progression, getting to the next level and pay increase. They are not interested in the company or the people. Generally, they have great CVs, as they have worked for many companies in good positions; they always appear to have been ready for the next challenge. This all falls on its arse when they have to really do the job. When they meet a company built on a foundation of honesty, they don't like being held to account. Some had been successful using a manner that was very different and counter to the company values, and this was never challenged. There were bullies that were not held to account but had another layer placed under them, and racists that were reported but never sacked, just moved (promoted :O). This caused a huge swathe of leavers who all gave their reasons for leaving, but no action was taken to stem the flood.

As I said, this isn't my first rodeo in a start-up, and this is a mistake start-ups make time and time again, until the IPO or takeover, and then it's a slow decline or just a corporate train ride/wreck. So: it's no longer nimble, I can't trust anyone, and there is no real career progression, so it was time to move. I want to work in security, and I don't have the experience, only a desire. I had a full-time role taking up all my time because I still cared about the company; I felt a lot of heart there.

I have moved to another start-up. Why? Well, because it is at the phase of real care and growth. Is it security? Well, no and maybe. It is at the point where everyone has every job, so it does include security and could grow into a full-time role. Even if it doesn't, I'm finding this a challenge and really enjoying it.
Now, I'm so much happier. I have space to learn, experience and grow.

This blog is going to have to change as I'm not concentrating on security. At present I'm concentrating on Kubernetes and the tools around it; I'm now in the GCP world and AI.
I'll get back to security, and I hope to start enjoying the learning as a career choice, as opposed to feeling it is a bind I need to chase to escape the hell hole I was in. Looking forward: if I take on the security role here, I will organically gain experience, and I'm happier than I've ever been. Even the imposter syndrome is edging away a bit.

Well, I'll hopefully be back to more regular updates, and they may cover other subjects, but it's all important. Understand K8s, Helm, Keda, Gitlab-ci: it is all stuff that needs securing, and understanding as much as possible matters. Just like a dev: learn one language well and you can turn your hand to others more easily.

Well, it's been months. How's it going?

Not well

Geeky Blinder 2024-04-07

My Training: Slow Progress, Steady Growth

Embarking into the world of cyber security is exciting and challenging. For someone like me it's daunting, and I can see the level of knowledge I don't have (no Dunning-Kruger here, matey).

My pursuit of training has been met with a reality check: progress is slow, but the baby steps I make are undeniably rewarding. Firstly, I delved into platforms like Hack The Box and TryHackMe, enticed by promises of hands-on experience and real-world simulations. The allure of solving challenges and honing my skills was irresistible. Yet, as I immersed myself in these virtual environments, I quickly realised that it's far from easy; as expected, but even with walkthroughs it's slow going. Trying to understand the processes and methods is really fun, but it's very slow, and I have this inbuilt feeling that I'm too slow and that time is running out to start the career, but I need real-world knowledge.

As said, it's fun and brings into use lots of bits from university that were covered in the cyber sec modules. The programming done at uni now needs pulling back from the rear of my mind to the present. All very exciting, and I feel each small step is one step nearer to mastery (still not confident). Some of the skills I'm picking up are making me look at things in a different manner, like viewing website input boxes as a place for manipulation and a weak point.

TryHackMe has been great; it has presented me with lots of machines, each with unique obstacles to overcome. Concepts that seemed straightforward on paper befuddle me in practice. In exploiting vulnerabilities, I found myself grappling with networking protocols, cryptography and other principles, with the complexity building as we go. Each step has needed patience and, ultimately, a walkthrough, but I can feel the confidence building as I look at attack surfaces with at least some skills to start looking for issues before tapping out. THM offers a guided approach, with structured learning paths and interactive tutorials, yet some of the boxes have had me scratching my head for hours. But each step and success brings a huge level of satisfaction that adds to my determination to continue.

I've also reached out to some online groups and events to find support. The cyber security genre is daunting and so far has pushed me away. I do feel there is a wish to be open, but there are a lot of people chasing money over passion, and that may be the issue. At events and in communities, those at the leading edge speak with such passion, yet when you walk away from them to the general workforce, many are just giving the industry a bad rap. Joining local meetups and online forums exposed me to a wealth of knowledge and expertise, but also highlighted the breadth of the field and the daunting task of keeping pace with its rapid evolution. Conversations with seasoned professionals served as both inspiration and a humbling reminder of how much I have yet to learn. The seasoned people are welcoming when you get to speak to them; further down, though, the Dunning-Kruger really kicks in: lack of knowledge, shouting louder to appear clever, and pushing away anyone that might find out, just as in other areas of the tech industry.

I'm slowly seeing each troubled step as a lesson, teaching me resilience and fortitude. I think I'm getting more comfortable with the slow pace of my journey, recognising that deep knowledge can't be rushed. As I continue my training, I am reminded of a quote by Bruce Lee: “I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times.” I hope it's not about how quickly I progress, but rather the depth of understanding and expertise I cultivate along the way.

So, to my fellow aspiring hackers facing similar struggles, I offer this advice: Embrace the journey, celebrate the victories, and learn from the defeats. Rome wasn’t built in a day, and neither is a master hacker. Slow progress is still progress, and with perseverance, we will reach our destination.

A person is not judged by how many times they are knocked down, but by how many times they get up to keep fighting. OSCP seems a long way away, but I will get there; I might just jump on the HTB cert first.


Hacking my way to OSCP!

Not another wanna be hacker!

Geeky Blinder 2024-01-16

Learning Hacking and Pursuing the OSCP Certification

So, another moron thinking they could be a HACKER. As someone passionate about digital security, I've decided to pursue a career and embark on an exciting journey into the world of ethical hacking. My ultimate goal? Attaining the Offensive Security Certified Professional (OSCP) certification. I'm going to write up my notes, so you can follow me on this adventure into cybersecurity, its challenges, and my pursuit of knowledge.

Introduction

My intentions are purely ethical (honest, guvnor). I covered cyber security at university, so this is the next step for me. I love digital forensics, started down the route of reverse engineering and played with some other areas, but as with most uni modules, they are quick and high level. It's now time to break away from my present area of tech and follow what I really wish to be doing. The genre is wide, as wide as any area of tech; I'd like to hit offensive/red team.

Why Learn Hacking?

Learning hacking isn't just about gaining unauthorized access to systems; it's about understanding how they work and how to secure them. With the increasing frequency and sophistication of cyber-attacks, ethical hackers play a pivotal role in safeguarding digital landscapes. I'd like to be part of that fight back, keep people safe and learn.

Getting Started

To kickstart my journey, I'm diving into the basics of networking, operating systems, and programming languages. Understanding the foundations is key to becoming a proficient ethical hacker. I have signed up to the TryHackMe website. It's not cheap, but I believe you do need to invest in and back yourself at times. I have possibly made a mistake, as all guidance seems to point at Hack The Box, but you work with what you have. Security wasn't part of the company I worked at when I started, so changing direction is hard, but I have made my intentions clear and have the company's backing to scratch my itch (as long as I keep doing my existing job).

Learning Resources

A couple of books I've been advised to look at:

Books:

  • “Hacking: The Art of Exploitation” by Jon Erickson
  • “Metasploit: The Penetration Tester’s Guide” by David Kennedy et al.

Online Platforms:

Utilise platforms like

  • Hack The Box
  • TryHackMe
  • OverTheWire

for hands-on practice.

Building Practical Skills

Theory is essential, but practical experience is where the real learning happens. I plan to immerse myself in real-world scenarios, honing my skills through simulated environments and challenges. As said, TryHackMe is my go-to as we speak. I have a life, so I have limited free time (I work to live). I really wish to just enjoy work (as best you can).

Preparing for OSCP

The Offensive Security Certified Professional (OSCP) is a respected certification in the cybersecurity field. To prepare, I’ll be dedicating time to THM and then the OSCP syllabus, engaging in labs, and working on vulnerable machines to develop the practical skills required for the exam.

Challenges and Rewards

Undoubtedly, this journey will be challenging. Hurdles will emerge, and problem-solving skills will be put to the test. Yet, the satisfaction of overcoming challenges and contributing to a safer digital world makes it all worthwhile.

Conclusion

As I document my progress, struggles, and victories, I will be smashing my notes on the THM pathways up here. I invite you to join me on this learning adventure. Whether you're a seasoned professional or a fellow enthusiast, your insights and support are invaluable. Together, let's explore the exciting and ever-evolving field of ethical hacking.

Stay tuned for updates, and let’s hack responsibly! 💻🔒

Hacking my notes on my way to OSCP!

Hacker Dumps!

Geeky Blinder 2024-01-16

Stay tuned for updates, and let’s hack responsibly! 💻🔒

The race to IPO

The tech startup: a poisoned and crazy journey

Geeky Blinder 2024-01-12

The Perils of Startup Life: A Personal Perspective

While the allure of going public may seem like the pinnacle of success, the reality often unveils a different story

My journey has led me through the treacherous landscape of startups aiming for the coveted Initial Public Offering (IPO). While the allure of going public may seem like the pinnacle of success, the reality often unveils a different story: a tale of trials and tribulations that can jeopardise the very core of a company, and the human toll is devastating.

The Initial Dream

In its infancy, a startup is a collective of passionate minds, driven by innovation and a shared vision to disrupt industries. Part of this dream is that you may witness the metamorphosis into a publicly-traded entity, reaping the collective rewards of tireless dedication. However, the path to an IPO is fraught with challenges, and the toll it takes on the individuals within the company is often overlooked.

The Funding Conundrum

Startups, in their pursuit of IPO, frequently find themselves entangled in a complex web of venture capitalists, angel investors, and institutional funding. The relentless pressure to meet valuation targets and attract investors can result in compromises that not only undermine the very ethos of the startup but also lead to the loss of valued team members along the way.

The Sacrifice of Innovation

As the focus shifts from innovation to meeting financial milestones, startups may inadvertently sacrifice the essence of what made them unique. The relentless pursuit of profit margins and shareholder value can stifle creativity, hindering the very innovation that propelled the company into the limelight.

The Shattered Togetherness

Startups are celebrated for their dynamic and close-knit cultures. However, the quest for an IPO strains these bonds, transforming the once-familial atmosphere into a rigid corporate structure. The loss of togetherness is palpable, leaving employees feeling detached from the company’s initial vision and each other.

Greed takes over and the vultures sail in. The people who created this unicorn are pushed aside for those who have travelled the IPO course before, those with a laser focus on the IPO goal. They have no values; they are not part of the company; they are here to benefit themselves only. They will not be here after the IPO, as they take their chunk from the work others have given and leave the shattered shell of this once-great place. Those that stayed endure the long, hard road, left to look at the remains and try to resuscitate them.

I know from experience it will never reach those lofty heights again. It will have the meat taken from the bones; I've been through this all before and remained to see the slow death. It may remain in name, it may shape-shift, but it will never be the peacock it once was.

I know of a few companies that have stayed steadfast to their values. They remain great places to be and generally stay private, or get bought and then ruined (unless an equally great place merges, which is rare).

Many will get rich financially, but they are rotten to the core and poor inside.

The Human Cost of Meeting Expectations

The run to an IPO initiates a Sisyphean race against time. Startups grapple with the demands of scaling operations, ensuring profitability, and complying with regulations. The pressure to meet the expectations of shareholders and analysts can lead to rushed decisions, resulting in the loss of both talented individuals and the core values that defined the company. There is the build-up of teams and the business, then the cash-out and lay-offs to colour the books. Lives are wrecked as the values disappear, greed takes over, and the once-trusted leaders turn their backs on the loyal teams. True colours show, and it is ugly. There isn't the building of a great company anymore, but an illusion to sell to a fickle market: smoke and mirrors, an illusion of profitability from a company that was really built with x number of people but now runs on a skeleton, an overworked, underpaid, hurt workforce that is not sustainable long term.

This is a tale that is old, but no one bucks the trend, cyclic behaviour and greed is what we have.

Conclusion

I've witnessed firsthand the human toll of the pursuit of an IPO and the shattered remains that exist after. The initial dream of creating something extraordinary often gives way to the erosion of togetherness, the loss of valued team members, and the compromise of the very values that defined the company. It's crucial for founders and stakeholders alike to navigate this journey with caution, ensuring that the very essence of the startup isn't lost amidst the chaos. You may get rich, but think: is this hollow person who you really set out to be? Is this the legacy you wish to have? Are these the values you would like applied to your children; would you treat them as such? When you are seeing resources instead of people, is this who you set out to be?

In the grand tapestry of startup life, the cost of an IPO should never be measured solely in financial terms. The true tragedy lies in the loss of the human connection, the disintegration of shared values, and the collective spirit that once defined the startup’s identity.

Be true to you, live a proud life.

Well it went wrong big time! NSFW

Well, if you don't work, you can't F*CK UP!

Geeky Blinder 2023-06-02

So, you’ve made a BIG mistake?

There are little mistakes made each day that go unnoticed but every now and again a bigger one is made. So what do you do?

Well, if you are that person who sits back and rides the coat-tails of others, pops a helpful line into Slack and walks away as an incident is in progress, and just points out what is only known with hindsight… fuck you.
These are the ones who like to get noticed for being helpful! You can't really say "fuck off, you're being an arse", as "what? I was just helping" is the reply, but we all know what you are up to.

I was asked on an FLT course a long time ago: "If you crash into a door, what do you do?". Everyone said you ensure the area is safe, ensure you park the vehicle in a safe place, exit the vehicle in a safe manner, ensure the damage is reported… blah blah blah…
The instructor said NO!!! You will quickly look around to see if anyone has seen you. It is human nature. It is no different in tech.

What you should do and what you do are different. You WILL try to fix it quickly. What you should do is inform everyone as soon as you are aware.

My advice in life has always been to tell the truth. People will respect you more, will trust you more, and it is so much easier to live, as you don't have to remember the lies.
What do you think your peers would prefer, though?

  • OMG, I F'ed up and I will need to fix it
    OR
  • I did F up, but I fixed it, so we are good.

I suspect the latter, BUT that all depends on time and pressure.
If the issue is seen, then questions will be asked, so immediate declaration is best; if your peers don't like being spooked for no reason, then a quick fix is best.

So what should you do

I go with a small time frame. If you F up big time, collect the issue and the possible solutions, and if possible rectify it, but if you can't, declare NOW.
I don't know whether that is the best solution, but that is what I'd prefer if I were the one getting the news:

  • I've made this issue
  • It is due to this
  • I've tried this
  • I believe this will fix the issue

But won't that damage my rep?

Yes
There you go. That is it.

Honestly, those that do not make mistakes are either liars or lazy (or have great PR).
If you are working hard and pushing the limits, things will happen. That is tech, and testing and trying new things is the way we get better.
It's all about how you handle it. If you have made this mistake three times, then maybe you should ask yourself a question or two, because you can be assured your boss will. BUT, if you have a logical reason why you took an approach, can show you did what you thought was right, and you stick around to fix the issue…
Well, you are a good engineer and one I'd be happy to work beside. Dust yourself off and learn from that lesson.

So who will (does) piss you off

Now I will add that, with the 20/20 vision that is hindsight, some of those safety blankets you made and some of your decisions may not, in the cold light of day, hold up to scrutiny, and may fold like an origami frog. BUT, you had them, and you have learned a lesson on how not to do it, and so have improved.

So why the rant in the first paragraph?
Well, there are plenty of people who enjoy others' misfortune. They are scum, but they are there. They are not helping, just looking for their own glory.
Imagine: you have created a huge script, spent days perfecting it, adding output to indicate errors, monitoring it, tweaking it, and then it runs on the live system… BOOM. It doesn't behave as expected, but due to your hard work the output identifies the issue, and the safety blanket you put in place means it can be remedied quickly, though it does affect a number of people temporarily. So, here we go.

Type 1:
"Hey, I've seen this error!!!", and they are happy to back off while it is fixed, even extending the offer of help privately if needed.
These people are great. Admittedly, their mood may change and they may become a little less patient if the issue drags on, yet they understand that if it does become large, it will not be helped by them getting gnarly.

Type 2:
"Hey, I've seen this error", and they start to tell you how inconvenient it is to them, how safeties should have been in place, how the rollback system should be more robust, and that they will need to escalate this up the next couple of levels…
Actually, they are right, and that will all be in the lessons learned and how we incrementally improve systems, BUT: fuck off.
It's not the right time. What are they playing at?
Are they helping? NO. Are they trying to express how hard they are working and how important they are? Again: fuck off.

Type 3:
"Oh, I see you have an issue. If you'd done x or y this wouldn't have happened; just offering this for the future."
Not now, dickhead, you are not helping and are sidetracking the fix, but yeah:
1: thanks.
2: You've got your name out there as knowing lots, albeit on an issue that has occurred, as opposed to one that hadn't yet occurred when we were working on it, which is where we came from.

Type 4:
"If you need help I'm here!!!" … you reach out and they are nowhere to be seen.
Well done, you got your name in there… jerk.

Type 5:
"Shit, I'd love to help, but I have a family emergency."
Odd that these only occur when there is an issue!!! But you got your name in the list for people to see…

Type 6:
"Hey @manager1, 2 and 3, the issue Geeky is having: I've found the cause and here is the solution."
Thanks, but why not be a team player and offer the help to those fixing the issue, as opposed to the managers?
Good managers know instinctively that you are a no-good arsehole, but you may fall lucky and find a sucker.
You are an arse and the scourge of any industry; nobody likes you nor trusts you. You know those close colleagues you have? They are keeping their enemies closer, that is all.

I can list lots more, but my advice to you is: privately offer help, and be patient. That's it; it's not hard.

Advice and moan over :D

I need a Gmail app on my desktop!

Ah, you can make one in a few clicks!

Geeky Blinder 2023-05-06

Need a Google app?

I was looking for a Gmail app and was astounded that there isn't one.
You can use email apps, but what if there was a simpler solution?

This is quick :P

Here we go

You are going to need to have Chrome installed, booooo!
I know it's not always the first choice, but it is handy to have a second browser so you can check issues.
So:

  • Open Chrome and go to your Google Mail (Gmail)
  • In the menu bar, find the three vertical dots (near your boat race, in the top right)
  • Click on the icon
  • Go to ‘More tools’ (changed by Google; now use ‘Save and Share’)
  • Click ‘Create Shortcut’
  • In the window that pops up:
    • Enter a name for your shortcut, and ensure that you tick the box for ‘Open as window’
    • Click ‘Create’

Locate the created Gmail icon, click it, and now you have a Gmail app :D On the Mac you can set it to open on login and stay in the Dock by right-clicking the icon and selecting the options.

There you go, quite a quick one.
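As an aside, the shortcut Chrome builds is essentially a wrapper around its app mode, which you can also trigger from the terminal with the `--app` flag. A minimal sketch, assuming the default macOS install path (on Linux the binary is usually just `google-chrome`):

```shell
# Launch Gmail in its own chromeless window via Chrome's app mode
# (default macOS install path assumed; adjust for your system)
"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --app="https://mail.google.com"
```

Same result, no clicking through menus, though you lose the Dock shortcut unless you wrap it yourself.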

Mental State

Yeah, everybody has an issue!

Geeky Blinder 2023-04-26

No-one said this blog was just tech!

Well, everybody is depressed, aren't they??

Is mental health a bandwagon that has become fashionable? It is a shame that it can be seen this way. Fed up, down in the dumps, a bit sad!! All are moods, but they are not depression.

Looking for a description (not mine):
Depression is a serious mental health condition, characterised by persistent feelings of sadness, hopelessness, and a loss or lack of interest in activities. Depression can be caused by many factors, including genetics, life events, and chemical imbalances in the brain.

Symptoms include:

  • Feeling persistently sad or empty
  • Loss of interest
  • Sleep disturbances, insomnia or oversleeping
  • Fatigue or lack of energy
  • Change of appetite
  • Change of weight
  • Difficulty concentrating, making decisions, or remembering things
  • Feelings of worthlessness, guilt, or helplessness
  • Thoughts of death or suicide
  • A feeling of ‘what’s the point’

Depression can be treated with a combination of therapy, medication, and lifestyle changes. Therapy, such as cognitive-behavioural therapy or talk therapy, can help individuals identify negative thought patterns and develop coping mechanisms. Medication, such as antidepressants, can help correct chemical imbalances in the brain. Lifestyle changes, such as exercise, healthy eating, and stress reduction, can also be helpful in managing depression.

If you or someone you know is struggling with depression, it is important to seek help from a healthcare professional. Depression is a treatable condition, and with the right support and resources, individuals can recover and live fulfilling lives.

A little like a star sign, we can all get it to fit

Those symptoms could fit anyone, couldn't they? Tired a lot? Yeah! Could be depression. Feel worthless? Could be depression… or, in the first case, it could be drinking a lot, overworking, or a bad mattress; in the second, your partner being a bad person, or grief for a recent loss. There are lots of reasons.

So, yeah, I have depression

I’ve heard this so many times.

  • “I suffer from depression!!!!”
  • “Oh, When were you diagnosed?”

when I hear

  • “Oh, I self diagnosed”

I'm like: what, I beg your pardon!!!!

  • “Why are you being an arse and so rude?”
  • “It’s my autism, I can’t help it”…
    Shut up; people who have autism aren’t arseholes!!! You’re just being a dick and looking for an excuse.

I’ve heard this with all the “trendy” terms/issues we have these days

  • Autism
  • Gender issues
  • Dyslexia
  • Asperger’s
  • the list goes on…..

Believe me, I have empathy for genuine cases, but everybody seems to be looking for an issue, an excuse for something.
If you genuinely have any of these issues or concerns, you need to see a professional.

So, I have been diagnosed, and it's not the first time. Here I will tell you my story (some parts are redacted for privacy etc., but I hope this will help if you are in a similar situation).

How did I find out

I found out when I was younger, in my early 20s.

As a child I used to be quite withdrawn and quiet. I won't cover it here, but my childhood kinda sucked; by the age of around 14, though, I was kinda happy with me. I'd got to a point where I enjoyed being nice.
I found I could smile at an old lady on the bus and this would make her smile. This made me happy, and being nice made me feel good.

I was still a quiet person and felt my life was pre-planned due to my social surroundings: it was just to be like all the others around the local area, and that was fine.
I took an apprentice job, and it went, in hindsight, as it generally does for a young, non-worldly-wise man: just OK, but I was happy.
I did a lot of growing between 18 and 20 (it felt like a long time then) and gained a lot of confidence. I realised I could take on anything, speak to anyone, etc.
Then it all started going wrong.

I've always been quite trusting and had taken a job where I made a good friend. We would spend dinners together, laugh a lot, talk about our partners, and advise each other, and we were going to grow in the workplace and in life together.
Promotion came and they turned. They lied about many things, and this got me in trouble at work and demoted. I was so hurt and didn't know how to take it.
I'd also lost my outside-of-work best friend, as they had found a partner who took a shine to me and made inappropriate asks of me. I told my friend of these comments, and somehow I was the bad one?

It was all going a bit shit; my life was folding in on itself. I felt I had no-one to reach out to, felt such a disappointment to my family; I had lost control.

I met a new person and we connected. This was a bad decision and I knew it. It wasn't the set-up I was meant to connect with, it wasn't the plan my mother had, but it felt good.
They were in a shitty space, I was in a shitty space. We could sit and talk for hours about anything.

I eventually told my mother, and this was the start of a whole collapse of my relationship with her; things were just getting worse.

I'm missing chunks out, like my partner's children hating me and being overlooked for promotions, but ultimately I went to the doctors to talk about a medical condition, and it turned out that I was suffering from what was referred to as a complete mental breakdown (this is not the term you can use now :D).
They advised me to take a break from everything and take some strong tranquillisers. I told my mother, and she said "Get a grip and don't tell anyone at work". It would have been career suicide to say this at work at that time, and so I got a grip, took a two-week holiday, threw the pills in the bin, painted on a smile, and never spoke of the demons again.

Honestly, I wanted to die. Yeah, weak, running away, coward's way out… these are some of the terms used and thoughts people have.
How easy would it be: no more pain, no more sadness, no more loneliness.

Why did I stay? Because my partner made me feel special and made me laugh, and because of one of my partner's children, a little girl who doted on me. She really loved me, I felt like I was her world, and we giggled a lot… I couldn't do it to her.

Now I can skip a lot, as it was grey. I was never the same person. I had to wrestle each day just to get through; I became a passenger in life, with those around me dictating what I did, how, where, etc. I had moments of the old me, where I would run with a little confidence, but it would last days at most. I became unkind to myself and numb to life.
I was scarred and it was never going away. So many times I thought about dying, but that would only help me, only take my ache away; when I love, I love deep, and I never wanted to hurt others.

So how do I describe the feeling of depression? Imagine you tied a bungee rope to your back and started running. It would be easy at first, and then… it gets tighter and harder. Eventually, boing, you get pulled back into the dark pit (depression).
That is how my life was, but it became a comfortable pain. It became an evil buddy. I felt that I wouldn't be me without it.

Now, we had a child. If you have children, you will know the feeling of "oh my god, I am the rock this thing leans on FOREVER".
I started to feel trapped in life, just working to keep my family. At times I had some happiness, but the monster of depression ruins every part of your life.

  • You have a kid? - what happens if I die?
  • You are happy? - that means I'm gonna be sad when this period ends
  • You do a good job? - that means this is the new baseline and so is always expected
    Every positive is a negative, and it is exhausting.

So, things fell to shit big time when the company I worked for got sold. I had done well and had FINALLY found a happy place. I was happy at home, I loved my job, loved my teammates, and the pay was good. See, every time it's good, it means my life is gonna be shit and go downhill.

I took a chance and went to uni… I smashed it out of the park, but guess what: imposter syndrome and depression are cruel.

The pressure was immense, but the time was great. I became something like the person I wanted to be, but by now, no one needed me.
The children are grown up; my wife was not interested. This is probably because things in life, and I, had changed, and without talking you ain't gonna know where people are mentally/emotionally.
Some of this sounds uncoordinated, but that is the mindset, like a ping-pong ball in the mind.

So what now? I got a job in a cutting-edge tech company. It is very different from my old life and made for a huge personal change. Anyway, these new workspaces are very different from my old space, and openness is a thing.
I found I am really good at advising others and that I was a good listener. I was speaking to a workmate one day, and they were telling me about their depression, then out of the blue he said "do you take medication for your depression?". OMG. I had never spoken of my issues, so I was taken aback, and that made a crack in the dam.
Times had changed and it was easier to be open, but old habits die hard and I couldn't tell people of my struggles. As far as I was concerned, it's embarrassing to be weak emotionally (my perception).

Oops

One day I was talking to a medical professional on behalf of my son and inquired about some mental health therapy that was advertised, and I had a brief moment of feeling happy to speak about being fed up with feeling low.
I immediately regretted it and became embarrassed. I had issues from my childhood, issues from my teens, issues from long after; all had been greyed out and hidden away, but I also really felt that there was no point in life.
The medical professional just happened to be a mental health therapist, taking time away from therapy (they take time off, as being a therapist can be quite taxing), and I was immediately assessed.
It's kinda lucky, as I would have backed out if I'd had the chance; being immediately assessed, I couldn't, and a week later I was in front of my therapist. Cognitive-behavioural therapy (CBT) was the chosen therapy.

What was involved?

Quick version: you don't get treated. You get taught to address thoughts in a different manner and are helped to treat yourself.

Longer version: CBT, or cognitive-behavioural therapy, is a type of talk therapy that is used to treat a variety of mental health conditions, including depression, anxiety, and PTSD.
CBT is based on the idea that our thoughts, feelings, and behaviours are interconnected, and that by changing our thoughts and behaviours, we can change the way we feel.

CBT is typically conducted in one-on-one sessions with a licensed therapist. During a CBT session, the therapist and client work together to identify negative thought patterns and behaviours that may be contributing to the client's mental health concerns. The therapist then teaches the client strategies to challenge and reframe negative thoughts, and to replace unhelpful behaviours with more positive ones.

For example, if someone with social anxiety is afraid to attend social events because they believe they will be judged by others, the therapist might use CBT to help them challenge this belief by asking them to think about times when they have attended social events and had positive experiences. The therapist might also teach the client relaxation techniques or social skills to help them feel more confident in social situations.

CBT typically involves weekly sessions for several months, although the length and frequency of therapy can vary depending on the individual’s needs. CBT has been shown to be an effective treatment for a variety of mental health conditions, and it can be used in combination with medication or other treatments for optimal results.

So, I am going to pass on some of the tools given to me. Like many things, there are many tools in the therapy box of tricks, and what works for you may not work for someone else. You may need a butter knife to undo your screw, I'll use a chisel, and others may need the flathead screwdriver.

A term also used is 'talking therapy', and this is where it starts: talking.

NOW, I had always been very cynical of therapy, but you end up speaking to someone who is invested in you, and it's surprisingly easy to talk.

So why is this different from talking to a pal?
If you talk to a pal/partner, they care deeply and they wish to help; they want to fix the issue. Your therapist doesn't: they want to give you the tools to fix it.

"Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime."

Workpal Example.

  • You: Bob, how do you do so and so?
  • Bob: Here you are, mate, give it here, I'll do it
    Problem solved - it is done with care and compassion, but you are no further forward

With CBT, tools are offered to help you and here are a couple that helped me.

One is chatting and having no direct feedback. The theme, from my point of view, was that my emotions were questioned and, where needed, challenged or affirmed.

To give an example, let's try questioning your job abilities.
Chatting: I may express the thought that I'm not good enough to do my role.
This isn't affirmed, nor is it challenged. It would instead prompt a number of questions: why do you feel this way, what qualifies you for the role, how did you get the role, what have you done that is good, what have you seen go wrong, pros vs cons.

Another tool is the perception report.
This asks you to record a feeling or emotion, measure your feelings at this time, thoughts or feelings around this and then a number of differing perceptions. Here is what an entry would look like.

  • Feeling: my partner didn't say they loved me when I ended our phone call
  • Emotion and measure: sad 60%, angry 70%, questioning 20%
  • Thoughts or feelings: I always say I love them; is that not important to them? Why don't they care? Are my emotions not important to them? They have been distant; have they found someone else?
  • Perceptions : This is built around a set of asks :
    • What would a person who cared for themselves think or say
    • When you are 80, will it matter
    • What three good things could come from this
    • What would a close friend say…..
      Your answers could be:
      • Will this matter when you're 80? Well, they say it most times, so in the scheme of things, prob not
      • What could come as good? Its a sign that it is no longer just a habit but said when meant
      • It will mean more when it is said

There are a lot of other perceptions, and these can be considered with your therapist; this allows you to explore the answers and push the questions further, but it is a helpful thing to do with or without a therapist.

Is it intrusive? You could fill this in at the time each thing occurs, but ultimately you could do it twice a day, e.g. dinner and bedtime, or even just once a day, like a diary… "But I might miss some bits", I hear you say.

Well, if you don't remember it at night-time, does it really matter? One other benefit: I always seemed to have a busy mind and my sleeping pattern was awful. Writing your day down, your plans for tomorrow, and your worries allows you to clear your mind and sleep better.

If you then assess the list yourself once a week, you can see what really matters; you may see that, with a change in perspective, all the negativity can actually be changed to positive.
Let's take a look.

Your partner didn't phone on a night out.
Let's go negative :(
They are having intimate time with another person; they don't care about me and have forgotten me; they are drunk and vulnerable.

Let's now switch:
They are having fun and you should be happy for them; they love you so much they don't need to be attached at the hip, and will love you more.

Just this switch can change so many things. Repetition is the key to challenging the auto-response you have to a scenario.

Just revisiting a note from earlier, regarding my going to uni and my wife's distance, or perceived non-interest. One thing my therapy taught me was that it's not all about you :D. Think why; think about others' thoughts and how this can affect them, and then you. I tried to place myself in my wife's shoes (she has smaller feet, so it's difficult :D). The husband who has for so long been on a linear path, steady, where you know what's happening from one week to the next: all this has changed. He has a new set of friends, all young; he is looked up to; he is going out more; he is changing. How scary must that be for her?

Final Bit

Only you have control of you. Small changes you make can make huge changes to your life.

My last bits of advice:

  • Therapy will hurt and bring up things you have buried or don't want to cover
  • Don't make it a secret

Ouch

I don't want to cover all my things, but we all have baggage. You will cover this: you will pick up the box, open it, look inside, and repackage the goods in a better way. For example, imagine the box is your childhood. Your perception would be from a child's POV. Now you're an adult.
You will have a very different perception now; you can re-assess.

You will be given the chance to talk it over, work it through, and make peace with the box.
You will also find better ways to deal with life in general.

Inform

Your therapy should not be a secret. OK, you don't need to be that person who tells everybody like they have a special badge. No one really cares, but let your partner know what's going on.

Imagine you nightly walk into the kitchen and ask your partner if they need help making dinner.
They reply "no thanks".
You respond, slightly hurt, that you'd like to help, but they just advise that they want to do it on their own…
You sulk away in a huff.

Now, after some therapy, you've realised that this dinner-making time is "their" wind-down time, and letting them unwind is good.
So now you change, and you ensure that they know you'd like to get involved, that you love them, and that they only need to ask and you'll be there.

Mental State

Yeah, everybody has an issue!

Geeky Blinder 2023-04-26

No-one said this blog was just tech!

Well, everybody is depressed, aren’t they??

Is mental health a bandwagon that has become fashionable? It is a shame that it can be seen this way. Fed up, down in the dumps, a bit sad!! All are moods, but they are not depression.

Looking for a description (not mine):
Depression is a serious mental health condition, characterised by persistent feelings of sadness, hopelessness, and a loss or lack of interest in activities. Depression can be caused by many factors, including genetics, life events and chemical imbalances in the brain.

Symptoms include:

  • Feeling persistently sad or empty
  • Loss of interest
  • Sleep disturbances, insomnia or oversleeping
  • Fatigue or lack of energy
  • Change of appetite
  • Change of weight
  • Difficulty concentrating, making decisions or remembering things
  • Feeling worthless, guilty or helpless
  • Thoughts of death or suicide
  • A feeling of “what’s the point?”

Depression can be treated with a combination of therapy, medication, and lifestyle changes. Therapy, such as cognitive-behavioural therapy or talk therapy, can help individuals identify negative thought patterns and develop coping mechanisms. Medication, such as antidepressants, can help correct chemical imbalances in the brain. Lifestyle changes, such as exercise, healthy eating, and stress reduction, can also be helpful in managing depression.

If you or someone you know is struggling with depression, it is important to seek help from a healthcare professional. Depression is a treatable condition, and with the right support and resources, individuals can recover and live fulfilling lives.

A little like a star sign, we can all get it to fit

Those symptoms could fit anyone, couldn’t they? Tired a lot? Yeah! Could be depression. Feel worthless? Could be depression… or, in the first case, you could be drinking a lot, overworking or sleeping on a bad mattress; in the second, your partner could be a bad person, or you could be grieving a recent loss. There are lots of reasons.

So, yeah, I have depression

I’ve heard this so many times.

  • “Oh, so when were you diagnosed?”
    If I receive a reply of
  • “Oh, I self-diagnosed!”
    I’m like, what, I beg your pardon!!!

  • “Why are you being an arse and so rude?”
  • “It’s my autism, I can’t help it”…
    Shut up, people who have autism aren’t arseholes!!! You’re just being a dick and looking for an excuse.

I’ve heard this with all the “trendy” terms/issues we have these days

  • Autism
  • Gender issues
  • Dyslexia
  • Asperger’s
  • the list goes on…..

Believe me, I have empathy for genuine cases, but everybody seems to be looking for an issue, an excuse for something.
If you genuinely have any of these issues or concerns, you need to see a professional.

So, I have been diagnosed, and it’s not the first time. Here I will tell you my story (some parts are redacted for privacy etc, but I hope this will help if you are in a similar situation).

How did I find out

When I was younger, early 20s, I found out.

I used to be, as a child, quite withdrawn and quiet. I won’t cover it here, but my childhood kinda sucked. By the age of around 14, though, I was kinda happy with me. I’d got to a point where I enjoyed being nice.
I found I could smile at an old lady on the bus and this would make her smile. This made me happy, and being nice made me feel good.

I was still a quiet person and felt my life was pre-planned due to my social surroundings: it was just to be like all the others around the local area, and that was fine.
I took an apprenticeship and it went, in hindsight, as it generally does for a young, non-worldly-wise man: just OK, but I was happy.
I had done a lot of growing between 18 and 20 (felt like a long time then) and gained a lot of confidence. I realised I could take on anything, speak to anyone, etc.
Then it all starts going wrong.

I’ve always been quite trusting, and I had taken a job where I made a good friend. We would spend dinners together, laugh a lot, talked about our partners, advised each other, and we were to grow in the workplace and life together.
Promotion came and they turned. They lied about many things, and this got me in trouble at work and demoted. I was so hurt and didn’t know how to take it.
I’d lost my outside-of-work best friend, as they had found a partner who took a shine to me and made inappropriate advances. I told my friend of these comments, and somehow I was the bad one?

It was all going a bit shit. My life was folding in on itself, I felt I had no-one to reach out to, felt such a disappointment to my family, and I had lost control.

I met a new person and we connected. This was a bad decision and I knew it. It wasn’t the set-up I was meant to connect with, it wasn’t the plan my mother had, but it felt good.
They were in a shitty space, I was in a shitty space. We could sit and talk for hours about anything.

I eventually told my mother, and this was the start of a whole collapse of my relationship with her; things were just getting worse.

I’m missing chunks out, like my partner’s children hating me and being overlooked for promotions, but ultimately I went to the doctor to talk about a medical condition and it turned out that I was suffering from what was referred to as a complete mental breakdown (this is not the term you can use now :D).
They advised me to take a break from everything and take some strong tranquillisers. I told my mother, and she said “Get a grip and don’t tell anyone at work”. It would have been career suicide to say this at that time, and so I got a grip, took a two-week holiday, threw the pills in the bin, painted on a smile and never spoke of the demons again.

Honestly, I wanted to die. Yeah, weak, running away, the coward’s way out…. these are many of the terms used and thoughts people have.
How easy would it be: no more pain, no more sadness, no more loneliness.

Why did I stay? Because my partner made me feel special and made me laugh, and because of one of my partner’s children, a little girl who doted on me. She really loved me, I felt like I was her world, and we giggled a lot… I couldn’t do it to her.

Now I can skip a lot, as it was grey. I was never the same person. I had to wrestle each day just to get through; I became a passenger in life, with those around me dictating what I did, how, where, etc. I had moments of the old me, where I would run with a little confidence, but it would last days at most. I became unkind to myself and numb to life.
I was scarred and it was never going away. So many times I thought about dying, but that would only help me, only take my ache away; when I love, I love deep, and I never wanted to hurt others.

So how do I describe the feeling of depression? Imagine you tied a bungee rope to your back and started running. It would be easy at first and then…. it gets tighter and harder. Eventually, boing, you get pulled back into the dark pit (depression).
That is how my life was but it became a comfortable pain. It became an evil buddy. I felt that I wouldn’t be me without it.

Now, we had a child. If you have children, you will know the feeling of “oh my god, I am the rock this thing leans on FOREVER”.
I started to feel trapped in life, just working to keep my family. At times I had some happiness, but the monster of depression ruins every part of your life.

  • You have a kid? - what happens if I die?
  • You are happy? - that means I’m gonna be sad when this period ends
  • You do a good job? - that means this is the new baseline and is always expected

Every positive is a negative, and it is exhausting.

So, things fell to shit big time when the company I worked for got sold. I had done well and had FINALLY found a happy place. I was happy at home, I loved my job, loved my teammates and the pay was good. See, every time it’s good, it means my life is gonna be shit and go downhill.

I took a chance and I went to Uni… I smashed it out of the park but, guess what, imposter syndrome and depression are cruel.

The pressure was immense but the time was great. I became something like the person I wanted to be, but by now no one needed me.
The children are grown up and my wife is not interested. This is probably because things in life, and I, had changed, and without talking you ain’t gonna know where people are mentally/emotionally.
Some of this sounds uncoordinated, but that is the mindset, like a ping-pong ball in the mind.

So what now? I get a job in a cutting-edge tech company; it is very different from my old life and makes a huge personal change. Anyway, these new workspaces are very different from my old space, and openness is a thing.
I found I am really good at advising others, and that I was a good listener. I was speaking to a workmate one day and they were telling me about their depression, then out of the blue he said, “do you take medication for your depression?”. OMG. I had never spoken of my issues, so I was taken aback, and that made a crack in the dam.
Times had changed and it was easier to be open, but old habits die hard and I couldn’t tell people of my struggles. As far as I was concerned, it’s embarrassing to be weak emotionally (my perception).

Oops

One day I was talking to a medical professional on behalf of my son, and I inquired about a mental health therapy service that was advertised. I had a brief moment of feeling happy to speak about being fed up with feeling low.
I immediately regretted it and became embarrassed. I had issues from my childhood, issues from my teens, issues long after; all had been greyed out and hidden away, but I also really felt that there was no point in life.
The medical professional just happened to be a mental health therapist taking time away from therapy (they take time off, as being a therapist can be quite taxing), and I was immediately assessed.
It’s kinda lucky, as I would have backed out if I’d had the chance; being immediately assessed, I couldn’t, and a week later I was in front of my therapist. Cognitive-behavioural therapy (CBT) was the chosen therapy.

What was involved?

Quick version: You don’t get treated. You get taught to address thoughts in a different manner and are helped to treat yourself.

Longer version: CBT, or cognitive-behavioural therapy, is a type of talk therapy used to treat a variety of mental health conditions, including depression, anxiety and PTSD.
CBT is based on the idea that our thoughts, feelings and behaviours are interconnected, and that by changing our thoughts and behaviours, we can change the way we feel.

CBT is typically conducted in one-on-one sessions with a licensed therapist. During a CBT session, the therapist and client work together to identify negative thought patterns and behaviours that may be contributing to the client’s mental health concerns. The therapist then teaches the client strategies to challenge and reframe negative thoughts, and to replace unhelpful behaviours with more positive ones.

For example, if someone with social anxiety is afraid to attend social events because they believe they will be judged by others, the therapist might use CBT to help them challenge this belief by asking them to think about times when they have attended social events and had positive experiences. The therapist might also teach the client relaxation techniques or social skills to help them feel more confident in social situations.

CBT typically involves weekly sessions for several months, although the length and frequency of therapy can vary depending on the individual’s needs. CBT has been shown to be an effective treatment for a variety of mental health conditions, and it can be used in combination with medication or other treatments for optimal results.

So, I am going to pass on some of the tools given to me. Like many things, there are many tools in the therapy box of tricks, and what works for you may not work for someone else: you may need a butter knife to undo your screw, I’ll use a chisel, others may need the flathead screwdriver.

A term also used is “talking therapy”, and this is where it starts: talking.

NOW, I had always been very cynical of therapy, but you end up speaking to someone who is invested in you, and it’s surprisingly easy to talk.

So why is this different from talking to a pal?
If you talk to a pal/partner, they care deeply and they wish to help; they want to fix the issue. Your therapist doesn’t: they want to give you the tools to fix it.

“Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime.”

Workpal Example.

  • You: Bob, how do you do so-and-so?
  • Bob: Here y’are mate, give it here, I’ll do it
    Problem solved. It is done with care and compassion, but you are not any further forward.

With CBT, tools are offered to help you and here are a couple that helped me.

One is chatting with no direct feedback. The theme, from my point of view, was that my emotions were questioned and, where needed, challenged or affirmed.

To give an example, let’s try questioning your job abilities.
Chatting: I may express the thought that I’m not good enough to do my role.
This isn’t affirmed, nor is it challenged. It would instead create a number of questions: why do you feel this way, what qualifies you for the role, how did you get the role, what have you done that is good, what have you seen go wrong, pros vs cons.

Another tool is the perception report.
This asks you to record a feeling or emotion, measure your feelings at this time, thoughts or feelings around this and then a number of differing perceptions. Here is what an entry would look like.

  • Feeling : My partner didn’t say they loved me when I ended our phone call
  • Emotion and Measure : Sad 60%, angry 70%, questioning 20%
  • Thoughts or Feelings : I always say I love them; is that not important to them? Why don’t they care? Are my emotions not important to them? They have been distant; have they found someone else?
  • Perceptions : This is built around a set of asks :
    • What would a person who cared for themselves think or say
    • When you are 80, will it matter
    • What three good things could come from this
    • What would a close friend say…..
      Your answers could be:
      • Will this matter when you’re 80? Well, they say it most times, so in the scheme of things, prob not
      • What could come as good? It’s a sign that it is no longer just a habit, but said when meant
      • It will mean more when it is said

There are a lot of other perceptions and these can be considered with your therapist, this allows you to explore the answers and push the questions more but, it is a helpful thing to do with or without a therapist.

Is it intrusive? You could fill this in each time something occurs, but ultimately you could do it twice a day, e.g. dinner and bedtime, or even just like a diary, once a day… I hear you say, “But I might miss some bits.”

Well, if you don’t remember it at night-time, does it really matter? One other benefit: I always seemed to have a busy mind and my sleeping pattern was awful. Writing your day down, plans for tomorrow, worries, allows you to clear your mind and sleep better.

If you then assess the list yourself once a week, you can see what really matters; you may see that, with a change in perspective, all the negativity can actually be changed to positive.
Let’s take a look.

Your partner didn’t phone on a night out.
Let’s go negative :(.
They are having intimate time with another person; they don’t care about me and have forgotten me; they are drunk and vulnerable.

Let’s now switch:
They are having fun and you should be happy for them; they love you so much they don’t need to be attached at the hip, and will love you more.

Just this switch can change so many things. Repetition is the key to challenging the auto-response you have to a scenario.

Just revisiting a note from earlier regarding my going to uni and my wife’s distance, or perceived non-interest. One thing my therapy taught me was it’s not all about you :D. Think why, think about others’ thoughts and how this can affect them and then you. To place myself in my wife’s shoes (she has smaller feet so it’s difficult :D): the husband who has for so long been on a linear path, steady, you know what’s happening from one week to the next. All this has changed. They have a new set of friends, all young, they are looked up to, they are going out more, they are changing. How scary must that be for her.

Final Bit

Only you have control of you. Small changes you make can make huge changes to your life.

My last bits of advice:

  • Therapy will hurt and bring up things you have buried or don’t want to cover
  • Don’t make it a secret.

Ouch

I don’t want to cover all my things, but we all have baggage. You will cover this: you will pick up the box, open it, look inside and repackage the goods in a better way. For example, imagine the box is childhood. Your perception would be from a child’s POV. Now you’re an adult.
You will have a very different perception now; you can re-assess.

You will be given the chance to talk it over, work it through and make peace with the box.
You will also find better ways to deal with life in general.

Inform

Your therapy should not be a secret. OK, you don’t need to be that person who is telling everybody like they have a special badge. No one really cares, but let your partner know what’s going on.

Imagine you nightly walk into the kitchen and ask your partner if they need help making dinner.
They reply no thanks.
You respond, slightly hurt, that you’d like to help, but they just advise that they want to do it on their own…
You sulk away in a huff.

Now, after some therapy, you’ve realised that this dinner-making time is “their” wind-down time, and letting them unwind is good.
So now you change: you ensure they know you’d like to get involved, that you love them, and that they only need to ask and you’ll be there.

If you don’t make them aware that you are having therapy, the changes that follow may come as a huge shock to your partner.

Like many things, you get out what you put in.

Keep safe, look after yourself and remember, if you had a bad back, you would go to the doctor. Well, a bad mental state is no different.

Create your own Blog

Github and AWS R53!

Geeky Blinder 2023-04-21

Want to have a bash at creating a blog?

Well, this isn’t the first time I’ve thought about creating a blog.

I had previously looked at using HUGO. This worked well, and I wrote a few bits, but I never published them. I just never continued, and have since thought about it a number of times.

So this time, as I’d forgotten what I’d used last time, I was looking for a static-page blog creator. WordPress came up as an option (yeah, slap me). I decided to use AWS, and here I started looking at setting up an EC2, R53, EIP, RDS, etc… I wanted to be a smarty-pants, so I stupidly started writing the Terraform to do this. Advice: that is dumb when you are still working out how something is made. Create a POC (proof of concept), get that all working, and then start writing Terraform.

Whilst looking at an error and creating the repository on GitHub, I came across Jekyll. Another learning point: beware the overwhelming desire to follow a path you’ve started down. Get out of being precious; you’re not always right first time.

So I swallowed my pride and started to look at Jekyll. It was so easy: download, execute and test. Add to that, I found there are loads of themes, and these were just as easy: download and you’re more or less done.

Simples

So this isn’t an in-depth guide, but a great starter. The Jekyll website has great walkthroughs, so I would be doing them an injustice by trying to recreate them, but I will give you the VERY quick and easy way. Prerequisite: a GitHub account.

  1. Hit the Jekyll Themes website
  2. Select any theme you fancy
  3. Go to GitHub and create a repository named yourname.github.io {GitHub Walkthrough}
  4. Push the code up to the repo/upload the code manually
  5. Now your website should be up and running.
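In git terms, steps 3 and 4 look something like the sketch below. Everything here is a placeholder: with GitHub you would push to git@github.com:yourname/yourname.github.io.git, but to keep the commands runnable anywhere, a local bare repository stands in for the GitHub remote and a dummy file stands in for the theme:

```shell
# Stand-in for the repo you created on GitHub in step 3.
git init --bare /tmp/yourname.github.io.git

# Stand-in for the theme folder you downloaded.
mkdir -p /tmp/my-theme && cd /tmp/my-theme
git init -b main
echo "# my site" > index.md

# Step 4: commit the theme and push it to the remote.
git add .
git -c user.name=you -c user.email=you@example.com commit -m "Initial site from theme"
git remote add origin /tmp/yourname.github.io.git
git push -u origin main
```

With the real GitHub remote, that final push is what triggers GitHub Pages to build and publish the site.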

Well, that was easy, wasn’t it? Now, how do we edit this? If you open the _posts folder in the code, you will see a bunch of test blogs.

The ‘date’ then ‘name’ format is advised, e.g. 2023-04-21-name-of-file.markdown.

You will see that the start/head of the file will contain:

    ---
    title: "The Title Which Is Shown On The Main Page"
    subtitle: "Sub Title to Title Headline"
    author: "You I Guess :D"
    avatar: "img/authors/you.jpg"
    image: "img/blogimage.png" <<<< Image shown on the blog icon
    date:   1999-01-01 <<<< Date shown on the page 
    tags: website R53 github webpage gitpage <<<<< Some tags  \O/
    ---

The date is when the file will be shown, so you could push the file early and it will stay hidden until that date. The check happens when the site is built rather than in the browser, so a future-dated post appears only once the site rebuilds after that date, but it is interesting and it does seem to work.
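As a toy illustration (not Jekyll’s actual code), the build-time check amounts to a simple date comparison; ISO dates like 2023-04-21 compare correctly even as plain strings:

```shell
# Toy sketch: a post is visible only if its date is not after the build date.
post_date="2099-01-01"      # taken from the filename / front matter
build_date=$(date +%F)      # the day the site is generated
if [ "$post_date" \> "$build_date" ]; then
  echo "hidden (future-dated)"   # -> this prints for a 2099 post
else
  echo "published"
fi
```

This is also why a future-dated post doesn’t pop up on its own: something has to rebuild the site after that date.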

As you can see, the page is in markdown and can be created with quick formatting. A good starter is the rip tutorial, or ideally GitHub’s markdown instructions.
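To save a click, here are the handful of markdown constructs you’ll use most in a post (the content is just sample text, not from any particular theme):

```markdown
# A heading
## A smaller heading

Some text with **bold**, *italics* and `inline code`.

- a bullet point
- [a link](https://example.com)
- ![an image](img/blogimage.png)
```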

Some folders may be different or may need creating, depending on the theme you use. In my case, I needed to create extra folders.

So this is the quick-fire whizz-through to get a basic site up. Next, we will follow with a slightly deeper walkthrough: we will set up Jekyll on our machine and run it locally, so we can see immediate changes before upload.

Jekyll

Stolen from Jekyll’s site

~ $ gem install bundler jekyll
~ $ jekyll new my-awesome-site
~ $ cd my-awesome-site
~ $ bundle exec jekyll serve
# => Now browse to http://localhost:4000 

There you go, it’s running locally on your machine…..

Weeeelllll, it wasn’t that simple for me. First, Jekyll requires the following:

  • Ruby version 2.5.0 or higher
  • RubyGems
  • GCC and Make

So off I went to install these. I was running Pop!_OS (Ubuntu-based), so from the requirements page I installed everything according to the instructions for Ubuntu. Now, I owe you an apology: I had issues and I didn’t document them to place here, but suffice to say it was painful to get running. I did a bit of googling on the error (the installed tool was not found) and fixed it… sorry.

Once installed, it is as easy as running, as above, jekyll new my-site, then cd to the new folder named my-site and run bundle exec jekyll serve. If you then open your browser and go to http://localhost:4000 or 127.0.0.1:4000, you should see your new site. (You may get a warning that your site is insecure; don’t worry and bypass this. It is due to there being no SSL certificate, but we trust ourselves and it’s local only (no outside world). If you are worried, you can pass your code through VirusTotal.)

You can edit as instructed and each new update will automagically be seen in your browser.

When you have finished, press Ctrl-C to stop the server. When you are happy, you can push the changes to GitHub as normal, give it a couple of minutes and POW, new site update.

Wait…. One good thing, now that you have Jekyll installed, is that you can check out the themed site we made earlier. cd to the theme folder, run jekyll serve, and in your browser you can assess the themed version of the site and adapt it to suit.

Your URL

Lastly, we will set up a URL of our own using AWS and Route 53. Well, this was my real goal; unless you chose a great GitHub name, or it is your name and you use the site as a CV, there’s no point having a site named hairymary6767.github.io!!

So you’re gonna need an AWS account. If you don’t have one, well, off you go and get one. There’s a lot of free stuff for you to play with and learn. I’m new to this, so I’ve looked at my AWS costs and I’d say it will cost around £10. I have two URLs and a bunch of other stuff and my bill is £11. I’ll update this if it changes significantly.

  1. In your repository, create a file named CNAME (no need for an extension). In this file, add your URL, e.g. geekyblinder.co.uk
  2. git add this file ( git add . && git commit -m 'CNAME record' && git push )
  3. Now we can configure our R53
    • 3a. Log into the AWS console
    • 3b. Click onto the Route 53 dashboard
    • 3c. Click on Hosted zones
    • 3d. Click on the domain you are using
    • 3e. Click Create Record
    • 3f. Ignore the Name field
    • 3g. Under the Type dropdown, select A record
    • 3h. Set Alias toggle to No
    • 3i. Enter into the text area the IPs
        185.199.108.153 
        185.199.109.153 
        185.199.110.153 
        185.199.111.153
      
    • 3j. Click Save Record Set.
  4. Click on create Record again
    • 4a. In the Name field enter www
    • 4b. Under Type select A
    • 4c. Set Alias toggle to Yes
    • 4d. In the Alias field, select the domain we previously set up eg geekyblinder.co.uk
    • 4e. Save the Record
  5. Now configure GitHub page to use you URL
    • 5a. On your GitHub account and then the settings tab
    • 5b. Go to the GitHub Pages section
    • 5c. In custom domain, enter your domain: geekyblinder.co.uk
    • 5d. Now verify the URL by adding some DNS TXT records
      • 5da. Go to AWS R53 and create a new record.
      • 5db. Create a TXT record in your DNS
      • 5dc. Add hostname _github-pages-challenge-yourgit.yoururl.com
      • 5dd. Add the provided code eg 12a3b4567cdef8901g234h567890123 for the value of the TXT record.
      • 5de. Wait until your DNS configuration changes.
        • This could take up to 24 hours to propagate is the warning you will see but its usually minutes.
        • You can exit and come back to verify by the use of the three dots against the name
      github challenge record
      • 5df. Click verify
  6. Now it may take a little time for the records to propogate
  7. Now check that the https:// and http:// versions of your site are working, both with and without www.

BTW, it’s set to forward http to https.
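If you’d rather script step 3 than click through the console, the same apex A record can be created with the AWS CLI. This is only a sketch: the hosted zone ID below is a placeholder, and actually applying it needs AWS credentials with Route 53 permissions:

```shell
# Placeholders -- use your real hosted zone ID and domain.
ZONE_ID="Z0000000000EXAMPLE"
DOMAIN="geekyblinder.co.uk"

# Change batch: apex A record pointing at the four GitHub Pages IPs.
cat > /tmp/github-pages-a-record.json <<EOF
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "${DOMAIN}",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [
        {"Value": "185.199.108.153"},
        {"Value": "185.199.109.153"},
        {"Value": "185.199.110.153"},
        {"Value": "185.199.111.153"}
      ]
    }
  }]
}
EOF

# Apply it (requires AWS credentials configured).
aws route53 change-resource-record-sets \
  --hosted-zone-id "$ZONE_ID" \
  --change-batch file:///tmp/github-pages-a-record.json
```

The UPSERT action creates the record if it’s missing and overwrites it if it exists, so the command is safe to re-run.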

Imposter Syndrome - Really

is it just me - No!

Geeky Blinder 2023-04-09

Imposter Syndrome - Really, I’m sick of hearing about it.

Yeah, everybody has jumped on this bandwagon. Everyone is Soooo vulnerable and precious.

Well, not everybody is. I’ve met a few characters in tech. My original exposure was to a lot of people who were helpful, genuine and a real team. This is me looking back to see the bigger picture.

As time has gone on, I’ve met some who really are just imposters. They plough through life by ignoring that they’re wrong, bullying, not listening to others, and making the same mistakes again and again. This will make you very good at doing what you do and then moving on before you are found out. That is not imposter syndrome; it is a very different thing.

In my original exposure to the tech scene, all people were unique but understood that you can’t know everything. The first time people I saw as geniuses asked for my help was the time I shit myself. Well, this was one of the moments when I felt like an imposter.

When did I see it for the first time?

I had never heard the term “imposter syndrome” until one day a friend at university said they felt I was a big sufferer.

What is the syndrome?

Try this link: Very Well Mind - Imposter Syndrome. Or try this: Mike Cannon-Brookes, an ultra-successful guy, lets you in on how being clever or successful can knock you.

I’d worked and been successful before university. I was older than everyone in my cohort. This really adds pressure. I couldn’t fail, it was so important that I didn’t get it wrong. 

I was lucky that I met some very clever people. I felt out of my depth (as you should at university). I thought that these people were carrying me. They said I was helping them and excelling in some areas. This was very hard for me to believe; surely I was lucky (lucky, fluked, liked <- all terms you use to dismiss that you’re actually very good and/or learning a lot), or that part was an easy bit that I knew, or I had done something similar. There are so many reasons you can get in your head to dismiss success.

I got my job, I did well, and I still excel in this role. This isn’t how I felt, though. I have, after a number of years, learned to get a grip on this runaway self-sabotage.

But look at those people who are super confident.

Have you seen that person who tells everybody how good they are? They really believe it. At first you may believe them too, but then you see that they are not quite as good as they think!! This is the Dunning-Kruger effect. Put simply, you may have heard the term “a little knowledge can be dangerous”. Well, here it is: they are blown away by their knowledge and think they are creating magic. I won’t knock that, as I too have been amazed by what can be done, BUT this is where we diverge. I see how far I’ve come, but I also see how much I don’t know. The other person just doesn’t see the upcoming learning curve.

As a now experienced techie, I see these newbies come in, and I like that excitement; I will always encourage these people to keep learning. This is also where I explain my struggles with imposter syndrome. They are generally knocked sideways, as they see me as a goal (I’m looking at what they are achieving and thinking wow too).

That is exactly what I want to explain. Nice, clever, genuine, honest people know what they know; they also appreciate the effort and work that goes into learning and will always appreciate those around them. They know where they are going and where they have been. They are open and caring to those coming up behind them to join the tech team. That can also expose them to imposter syndrome, as they are held up high.

Eventually they get comfortable knowing that the journey of learning will never end (just as you think “I’ve got this”, you find something that slaps you around the chops and makes you humble again). They get comfortable with what they know as subject-matter experts, knowing that they can’t know EVERYTHING, and get happy with saying “I don’t know”. They are also happy to realise, as my ex-boss told me, that you ain’t employed because you know it all, but for your ability, and that there is a lot more to you than just what you know. Did you get the job because you are a genius or because you are nice? Probably both. If you knew nothing, you wouldn’t be where you are, but if you know a lot and people can work with you, that’s better than a genius who is arrogant, can’t hold a conversation, etc.

Now, I am never going to tell you how to control your self-doubt, how it happens, or what will work for you, but I can share about me, and maybe that can help. Mostly, remember that you’re not alone. The people you speak to who are successful, clever, people you feel know it all: honestly, unless it has gone to their head and they have become arrogant (and again, is that a manner to protect themselves from being found out, in their minds?), all those people have doubts. They may have worked through it and become comfortable with it, but they all suffer.

So

Well, my point is that imposter syndrome is nothing special; it doesn’t make you different; it actually makes you just like everybody else. Do you breathe? Well, so does everybody else. We now have a name for a natural thing; it is what drives you on, makes you humble, and makes you a nicer person. None of this is to dismiss the feeling, it is real. Just know: if there is someone you can share this with, do it. If there isn’t, know that almost all are feeling the same.

Will you get over it?

Maybe, but most probably, you'll just get used to feeling this way. You'll have times when you are on fire and feel you've learned loads, then have days when you think, "I have no idea what I'm doing". On those low days, just look back and see all the amazing things you have done. Take a moment, have a short break, and come back to your issue.

I've just watched Lewis Capaldi on Netflix - even he suffers.

Look after yourself, work to live, balance is the key

Here are a few links to some resources:

  • techtello: imposter syndrome
  • Imposter syndrome of a dev at dev.to, author cglikpo
  • therapywithabby.co.uk: imposter-syndrome-in-a-new-job
  • willowtreeapps.com: imposter-syndrome-in-design-what-it-is-and-how-to-overcome-it

Blogs are the new social!

It's all cyclic!

Geeky Blinder 2023-03-06

So, what's all this about?

Well, I have entered the tech scene and I see so many blogs that I hit on a daily basis with those hidden nuggets of knowledge.

So why a blog?

As I go about my day, I have a lot of moments where I find something that I think others may find interesting, or that could get them out of a bind. There are times when a good rant is all I need, so here we are. Then I think others might be interested in another's journey in tech, self-concerns, etc.

Who am I?

Entered the tech scene!!! Oh, I hear, another halfwit thinking they can jump on the tech bandwagon. Well, it's not quite like that; for many years I'd had an interest in tech. Lots of playing over the years: hacking websites, playing and hacking games, fixing stuff (breaking stuff), building stuff. I've played with Windows and Linux plus anything else I could get into. I've worked in the manufacturing scene for a long time and linked a lot of inoperable systems together, dropping all the way back to using Win 3.1 for printing, macros in MS Office, UNIX, C#, lots of weird and wonderful tech bits. So, opportunity knocked and I decided to go to uni and take a BSc in CompSci and Cyber Sec.

Now, I'm really in the tech scene. I work for a real tech company.

That's it. I know past stuff that I'm trying to fit into my new existence, know new stuff that I'm seeing if it is useful, and the stuff I'm learning is the new horizon. It's daunting, exciting, and so often I wonder, can I do this… Obviously yeah, and I'd like to share all this with you :D

So here we are, a new outlet for me to share with anyone who is interested (or no doubt talk to myself). It will contain:

  • bits of my week
  • bits I've learned
  • vents about tech and goings on
  • feelings
  • weirdness
  • more or less, anything kinda tech related (or even not)

As always, the line: these opinions are all my own; they in no way constitute any link to the company I work for.

Now the dull bit is over, let me get to work on writing a blog. So, should it be:

  • SVB (Silicon Valley Bank)
  • LTT - He's better looking with a beard for one, but is he a tech person's guilty pleasure to watch or a real tech god? Let's be honest, he needs to just get a grip of Linux
  • How WFH is good or bad
  • Back stabbers
  • What is a tech role and the wide range
  • Women in tech and how to fix it
  • Are we sick of hearing about ChatGPT - will it really put us out of work?

So many things, so little time. Some, dare we cover? :/

Be nice and make someone smile today