Category Archives: Linux OS

When Googling My Own Product Sent Visitors to a Prayer App: A Debugging Story With My AI Pair

The setup that broke my Friday afternoon

I was checking my own SEO. I typed “quizwrap” into Google. My site, QuizWrap — a free quiz-maker for students — showed up as the very first result. Great.

I clicked it.

A Ho’oponopono prayer counter loaded.

That’s a completely different app I run on the same server, and visitors looking for QuizWrap were landing on it instead. Worse, I quickly noticed a related issue: visiting https://smartdisha.co.in/ directly threw a TLS certificate error in the browser.

Two bugs, both on the same VPS, both involving the nginx reverse proxy that fronts everything. I sat down with Claude (Anthropic’s coding agent inside Claude Code) and we dug in together. What follows is the story of that debugging session — both the technical findings and what it was like to pair-debug with an AI.


The architecture (and a quick glossary)

A quick mental model so the rest of this makes sense.

A single VPS hosts three sites behind one system nginx — a popular web server that, in this setup, acts as a reverse proxy: a traffic cop sitting in front that takes incoming HTTPS requests and forwards them to the right internal app.

  • quizwrap.com — my quiz app
  • prayer.quizwrap.com — a small prayer counter
  • smartdisha.co.in — a separate site on the same box

Some traffic flows through a CDN before reaching origin, some doesn’t. Each domain has its own free Let’s Encrypt TLS certificate (the thing that makes the little padlock icon appear in your browser), and nginx is configured with one server block per domain.

A few terms I’ll keep using:

  • TLS — the encryption layer behind HTTPS. The “S” in HTTPS.
  • Certificate — a small file that proves a server owns the domain it claims to. Browsers reject the connection if the cert doesn’t match the domain.
  • SNI (Server Name Indication) — the most important term in this whole post. When your browser opens a TLS connection to smartdisha.co.in, it whispers the hostname it wants before the encryption is set up, so the server knows which certificate to present. One server can host many domains on the same IP, and SNI is how it picks the right cert. If SNI says one thing and the server returns the wrong cert, the browser shows a security warning and refuses to load the page.
  • Server block — nginx’s term for “the config chunk that handles requests for one domain.” Each domain has one (or several).
  • server_name directive — the line inside a server block that lists which hostnames that block is responsible for. If no block claims a hostname, nginx silently picks a default block as a fallback.
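The glossary above can be checked against a live box in one command. A sketch: `nginx -T` dumps the full configuration nginx actually loaded, and the grep keeps only the routing-relevant lines, so you can see at a glance which hostnames are claimed and on which sockets.

```shell
# Dump the loaded config and keep only the lines that decide routing:
# which hostnames each server block claims, and which sockets it listens on.
sudo nginx -T 2>/dev/null | grep -E '^[[:space:]]*(server_name|listen)[[:space:]]'
```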

Bug #1: www.quizwrap.com was serving the prayer app

The detective work

Before touching anything, Claude pulled response headers from both URLs in parallel:

curl -sI https://www.quizwrap.com/
curl -sI https://smartdisha.co.in/

Two response bodies came back with identical fingerprints:

Field            www.quizwrap.com        smartdisha.co.in
ETag             "69d2087a-332"          "69d2087a-332"
Content-Length   818                     818
Last-Modified    same date               same date
Title in body    Ho'oponopono Counter    Ho'oponopono Counter

(An ETag is a unique fingerprint a web server attaches to a file’s response — like a checksum. Two responses with the same ETag are byte-for-byte the same file.)

Same file, served to two different domains. Now we knew it was an nginx routing question, not a DNS or CDN issue.

Reading the configs over SSH

I had Claude SSH into my server (passwordless key auth — read-only operations, no sudo) and dump the three nginx configs. The first thing it spotted:

# /etc/nginx/sites-available/quizwrap.com
server {
    server_name quizwrap.com;
    ...
}

server_name quizwrap.com — not quizwrap.com www.quizwrap.com. There was no server block anywhere on the box claiming www.quizwrap.com. When a request arrived at my server saying “this is for www.quizwrap.com”, nginx had no rule that named that hostname, so it fell back to the first SSL block in alphabetical order — the one for prayer.quizwrap.com, which is what serves the prayer app.

That’s how a Google click on www.quizwrap.com ended up rendering Ho’oponopono. nginx was doing exactly what it was told; what it was told just didn’t include the www version of my domain.

The fix

A one-liner:

sudo sed -i 's/server_name quizwrap.com;/server_name quizwrap.com www.quizwrap.com;/' \
  /etc/nginx/sites-available/quizwrap.com
sudo nginx -t && sudo systemctl reload nginx

A test confirmed it:

HTTP/2 200
last-modified: Sun, 30 Nov 2025 15:42:27 GMT   ← quizwrap build, not the prayer one
<title>QuizWrap - FREE Study Quiz Maker for Students</title>

Then a defensive follow-up: re-issue the Let’s Encrypt cert to cover the www version too, so the cert chain stays internally consistent. (A single cert can list multiple hostnames in a field called the Subject Alternative Name, or SAN — that’s just “the list of domains this cert is valid for.”) One certbot command added www.quizwrap.com to the cert. Done.
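That defensive follow-up can be a single certbot invocation. A sketch, assuming the standard certbot nginx plugin is installed; --expand tells certbot to reuse the existing certificate lineage and grow its SAN list to cover every -d domain.

```shell
# Re-issue the quizwrap cert with www.quizwrap.com added to its SAN list,
# then install it into the nginx config and reload.
sudo certbot --nginx --expand -d quizwrap.com -d www.quizwrap.com
```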

Bug #1: 5 minutes from “what is happening” to “fixed.”

Bug #2 was not like that.


Bug #2: smartdisha.co.in and the certificate that wouldn’t come right

The symptom

Browsers refused https://smartdisha.co.in/ with a cert error. openssl s_client showed why:

$ echo | openssl s_client -servername smartdisha.co.in -connect smartdisha.co.in:443 2>/dev/null \
    | openssl x509 -noout -subject -ext subjectAltName

subject=CN = prayer.quizwrap.com
DNS:prayer.quizwrap.com

The browser asked for smartdisha.co.in (via SNI), and the server handed back a certificate that says “I’m prayer.quizwrap.com.” That’s a name mismatch, so the browser refuses the connection — you’ve probably seen the resulting “Your connection is not private” error page. At first I thought the fix was going to be just as quick as the www one.

It wasn’t.

Two hours of dead ends

Here’s the parade of “that should have fixed it”:

  1. Re-issue the cert? sudo certbot --nginx -d smartdisha.co.in — certbot reported there was an existing cert and offered to reinstall. Reinstalled. No change. Browser still got prayer’s cert.
  2. Maybe nginx didn’t reload cleanly. sudo systemctl reload nginx. No change.
  3. Inspect the cert file directly.
    sudo openssl x509 -in /etc/letsencrypt/live/smartdisha.co.in/fullchain.pem \
        -noout -subject -ext subjectAltName
    subject=CN = smartdisha.co.in
    DNS:smartdisha.co.in

    The file on disk was correct. nginx just wasn’t serving it.

  4. Maybe the workers cached an old cert. sudo systemctl restart nginx. No change.
  5. Check nginx -T for the loaded config. The smartdisha SSL block was fully loaded, with the right server_name, the right listen 443 ssl;, and the right cert path. Everything looked correct. Still no change.

At one point I checked ps and noticed three nginx master processes — two with nginx -g daemon off; (the Docker-container telltale) and one system nginx. Claude initially flagged this as the smoking gun: maybe a Docker container was intercepting TLS. We confirmed via ss -tlnp that the system nginx was actually the only thing on port 443; the Docker nginxes were just internal app servers behind it. Wrong turn — but a reasonable one.
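The check that ruled Docker out is worth showing explicitly. A sketch of the ss invocation; the -p flag (which prints the owning process) needs root.

```shell
# List every listening TCP socket bound to port 443, with the owning process.
# If only the system nginx shows up here, no container is intercepting TLS.
sudo ss -tlnp '( sport = :443 )'
```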

My moment of skepticism

I sent Claude a screenshot of my DNS panel with the message:

“Before we go chase our tail. Check the configuration attached.”

This was the right instinct. I was tired of theories that weren’t panning out. Stepping back to verify a load-bearing assumption — is the request path for this domain actually what we think it is? — confirmed we were looking at the right place, but it could just as easily have caught us going the wrong way for another hour.

Lesson: when you’re three theories deep and none have stuck, your AI assistant doesn’t always notice it’s in a loop. Pushing back is your job.

The breakthrough: probing SNI directly

Claude wrote a small loop that asked nginx, in plain English: “If a browser tells you it wants hostname X, which certificate do you hand back?” It does this once for each domain on the box.

ssh my-server 'for sni in <each-hostname-on-the-box>; do
  printf "SNI=%-30s -> " "$sni"
  echo | openssl s_client -servername "$sni" -connect localhost:443 2>/dev/null \
    | openssl x509 -noout -subject 2>/dev/null
done'
SNI=smartdisha.co.in           -> CN = prayer.quizwrap.com    ❌
SNI=www.quizwrap.com           -> CN = quizwrap.com            ✓
SNI=quizwrap.com               -> CN = quizwrap.com            ✓
SNI=prayer.quizwrap.com        -> CN = prayer.quizwrap.com     ✓
SNI=nonexistent.example.com    -> CN = prayer.quizwrap.com     (default fallback)

There it was. smartdisha.co.in was being treated identically to a totally unknown hostname. It wasn’t a cert problem at all — the cert file on disk was perfectly fine. nginx just wasn’t recognizing smartdisha.co.in as a hostname it knew about. Both unknown hostnames and smartdisha.co.in fell through to the same default fallback block (prayer, which is alphabetically first), which is why both got prayer’s cert.

The actual root cause

With that clue, Claude re-read all three nginx configs side-by-side and found the only structural difference:

Block        IPv6 listen                          IPv4 listen
prayer       listen [::]:443 ssl ipv6only=on;     listen 443 ssl;
quizwrap     listen [::]:443 ssl; (dual-stack)    listen 443 ssl;
smartdisha   — missing —                          listen 443 ssl;

A bit of background to read that table: every server on the internet has two kinds of addresses available — older IPv4 (the familiar 1.2.3.4 style) and newer IPv6 (the longer colon-separated style, like 2001:db8::1). nginx’s listen directive tells it which addresses to accept connections on. listen 443 ssl; means “IPv4 only.” listen [::]:443 ssl; means “IPv6,” but on Linux it can also quietly accept IPv4 connections at the same time — that’s what “dual-stack” means.

quizwrap’s listen [::]:443 ssl; (without ipv6only=on) creates one of these dual-stack sockets. Internally, nginx groups server blocks by which socket they’re attached to, and uses that grouping to decide who handles each incoming connection. smartdisha, lacking any IPv6 listen line of its own, ends up in a different group than the dual-stack one, and inside that group the prayer block (alphabetically first) becomes the default catch-all. Even though smartdisha’s server block is loaded and looks correct, the grouping means SNI lookups for smartdisha.co.in arrive at a group where smartdisha isn’t listed — and fall back to prayer.
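A quick audit that would have surfaced this immediately: grep the listen lines out of every enabled vhost so mixed listener styles stand out side by side. The paths assume the Debian-style sites-enabled layout used on this box.

```shell
# One line per listen directive, prefixed with the file it came from.
# Any vhost missing an IPv6 listen line will be visibly different here.
grep -H '^[[:space:]]*listen' /etc/nginx/sites-enabled/*
```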

Subtle, weird, and exactly the kind of thing nginx -t (the config syntax checker) won’t catch, because the syntax is fine.

The fix

Make smartdisha’s listen directives match the others:

sudo sh -c '
  cp /etc/nginx/sites-available/smartdisha.co.in /etc/nginx/sites-available/smartdisha.co.in.bak
  sed -i "/^    listen 443 ssl; # managed by Certbot$/i\\    listen [::]:443 ssl;" \
    /etc/nginx/sites-available/smartdisha.co.in
  nginx -t && systemctl reload nginx && echo DONE
'

Re-running the SNI probe afterwards:

SNI=smartdisha.co.in           -> CN = smartdisha.co.in        ✓

curl https://smartdisha.co.in/ succeeded with full TLS validation, no -k flag needed. The browser was happy.


What it was actually like to debug this with an AI

A few things stood out about the collaboration that I want to share.

Claude was great at the things I’m bad at. It pulled response headers from two domains in parallel, parsed cert subjects out of openssl s_client output, and noticed immediately that two responses had the same ETag — something I’d have read past. The structured diff between three nginx configs at the end (the listener-table comparison) was exactly the kind of thing my eyes glaze over after the second config file.

I was great at the things Claude is bad at. When we got stuck on Bug #2, Claude proposed three theories in a row, each plausible, none correct. The Docker-container theory in particular was a confidently-stated wrong answer. I knew that side of my own infrastructure well enough to be unimpressed. My push-back (“before we chase our tail”) was what reset the direction.

Security boundaries actually got enforced. When I offered Claude my sudo password to speed things up, it explicitly refused and explained why (the password would be in the chat transcript, in shell process listings, and a single leak compromises the whole server). It walked me through the alternatives — running the destructive commands myself in my own terminal, or scoping a passwordless sudoers rule for nginx-related commands only. Reading the full advice, I ended up just running each sudo command in my own shell and pasting the result. Slower, but at no point did a privileged credential cross a boundary it shouldn’t.

Transparency mattered. Halfway through Bug #2 I told Claude “I can’t see the commands you’re executing on my server.” It immediately listed every SSH command it had run and committed to printing each new command before executing it. That changed the dynamic — it stopped feeling like Claude was off doing things in the dark and started feeling like a teammate sharing their screen.

Knowing when to escalate to a one-shot fix. After multiple roundtrips of “paste this, paste that,” I asked Claude to drive over SSH so I could stop copy-pasting. It moved the read-only diagnostics to its own SSH connection and packaged the one mutating step into a single sudo block I could paste once and approve once. The friction of the back-and-forth dropped massively.


Lessons that generalise

A few things I’m taking away from this:

  1. Identical ETags across two domains = the same file is being served. If two of your sites unexpectedly look the same, that single header probably solves the mystery before you read a line of config.
  2. server_name is a registration, not just a label. If a hostname isn’t named in any block, nginx won’t error — it’ll silently pick a default and serve someone else’s content.
  3. nginx -t passing means valid syntax. It does not mean what you intended. All three configs in this story passed nginx -t with no warnings while half-broken.
  4. Mixing listen [::]:443 ssl; (dual-stack) and listen 443 ssl; (IPv4-only) across server blocks is a footgun. Either go all-dual-stack or all-with-ipv6only=on. Mixing changes the listener topology in ways that affect SNI dispatch.
  5. The openssl s_client -servername X -connect Y:443 probe is a debugging superpower. It’s a one-line command that simulates exactly what a browser does — say “I want hostname X” via SNI, and see which certificate the server returns. Whenever an HTTPS-served domain is misbehaving, this probe will often tell you the answer in five lines.
  6. Pair-debugging with an AI works best when you stay in the loop. Treat its theories as drafts, not conclusions. Push back when you smell drift. Make it show its work.

The whole session was somewhere between two and three hours. By the end my SEO problem was gone, my secondary domain’s TLS was clean, and I had a much better mental model of how nginx makes SNI decisions across mixed-listener configurations. Worth the afternoon.


Total commands run on the server during this session: about 30. Total commands run with sudo: 5. Total credentials shared with the AI: zero.

Running Background Processes in Linux with nohup and &

Have you ever started a long-running script in your terminal, only to lose all progress when the session closed? Or maybe you wanted to keep a process running in the background while continuing to use the terminal?

That’s where nohup and & come in handy.

What Are nohup and &?

  • nohup: Short for no hang up, it allows a command to keep running even after you log out or close the terminal.
  • &: Sends the command to the background, freeing up your terminal for other tasks.

Together, they’re a powerful combo for running persistent background processes.

Basic Usage

Here’s the magic one-liner:

nohup your_command > output.log 2>&1 &

Breakdown:

  • your_command: The script or program you want to run.
  • > output.log: Redirects standard output to a file.
  • 2>&1: Redirects standard error to the same file.
  • &: Runs the command in the background.

Monitoring the Output

To watch the output in real-time:

tail -f output.log

This is especially useful for debugging or tracking progress.

Cleaning Up: Stopping the Process

To stop the background process:

  • Find the process ID (PID):
ps aux | grep your_command
  • Kill it:
kill <PID>
  • Or, if you want to be sure:
kill -9 <PID>
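You can skip the ps | grep hunt entirely by recording the PID at launch. A sketch, where long_job.sh stands in for your own command:

```shell
# $! expands to the PID of the most recently backgrounded process.
nohup ./long_job.sh > output.log 2>&1 &
echo $! > job.pid

# Later, stop it without grepping through ps:
kill "$(cat job.pid)"
```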

Pro Tips

  • If you forget to redirect output, nohup will write to nohup.out by default.
  • You can check running background jobs with:
jobs
  • To bring a background job to the foreground:
fg %1

Final Thoughts

Using nohup and & is a simple yet powerful way to manage long-running or persistent processes in Linux. Whether you’re deploying a server, training a model, or crunching data, this trick can save you time and frustration.

Developer productivity tools for iPad

I have tried doing development on my iPad and I found some tools which I liked:
Python – Pythonista
Java – Jedona

I was able to write some interesting programs while on the go, on both my iPhone and especially on the iPad. However, I have realised that for better productivity it is best to use a remote desktop solution to log into my Windows machine and a VNC solution to log into my MacBook.

Windows Mobile App (Previously called RD Client)
RealVNC Client

I have a static IP available for my home network; however, one can easily use a service like noip.com to get a static domain name via the DDNS client built into your home router. If the router does not support a VPN server, I configure a port forward on it to RDP into my Windows machine or VNC into my MacBook. For better security I have set up a VPN server on my router and connect into my home network over VPN, which eliminates the need to open ports or do port forwarding at all.

Using the above approaches I no longer have to carry my laptops around; I use my iPad to do some work directly on the device or via a remote connection to my computers back home.

To manage my personal Linux servers on cloud I use Blink. Combined with Mosh I have found Blink to be a solid solution to SSH into my Linux boxes and do my work.

Blink

This type of setup has greatly reduced the amount of hardware I need to carry around. With one iPad and an iPhone I can work on most of my hobby projects and also do some of my office work.

Viewing files in hex mode

Sometimes I need to look at files in their actual hexadecimal format. I use the combination of vi and hexdump to fulfill this requirement.

To simply view a file’s contents in hex format I use this command on my Mac:

hexdump -C <file_name>

This is typically enough for the job at hand. There is also a second trick: using hexdump from inside vi. The steps:

  • Start vi by typing “vi <file_name>” on the command line.
  • Type “:%!hexdump -C”
  • This will replace the contents of the vi screen with the hex view of the current file.
  • Save the hex view to a new file with “:w <file_name>.hex” if you want to keep it; saving over the original file would replace the binary with its text dump.
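One caveat with the steps above: hexdump -C output is view-only, with no standard way to convert it back to binary. The xxd tool (bundled with vim) can reverse its own format, which makes a safe edit-and-restore round trip possible. A sketch:

```shell
# Create a small binary, dump it to editable hex text, then restore it.
printf 'hello' > sample.bin
xxd sample.bin > sample.hex        # binary -> editable hex text
xxd -r sample.hex > restored.bin   # hex text -> binary again
cmp sample.bin restored.bin && echo "round-trip OK"
```

Inside vi the equivalent is “:%!xxd” to convert the buffer and “:%!xxd -r” to convert it back before saving.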

Block a shell script till a server boots up

This small script makes a shell script (or a docker-compose startup step) wait until a dependent server is up.

#!/usr/bin/env bash

while :
do
  response=$(curl --write-out "%{http_code}" --silent --output /dev/null "$1")
  if [[ "$response" -ne 200 ]] ; then
    echo "Server is not yet up >> $1 >> $response"
    sleep 1
  else
    echo "Server is up >> $1 >> $response"
    exit 0
  fi
  fi
done
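A hypothetical usage, assuming the loop above is saved as wait-for.sh and your app exposes a health URL (the endpoint path here is an assumption; substitute your own):

```shell
# Block until the health endpoint answers 200, then run the dependent step.
# The && only fires because the script exits 0 once the server responds.
./wait-for.sh "http://localhost:8080/health" && echo "server ready, starting next step"
```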

Prettify JSON on Terminal

I love using the curl command on my Mac terminal to debug my REST endpoints. However, the JSON output of a REST call used to come back as a single blob of text that needed further formatting in an editor like Visual Studio Code. To solve this I installed “jsonpp” using Homebrew.

brew install jsonpp

So now I just pipe the output of my curl command to the jsonpp program and I get a fully formatted JSON.

$ curl http://localhost:8080/test | jsonpp
{
  "year": 2018,
  "month": 2,
  "worked": 18,
  "leaves": 2
}
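If you would rather not install anything, Python 3’s standard library can do the same job. A sketch using json.tool (assumes python3 is already on the box):

```shell
# python3 -m json.tool pretty-prints whatever JSON arrives on stdin.
curl -s http://localhost:8080/test | python3 -m json.tool
```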


Curl Command New Line Post Output

I like to use curl instead of UI tools like Postman for debugging my RESTful web services whenever possible. I didn’t, however, like my output being messed up by the bash prompt being appended right after it. Something like the following:

$ curl -H "$auth_token" http://localhost:8080/xyz/abc-efg
["-","A","B","C","D","E"]$

So basically what I needed was to force a new line after the curl output. A quick search on the internet yielded the fix, so I executed the following command in my terminal.

$ echo '-w "\n"' >> ~/.curlrc

After doing this when I execute the same curl command I get the following output.

$ curl -H "$auth_token" http://localhost:8080/xyz/abc-efg
["-","A","B","C","D","E"]
$

So now the bash prompt is actually coming on a new line by default!
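If you don’t want the behaviour globally, the same flag works per invocation, without touching ~/.curlrc:

```shell
# -w "\n" appends a newline after the response body for this call only.
curl -w "\n" -H "$auth_token" http://localhost:8080/xyz/abc-efg
```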

List of good Curl commands

I use curl for debugging my REST endpoints every now and then when I don’t want to use Postman or DHC clients. I usually like to copy-paste results and statistics from the command line into emails to my colleagues. This gives them a plain command which they can use to test for themselves and compare their results with mine. The following article has some good pointers on how to use curl, with some very practical examples.

15 Practical Linux cURL Command Examples (cURL Download Examples)

Displaying memory statistics along with hostname and war files

I recently got into a situation where I had to debug around 10 servers, all of which were suffering from memory issues. My Java applications were slowing down after a day or so. Although it is difficult to diagnose these kinds of issues when they span multiple VMs (welcome to microservices!), with a little scripting it is possible to watch the actual numbers in real time if you use tmux to stack multiple shell sessions together. I hacked up the following command to display the hostname, the running war files, and the memory statistics, refreshed every second:

watch -d -n 1 "hostname | tr '\r\n' ' ' && printf ' ' && jps -l | grep .war | tr '\r\n' ' ' && echo && free -h"