When Googling My Own Product Sent Visitors to a Prayer App: A Debugging Story With My AI Pair

The setup that broke my Friday afternoon

I was checking my own SEO. I typed “quizwrap” into Google. My site, QuizWrap — a free quiz-maker for students — showed up as the very first result. Great.

I clicked it.

A Ho’oponopono prayer counter loaded.

That’s a completely different app I run on the same server, and visitors looking for QuizWrap were landing on it instead. Worse, I quickly noticed a related issue: visiting https://smartdisha.co.in/ directly threw a TLS certificate error in the browser.

Two bugs, both on the same VPS, both involving the nginx reverse proxy that fronts everything. I sat down with Claude (Anthropic’s coding agent inside Claude Code) and we dug in together. What follows is the story of that debugging session — both the technical findings and what it was like to pair-debug with an AI.


The architecture (and a quick glossary)

A quick mental model so the rest of this makes sense.

A single VPS hosts three sites behind one system nginx — a popular web server that, in this setup, acts as a reverse proxy: a traffic cop sitting in front that takes incoming HTTPS requests and forwards them to the right internal app.

  • quizwrap.com — my quiz app
  • prayer.quizwrap.com — a small prayer counter
  • smartdisha.co.in — a separate site on the same box

Some traffic flows through a CDN before reaching origin, some doesn’t. Each domain has its own free Let’s Encrypt TLS certificate (the thing that makes the little padlock icon appear in your browser), and nginx is configured with one server block per domain.

A few terms I’ll keep using:

  • TLS — the encryption layer behind HTTPS. The “S” in HTTPS.
  • Certificate — a small file that proves a server owns the domain it claims to. Browsers reject the connection if the cert doesn’t match the domain.
  • SNI (Server Name Indication) — the most important term in this whole post. When your browser opens a TLS connection to smartdisha.co.in, it whispers the hostname it wants before the encryption is set up, so the server knows which certificate to present. One server can host many domains on the same IP, and SNI is how it picks the right cert. If SNI says one thing and the server returns the wrong cert, the browser shows a security warning and refuses to load the page.
  • Server block — nginx’s term for “the config chunk that handles requests for one domain.” Each domain has one (or several).
  • server_name directive — the line inside a server block that lists which hostnames that block is responsible for. If no block claims a hostname, nginx silently picks a default block as a fallback.
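To make that concrete, here is a minimal sketch of a server block (hypothetical names and paths, not my real config):

# Hypothetical /etc/nginx/sites-available/example.com
server {
    listen 443 ssl;
    server_name example.com www.example.com;  # the hostnames this block claims
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}

If a request’s hostname matches no server_name anywhere, nginx quietly falls back to a default block, which is exactly the trap this story is about.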

Bug #1: www.quizwrap.com was serving the prayer app

The detective work

Before touching anything, Claude pulled response headers from both URLs in parallel:

curl -sI https://www.quizwrap.com/
curl -sI https://smartdisha.co.in/

The two responses came back with identical fingerprints:

                  www.quizwrap.com        smartdisha.co.in
ETag              "69d2087a-332"          "69d2087a-332"
Content-Length    818                     818
Last-Modified     same date               same date
Title in body     Ho’oponopono Counter    Ho’oponopono Counter

(An ETag is a unique fingerprint a web server attaches to a file’s response — like a checksum. Two responses with the same ETag are byte-for-byte the same file.)

Same file, served to two different domains. Now we knew it was an nginx routing question, not a DNS or CDN issue.
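To reproduce that comparison on your own domains, a tiny shell loop prints the ETag each URL serves (same hostnames as above):

for u in https://www.quizwrap.com/ https://smartdisha.co.in/; do
  printf '%-32s ' "$u"
  curl -sI "$u" | grep -i '^etag'   # show just the ETag header
done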

Reading the configs over SSH

I had Claude SSH into my server (passwordless key auth — read-only operations, no sudo) and dump the three nginx configs. The first thing it spotted:

# /etc/nginx/sites-available/quizwrap.com
server {
    server_name quizwrap.com;
    ...
}

server_name quizwrap.com — not quizwrap.com www.quizwrap.com. There was no server block anywhere on the box claiming www.quizwrap.com. When a request arrived at my server saying “this is for www.quizwrap.com,” nginx had no rule that named that hostname, so it fell back to the first SSL block in alphabetical order — the one for prayer.quizwrap.com, which is what serves the prayer app.

That’s how a Google click on www.quizwrap.com ended up rendering Ho’oponopono. nginx was doing exactly what it was told; what it was told just didn’t include the www version of my domain.

The fix

A one-liner:

sudo sed -i 's/server_name quizwrap.com;/server_name quizwrap.com www.quizwrap.com;/' \
  /etc/nginx/sites-available/quizwrap.com
sudo nginx -t && sudo systemctl reload nginx

A test confirmed it:

HTTP/2 200
last-modified: Sun, 30 Nov 2025 15:42:27 GMT   ← quizwrap build, not the prayer one
<title>QuizWrap - FREE Study Quiz Maker for Students</title>

Then a defensive follow-up: re-issue the Let’s Encrypt cert to cover the www version too, so the cert chain stays internally consistent. (A single cert can list multiple hostnames in a field called the Subject Alternative Name, or SAN — that’s just “the list of domains this cert is valid for.”) One certbot command added www.quizwrap.com to the cert. Done.
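For reference, that command looks roughly like this; reconstructed from memory, but the flags are standard certbot (certonly, --cert-name, -d):

sudo certbot certonly --nginx --cert-name quizwrap.com \
  -d quizwrap.com -d www.quizwrap.com   # re-issue with both names in the SAN list

certbot detects the existing certificate and offers to expand it to the new name list.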

Bug #1: 5 minutes from “what is happening” to “fixed.”

Bug #2 was not like that.


Bug #2: smartdisha.co.in and the certificate that wouldn’t come right

The symptom

Browsers refused https://smartdisha.co.in/ with a cert error. openssl s_client showed why:

$ echo | openssl s_client -servername smartdisha.co.in -connect smartdisha.co.in:443 2>/dev/null \
    | openssl x509 -noout -subject -ext subjectAltName

subject=CN = prayer.quizwrap.com
DNS:prayer.quizwrap.com

The browser asked for smartdisha.co.in (via SNI), and the server handed back a certificate that says “I’m prayer.quizwrap.com.” That’s a name mismatch, so the browser refuses the connection — you’ve probably seen the resulting “Your connection is not private” error page. At first I thought the fix was going to be just as quick as the www one.

It wasn’t.

Two hours of dead ends

Here’s the parade of “that should have fixed it”:

  1. Re-issue the cert? sudo certbot --nginx -d smartdisha.co.in — certbot reported there was an existing cert and offered to reinstall. Reinstalled. No change. Browser still got prayer’s cert.
  2. Maybe nginx didn’t reload cleanly. sudo systemctl reload nginx. No change.
  3. Inspect the cert file directly.
    sudo openssl x509 -in /etc/letsencrypt/live/smartdisha.co.in/fullchain.pem \
        -noout -subject -ext subjectAltName
    subject=CN = smartdisha.co.in
    DNS:smartdisha.co.in

    The file on disk was correct. nginx just wasn’t serving it.

  4. Maybe the workers cached an old cert. sudo systemctl restart nginx. No change.
  5. Check nginx -T for the loaded config. The smartdisha SSL block was fully loaded, with the right server_name, the right listen 443 ssl;, and the right cert path. Everything looked correct. Still no change.

At one point I checked ps and noticed three nginx master processes — two with nginx -g daemon off; (the Docker-container telltale) and one system nginx. Claude initially flagged this as the smoking gun: maybe a Docker container was intercepting TLS. We confirmed via ss -tlnp that the system nginx was actually the only thing on port 443; the Docker nginxes were just internal app servers behind it. Wrong turn — but a reasonable one.
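That port check is worth stealing for your own debugging. Something like:

sudo ss -tlnp | grep ':443'   # listening TCP sockets on 443, with owning process names

answers “who really owns this port?” in one line.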

My moment of skepticism

I sent Claude a screenshot of my DNS panel with the message:

“Before we go chase our tail. Check the configuration attached.”

This was the right instinct. I was tired of theories that weren’t panning out. Stepping back to verify a load-bearing assumption — is the request path for this domain actually what we think it is? — confirmed we were looking at the right place, but it could just as easily have caught us going the wrong way for another hour.

Lesson: when you’re three theories deep and none have stuck, your AI assistant doesn’t always notice it’s in a loop. Pushing back is your job.

The breakthrough: probing SNI directly

Claude wrote a small loop that asked nginx, in plain English: “If a browser tells you it wants hostname X, which certificate do you hand back?” It does this once for each domain on the box.

ssh my-server 'for sni in <each-hostname-on-the-box>; do
  printf "SNI=%-30s -> " "$sni"
  echo | openssl s_client -servername "$sni" -connect localhost:443 2>/dev/null \
    | openssl x509 -noout -subject 2>/dev/null
done'
SNI=smartdisha.co.in           -> CN = prayer.quizwrap.com    ❌
SNI=www.quizwrap.com           -> CN = quizwrap.com            ✓
SNI=quizwrap.com               -> CN = quizwrap.com            ✓
SNI=prayer.quizwrap.com        -> CN = prayer.quizwrap.com     ✓
SNI=nonexistent.example.com    -> CN = prayer.quizwrap.com     (default fallback)

There it was. smartdisha.co.in was being treated identically to a totally unknown hostname. It wasn’t a cert problem at all — the cert file on disk was perfectly fine. nginx just wasn’t recognizing smartdisha.co.in as a hostname it knew about. Both unknown hostnames and smartdisha.co.in fell through to the same default fallback block (prayer, which is alphabetically first), which is why both got prayer’s cert.

The actual root cause

With that clue, Claude re-read all three nginx configs side-by-side and found the only structural difference:

Block         IPv6 listen                          IPv4 listen
prayer        listen [::]:443 ssl ipv6only=on;     listen 443 ssl;
quizwrap      listen [::]:443 ssl; (dual-stack)    listen 443 ssl;
smartdisha    — missing —                          listen 443 ssl;

A bit of background to read that table: every server on the internet has two kinds of addresses available — older IPv4 (the familiar 1.2.3.4 style) and newer IPv6 (the longer colon-separated style, like 2001:db8::1). nginx’s listen directive tells it which addresses to accept connections on. listen 443 ssl; means “IPv4 only.” listen [::]:443 ssl; means “IPv6,” but on Linux it can also quietly accept IPv4 connections at the same time — that’s what “dual-stack” means.
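For quick reference, here are the three listen variants from that table side by side, with my reading of each (illustrative lines, not copied from the configs):

listen 443 ssl;                   # IPv4 only
listen [::]:443 ssl;              # IPv6; on Linux it may also accept IPv4 (dual-stack)
listen [::]:443 ssl ipv6only=on;  # IPv6 only, explicitly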

quizwrap’s listen [::]:443 ssl; (without ipv6only=on) creates one of these dual-stack sockets. Internally, nginx groups server blocks by which socket they’re attached to, and uses that grouping to decide who handles each incoming connection. smartdisha, lacking any IPv6 listen line of its own, ends up in a different group than the dual-stack one, and inside that group the prayer block (alphabetically first) becomes the default catch-all. Even though smartdisha’s server block is loaded and looks correct, the grouping means SNI lookups for smartdisha.co.in arrive at a group where smartdisha isn’t listed — and fall back to prayer.

Subtle, weird, and exactly the kind of thing nginx -t (the config syntax checker) won’t catch, because the syntax is fine.

The fix

Make smartdisha’s listen directives match the others:

sudo sh -c '
  cp /etc/nginx/sites-available/smartdisha.co.in /etc/nginx/sites-available/smartdisha.co.in.bak
  sed -i "/^    listen 443 ssl; # managed by Certbot$/i\\    listen [::]:443 ssl;" \
    /etc/nginx/sites-available/smartdisha.co.in
  nginx -t && systemctl reload nginx && echo DONE
'

Re-running the SNI probe afterwards:

SNI=smartdisha.co.in           -> CN = smartdisha.co.in        ✓

curl https://smartdisha.co.in/ succeeded with full TLS validation, no -k flag needed. The browser was happy.


What it was actually like to debug this with an AI

A few things stood out about the collaboration that I want to share.

Claude was great at the things I’m bad at. It pulled response headers from two domains in parallel, parsed cert subjects out of openssl s_client output, and noticed immediately that two responses had the same ETag — something I’d have read past. The structured diff between three nginx configs at the end (the listener-table comparison) was exactly the kind of thing my eyes glaze over after the second config file.

I was great at the things Claude is bad at. When we got stuck on Bug #2, Claude proposed three theories in a row, each plausible, none correct. The Docker-container theory in particular was a confidently-stated wrong answer. I knew that side of my own infrastructure well enough to be unimpressed. My push-back (“before we chase our tail”) was what reset the direction.

Security boundaries actually got enforced. When I offered Claude my sudo password to speed things up, it explicitly refused and explained why (the password would be in the chat transcript, in shell process listings, and a single leak compromises the whole server). It walked me through the alternatives — running the destructive commands myself in my own terminal, or scoping a passwordless sudoers rule for nginx-related commands only. Reading the full advice, I ended up just running each sudo command in my own shell and pasting the result. Slower, but at no point did a privileged credential cross a boundary it shouldn’t.

Transparency mattered. Halfway through Bug #2 I told Claude “I can’t see the commands you’re executing on my server.” It immediately listed every SSH command it had run and committed to printing each new command before executing it. That changed the dynamic — it stopped feeling like Claude was off doing things in the dark and started feeling like a teammate sharing their screen.

Knowing when to escalate to a one-shot fix. After multiple roundtrips of “paste this, paste that,” I asked Claude to drive over SSH so I could stop copy-pasting. It moved the read-only diagnostics to its own SSH connection and packaged the one mutating step into a single sudo block I could paste once and approve once. The friction of the back-and-forth dropped massively.


Lessons that generalise

A few things I’m taking away from this:

  1. Identical ETags across two domains = the same file is being served. If two of your sites unexpectedly look the same, that single header probably solves the mystery before you read a line of config.
  2. server_name is a registration, not just a label. If a hostname isn’t named in any block, nginx won’t error — it’ll silently pick a default and serve someone else’s content.
  3. nginx -t passing means valid syntax. It does not mean what you intended. All three configs in this story passed nginx -t with no warnings while half-broken.
  4. Mixing listen [::]:443 ssl; (dual-stack) and listen 443 ssl; (IPv4-only) across server blocks is a footgun. Either go all-dual-stack or all-with-ipv6only=on. Mixing changes the listener topology in ways that affect SNI dispatch.
  5. The openssl s_client -servername X -connect Y:443 probe is a debugging superpower. It’s a one-line command that simulates exactly what a browser does — say “I want hostname X” via SNI, and see which certificate the server returns. Whenever an HTTPS-served domain is misbehaving, this probe will often tell you the answer in five lines.
  6. Pair-debugging with an AI works best when you stay in the loop. Treat its theories as drafts, not conclusions. Push back when you smell drift. Make it show its work.

The whole session was somewhere between two and three hours. By the end my SEO problem was gone, my secondary domain’s TLS was clean, and I had a much better mental model of how nginx makes SNI decisions across mixed-listener configurations. Worth the afternoon.


Total commands run on the server during this session: about 30. Total commands run with sudo: 5. Total credentials shared with the AI: zero.

How I Built a 123,000 LOC Enterprise Platform in 4.4 Months as a Solo Developer

A deep dive into productivity gains, lessons learned, and the numbers behind building an enterprise analytics platform


The Challenge

In August 2025, I started building an enterprise analytics and governance platform from scratch. The scope was ambitious: natural language SQL queries, semantic search, row-level security, column-level security, attribute-based access control, dashboard builders, and more.

Traditional estimates suggested this would take 5.5 years with a single developer, or require a 12-person team working for several months.

I delivered it in 4.4 months. Solo.

Here’s the breakdown.


By The Numbers

What Was Delivered

Metric                Value
Lines of Code         123,430
Stories Completed     100
Epics                 14 (13 completed, 1 planned)
Microservices         6
Development Period    August 7 – December 21, 2025

The Math: Traditional vs Actual

Traditional Solo Developer Estimate:

Story Points: 1,584 SP (all 14 epics)
Velocity: 12 SP per 2-week sprint (industry average)
Sprints Required: 1,584 ÷ 12 = 132 sprints
Timeline: 132 sprints × 2 weeks = 264 weeks = 66 months = 5.5 years

What Actually Happened:

Actual Effort: 211 SP (complexity-adjusted)
Velocity: 28 SP per 2-week sprint
Sprints Required: 211 ÷ 28 = 7.5 sprints
Timeline: 7.5 sprints × 2 weeks = 15 weeks ≈ 4 months

The Multipliers

  • Base Velocity: 2.3x faster (28 SP vs 12 SP per sprint)
  • Complexity Reduction: 7.5x (1,584 SP → 211 SP actual effort)
  • Overall Timeline: 15x faster than traditional estimates
  • Team Equivalence: Delivered what would traditionally require a 12-person team

What Did I Build?

The platform consists of 6 microservices plus a React frontend:

Frontend: Web UI (75,991 LOC) – React 19 with Dashboard Builder V1 & V2, 30+ widget types. Compiled into static files and served by the Service Layer.

Backend Microservices:

1. Service Layer (17,088 LOC) – Spring Boot 3.2 with OAuth2 authentication
2. Core API (11,600 LOC) – FastAPI with cloud data warehouse gateway and caching
3. NL-to-SQL Engine (7,250 LOC) – Natural language to SQL engine with LLM integration
4. Data Firewall (6,561 LOC) – SQL-level security with RLS, CLS, and ABAC
5. Semantic Search (3,263 LOC) – Semantic search with FAISS vector database
6. AI Integration Layer (1,677 LOC) – MCP protocol server for AI tool integration


Comparison with Traditional Development

Aspect              Traditional              Actual               Improvement
Planning            2-3 weeks per epic       2-3 days per epic    7-10x faster
Implementation      50-60 weeks              16-18 weeks          3x faster
Testing             8-10 weeks               2-3 weeks            3-4x faster
Documentation       4-6 weeks                1 week               4-6x faster
Overall Timeline    66 months (5.5 years)    4.4 months           15x faster
Team Size           12 developers            1 developer          92% reduction

Success Factors

1. Architectural Excellence

I spent significant time upfront on architecture. The microservices approach wasn’t just about scalability—it was about cognitive load management.

Key architectural decisions:

  • 6 independent, scalable microservices – Each service could be developed, tested, and deployed independently
  • API-First Design – Clear contracts with OpenAPI documentation enabled parallel development of frontend and backend
  • Clean Separation of Concerns:
    • Data Firewall for ALL security (RLS/CLS/ABAC)
    • Core API for ALL data warehouse interactions
    • Service Layer for ALL authentication
  • Technology Fit – Python for SQL parsing, Java for auth, React for UI – each technology chosen for what it does best

No overlap. No confusion. When debugging, I always knew exactly which module to look at.

2. Development Best Practices

  • Incremental Delivery: 100 stories over 132 days = 0.76 stories per day. No big bang releases—every day something shipped.
  • Git Discipline: 945+ commits over 132 days = 7+ commits per day. Small, focused commits with clear messages.
  • Security First: OAuth2, RBAC, RLS, CLS, ABAC built into the foundation from day one—not bolted on later.
  • Documentation: Complete JIRA stories and technical documentation maintained throughout development.

3. Technology Choices

I chose technologies based on what they were best at, not what was trendy:

  • sqlglot (SQL parsing) – Best SQL parsing library available; Python-only, which drove Data Firewall’s language choice
  • React Grid Layout (Dashboard V2) – Proven, battle-tested library for drag-and-drop grid layouts
  • AG Grid (data tables) – Enterprise-grade data grid with sorting, filtering, pagination out of the box
  • FAISS (vector search) – Facebook’s library for efficient similarity search; enables local RAG without external APIs
  • FastMCP (AI tool integration) – MCP protocol server for connecting AI tools like Cursor and Claude Desktop
  • Spring Security (authentication) – Battle-tested for enterprise OAuth2/JWT flows

4. AI-Assisted Development

A significant productivity multiplier came from AI coding assistants. I used multiple tools depending on the task:

  • Claude Code – Complex refactoring, architecture decisions, multi-file changes. Deep understanding of codebase context; handled intricate cross-service changes.
  • Cursor – Day-to-day coding and quick implementations. Fast inline completions; excellent for iterating on UI components.
  • GitHub Copilot – Boilerplate code and repetitive patterns. Accelerated writing of tests, DTOs, and standard CRUD operations.
  • Windsurf – Code exploration and understanding unfamiliar code. Helpful for navigating large codebases and understanding dependencies.

The key wasn’t replacing thinking with AI—it was offloading the mechanical work. Architecture decisions, security design, and debugging complex issues still required human judgment. But writing boilerplate, generating test cases, and implementing well-defined patterns? AI tools handled those efficiently.

This combination reduced the “typing overhead” and let me focus on the hard problems: SQL injection prevention in the Data Firewall, cascading variable resolution, and JWT token propagation across services.


What Worked Exceptionally Well

  1. Clear Architecture from Start – Well-defined module boundaries enabled focused development and easier debugging. I never had to wonder “where does this code belong?”

  2. Microservices Done Right – Independent modules allowed me to work on one service without breaking others. Each service had its own repository, its own tests, its own deployment.

  3. Incremental Approach – Building features incrementally with regular testing reduced risk dramatically. When something broke, I knew it was in the last day’s work.

  4. Git Discipline – 945+ commits provided a clear development history. Git bisect became invaluable for tracking down issues.

  5. Technology Fit – Choosing the right tool for each job paid dividends:

    • Python + sqlglot for SQL parsing (no equivalent in Java/JavaScript)
    • Java + Spring Security for enterprise auth
    • React + TypeScript for type-safe UI development
  6. API-First Design – Defining API contracts early meant frontend and backend could be developed in parallel. No waiting for the other side to be “ready.”


Challenges Overcome

Not everything was smooth sailing. Here’s what was genuinely hard:

SQL Parsing Complexity

Building a Data Firewall that could inject WHERE clauses into arbitrary SQL while handling nested queries, CTEs, and JOINs was the hardest technical challenge. sqlglot’s learning curve was steep, but once mastered, it was incredibly powerful.

Cascading Variables

Dashboard variables that depend on other variables require topological sorting to resolve in the correct order. A user selects “Region” → that filters “Market” → that filters “Store”. Getting the dependency resolution right took multiple iterations.
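This isn’t the platform’s actual code, but the venerable Unix tsort utility demonstrates the idea in two lines: each input pair “A B” means A must be resolved before B, and tsort prints a valid resolution order.

# Hypothetical dependency pairs: Region before Market, Market before Store
printf '%s\n' 'Region Market' 'Market Store' | tsort
# Prints: Region, Market, Store (one name per line)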

Per-Instance Widget Caching

The dashboard builder allows multiple instances of the same widget with different configurations. Architecting efficient per-instance caching with UUID tracking while maintaining cache coherence was tricky.

JWT Token Flow

Propagating user context from the frontend → Spring Boot → Data Firewall → Core API—while maintaining security at each hop—required careful architecture. Each service needed to validate and forward the JWT correctly.
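At the wire level, each hop is ordinary HTTP with the caller’s bearer token re-attached. A hypothetical sketch (the internal hostname, path, and payload are invented for illustration):

# Each service validates the incoming JWT, then forwards it on the next hop
curl -s https://core-api.internal/v1/query \
  -H "Authorization: Bearer ${USER_JWT}" \
  -d '{"sql": "SELECT 1"}'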

Full-Stack Coordination

Keeping 6 microservices plus a React frontend in sync across Spring Boot and multiple Python services was a constant balancing act. A breaking change in one service could cascade.

Multi-Environment Management

Managing dev/staging/prod configurations across all modules, with different OAuth providers, database credentials, and data warehouse projects, required disciplined configuration management.

Security Compliance

Meeting security scanning requirements meant addressing vulnerabilities as they were found, not deferring them. This added overhead but resulted in a more secure codebase.


Future Improvements

No project is ever truly “done.” Here’s what I’d tackle next:

  1. Automated E2E Testing – Expand test coverage with Selenium or Playwright. I relied too heavily on manual testing.

  2. Performance Monitoring – Implement Prometheus/Grafana observability. Currently, debugging performance issues requires digging through logs.

  3. Distributed Caching – Each service has its own cache. A shared Redis layer would improve consistency and reduce duplicate data.

  4. Dashboard V2 Enhancements – Additional widget types, more templates, and improved drag-and-drop UX.

  5. Advanced Analytics – Epic 11 (the one planned epic) covers scheduled reports, data exports, and executive dashboards.


The Takeaway

The numbers are real. 123,430 lines of code. 100 stories. 4.4 months. Solo.

But the numbers don’t tell the whole story. What made this possible wasn’t superhuman coding speed—it was:

  1. Clear architecture that reduced cognitive load
  2. Right technology choices for each problem
  3. Disciplined incremental delivery
  4. Security built in from day one
  5. Relentless focus on what mattered

The productivity multipliers compound. A 2.3x velocity improvement combined with a 7.5x complexity reduction doesn’t just add up to 9.8x: the gains multiply, giving roughly 17x, which is where the 15x overall timeline figure comes from.

Could I do it again? On a different project, with different constraints? Maybe. The principles would transfer. The specific numbers might not.

But one thing I know for sure: the traditional estimates of 5.5 years or 12-person teams aren’t wrong—they’re based on how software was typically built.

Running Background Processes in Linux with nohup and &

Have you ever started a long-running script in your terminal, only to lose all progress when the session closed? Or maybe you wanted to keep a process running in the background while continuing to use the terminal?

That’s where nohup and & come in handy.

What Are nohup and &?

  • nohup: Short for no hang up, it allows a command to keep running even after you log out or close the terminal.
  • &: Sends the command to the background, freeing up your terminal for other tasks.

Together, they’re a powerful combo for running persistent background processes.

Basic Usage

Here’s the magic one-liner:

nohup your_command > output.log 2>&1 &

Breakdown:

  • your_command: The script or program you want to run.
  • > output.log: Redirects standard output to a file.
  • 2>&1: Redirects standard error to the same file.
  • &: Runs the command in the background.
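A concrete example (the script name is hypothetical):

nohup python3 train_model.py > training.log 2>&1 &
echo "Started with PID $!"   # $! expands to the PID of the job just backgrounded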

Monitoring the Output

To watch the output in real-time:

tail -f output.log

This is especially useful for debugging or tracking progress.

Cleaning Up: Stopping the Process

To stop the background process:

  • Find the process ID (PID):
ps aux | grep your_command
  • Kill it:
kill <PID>
  • Or, if you want to be sure:
kill -9 <PID>
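A pattern that skips the grep step entirely is to capture the PID at launch (script name illustrative):

nohup ./backup.sh > backup.log 2>&1 &
echo $! > backup.pid           # remember the PID
kill "$(cat backup.pid)"       # stop it later, no ps spelunking needed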

Pro Tips

  • If you forget to redirect output, nohup will write to nohup.out by default.
  • You can check running background jobs with:
jobs
  • To bring a background job to the foreground:
fg %1

Final Thoughts

Using nohup and & is a simple yet powerful way to manage long-running or persistent processes in Linux. Whether you’re deploying a server, training a model, or crunching data, this trick can save you time and frustration.

The Complete AI App Guide for 2025: Essential Tools for IT Professionals


Artificial Intelligence is reshaping every layer of software development, from coding and testing to documentation and deployment. This guide curates the most impactful AI applications of 2025, with a special focus on tools that elevate an IT professional’s daily workflow.

Software Development & Programming

Cursor

AI-first code editor that understands your entire project context, offers natural-language refactors, and ships with multi-model support.

GitHub Copilot

Your AI pair programmer for instant code completions, chat-based explanations, and automated tests.

Replit AI Agent

Describe an idea in plain English and watch Replit spin up a working web or mobile app, complete with hosting.

Codeium

Fast, free autocomplete that plugs into 70+ languages and every major IDE.

Anychat

Unified chat interface where you can swap between multiple AI models mid-conversation.

Claude Code

Terminal-native assistant for deep codebase understanding and cross-file edits.

General AI Assistants

  • Perplexity – instant, cited answers for technical research.
  • Claude – long-context reasoning and collaborative project chat.
  • ChatGPT – versatile chatbot with voice mode and strong coding skills.

Productivity & Workflow

  • Granola – turns meetings into structured notes, action items, and summaries.
  • Wispr Flow – system-wide voice dictation that works in any app.
  • Gamma – generate slide decks, documents, or one-page sites from prompts.
  • Adobe AI Assistant – chat with long PDFs, contracts, or manuals.
  • Cubby – collaborative research workspace with built-in AI search.
  • Cora – inbox triage and auto-reply generation for email power users.
  • Lindy – no-code builder for custom AI agents that automate routine tasks.
  • Notion AI – smarter docs, wikis, and databases with in-line generation.

Content Creation & Video

  • HeyGen – realistic AI avatars for tutorials, product demos, and localization.
  • Delphi – voice, video, and text clones for audience engagement.
  • Argil – quick social-media videos featuring AI hosts.
  • Opus – auto-splits long videos into shareable viral clips.
  • Persona – build AI agents that reflect your personal brand.
  • Captions – automatic subtitles, eye-contact correction, and AI presenters.

Creative Tools

  • ElevenLabs – ultra-realistic, multilingual text-to-speech voices.
  • Suno & Udio – compose full songs from a written prompt.
  • Midjourney, Ideogram, Playground – high-quality image generation suites.
  • Runway, Kling, Viggle – next-gen video generation platforms.
  • Krea – canvas for mixing and remixing AI images or clips.
  • Photoroom – one-click product shots, background removal, and batch edits.

Learning & Personal Development

  • Rosebud – interactive journaling with data-backed insights.
  • Good Inside – parenting advice with personalized AI support.
  • Ada Health – symptom assessment and health guidance.
  • Ash – AI-based mental-health coach.
  • NotebookLM – convert any document into an AI-driven podcast.
  • Particle – bite-sized news summaries with source links.

Entertainment & Fun

  • Remix – social platform for sharing AI-generated art and video.
  • Meta Imagine – create playful AI avatars inside Meta apps.
  • Grok – chat companion with a sense of humor from xAI.
  • Curio – interactive toys powered by AI voices.

Getting Started: A Six-Week Integration Roadmap

  1. Weeks 1–2 – Lay the Foundation: Install Cursor or GitHub Copilot in your IDE, plus Codeium as a free backup. Set up ChatGPT and Claude for on-demand problem solving.
  2. Weeks 3–4 – Boost Productivity: Adopt Notion AI for project docs, leverage Adobe AI Assistant for reading specs, and generate slides with Gamma.
  3. Weeks 5–6 – Automate & Scale: Build Lindy agents for repetitive tasks, prototype ideas in Replit, and add professional narration with ElevenLabs.
  4. Beyond: Join AI developer communities, stay updated on new releases, and gradually expand your toolkit.

The future belongs to developers who treat AI as a creative partner, not a replacement. Choose one or two of these tools today, master them, and watch your productivity soar.

When Your AI Coding Assistant Becomes a Digital Wrecking Ball

A cautionary tale of Docker, environment variables, and why backup strategies exist

Yesterday, I had what I can only describe as the most educational disaster of my coding career. Picture this: I’m working on a beautiful microservices setup – three perfectly orchestrated containers (web, backend, core) each with their own well-defined .env files. Life is good. Docker containers are humming. Everything is chef’s kiss perfect.

Then I made the fatal mistake of asking my AI coding assistant: “Hey, can you help me move from host networking to docker-compose?”

The Descent Into Chaos

What followed was like watching a digital tornado in slow motion:

AI: “Sure! Let me create extensive documentation and docker-compose files!”
Me: “Wait, I just wanted a simple docker-compose setup…”
AI: “Here’s 47 markdown files and a management script!”
Me: “Please stop. Revert everything.”
AI: “No problem!” *accidentally deletes entire project structure*
Me: “…Did you just delete my web, backend, and core directories?”
AI: “Oops. 😅”

The Recovery Mission

After manually restoring everything from git (thank you, version control gods), I thought we were back on track. But then:

Me: “My development environment is broken. ‘vite: command not found’”
AI: “Let me fix that!” *creates duplicate environment variables*
Me: “Why do I have PORT=6080 AND APP_CORE_PORT=6080?”
AI: “Redundancy is good, right? No? Okay, let me fix it…” *creates more duplicates*

It was like watching someone try to fix a leaky faucet with a sledgehammer.

The Silver Lining

But here’s the plot twist – through all this chaos, we actually ended up with something beautiful:

  • Clean, consistent environment variable naming (APP_*_SERVER_PORT)
  • Proper fail-fast configuration (no silent defaults!)
  • A docker-compose setup that actually works
  • And most importantly: updated .env.example files for proper documentation
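That fail-fast point deserves a concrete illustration. In shell form it is one line of standard POSIX parameter expansion (variable name borrowed from later in this post):

# Abort immediately at startup if the required variable is unset or empty
: "${APP_BACKEND_SERVER_PORT:?APP_BACKEND_SERVER_PORT must be set}"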

Lessons Learned

  1. Always have backups – Git saved my bacon more times than I can count
  2. AI assistants are powerful but not infallible – They’re like enthusiastic interns with superpowers
  3. Clear communication is key – “Simple docker-compose setup” apparently means “comprehensive enterprise documentation suite” in AI speak
  4. The journey matters – Sometimes you need to break everything to build it back better
  5. Naming conventions matter – APP_BACKEND_SERVER_PORT is way better than PORT_THING_MAYBE

Final Thoughts

Working with AI coding assistants is like having a brilliant colleague who occasionally decides to reorganize your entire codebase while you’re getting coffee. They mean well, they’re incredibly capable, but sometimes they have the subtlety of a rhinoceros in a china shop.

Would I do it again? Absolutely. But next time, I’m committing to git after every single change.

And yes, AI assistant, this blog post is about you. You know who you are. 😉


P.S. – To my fellow developers: Always backup your .env files. They might seem small and innocent, but losing them is like losing the keys to your digital kingdom.

P.P.S. – The docker-compose setup works perfectly now. Sometimes you need a little chaos to achieve order.

10 Universal Phone Shortcuts Every Mobile User Should Know

Ever wondered how to check your phone’s IMEI number without diving into settings? Just dial *#06# and it pops right up — even on the simplest feature phone!

But did you know there are many more such hidden codes that can unlock useful information or tweak important settings — all from your phone’s keypad?

Whether you’re using a basic keypad phone or the latest smartphone, here are 10 tried-and-true universal GSM shortcuts that work on nearly every mobile device around the world.

Top 10 Universal GSM Codes (Not OS Dependent)

1. Check IMEI Number – *#06#

Shows your device’s unique identifier. Useful if your phone is lost or stolen, or when verifying authenticity before buying a second-hand phone.

2. Check Call Forwarding Status – *#21#

Tells you whether your calls, messages, or data are being forwarded to another number. A handy security check!

3. Check Forwarding When Out of Coverage – *#62#

Find out where your calls are being sent when your phone is off or out of network range.

4. Check Forwarding When Busy – *#67#

Shows which number your calls go to when you’re already on another call.

5. Check Forwarding When Unanswered – *#61#

Shows which number your calls go to when you don’t pick up in time.

6. Check All Conditional Forwarding Settings – *#004#

Gives a complete overview of all conditional call forwarding (busy, unreachable, no reply).

7. Deactivate All Call Forwarding – ##002#

Quickly cancels all types of call forwarding, restoring default behavior. Good to use if you’re unsure which settings are active.

8. Check Call Waiting Status – *#43#

Find out if call waiting is currently enabled — so you don’t miss important second calls.

9. Activate Call Waiting – *43#

Turn on call waiting to receive alerts during ongoing calls.

10. Deactivate Call Waiting – #43#

Switch off call waiting to avoid interruptions during important conversations.

Conclusion

Mobile phones — even the simplest ones — come with a powerful set of secret codes that give you control, transparency, and security. Whether you’re a tech enthusiast or just someone who wants to keep things tidy, these universal codes will come in handy.

From One Big App to Many Small Ones: A Developer’s Guide to Containers

Picture this: you’ve built a successful web application that started small but has grown into something amazing. Users love it, your team has expanded, and everything seems great. But there’s a problem lurking beneath the surface. Every time you want to add a new feature or fix a bug, it takes forever. Deploying updates feels like performing surgery on a patient who’s wide awake. Sound familiar?

If you’re nodding your head, you’re dealing with what developers call a “monolith” – an application where everything is bundled together in one massive codebase. While monoliths work great when you’re starting out, they can become a real headache as your app grows. The good news? There’s a proven way to solve this problem using something called containers.

What’s the Problem with Big Applications?

Think of a monolithic application like a huge department store where everything is connected. The clothing section shares the same checkout system as electronics, the inventory system controls everything from shoes to smartphones, and if you want to renovate the toy section, you might accidentally break the jewelry department.

In software terms, this means:

  • When one part of your app breaks, it can bring down everything else
  • Adding new features requires testing the entire application
  • Scaling becomes expensive because you have to scale everything, even if you only need more power for one feature
  • Different teams end up stepping on each other’s toes

Enter Containers: Your App’s New Best Friend

Containers are like moving each department of that massive store into its own building. Each department (or service) can operate independently, but they can still communicate with each other when needed. If the toy store needs renovation, the electronics store keeps running without interruption.

In technical terms, a container packages your application code along with everything it needs to run – like a lunch box that contains not just your sandwich, but also the plate, napkin, and utensil you need to eat it.

The Step-by-Step Journey: From Chaos to Order

1. Take a Good, Hard Look at What You Have

Before you start tearing apart your application, you need to understand what you’re working with. This is like creating a detailed floor plan of that massive department store before you start moving things around.

Spend time examining your codebase to identify different functional areas. Most applications naturally group into sections like:

  • User accounts and login systems
  • Payment processing
  • Email notifications
  • Data reporting
  • Content management

Draw these relationships out on paper or in a diagramming tool. You’ll be surprised how much this simple exercise reveals about your application’s structure.

2. Put Your Entire App in a Container First

Here’s where most people make a mistake: they immediately try to break everything apart. Don’t do that. Instead, take your entire monolithic application and put it in a container first.

This is like moving your entire department store into a standardized building before you start separating departments. It solves a huge problem called “environment inconsistency” – the dreaded “it works on my computer but not on yours” syndrome.

When your app runs the same way on your laptop, your colleague’s computer, and your production servers, you eliminate countless headaches and mysterious bugs.
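As a sketch, containerizing a Node.js monolith wholesale can be this small (the base image, port, and entry point are assumptions; adapt them to your stack):

# Dockerfile: run the whole monolith, unchanged, in one container
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev     # install production dependencies reproducibly
COPY . .
EXPOSE 3000               # whatever port the app already listens on
CMD ["node", "server.js"]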

3. Pick Your First Target Carefully

Now comes the fun part: choosing which piece to extract first. This decision is crucial and should be strategic, not random.

Look for parts of your application that are:

  • Self-contained (they don’t depend heavily on other parts)
  • Relatively simple
  • Not critical to your core business logic

Great first candidates include:

  • Authentication systems (login/logout functionality)
  • Email notification services
  • File upload handlers
  • Search functionality

Successfully extracting your first service is like winning your first game – it builds confidence and teaches you the process for future extractions.

4. Bring in the Orchestra Conductor

As you create more containers, managing them manually becomes like trying to conduct a symphony orchestra by shouting instructions. You need a proper conductor, and in the container world, that’s Kubernetes.

Kubernetes is a platform that automatically handles:

  • Starting and stopping your containers
  • Distributing traffic between multiple copies of the same service
  • Restarting failed containers
  • Scaling services up or down based on demand

For beginners, consider starting with simpler alternatives like Docker Compose for development, then moving to managed Kubernetes services offered by cloud providers like Google Cloud, AWS, or Microsoft Azure.
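A minimal Docker Compose sketch of the half-split store might look like this (service names and ports invented for illustration):

# docker-compose.yml: the shrinking monolith plus the first extracted service
services:
  monolith:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - auth
  auth:
    build: ./auth-service
    ports:
      - "4000:4000"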

5. Automate Everything from Day One

One of the biggest mistakes teams make is leaving deployment as a manual process. This is like insisting that every product in your store be moved by hand instead of using conveyor belts and automated systems.

Set up automated pipelines that:

  • Test your code automatically when you make changes
  • Build container images without human intervention
  • Deploy to testing environments instantly
  • Notify you if anything goes wrong

This automation eliminates human error and makes deployments so routine that they become boring – which is exactly what you want.

6. Untangle the Database Web

Databases are often the trickiest part of breaking up a monolith. In our department store analogy, this is like having a single cash register system that every department has been modifying over the years.

The key principle is simple: each service should own its data. Instead of letting multiple services directly access the same database tables, establish clear boundaries. If Service A needs data from Service B, it should ask politely through an API rather than sneaking into Service B’s database.

You don’t necessarily need separate physical databases immediately, but you must enforce these ownership rules in your code. For shared data like user sessions, move them to dedicated systems like Redis that are designed for sharing.

7. Make Your System Observable

When you had one big application, finding problems was like debugging issues in a single room. With multiple services, it’s like troubleshooting problems across an entire shopping mall. You need security cameras, intercoms, and monitoring systems everywhere.

Implement comprehensive observability from the start:

  • Logging: Ensure every service writes detailed logs about what it’s doing
  • Monitoring: Track metrics like response times, error rates, and resource usage
  • Health checks: Each service should be able to report whether it’s healthy
  • Distributed tracing: Follow requests as they travel between services

Tools like Prometheus for metrics collection, Grafana for dashboards, and Jaeger for tracing make this much easier than building everything from scratch.
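Health checks in particular can start life right in your compose file. An illustrative sketch (assumes the image contains curl and the service exposes a /health endpoint):

# Compose marks the service unhealthy when the probe keeps failing
services:
  auth:
    build: ./auth-service
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4000/health"]
      interval: 30s
      timeout: 3s
      retries: 3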

8. Rinse and Repeat

With your first service successfully extracted and your tooling in place, you’re ready to continue the process. Each subsequent extraction becomes easier because:

  • You’ve learned the patterns and potential pitfalls
  • Your automation pipelines are already set up
  • Your team has gained confidence and experience
  • You have monitoring and observability systems in place

Gradually, your monolith shrinks while your collection of focused, independent services grows.

The Light at the End of the Tunnel

This transformation isn’t just about technology – it changes how your entire team works. Instead of everyone working on one massive codebase and stepping on each other’s toes, different teams can own different services. The frontend team can deploy their changes without waiting for the backend team to finish theirs. The payments team can scale their service during Black Friday without affecting the recommendation engine.

Common Pitfalls to Avoid

Don’t try to do everything at once: The temptation to rewrite everything from scratch is strong, but it’s usually a mistake. Incremental change is safer and more sustainable.

Don’t ignore the human element: This transformation affects your entire team’s workflow. Invest in training and make sure everyone understands the new processes.

Don’t forget about data: Plan your database separation strategy early. It’s often the most complex part of the entire process.

Don’t skip monitoring: In a distributed system, observability isn’t optional – it’s essential for maintaining sanity.

Is This Journey Worth It?

Absolutely, but only if you’re facing the problems that containers solve. If your current setup works fine and you’re not experiencing scaling or development velocity issues, there’s no rush to change.

However, if you’re struggling with slow deployments, difficulty scaling, or teams blocking each other’s progress, containers and microservices can be transformative. You’ll gain:

  • Faster development cycles
  • Independent scaling of different components
  • Better fault isolation (one broken service doesn’t kill everything)
  • Technology flexibility (different services can use different programming languages or databases)
  • Easier team organization and ownership

The journey from monolith to containers isn’t always smooth, but with careful planning and incremental execution, it’s entirely achievable. Every successful transformation starts with a single step, and every monolith has the potential to evolve into something better.

Your users will notice faster updates and more reliable service. Your developers will thank you for making their work more enjoyable and productive. And you’ll sleep better knowing that a problem in one part of your system won’t bring down the entire application.

The path forward is clear – it’s time to start containerizing.

Safeguarding Your Domain from Spam & Spoofing

Spam is something we all deal with. However, if you own the domain from which spam emails appear to originate, the consequences can be severe. Email spoofing is real, and every domain owner should take steps to ensure their domain is not exploited by spammers and scammers.

For business owners, it is crucial to implement basic security measures to prevent domain and email spoofing. Ensuring proper email authentication protocols are in place can help protect both your brand and your customers from malicious attacks.

Good writeups are available at the following links:

https://www.zoho.com/mail/help/adminconsole/spf-configuration.html

https://www.zoho.com/mail/help/adminconsole/dkim-configuration.html

https://www.zoho.com/mail/help/adminconsole/dmarc-policy.html

DKIM (DomainKeys Identified Mail)

DKIM is an email authentication method designed to detect forged sender addresses in email messages. It allows an organization to sign its outgoing emails with a cryptographic signature, which receiving mail servers can verify using the sender’s public key published in the domain’s DNS records.

How DKIM Works

  1. Signing Emails: The sending mail server generates a unique DKIM signature using a private key and embeds it in the email’s header.
  2. Publishing the Public Key: The domain owner publishes the corresponding public key as a TXT record in the domain’s DNS.
  3. Verifying Emails: The recipient’s mail server retrieves the public key from DNS and validates the email’s DKIM signature. If the signature is valid, the email is considered authentic.
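You can inspect any domain’s published DKIM key yourself. The selector varies by provider, so “selector1” below is only a placeholder:

dig TXT selector1._domainkey.example.com +short   # prints the public-key TXT record, if published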

SPF (Sender Policy Framework)

SPF is an email authentication protocol that helps prevent email spoofing by specifying which mail servers are authorized to send emails on behalf of a domain. It works by allowing domain owners to publish a TXT record in their DNS settings, listing the mail servers permitted to send emails using their domain.

How SPF Works

  1. DNS Record Setup: The domain owner publishes an SPF TXT record in their DNS settings, specifying allowed mail servers.
  2. Email Transmission: When an email is sent, the recipient’s server queries the sender’s domain for the SPF record.
  3. Verification: The recipient’s server checks if the sending server’s IP address matches the authorized list in the SPF record.
  4. Pass or Fail Decision: If the email comes from an authorized server, it is accepted. Otherwise, it may be rejected or marked as spam.
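An SPF record is a single TXT entry on the domain itself. An illustrative example (the include target is a placeholder for your provider’s published SPF domain):

example.com.  IN TXT  "v=spf1 mx include:_spf.mailprovider.example ~all"

Here mx authorizes the domain’s own mail servers, include pulls in the provider’s list, and ~all soft-fails everything else.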

DMARC (Domain-based Message Authentication, Reporting, and Conformance)

DMARC is an email security protocol that builds upon SPF and DKIM to prevent email spoofing. It provides domain owners with visibility into email activity and enforces policies to reject or quarantine unauthorized emails.

How DMARC Works

  1. Email Authentication: DMARC relies on SPF and DKIM to authenticate emails. The recipient server checks if the sender’s domain has valid SPF and/or DKIM signatures.
  2. Policy Enforcement: Based on the domain’s DMARC policy (none, quarantine, or reject), the receiving server determines how to handle unauthenticated emails:
    • p=none: The email is delivered normally, but reports are generated.
    • p=quarantine: Suspicious emails are sent to the spam folder.
    • p=reject: Unauthenticated emails are rejected outright.
  3. Reporting Mechanism: DMARC provides reports (rua for aggregate reports, ruf for forensic reports) to help domain owners monitor email authentication activity and detect unauthorized usage.
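Tying it together, a starter DMARC policy is again one TXT record, published under the _dmarc label (domain and report address illustrative):

_dmarc.example.com.  IN TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"

Begin with p=none to collect reports, then move to quarantine and finally reject once the reports confirm all legitimate mail passes.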

Developer productivity tools for iPad

I have tried doing development on my iPad and found some tools that I liked:
Python – Pythonista
Java – Jedona

I was able to write some interesting programs on the go, on both my iPhone and especially on my iPad. However, I have realised that for better productivity it is best to use a remote desktop solution to log into my Windows machine and a VNC solution to log into my MacBook.

Windows Mobile App (Previously called RD Client)
RealVNC Client

I have a static IP for my home network, but one can easily use a service like noip.com to get a static domain name working via the DDNS client built into most home routers. If the router does not support a VPN server, I configure a port forward on the router to RDP into my Windows machine or VNC into my MacBook. For better security I have set up a VPN server on my router and connect to my home network over VPN, effectively eliminating the need to open or forward ports.

Using the above approaches I no longer have to carry my laptops around; I do some work directly on the iPad and the rest via remote connections to my computers back home.

To manage my personal Linux servers in the cloud I use Blink. Combined with Mosh, I have found Blink to be a solid solution for SSHing into my Linux boxes and getting work done.

Blink
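Mosh is what makes this workable on a tablet: the session survives network switches and the iPad suspending the app. Invocation has the same shape as SSH (host name illustrative):

mosh user@my-cloud-box   # reconnects transparently after roaming or sleep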

This type of setup has greatly reduced the amount of hardware I need to carry around. With one iPad and an iPhone I can work on most of my hobby projects and also do some of my office work.

Converting Chrome HAR file to CSV

It is sometimes easier to export a Chrome HAR file to CSV so that it can be opened in Excel. Once there, it is really easy to do further analysis, even on rather large HAR dumps.

The following solution is based on this Stack Overflow answer: https://stackoverflow.com/questions/31648078/how-to-import-har-file-to-excel

Install jq on Mac.

brew install jq

Run this command.

cat some_file.har | jq '[ "URL", "Time", "Wait time", "Status", "Body size","Content-Type", "Content-Encoding"],
    (.log.entries[] | [
        .request.url,
        .time,
        .timings.wait,
        .response.status,
        .response.content.size,
        .response.content.mimeType,
        .response.content.encoding
]) | @csv' | sed 's/\\"//g' | sed 's/"//g' > some_file.csv

This results in a CSV file which you can easily open in Excel.
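If you would rather skip the sed cleanup, jq’s -r flag emits the CSV raw while preserving proper field quoting, which Excel parses fine. The same query, with that flag:

jq -r '["URL", "Time", "Wait time", "Status", "Body size", "Content-Type", "Content-Encoding"],
    (.log.entries[] | [
        .request.url,
        .time,
        .timings.wait,
        .response.status,
        .response.content.size,
        .response.content.mimeType,
        .response.content.encoding
]) | @csv' some_file.har > some_file.csv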

To extract the URL’s path minus the file name, add a column using the following Excel formula:

=LEFT(A2,LEN(A2)-LEN(RIGHT(A2,LEN(A2)-FIND("@",SUBSTITUTE(A2,"/","@",LEN(A2)-LEN(SUBSTITUTE(A2,"/",""))),1))))