Category Archives: AI

When Googling My Own Product Sent Visitors to a Prayer App: A Debugging Story With My AI Pair

The setup that broke my Friday afternoon

I was checking my own SEO. I typed “quizwrap” into Google. My site, QuizWrap — a free quiz-maker for students — showed up as the very first result. Great.

I clicked it.

A Ho’oponopono prayer counter loaded.

That’s a completely different app I run on the same server, and visitors looking for QuizWrap were landing on it instead. Worse, I quickly noticed a related issue: visiting https://smartdisha.co.in/ directly threw a TLS certificate error in the browser.

Two bugs, both on the same VPS, both involving the nginx reverse proxy that fronts everything. I sat down with Claude (via Claude Code, Anthropic’s terminal coding agent) and we dug in together. What follows is the story of that debugging session — both the technical findings and what it was like to pair-debug with an AI.


The architecture (and a quick glossary)

A quick mental model so the rest of this makes sense.

A single VPS hosts three sites behind one system nginx — a popular web server that, in this setup, acts as a reverse proxy: a traffic cop sitting in front that takes incoming HTTPS requests and forwards them to the right internal app.

  • quizwrap.com — my quiz app
  • prayer.quizwrap.com — a small prayer counter
  • smartdisha.co.in — a separate site on the same box

Some traffic flows through a CDN before reaching origin, some doesn’t. Each domain has its own free Let’s Encrypt TLS certificate (the thing that makes the little padlock icon appear in your browser), and nginx is configured with one server block per domain.

A few terms I’ll keep using:

  • TLS — the encryption layer behind HTTPS. The “S” in HTTPS.
  • Certificate — a small file that proves a server owns the domain it claims to. Browsers reject the connection if the cert doesn’t match the domain.
  • SNI (Server Name Indication) — the most important term in this whole post. When your browser opens a TLS connection to smartdisha.co.in, it whispers the hostname it wants before the encryption is set up, so the server knows which certificate to present. One server can host many domains on the same IP, and SNI is how it picks the right cert. If SNI says one thing and the server returns the wrong cert, the browser shows a security warning and refuses to load the page.
  • Server block — nginx’s term for “the config chunk that handles requests for one domain.” Each domain has one (or several).
  • server_name directive — the line inside a server block that lists which hostnames that block is responsible for. If no block claims a hostname, nginx silently picks a default block as a fallback.

Bug #1: www.quizwrap.com was serving the prayer app

The detective work

Before touching anything, Claude pulled response headers from both URLs in parallel:

curl -sI https://www.quizwrap.com/
curl -sI https://smartdisha.co.in/

Two response bodies came back with identical fingerprints:

                 www.quizwrap.com        smartdisha.co.in
ETag             "69d2087a-332"          "69d2087a-332"
Content-Length   818                     818
Last-Modified    same date               same date
Title in body    Ho'oponopono Counter    Ho'oponopono Counter

(An ETag is a unique fingerprint a web server attaches to a file’s response — like a checksum. Two responses with the same ETag are byte-for-byte the same file.)

Same file, served to two different domains. Now we knew it was an nginx routing question, not a DNS or CDN issue.
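If you want to script that check, the idea fits in a few lines of Python (a toy comparison over header dicts, not a real HTTP client — the header values below are the ones from the curl probes above):

```python
def same_file(a: dict, b: dict) -> bool:
    """Two responses whose ETag and Content-Length both match are,
    for all practical purposes, the same file served twice."""
    return a.get("ETag") is not None and (
        a.get("ETag") == b.get("ETag")
        and a.get("Content-Length") == b.get("Content-Length")
    )

quizwrap_www = {"ETag": '"69d2087a-332"', "Content-Length": "818"}
smartdisha = {"ETag": '"69d2087a-332"', "Content-Length": "818"}
print(same_file(quizwrap_www, smartdisha))  # → True
```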

Reading the configs over SSH

I had Claude SSH into my server (passwordless key auth — read-only operations, no sudo) and dump the three nginx configs. The first thing it spotted:

# /etc/nginx/sites-available/quizwrap.com
server {
    server_name quizwrap.com;
    ...
}

server_name quizwrap.com — not quizwrap.com www.quizwrap.com. There was no server block anywhere on the box claiming www.quizwrap.com. When a request arrived at my server saying “this is for www.quizwrap.com,” nginx had no rule that named that hostname, so it fell back to the first SSL block in alphabetical order — the one for prayer.quizwrap.com, which is what serves the prayer app.

That’s how a Google click on www.quizwrap.com ended up rendering Ho’oponopono. nginx was doing exactly what it was told; what it was told just didn’t include the www version of my domain.
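The fallback behavior is easy to model (a deliberately simplified sketch of nginx’s dispatch, assuming alphabetical default selection as described above — not nginx’s real algorithm):

```python
def pick_block(host: str, blocks: dict) -> str:
    """Simplified model: serve the block whose server_name matches the
    requested host; if no block claims it, silently fall back to the
    alphabetically-first block (the order sites-enabled/ loads them)."""
    return blocks.get(host) or blocks[sorted(blocks)[0]]

blocks = {
    "prayer.quizwrap.com": "prayer app",
    "quizwrap.com": "quiz app",
    "smartdisha.co.in": "smartdisha site",
}
print(pick_block("quizwrap.com", blocks))      # → quiz app
print(pick_block("www.quizwrap.com", blocks))  # → prayer app (nobody claims www)
```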

The fix

A one-liner:

sudo sed -i 's/server_name quizwrap.com;/server_name quizwrap.com www.quizwrap.com;/' \
  /etc/nginx/sites-available/quizwrap.com
sudo nginx -t && sudo systemctl reload nginx

A test confirmed it:

HTTP/2 200
last-modified: Sun, 30 Nov 2025 15:42:27 GMT   ← quizwrap build, not the prayer one
<title>QuizWrap - FREE Study Quiz Maker for Students</title>

Then a defensive follow-up: re-issue the Let’s Encrypt cert to cover the www hostname too, so the certificate matches every name the server block now answers for. (A single cert can list multiple hostnames in a field called the Subject Alternative Name, or SAN — that’s just “the list of domains this cert is valid for.”) One certbot command added www.quizwrap.com to the cert. Done.

Bug #1: 5 minutes from “what is happening” to “fixed.”

Bug #2 was not like that.


Bug #2: smartdisha.co.in and the certificate that wouldn’t come right

The symptom

Browsers refused https://smartdisha.co.in/ with a cert error. openssl s_client showed why:

$ echo | openssl s_client -servername smartdisha.co.in -connect smartdisha.co.in:443 2>/dev/null \
    | openssl x509 -noout -subject -ext subjectAltName

subject=CN = prayer.quizwrap.com
DNS:prayer.quizwrap.com

The browser asked for smartdisha.co.in (via SNI), and the server handed back a certificate that says “I’m prayer.quizwrap.com.” That’s a name mismatch, so the browser refuses the connection — you’ve probably seen the resulting “Your connection is not private” error page. At first I thought the fix was going to be just as quick as the www one.

It wasn’t.

Two hours of dead ends

Here’s the parade of “that should have fixed it”:

  1. Re-issue the cert? sudo certbot --nginx -d smartdisha.co.in — certbot reported there was an existing cert and offered to reinstall. Reinstalled. No change. Browser still got prayer’s cert.
  2. Maybe nginx didn’t reload cleanly. sudo systemctl reload nginx. No change.
  3. Inspect the cert file directly.
    sudo openssl x509 -in /etc/letsencrypt/live/smartdisha.co.in/fullchain.pem \
        -noout -subject -ext subjectAltName
    subject=CN = smartdisha.co.in
    DNS:smartdisha.co.in

    The file on disk was correct. nginx just wasn’t serving it.

  4. Maybe the workers cached an old cert. sudo systemctl restart nginx. No change.
  5. Check nginx -T for the loaded config. The smartdisha SSL block was fully loaded, with the right server_name, the right listen 443 ssl;, and the right cert path. Everything looked correct. Still no change.

At one point I checked ps and noticed three nginx master processes — two with nginx -g daemon off; (the Docker-container telltale) and one system nginx. Claude initially flagged this as the smoking gun: maybe a Docker container was intercepting TLS. We confirmed via ss -tlnp that the system nginx was actually the only thing on port 443; the Docker nginxes were just internal app servers behind it. Wrong turn — but a reasonable one.

My moment of skepticism

I sent Claude a screenshot of my DNS panel with the message:

“Before we go chase our tail. Check the configuration attached.”

This was the right instinct. I was tired of theories that weren’t panning out. Stepping back to verify a load-bearing assumption — is the request path for this domain actually what we think it is? — confirmed we were looking at the right place, but it could just as easily have caught us going the wrong way for another hour.

Lesson: when you’re three theories deep and none have stuck, your AI assistant doesn’t always notice it’s in a loop. Pushing back is your job.

The breakthrough: probing SNI directly

Claude wrote a small loop that asks nginx, in effect: “If a browser tells you it wants hostname X, which certificate do you hand back?” It does this once for each domain on the box.

ssh my-server 'for sni in <each-hostname-on-the-box>; do
  printf "SNI=%-30s -> " "$sni"
  echo | openssl s_client -servername "$sni" -connect localhost:443 2>/dev/null \
    | openssl x509 -noout -subject 2>/dev/null
done'
SNI=smartdisha.co.in           -> CN = prayer.quizwrap.com    ❌
SNI=www.quizwrap.com           -> CN = quizwrap.com            ✓
SNI=quizwrap.com               -> CN = quizwrap.com            ✓
SNI=prayer.quizwrap.com        -> CN = prayer.quizwrap.com     ✓
SNI=nonexistent.example.com    -> CN = prayer.quizwrap.com     (default fallback)

There it was. smartdisha.co.in was being treated identically to a totally unknown hostname. It wasn’t a cert problem at all — the cert file on disk was perfectly fine. nginx just wasn’t recognizing smartdisha.co.in as a hostname it knew about. Both unknown hostnames and smartdisha.co.in fell through to the same default fallback block (prayer, which is alphabetically first), which is why both got prayer’s cert.

The actual root cause

With that clue, Claude re-read all three nginx configs side-by-side and found the only structural difference:

Block        IPv6 listen                            IPv4 listen
prayer       listen [::]:443 ssl ipv6only=on;       listen 443 ssl;
quizwrap     listen [::]:443 ssl;  (dual-stack)     listen 443 ssl;
smartdisha   — missing —                            listen 443 ssl;

A bit of background to read that table: every server on the internet has two kinds of addresses available — older IPv4 (the familiar 1.2.3.4 style) and newer IPv6 (the longer ::1 style). nginx’s listen directive tells it which addresses to accept connections on. listen 443 ssl; means “IPv4 only.” listen [::]:443 ssl; means “IPv6,” but on Linux it can also quietly accept IPv4 connections at the same time — that’s what “dual-stack” means.

quizwrap’s listen [::]:443 ssl; (without ipv6only=on) creates one of these dual-stack sockets. Internally, nginx groups server blocks by which socket they’re attached to, and uses that grouping to decide who handles each incoming connection. smartdisha, lacking any IPv6 listen line of its own, ends up in a different group than the dual-stack one, and inside that group the prayer block (alphabetically first) becomes the default catch-all. Even though smartdisha’s server block is loaded and looks correct, the grouping means SNI lookups for smartdisha.co.in arrive at a group where smartdisha isn’t listed — and fall back to prayer.

Subtle, weird, and exactly the kind of thing nginx -t (the config syntax checker) won’t catch, because the syntax is fine.

The fix

Make smartdisha’s listen directives match the others:

sudo sh -c '
  cp /etc/nginx/sites-available/smartdisha.co.in /etc/nginx/sites-available/smartdisha.co.in.bak
  sed -i "/^    listen 443 ssl; # managed by Certbot$/i\\    listen [::]:443 ssl;" \
    /etc/nginx/sites-available/smartdisha.co.in
  nginx -t && systemctl reload nginx && echo DONE
'

Re-running the SNI probe afterwards:

SNI=smartdisha.co.in           -> CN = smartdisha.co.in        ✓

curl https://smartdisha.co.in/ succeeded with full TLS validation, no -k flag needed. The browser was happy.


What it was actually like to debug this with an AI

A few things stood out about the collaboration that I want to share.

Claude was great at the things I’m bad at. It pulled response headers from two domains in parallel, parsed cert subjects out of openssl s_client output, and noticed immediately that two responses had the same ETag — something I’d have read past. The structured diff between three nginx configs at the end (the listener-table comparison) was exactly the kind of thing my eyes glaze over after the second config file.

I was great at the things Claude is bad at. When we got stuck on Bug #2, Claude proposed three theories in a row, each plausible, none correct. The Docker-container theory in particular was a confidently-stated wrong answer. I knew that side of my own infrastructure well enough to be unimpressed. My push-back (“before we chase our tail”) was what reset the direction.

Security boundaries actually got enforced. When I offered Claude my sudo password to speed things up, it explicitly refused and explained why (the password would be in the chat transcript, in shell process listings, and a single leak compromises the whole server). It walked me through the alternatives — running the destructive commands myself in my own terminal, or scoping a passwordless sudoers rule for nginx-related commands only. Reading the full advice, I ended up just running each sudo command in my own shell and pasting the result. Slower, but at no point did a privileged credential cross a boundary it shouldn’t.

Transparency mattered. Halfway through Bug #2 I told Claude “I can’t see the commands you’re executing on my server.” It immediately listed every SSH command it had run and committed to printing each new command before executing it. That changed the dynamic — it stopped feeling like Claude was off doing things in the dark and started feeling like a teammate sharing their screen.

Knowing when to escalate to a one-shot fix. After multiple roundtrips of “paste this, paste that,” I asked Claude to drive over SSH so I could stop copy-pasting. It moved the read-only diagnostics to its own SSH connection and packaged the one mutating step into a single sudo block I could paste once and approve once. The friction of the back-and-forth dropped massively.


Lessons that generalise

A few things I’m taking away from this:

  1. Identical ETags across two domains = the same file is being served. If two of your sites unexpectedly look the same, that single header probably solves the mystery before you read a line of config.
  2. server_name is a registration, not just a label. If a hostname isn’t named in any block, nginx won’t error — it’ll silently pick a default and serve someone else’s content.
  3. nginx -t passing means valid syntax. It does not mean what you intended. All three configs in this story passed nginx -t with no warnings while half-broken.
  4. Mixing listen [::]:443 ssl; (dual-stack) and listen 443 ssl; (IPv4-only) across server blocks is a footgun. Either go all-dual-stack or all-with-ipv6only=on. Mixing changes the listener topology in ways that affect SNI dispatch.
  5. The openssl s_client -servername X -connect Y:443 probe is a debugging superpower. It’s a one-line command that simulates exactly what a browser does — say “I want hostname X” via SNI, and see which certificate the server returns. Whenever an HTTPS-served domain is misbehaving, this probe will often tell you the answer in five lines.
  6. Pair-debugging with an AI works best when you stay in the loop. Treat its theories as drafts, not conclusions. Push back when you smell drift. Make it show its work.

The whole session was somewhere between two and three hours. By the end my SEO problem was gone, my secondary domain’s TLS was clean, and I had a much better mental model of how nginx makes SNI decisions across mixed-listener configurations. Worth the afternoon.


Total commands run on the server during this session: about 30. Total commands run with sudo: 5. Total credentials shared with the AI: zero.

How I Built a 123,000 LOC Enterprise Platform in 4.4 Months as a Solo Developer

A deep dive into productivity gains, lessons learned, and the numbers behind building an enterprise analytics platform


The Challenge

In August 2025, I started building an enterprise analytics and governance platform from scratch. The scope was ambitious: natural language SQL queries, semantic search, row-level security, column-level security, attribute-based access control, dashboard builders, and more.

Traditional estimates suggested this would take 5.5 years with a single developer, or require a 12-person team working for several months.

I delivered it in 4.4 months. Solo.

Here’s the breakdown.


By The Numbers

What Was Delivered

Metric               Value
Lines of Code        123,430
Stories Completed    100
Epics                14 (13 completed, 1 planned)
Microservices        6
Development Period   August 7 – December 21, 2025

The Math: Traditional vs Actual

Traditional Solo Developer Estimate:

Story Points: 1,584 SP (all 14 epics)
Velocity: 12 SP per 2-week sprint (industry average)
Sprints Required: 1,584 ÷ 12 = 132 sprints
Timeline: 132 sprints × 2 weeks = 264 weeks = 66 months = 5.5 years

What Actually Happened:

Actual Effort: 211 SP (complexity-adjusted)
Velocity: 28 SP per 2-week sprint
Sprints Required: 211 ÷ 28 = 7.5 sprints
Timeline: 7.5 sprints × 2 weeks = 15 weeks ≈ 4 months

The Multipliers

  • Base Velocity: 2.3x faster (28 SP vs 12 SP per sprint)
  • Complexity Reduction: 7.5x (1,584 SP → 211 SP actual effort)
  • Overall Timeline: 15x faster than traditional estimates
  • Team Equivalence: Delivered what would traditionally require a 12-person team
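The arithmetic behind those multipliers can be checked in a few lines (figures copied from the numbers above; the only thing added here is the rounding):

```python
# Traditional solo-developer estimate
traditional_sprints = 1584 / 12               # story points / velocity = 132 sprints
traditional_weeks = traditional_sprints * 2   # 264 weeks
traditional_months = traditional_weeks / 4    # 66 months = 5.5 years

# What actually happened
actual_sprints = 211 / 28                     # ≈ 7.5 sprints
actual_weeks = actual_sprints * 2             # ≈ 15 weeks

# The multipliers
velocity_gain = 28 / 12                       # ≈ 2.3x
complexity_gain = 1584 / 211                  # ≈ 7.5x
timeline_gain = 66 / 4.4                      # 15x overall
```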

What Did I Build?

The platform consists of 6 microservices plus a React frontend:

Frontend: Web UI (75,991 LOC) – React 19 with Dashboard Builder V1 & V2, 30+ widget types. Compiled into static files and served by the Service Layer.

Backend Microservices:

1. Service Layer (17,088 LOC) – Spring Boot 3.2 with OAuth2 authentication
2. Core API (11,600 LOC) – FastAPI with cloud data warehouse gateway and caching
3. NL-to-SQL Engine (7,250 LOC) – Natural language to SQL engine with LLM integration
4. Data Firewall (6,561 LOC) – SQL-level security with RLS, CLS, and ABAC
5. Semantic Search (3,263 LOC) – Semantic search with FAISS vector database
6. AI Integration Layer (1,677 LOC) – MCP protocol server for AI tool integration


Comparison with Traditional Development

Aspect             Traditional              Actual              Improvement
Planning           2-3 weeks per epic       2-3 days per epic   7-10x faster
Implementation     50-60 weeks              16-18 weeks         3x faster
Testing            8-10 weeks               2-3 weeks           3-4x faster
Documentation      4-6 weeks                1 week              4-6x faster
Overall Timeline   66 months (5.5 years)    4.4 months          15x faster
Team Size          12 developers            1 developer         92% reduction

Success Factors

1. Architectural Excellence

I spent significant time upfront on architecture. The microservices approach wasn’t just about scalability—it was about cognitive load management.

Key architectural decisions:

  • 6 independent, scalable microservices – Each service could be developed, tested, and deployed independently
  • API-First Design – Clear contracts with OpenAPI documentation enabled parallel development of frontend and backend
  • Clean Separation of Concerns:
    • Data Firewall for ALL security (RLS/CLS/ABAC)
    • Core API for ALL data warehouse interactions
    • Service Layer for ALL authentication
  • Technology Fit – Python for SQL parsing, Java for auth, React for UI – each technology chosen for what it does best

No overlap. No confusion. When debugging, I always knew exactly which module to look at.

2. Development Best Practices

  • Incremental Delivery: 100 stories over 132 days = 0.76 stories per day. No big bang releases—every day something shipped.
  • Git Discipline: 945+ commits over 132 days = 7+ commits per day. Small, focused commits with clear messages.
  • Security First: OAuth2, RBAC, RLS, CLS, ABAC built into the foundation from day one—not bolted on later.
  • Documentation: Complete JIRA stories and technical documentation maintained throughout development.

3. Technology Choices

I chose technologies based on what they were best at, not what was trendy:

  • sqlglot (SQL parsing) – best SQL parsing library available; Python-only, which drove Data Firewall’s language choice
  • React Grid Layout (Dashboard V2) – proven, battle-tested library for drag-and-drop grid layouts
  • AG Grid (data tables) – enterprise-grade data grid with sorting, filtering, pagination out of the box
  • FAISS (vector search) – Facebook’s library for efficient similarity search; enables local RAG without external APIs
  • FastMCP (AI tool integration) – MCP protocol server for connecting AI tools like Cursor and Claude Desktop
  • Spring Security (authentication) – battle-tested for enterprise OAuth2/JWT flows

4. AI-Assisted Development

A significant productivity multiplier came from AI coding assistants. I used multiple tools depending on the task:

  • Claude Code (complex refactoring, architecture decisions, multi-file changes) – deep understanding of codebase context; handled intricate cross-service changes
  • Cursor (day-to-day coding, quick implementations) – fast inline completions; excellent for iterating on UI components
  • GitHub Copilot (boilerplate code, repetitive patterns) – accelerated writing of tests, DTOs, and standard CRUD operations
  • Windsurf (code exploration, understanding unfamiliar code) – helpful for navigating large codebases and understanding dependencies

The key wasn’t replacing thinking with AI—it was offloading the mechanical work. Architecture decisions, security design, and debugging complex issues still required human judgment. But writing boilerplate, generating test cases, and implementing well-defined patterns? AI tools handled those efficiently.

This combination reduced the “typing overhead” and let me focus on the hard problems: SQL injection prevention in the Data Firewall, cascading variable resolution, and JWT token propagation across services.


What Worked Exceptionally Well

  1. Clear Architecture from Start – Well-defined module boundaries enabled focused development and easier debugging. I never had to wonder “where does this code belong?”

  2. Microservices Done Right – Independent modules allowed me to work on one service without breaking others. Each service had its own repository, its own tests, its own deployment.

  3. Incremental Approach – Building features incrementally with regular testing reduced risk dramatically. When something broke, I knew it was in the last day’s work.

  4. Git Discipline – 945+ commits provided a clear development history. Git bisect became invaluable for tracking down issues.

  5. Technology Fit – Choosing the right tool for each job paid dividends:

    • Python + sqlglot for SQL parsing (no equivalent in Java/JavaScript)
    • Java + Spring Security for enterprise auth
    • React + TypeScript for type-safe UI development
  6. API-First Design – Defining API contracts early meant frontend and backend could be developed in parallel. No waiting for the other side to be “ready.”


Challenges Overcome

Not everything was smooth sailing. Here’s what was genuinely hard:

SQL Parsing Complexity

Building a Data Firewall that could inject WHERE clauses into arbitrary SQL while handling nested queries, CTEs, and JOINs was the hardest technical challenge. sqlglot’s learning curve was steep, but once mastered, it was incredibly powerful.

Cascading Variables

Dashboard variables that depend on other variables require topological sorting to resolve in the correct order. A user selects “Region” → that filters “Market” → that filters “Store”. Getting the dependency resolution right took multiple iterations.
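Python’s standard library ships exactly this kind of dependency resolution in graphlib. A sketch with the Region → Market → Store chain from the example (the variable names and dependency map are illustrative, not the platform’s actual code):

```python
from graphlib import TopologicalSorter

# Each dashboard variable maps to the set of variables it depends on.
deps = {
    "Region": set(),       # top of the chain, no dependencies
    "Market": {"Region"},  # filtered by the selected Region
    "Store": {"Market"},   # filtered by the selected Market
}

# static_order() yields variables in a safe resolution order,
# and raises CycleError if someone configures a circular dependency.
order = list(TopologicalSorter(deps).static_order())
print(order)  # → ['Region', 'Market', 'Store']
```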

Per-Instance Widget Caching

The dashboard builder allows multiple instances of the same widget with different configurations. Architecting efficient per-instance caching with UUID tracking while maintaining cache coherence was tricky.
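One way to build such keys (a hypothetical helper, not the platform’s actual code): combine the widget type, the instance UUID, and a stable hash of the instance’s configuration, so two instances of the same widget type can never collide:

```python
import hashlib
import json

def widget_cache_key(widget_type: str, instance_id: str, config: dict) -> str:
    """Per-instance cache key: widget type + instance UUID + a stable
    hash of the instance's configuration (sort_keys makes it stable)."""
    config_hash = hashlib.sha256(
        json.dumps(config, sort_keys=True).encode("utf-8")
    ).hexdigest()[:12]
    return f"{widget_type}:{instance_id}:{config_hash}"

a = widget_cache_key("bar_chart", "uuid-1", {"metric": "sales"})
b = widget_cache_key("bar_chart", "uuid-2", {"metric": "sales"})
print(a != b)  # → True: same widget type, separate cache entries
```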

JWT Token Flow

Propagating user context from the frontend → Spring Boot → Data Firewall → Core API—while maintaining security at each hop—required careful architecture. Each service needed to validate and forward the JWT correctly.
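The invariant at each hop can be stated in a few lines (a hypothetical helper; the real services also validate the JWT’s signature and claims, which this sketch skips):

```python
def forward_auth(incoming_headers: dict) -> dict:
    """Reject requests without a bearer token, then forward ONLY the
    Authorization header downstream; nothing else leaks across the hop."""
    token = incoming_headers.get("Authorization", "")
    if not token.startswith("Bearer "):
        raise PermissionError("missing or malformed bearer token")
    return {"Authorization": token}

outgoing = forward_auth({"Authorization": "Bearer abc.def.ghi", "Cookie": "session=x"})
print(outgoing)  # the Cookie header is deliberately dropped at the hop
```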

Full-Stack Coordination

Keeping 6 microservices plus a React frontend in sync across Spring Boot and multiple Python services was a constant balancing act. A breaking change in one service could cascade.

Multi-Environment Management

Managing dev/staging/prod configurations across all modules, with different OAuth providers, database credentials, and data warehouse projects, required disciplined configuration management.

Security Compliance

Meeting security scanning requirements meant addressing vulnerabilities as they were found, not deferring them. This added overhead but resulted in a more secure codebase.


Future Improvements

No project is ever truly “done.” Here’s what I’d tackle next:

  1. Automated E2E Testing – Expand test coverage with Selenium or Playwright. I relied too heavily on manual testing.

  2. Performance Monitoring – Implement Prometheus/Grafana observability. Currently, debugging performance issues requires digging through logs.

  3. Distributed Caching – Each service has its own cache. A shared Redis layer would improve consistency and reduce duplicate data.

  4. Dashboard V2 Enhancements – Additional widget types, more templates, and improved drag-and-drop UX.

  5. Advanced Analytics – Epic 11 (the one planned epic) covers scheduled reports, data exports, and executive dashboards.


The Takeaway

The numbers are real. 123,430 lines of code. 100 stories. 4.4 months. Solo.

But the numbers don’t tell the whole story. What made this possible wasn’t superhuman coding speed—it was:

  1. Clear architecture that reduced cognitive load
  2. Right technology choices for each problem
  3. Disciplined incremental delivery
  4. Security built in from day one
  5. Relentless focus on what mattered

The productivity multipliers compound. A 2.3x velocity improvement and a 7.5x complexity reduction don’t just add up to 9.8x; they multiply, which is how the overall timeline came out 15x faster than the traditional estimate.

Could I do it again? On a different project, with different constraints? Maybe. The principles would transfer. The specific numbers might not.

But one thing I know for sure: the traditional estimates of 5.5 years or 12-person teams aren’t wrong — they’re based on how software was typically built.

The Complete AI App Guide for 2025: Essential Tools for IT Professionals


Artificial Intelligence is reshaping every layer of software development, from coding and testing to documentation and deployment. This guide curates the most impactful AI applications of 2025, with a special focus on tools that elevate an IT professional’s daily workflow.

Software Development & Programming

Cursor

AI-first code editor that understands your entire project context, offers natural-language refactors, and ships with multi-model support.

GitHub Copilot

Your AI pair programmer for instant code completions, chat-based explanations, and automated tests.

Replit AI Agent

Describe an idea in plain English and watch Replit spin up a working web or mobile app, complete with hosting.

Codeium

Fast, free autocomplete that plugs into 70+ languages and every major IDE.

Anychat

Unified chat interface where you can swap between multiple AI models mid-conversation.

Claude Code

Terminal-native assistant for deep codebase understanding and cross-file edits.

General AI Assistants

  • Perplexity – instant, cited answers for technical research.
  • Claude – long-context reasoning and collaborative project chat.
  • ChatGPT – versatile chatbot with voice mode and strong coding skills.

Productivity & Workflow

  • Granola – turns meetings into structured notes, action items, and summaries.
  • Wispr Flow – system-wide voice dictation that works in any app.
  • Gamma – generate slide decks, documents, or one-page sites from prompts.
  • Adobe AI Assistant – chat with long PDFs, contracts, or manuals.
  • Cubby – collaborative research workspace with built-in AI search.
  • Cora – inbox triage and auto-reply generation for email power users.
  • Lindy – no-code builder for custom AI agents that automate routine tasks.
  • Notion AI – smarter docs, wikis, and databases with in-line generation.

Content Creation & Video

  • HeyGen – realistic AI avatars for tutorials, product demos, and localization.
  • Delphi – voice, video, and text clones for audience engagement.
  • Argil – quick social-media videos featuring AI hosts.
  • Opus – auto-splits long videos into shareable viral clips.
  • Persona – build AI agents that reflect your personal brand.
  • Captions – automatic subtitles, eye-contact correction, and AI presenters.

Creative Tools

  • ElevenLabs – ultra-realistic, multilingual text-to-speech voices.
  • Suno & Udio – compose full songs from a written prompt.
  • Midjourney, Ideogram, Playground – high-quality image generation suites.
  • Runway, Kling, Viggle – next-gen video generation platforms.
  • Krea – canvas for mixing and remixing AI images or clips.
  • Photoroom – one-click product shots, background removal, and batch edits.

Learning & Personal Development

  • Rosebud – interactive journaling with data-backed insights.
  • Good Inside – parenting advice with personalized AI support.
  • Ada Health – symptom assessment and health guidance.
  • Ash – AI-based mental-health coach.
  • NotebookLM – convert any document into an AI-driven podcast.
  • Particle – bite-sized news summaries with source links.

Entertainment & Fun

  • Remix – social platform for sharing AI-generated art and video.
  • Meta Imagine – create playful AI avatars inside Meta apps.
  • Grok – chat companion with a sense of humor from xAI.
  • Curio – interactive toys powered by AI voices.

Getting Started: A Six-Week Integration Roadmap

  1. Weeks 1–2 – Lay the Foundation: Install Cursor or GitHub Copilot in your IDE, plus Codeium as a free backup. Set up ChatGPT and Claude for on-demand problem solving.
  2. Weeks 3–4 – Boost Productivity: Adopt Notion AI for project docs, leverage Adobe AI Assistant for reading specs, and generate slides with Gamma.
  3. Weeks 5–6 – Automate & Scale: Build Lindy agents for repetitive tasks, prototype ideas in Replit, and add professional narration with ElevenLabs.
  4. Beyond: Join AI developer communities, stay updated on new releases, and gradually expand your toolkit.

The future belongs to developers who treat AI as a creative partner, not a replacement. Choose one or two of these tools today, master them, and watch your productivity soar.

When Your AI Coding Assistant Becomes a Digital Wrecking Ball

A cautionary tale of Docker, environment variables, and why backup strategies exist

Yesterday, I had what I can only describe as the most educational disaster of my coding career. Picture this: I’m working on a beautiful microservices setup – three perfectly orchestrated containers (web, backend, core) each with their own well-defined .env files. Life is good. Docker containers are humming. Everything is chef’s kiss perfect.

Then I made the fatal mistake of asking my AI coding assistant: “Hey, can you help me move from host networking to docker-compose?”

The Descent Into Chaos

What followed was like watching a digital tornado in slow motion:

AI: “Sure! Let me create extensive documentation and docker-compose files!”
Me: “Wait, I just wanted a simple docker-compose setup…”
AI: “Here’s 47 markdown files and a management script!”
Me: “Please stop. Revert everything.”
AI: “No problem!” *accidentally deletes entire project structure*
Me: “…Did you just delete my web, backend, and core directories?”
AI: “Oops. 😅”

The Recovery Mission

After manually restoring everything from git (thank you, version control gods), I thought we were back on track. But then:

Me: “My development environment is broken. ‘vite: command not found’”
AI: “Let me fix that!” *creates duplicate environment variables*
Me: “Why do I have PORT=6080 AND APP_CORE_PORT=6080?”
AI: “Redundancy is good, right? No? Okay, let me fix it…” *creates more duplicates*

It was like watching someone try to fix a leaky faucet with a sledgehammer.

The Silver Lining

But here’s the plot twist – through all this chaos, we actually ended up with something beautiful:

  • Clean, consistent environment variable naming (APP_*_SERVER_PORT)
  • Proper fail-fast configuration (no silent defaults!)
  • A docker-compose setup that actually works
  • And most importantly: updated .env.example files for proper documentation
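That fail-fast rule is worth a snippet. A minimal sketch (the env-var name follows the APP_*_SERVER_PORT convention above; the helper itself is hypothetical):

```python
import os

def require_env(name: str) -> str:
    """Fail fast at startup: a missing required setting should crash
    immediately, not fall back to a silent default."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required environment variable: {name}")
    return value

# e.g. at service startup:
# port = int(require_env("APP_BACKEND_SERVER_PORT"))
```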

Lessons Learned

  1. Always have backups – Git saved my bacon more times than I can count
  2. AI assistants are powerful but not infallible – They’re like enthusiastic interns with superpowers
  3. Clear communication is key – “Simple docker-compose setup” apparently means “comprehensive enterprise documentation suite” in AI speak
  4. The journey matters – Sometimes you need to break everything to build it back better
  5. Naming conventions matter – APP_BACKEND_SERVER_PORT is way better than PORT_THING_MAYBE

Final Thoughts

Working with AI coding assistants is like having a brilliant colleague who occasionally decides to reorganize your entire codebase while you’re getting coffee. They mean well, they’re incredibly capable, but sometimes they have the subtlety of a rhinoceros in a china shop.

Would I do it again? Absolutely. But next time, I’m committing to git after every single change.

And yes, AI assistant, this blog post is about you. You know who you are. 😉


P.S. – To my fellow developers: Always backup your .env files. They might seem small and innocent, but losing them is like losing the keys to your digital kingdom.

P.P.S. – The docker-compose setup works perfectly now. Sometimes you need a little chaos to achieve order.