Category Archives: Programming

When Googling My Own Product Sent Visitors to a Prayer App: A Debugging Story With My AI Pair

The setup that broke my Friday afternoon

I was checking my own SEO. I typed “quizwrap” into Google. My site, QuizWrap — a free quiz-maker for students — showed up as the very first result. Great.

I clicked it.

A Ho’oponopono prayer counter loaded.

That’s a completely different app I run on the same server, and visitors looking for QuizWrap were landing on it instead. Worse, I quickly noticed a related issue: visiting https://smartdisha.co.in/ directly threw a TLS certificate error in the browser.

Two bugs, both on the same VPS, both involving the nginx reverse proxy that fronts everything. I sat down with Claude (Anthropic’s coding agent inside Claude Code) and we dug in together. What follows is the story of that debugging session — both the technical findings and what it was like to pair-debug with an AI.


The architecture (and a quick glossary)

A quick mental model so the rest of this makes sense.

A single VPS hosts three sites behind one system nginx — a popular web server that, in this setup, acts as a reverse proxy: a traffic cop sitting in front that takes incoming HTTPS requests and forwards them to the right internal app.

  • quizwrap.com — my quiz app
  • prayer.quizwrap.com — a small prayer counter
  • smartdisha.co.in — a separate site on the same box

Some traffic flows through a CDN before reaching origin, some doesn’t. Each domain has its own free Let’s Encrypt TLS certificate (the thing that makes the little padlock icon appear in your browser), and nginx is configured with one server block per domain.

A few terms I’ll keep using:

  • TLS — the encryption layer behind HTTPS. The “S” in HTTPS.
  • Certificate — a small file that proves a server owns the domain it claims to. Browsers reject the connection if the cert doesn’t match the domain.
  • SNI (Server Name Indication) — the most important term in this whole post. When your browser opens a TLS connection to smartdisha.co.in, it whispers the hostname it wants before the encryption is set up, so the server knows which certificate to present. One server can host many domains on the same IP, and SNI is how it picks the right cert. If SNI says one thing and the server returns the wrong cert, the browser shows a security warning and refuses to load the page.
  • Server block — nginx’s term for “the config chunk that handles requests for one domain.” Each domain has one (or several).
  • server_name directive — the line inside a server block that lists which hostnames that block is responsible for. If no block claims a hostname, nginx silently picks a default block as a fallback.
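That silent fallback is worth internalising, because it is the villain of both bugs below. Here is a sketch (hostnames illustrative, not my actual configs) of how nginx dispatches by server_name, and how an explicit default_server block makes the fallback visible instead of silent:

```nginx
# Sketch: nginx matches the requested hostname against server_name lists.
# A hostname no block claims falls through to the default — either the
# first block defined, or the one explicitly marked default_server.
server {
    listen 443 ssl default_server;   # catches hostnames no other block claims
    server_name _;
    return 444;                      # close the connection instead of serving another site
}

server {
    listen 443 ssl;
    server_name example.com www.example.com;
    # ... certificates and proxy_pass for this site ...
}
```

Without an explicit default_server, nginx quietly picks a default on its own — which is exactly how one app can end up answering for another domain.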

Bug #1: www.quizwrap.com was serving the prayer app

The detective work

Before touching anything, Claude pulled response headers from both URLs in parallel:

curl -sI https://www.quizwrap.com/
curl -sI https://smartdisha.co.in/

Two responses came back with identical fingerprints:

                   www.quizwrap.com        smartdisha.co.in
ETag               "69d2087a-332"          "69d2087a-332"
Content-Length     818                     818
Last-Modified      same date               same date
Title in body      Ho'oponopono Counter    Ho'oponopono Counter

(An ETag is a unique fingerprint a web server attaches to a file’s response — like a checksum. Two responses with the same ETag are byte-for-byte the same file.)

Same file, served to two different domains. Now we knew it was an nginx routing question, not a DNS or CDN issue.

Reading the configs over SSH

I had Claude SSH into my server (passwordless key auth — read-only operations, no sudo) and dump the three nginx configs. The first thing it spotted:

# /etc/nginx/sites-available/quizwrap.com
server {
    server_name quizwrap.com;
    ...
}

server_name quizwrap.com — not quizwrap.com www.quizwrap.com. There was no server block anywhere on the box claiming www.quizwrap.com. When a request arrived at my server saying “this is for www.quizwrap.com”, nginx had no rule that named that hostname, so it fell back to the first SSL block in alphabetical order — the one for prayer.quizwrap.com, which is what serves the prayer app.

That’s how a Google click on www.quizwrap.com ended up rendering Ho’oponopono. nginx was doing exactly what it was told; what it was told just didn’t include the www version of my domain.

The fix

A one-liner:

sudo sed -i 's/server_name quizwrap.com;/server_name quizwrap.com www.quizwrap.com;/' \
  /etc/nginx/sites-available/quizwrap.com
sudo nginx -t && sudo systemctl reload nginx

A test confirmed it:

HTTP/2 200
last-modified: Sun, 30 Nov 2025 15:42:27 GMT   ← quizwrap build, not the prayer one
<title>QuizWrap - FREE Study Quiz Maker for Students</title>

Then a defensive follow-up: re-issue the Let’s Encrypt cert to cover the www version too, so the cert chain stays internally consistent. (A single cert can list multiple hostnames in a field called the Subject Alternative Name, or SAN — that’s just “the list of domains this cert is valid for.”) One certbot command added www.quizwrap.com to the cert. Done.

Bug #1: 5 minutes from “what is happening” to “fixed.”

Bug #2 was not like that.


Bug #2: smartdisha.co.in and the certificate that wouldn’t come right

The symptom

Browsers refused https://smartdisha.co.in/ with a cert error. openssl s_client showed why:

$ echo | openssl s_client -servername smartdisha.co.in -connect smartdisha.co.in:443 2>/dev/null \
    | openssl x509 -noout -subject -ext subjectAltName

subject=CN = prayer.quizwrap.com
DNS:prayer.quizwrap.com

The browser asked for smartdisha.co.in (via SNI), and the server handed back a certificate that says “I’m prayer.quizwrap.com.” That’s a name mismatch, so the browser refuses the connection — you’ve probably seen the resulting “Your connection is not private” error page. At first I thought the fix was going to be just as quick as the www one.

It wasn’t.

Two hours of dead ends

Here’s the parade of “that should have fixed it”:

  1. Re-issue the cert? sudo certbot --nginx -d smartdisha.co.in — certbot reported there was an existing cert and offered to reinstall. Reinstalled. No change. Browser still got prayer’s cert.
  2. Maybe nginx didn’t reload cleanly. sudo systemctl reload nginx. No change.
  3. Inspect the cert file directly.
    sudo openssl x509 -in /etc/letsencrypt/live/smartdisha.co.in/fullchain.pem \
        -noout -subject -ext subjectAltName
    subject=CN = smartdisha.co.in
    DNS:smartdisha.co.in

    The file on disk was correct. nginx just wasn’t serving it.

  4. Maybe the workers cached an old cert. sudo systemctl restart nginx. No change.
  5. Check nginx -T for the loaded config. The smartdisha SSL block was fully loaded, with the right server_name, the right listen 443 ssl;, and the right cert path. Everything looked correct. Still no change.

At one point I checked ps and noticed three nginx master processes — two with nginx -g daemon off; (the Docker-container telltale) and one system nginx. Claude initially flagged this as the smoking gun: maybe a Docker container was intercepting TLS. We confirmed via ss -tlnp that the system nginx was actually the only thing on port 443; the Docker nginxes were just internal app servers behind it. Wrong turn — but a reasonable one.

My moment of skepticism

I sent Claude a screenshot of my DNS panel with the message:

“Before we go chase our tail. Check the configuration attached.”

This was the right instinct. I was tired of theories that weren’t panning out. Stepping back to verify a load-bearing assumption — is the request path for this domain actually what we think it is? — confirmed we were looking at the right place, but it could just as easily have caught us going the wrong way for another hour.

Lesson: when you’re three theories deep and none have stuck, your AI assistant doesn’t always notice it’s in a loop. Pushing back is your job.

The breakthrough: probing SNI directly

Claude wrote a small loop that asks nginx, in effect: “If a browser tells you it wants hostname X, which certificate do you hand back?” — once for each domain on the box.

ssh my-server 'for sni in <each-hostname-on-the-box>; do
  printf "SNI=%-30s -> " "$sni"
  echo | openssl s_client -servername "$sni" -connect localhost:443 2>/dev/null \
    | openssl x509 -noout -subject 2>/dev/null
done'
SNI=smartdisha.co.in           -> CN = prayer.quizwrap.com    ❌
SNI=www.quizwrap.com           -> CN = quizwrap.com            ✓
SNI=quizwrap.com               -> CN = quizwrap.com            ✓
SNI=prayer.quizwrap.com        -> CN = prayer.quizwrap.com     ✓
SNI=nonexistent.example.com    -> CN = prayer.quizwrap.com     (default fallback)

There it was. smartdisha.co.in was being treated identically to a totally unknown hostname. It wasn’t a cert problem at all — the cert file on disk was perfectly fine. nginx just wasn’t recognizing smartdisha.co.in as a hostname it knew about. Both unknown hostnames and smartdisha.co.in fell through to the same default fallback block (prayer, which is alphabetically first), which is why both got prayer’s cert.

The actual root cause

With that clue, Claude re-read all three nginx configs side-by-side and found the only structural difference:

Block        IPv6 listen                         IPv4 listen
prayer       listen [::]:443 ssl ipv6only=on;    listen 443 ssl;
quizwrap     listen [::]:443 ssl; (dual-stack)   listen 443 ssl;
smartdisha   — missing —                         listen 443 ssl;

A bit of background to read that table: every server on the internet can have two kinds of addresses — older IPv4 (the familiar 1.2.3.4 style) and newer IPv6 (the longer 2001:db8:: style). nginx’s listen directive tells it which addresses to accept connections on. listen 443 ssl; means “IPv4 only.” listen [::]:443 ssl; means “IPv6,” but on Linux it can also quietly accept IPv4 connections on the same socket — that’s what “dual-stack” means.

quizwrap’s listen [::]:443 ssl; (without ipv6only=on) creates one of these dual-stack sockets. Internally, nginx groups server blocks by which socket they’re attached to, and uses that grouping to decide who handles each incoming connection. smartdisha, lacking any IPv6 listen line of its own, ends up in a different group than the dual-stack one, and inside that group the prayer block (alphabetically first) becomes the default catch-all. Even though smartdisha’s server block is loaded and looks correct, the grouping means SNI lookups for smartdisha.co.in arrive at a group where smartdisha isn’t listed — and fall back to prayer.

Subtle, weird, and exactly the kind of thing nginx -t (the config syntax checker) won’t catch, because the syntax is fine.

The fix

Make smartdisha’s listen directives match the others:

sudo sh -c '
  cp /etc/nginx/sites-available/smartdisha.co.in /etc/nginx/sites-available/smartdisha.co.in.bak
  sed -i "/^    listen 443 ssl; # managed by Certbot$/i\\    listen [::]:443 ssl;" \
    /etc/nginx/sites-available/smartdisha.co.in
  nginx -t && systemctl reload nginx && echo DONE
'

Re-running the SNI probe afterwards:

SNI=smartdisha.co.in           -> CN = smartdisha.co.in        ✓

curl https://smartdisha.co.in/ succeeded with full TLS validation, no -k flag needed. The browser was happy.
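For reference, the listener shape all three blocks converged on looks roughly like this — a sketch with illustrative cert paths and upstream port, not the literal config:

```nginx
# Sketch: the same dual-stack listener pair in every SSL server block,
# so all blocks live in one listener group and SNI dispatch sees them all.
server {
    listen 443 ssl;
    listen [::]:443 ssl;             # the line smartdisha was missing
    server_name smartdisha.co.in;
    ssl_certificate     /etc/letsencrypt/live/smartdisha.co.in/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/smartdisha.co.in/privkey.pem;
    location / {
        proxy_pass http://127.0.0.1:3000;   # assumed internal app port
    }
}
```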


What it was actually like to debug this with an AI

A few things stood out about the collaboration that I want to share.

Claude was great at the things I’m bad at. It pulled response headers from two domains in parallel, parsed cert subjects out of openssl s_client output, and noticed immediately that two responses had the same ETag — something I’d have read past. The structured diff between three nginx configs at the end (the listener-table comparison) was exactly the kind of thing my eyes glaze over after the second config file.

I was great at the things Claude is bad at. When we got stuck on Bug #2, Claude proposed three theories in a row, each plausible, none correct. The Docker-container theory in particular was a confidently-stated wrong answer. I knew that side of my own infrastructure well enough to be unimpressed. My push-back (“before we chase our tail”) was what reset the direction.

Security boundaries actually got enforced. When I offered Claude my sudo password to speed things up, it explicitly refused and explained why (the password would be in the chat transcript, in shell process listings, and a single leak compromises the whole server). It walked me through the alternatives — running the destructive commands myself in my own terminal, or scoping a passwordless sudoers rule for nginx-related commands only. Reading the full advice, I ended up just running each sudo command in my own shell and pasting the result. Slower, but at no point did a privileged credential cross a boundary it shouldn’t.

Transparency mattered. Halfway through Bug #2 I told Claude “I can’t see the commands you’re executing on my server.” It immediately listed every SSH command it had run and committed to printing each new command before executing it. That changed the dynamic — it stopped feeling like Claude was off doing things in the dark and started feeling like a teammate sharing their screen.

Knowing when to escalate to a one-shot fix. After multiple roundtrips of “paste this, paste that,” I asked Claude to drive over SSH so I could stop copy-pasting. It moved the read-only diagnostics to its own SSH connection and packaged the one mutating step into a single sudo block I could paste once and approve once. The friction of the back-and-forth dropped massively.


Lessons that generalise

A few things I’m taking away from this:

  1. Identical ETags across two domains = the same file is being served. If two of your sites unexpectedly look the same, that single header probably solves the mystery before you read a line of config.
  2. server_name is a registration, not just a label. If a hostname isn’t named in any block, nginx won’t error — it’ll silently pick a default and serve someone else’s content.
  3. nginx -t passing means valid syntax. It does not mean what you intended. All three configs in this story passed nginx -t with no warnings while half-broken.
  4. Mixing listen [::]:443 ssl; (dual-stack) and listen 443 ssl; (IPv4-only) across server blocks is a footgun. Either go all-dual-stack or all-with-ipv6only=on. Mixing changes the listener topology in ways that affect SNI dispatch.
  5. The openssl s_client -servername X -connect Y:443 probe is a debugging superpower. It’s a one-line command that simulates exactly what a browser does — say “I want hostname X” via SNI, and see which certificate the server returns. Whenever an HTTPS-served domain is misbehaving, this probe will often tell you the answer in five lines.
  6. Pair-debugging with an AI works best when you stay in the loop. Treat its theories as drafts, not conclusions. Push back when you smell drift. Make it show its work.

The whole session was somewhere between two and three hours. By the end my SEO problem was gone, my secondary domain’s TLS was clean, and I had a much better mental model of how nginx makes SNI decisions across mixed-listener configurations. Worth the afternoon.


Total commands run on the server during this session: about 30. Total commands run with sudo: 5. Total credentials shared with the AI: zero.

From One Big App to Many Small Ones: A Developer’s Guide to Containers

Picture this: you’ve built a successful web application that started small but has grown into something amazing. Users love it, your team has expanded, and everything seems great. But there’s a problem lurking beneath the surface. Every time you want to add a new feature or fix a bug, it takes forever. Deploying updates feels like performing surgery on a patient who’s wide awake. Sound familiar?

If you’re nodding your head, you’re dealing with what developers call a “monolith” – an application where everything is bundled together in one massive codebase. While monoliths work great when you’re starting out, they can become a real headache as your app grows. The good news? There’s a proven way to solve this problem using something called containers.

What’s the Problem with Big Applications?

Think of a monolithic application like a huge department store where everything is connected. The clothing section shares the same checkout system as electronics, the inventory system controls everything from shoes to smartphones, and if you want to renovate the toy section, you might accidentally break the jewelry department.

In software terms, this means:

  • When one part of your app breaks, it can bring down everything else
  • Adding new features requires testing the entire application
  • Scaling becomes expensive because you have to scale everything, even if you only need more power for one feature
  • Different teams end up stepping on each other’s toes

Enter Containers: Your App’s New Best Friend

Containers are like moving each department of that massive store into its own building. Each department (or service) can operate independently, but they can still communicate with each other when needed. If the toy store needs renovation, the electronics store keeps running without interruption.

In technical terms, a container packages your application code along with everything it needs to run – like a lunch box that contains not just your sandwich, but also the plate, napkin, and utensil you need to eat it.

The Step-by-Step Journey: From Chaos to Order

1. Take a Good, Hard Look at What You Have

Before you start tearing apart your application, you need to understand what you’re working with. This is like creating a detailed floor plan of that massive department store before you start moving things around.

Spend time examining your codebase to identify different functional areas. Most applications naturally group into sections like:

  • User accounts and login systems
  • Payment processing
  • Email notifications
  • Data reporting
  • Content management

Draw these relationships out on paper or in a diagramming tool. You’ll be surprised how much this simple exercise reveals about your application’s structure.

2. Put Your Entire App in a Container First

Here’s where most people make a mistake: they immediately try to break everything apart. Don’t do that. Instead, take your entire monolithic application and put it in a container first.

This is like moving your entire department store into a standardized building before you start separating departments. It solves a huge problem called “environment inconsistency” – the dreaded “it works on my computer but not on yours” syndrome.

When your app runs the same way on your laptop, your colleague’s computer, and your production servers, you eliminate countless headaches and mysterious bugs.
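As a concrete sketch, containerising a whole monolith can be as small as a few lines. This assumes a Spring Boot-style fat jar at target/app.jar — adjust the base image and paths to your own stack:

```dockerfile
# Sketch: run the entire monolith, unchanged, inside one container
FROM eclipse-temurin:17-jre       # assumed Java base image
WORKDIR /app
COPY target/app.jar app.jar       # the whole monolith as one artifact
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```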

3. Pick Your First Target Carefully

Now comes the fun part: choosing which piece to extract first. This decision is crucial and should be strategic, not random.

Look for parts of your application that are:

  • Self-contained (they don’t depend heavily on other parts)
  • Relatively simple
  • Not critical to your core business logic

Great first candidates include:

  • Authentication systems (login/logout functionality)
  • Email notification services
  • File upload handlers
  • Search functionality

Successfully extracting your first service is like winning your first game – it builds confidence and teaches you the process for future extractions.

4. Bring in the Orchestra Conductor

As you create more containers, managing them manually becomes like trying to conduct a symphony orchestra by shouting instructions. You need a proper conductor, and in the container world, that’s Kubernetes.

Kubernetes is a platform that automatically handles:

  • Starting and stopping your containers
  • Distributing traffic between multiple copies of the same service
  • Restarting failed containers
  • Scaling services up or down based on demand

For beginners, consider starting with simpler alternatives like Docker Compose for development, then moving to managed Kubernetes services offered by cloud providers like Google Cloud, AWS, or Microsoft Azure.
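A minimal Docker Compose sketch of the “monolith plus first extracted service” stage — service names, ports, and settings are illustrative:

```yaml
# Sketch: the monolith and one extracted service, each in its own container
services:
  monolith:
    build: .
    ports:
      - "8080:8080"
    depends_on:
      - emailer                       # the first extracted service
  emailer:
    build: ./email-service
    environment:
      SMTP_HOST: smtp.example.com     # illustrative config
```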

5. Automate Everything from Day One

One of the biggest mistakes teams make is leaving deployment as a manual process. This is like insisting that every product in your store be moved by hand instead of using conveyor belts and automated systems.

Set up automated pipelines that:

  • Test your code automatically when you make changes
  • Build container images without human intervention
  • Deploy to testing environments instantly
  • Notify you if anything goes wrong

This automation eliminates human error and makes deployments so routine that they become boring – which is exactly what you want.
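Such a pipeline can be sketched in GitHub Actions syntax like this — the job names and commands are illustrative; substitute your own build tool and registry:

```yaml
# Sketch: test, then build a container image, on every push
name: ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: mvn -B test                                # run the test suite
      - run: docker build -t myapp:${{ github.sha }} .  # image tagged by commit
```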

6. Untangle the Database Web

Databases are often the trickiest part of breaking up a monolith. In our department store analogy, this is like having a single cash register system that every department has been modifying over the years.

The key principle is simple: each service should own its data. Instead of letting multiple services directly access the same database tables, establish clear boundaries. If Service A needs data from Service B, it should ask politely through an API rather than sneaking into Service B’s database.

You don’t necessarily need separate physical databases immediately, but you must enforce these ownership rules in your code. For shared data like user sessions, move them to dedicated systems like Redis that are designed for sharing.

7. Make Your System Observable

When you had one big application, finding problems was like debugging issues in a single room. With multiple services, it’s like troubleshooting problems across an entire shopping mall. You need security cameras, intercoms, and monitoring systems everywhere.

Implement comprehensive observability from the start:

  • Logging: Ensure every service writes detailed logs about what it’s doing
  • Monitoring: Track metrics like response times, error rates, and resource usage
  • Health checks: Each service should be able to report whether it’s healthy
  • Distributed tracing: Follow requests as they travel between services

Tools like Prometheus for metrics collection, Grafana for dashboards, and Jaeger for tracing make this much easier than building everything from scratch.

8. Rinse and Repeat

With your first service successfully extracted and your tooling in place, you’re ready to continue the process. Each subsequent extraction becomes easier because:

  • You’ve learned the patterns and potential pitfalls
  • Your automation pipelines are already set up
  • Your team has gained confidence and experience
  • You have monitoring and observability systems in place

Gradually, your monolith shrinks while your collection of focused, independent services grows.

The Light at the End of the Tunnel

This transformation isn’t just about technology – it changes how your entire team works. Instead of everyone working on one massive codebase and stepping on each other’s toes, different teams can own different services. The frontend team can deploy their changes without waiting for the backend team to finish theirs. The payments team can scale their service during Black Friday without affecting the recommendation engine.

Common Pitfalls to Avoid

Don’t try to do everything at once: The temptation to rewrite everything from scratch is strong, but it’s usually a mistake. Incremental change is safer and more sustainable.

Don’t ignore the human element: This transformation affects your entire team’s workflow. Invest in training and make sure everyone understands the new processes.

Don’t forget about data: Plan your database separation strategy early. It’s often the most complex part of the entire process.

Don’t skip monitoring: In a distributed system, observability isn’t optional – it’s essential for maintaining sanity.

Is This Journey Worth It?

Absolutely, but only if you’re facing the problems that containers solve. If your current setup works fine and you’re not experiencing scaling or development velocity issues, there’s no rush to change.

However, if you’re struggling with slow deployments, difficulty scaling, or teams blocking each other’s progress, containers and microservices can be transformative. You’ll gain:

  • Faster development cycles
  • Independent scaling of different components
  • Better fault isolation (one broken service doesn’t kill everything)
  • Technology flexibility (different services can use different programming languages or databases)
  • Easier team organization and ownership

The journey from monolith to containers isn’t always smooth, but with careful planning and incremental execution, it’s entirely achievable. Every successful transformation starts with a single step, and every monolith has the potential to evolve into something better.

Your users will notice faster updates and more reliable service. Your developers will thank you for making their work more enjoyable and productive. And you’ll sleep better knowing that a problem in one part of your system won’t bring down the entire application.

The path forward is clear – it’s time to start containerizing.

Developer productivity tools for iPad

I have tried doing development on my iPad and I found some tools which I liked:
Python – Pythonista
Java – Jedona

I was able to write some interesting programs on the go, on both my iPhone and especially my iPad. However, I have realised that for real productivity it is better to use a remote desktop solution to log into my Windows machine and a VNC solution to log into my MacBook.

Windows Mobile App (Previously called RD Client)
RealVNC Client

I have a static IP for my home network, but one can easily use a service like noip.com to get a stable domain name via the DDNS client built into a home router. If the router does not support a VPN server, I configure a port forward on the router to RDP into Windows or VNC into the MacBook. For better security I have set up a VPN server on my router and connect over VPN into my home network, which eliminates the need to open ports or do port forwarding at all.

With the above approaches I no longer have to carry my laptops around: I use the iPad either to work on it directly or over a remote connection to my computers back home.

To manage my personal Linux servers on cloud I use Blink. Combined with Mosh I have found Blink to be a solid solution to SSH into my Linux boxes and do my work.

Blink

This type of setup has greatly reduced the amount of hardware I need to carry around. With one iPad and an iPhone I can work on most of my hobby projects and also do some of my office work.

Duplicating a Git repository

I was working in a personal Git repository and I wanted to make a duplicate copy of it under a different organisation. The following approach worked well for me:

Open Terminal / Command Prompt and Clone the existing source repo:
git clone --bare https://your_source_repository_url

Make sure you don’t skip the “--bare” portion of the command line.
You will see that a new folder is created from your source repository.


Move inside the folder:
cd your_source_repository

Now we need to push the files from your source repository into your target repository:
git push --mirror https://target_repository_url

Make sure you don’t skip the “--mirror” portion of the command line. Also, this command has to be executed from inside the source repository’s clone folder.

Localhost Tunnels

I have started using ngrok to set up localhost tunnels directly from my laptop. The basic idea is to start a web server on localhost and use ngrok to open a tunnel to the internet. This is a very easy way to test local code and get it validated by teammates. I will be looking out for an enterprise offering of this wonderful tool.

A very simple use case of ngrok is demonstrated in this video which is a tutorial for webhooks.

Code Structure Analysis Tool

I received a dump of a Java codebase with multiple modules, and I needed to analyse its structure. The following tool did quite a good job:

https://github.com/gdela/socomo

The basic idea is to run it inside a Java project via a Maven command; it generates an HTML file showing the high-level structure of the code.

Note: Stan4J is also a very good tool that does a similar job and allows deeper analysis (up to 500 classes only).

Throttling & Tuning Spring Boot

One of my Spring Boot projects was battling an overloaded CPU and slow, sometimes unresponsive, server behaviour whenever traffic spiked. I had explored and implemented caching, but my real problem was the sheer volume of incoming connections making the server itself slow. I could have set up multiple instances with some kind of auto-scaling, but given a limited budget and hardware I wanted hard limits on how much traffic my Spring Boot app accepts — and a way for it to give up gracefully (there is no shame in rejecting traffic with HTTP status 503 when the infrastructure is overloaded).

I found a blog post from Netflix on how to tune Apache Tomcat, and another article on tuning the REST controller code itself to implement a rudimentary rate limiter. I was glad to find the RateLimiter implementation in the Google Guava library, which I ultimately ended up using (for now). The annotation-driven RateLimiter approach also looks very powerful, and I will take it out for a spin in the near future.

The basic lessons learnt from this exercise:
– Tweak Tomcat and keep a watch on the acceptCount parameter, which caps how much traffic ever reaches your REST controllers.
– Use a rate limiter on your hot APIs (the ones with higher latency) and don’t let your application get abused beyond a limit.
– Scale horizontally if the limits above result in a lot of traffic getting rejected.
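To make the idea concrete, here is a stdlib-only sketch of a fixed-window limiter — deliberately simpler than Guava's token-bucket RateLimiter, with illustrative numbers — showing where a controller would shed load with a 503:

```java
// Sketch: fixed-window rate limiting (NOT Guava's algorithm, just the idea).
// A controller calls tryAcquire() and returns HTTP 503 when it fails.
public class SimpleRateLimiter {
    private final int maxPerWindow;
    private final long windowMillis;
    private int count = 0;
    private long windowStart = System.currentTimeMillis();

    public SimpleRateLimiter(int maxPerWindow, long windowMillis) {
        this.maxPerWindow = maxPerWindow;
        this.windowMillis = windowMillis;
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) {
            windowStart = now;   // a new window begins: reset the counter
            count = 0;
        }
        return ++count <= maxPerWindow;
    }

    public static void main(String[] args) {
        SimpleRateLimiter limiter = new SimpleRateLimiter(2, 60_000);
        for (int i = 0; i < 4; i++) {
            // First two requests in the window pass, the rest are shed
            System.out.println(limiter.tryAcquire() ? "200 OK" : "503 Service Unavailable");
        }
    }
}
```

Guava's RateLimiter smooths permits over time instead of counting per window, which behaves better under bursts — this sketch only illustrates where the accept/reject decision sits.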

Nginx gzip compression and load balancing.

To tune the performance of my REST endpoints I have, in the past, enabled gzip compression in my nginx server configuration: a large JSON response gets gzipped, and network latency goes down as a result.

This feature is documented well on the nginx website.

However, there is a catch that prevents this technique from working on a local development system (while the same config works on the production Linux instance). I finally found an answer as to why it doesn’t work in some of my local environments.
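The relevant nginx directives, as a sketch — the values here are illustrative starting points, not tuned numbers:

```nginx
# Sketch: enable gzip for API responses (illustrative values)
gzip on;
gzip_types application/json application/javascript text/css;  # text/html is always compressed
gzip_min_length 1024;   # don't bother compressing tiny responses
gzip_proxied any;       # also compress responses coming via proxy_pass
```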

For static load balancing I use nginx’s upstream concept, also documented on the nginx website. The performance is reasonable and the implementation is quite simple for a requirement that only needs basic failover. For more advanced setups there is always HAProxy, a very good open-source load balancer.
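A sketch of the upstream-based failover setup (addresses and ports are illustrative):

```nginx
# Sketch: two app instances; the second receives traffic only if the first is down
upstream backend {
    server 127.0.0.1:8080;
    server 127.0.0.1:8081 backup;   # failover target
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```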

Issues Faced While Upgrading to JDK 11 and Spring Boot 2.2.5

I had an old project running on JDK 8 and Spring Boot 1.5.x which I recently upgraded to JDK 11 and Spring Boot 2.2.5. I hit some hiccups in the process, but it is finally done and the application is up and running. Below I document some of the issues I faced.

Spring Data
All instances of findOne() had to be replaced with findById(), which returns an Optional<Entity>. I ended up removing lots of null checks by leveraging Optional’s orElse, orElseThrow and ifPresent methods. It is definitely a better way to handle null.
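A self-contained sketch of the pattern — the repository call is faked with a plain method here, but findById on a real Spring Data repository behaves the same way from the caller's side:

```java
import java.util.Optional;

public class OptionalDemo {

    // Stand-in for repository.findById(id); returns empty instead of null
    static Optional<String> findById(long id) {
        return id == 1 ? Optional.of("Alice") : Optional.empty();
    }

    // Old style: String n = repo.findOne(id); if (n == null) return "unknown";
    static String nameOrDefault(long id) {
        return findById(id).orElse("unknown");
    }

    public static void main(String[] args) {
        System.out.println(nameOrDefault(1));   // Alice
        System.out.println(nameOrDefault(2));   // unknown

        // orElseThrow replaces the "if null, throw" boilerplate
        String name = findById(1).orElseThrow(() -> new IllegalStateException("not found"));
        System.out.println("found " + name);    // found Alice
    }
}
```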

JAXB runtime error
After migration the application booted up fine but on doing some specific operation I observed this error on console

java.lang.ClassNotFoundException: javax.xml.bind.DatatypeConverter

This class was moved out of the JDK’s core libraries (the java.xml.bind module was removed in JDK 11). The error stumped me for a while, as some posts on the internet suggested adding the following entry to pom.xml:

<dependency>
    <groupId>javax.xml.bind</groupId>
    <artifactId>jaxb-api</artifactId>
    <version>2.3.1</version>
    <scope>runtime</scope>
</dependency>

However this didn’t help and I ended up using the following library to finally get rid of this issue:

<dependency>
    <groupId>jakarta.xml.bind</groupId>
    <artifactId>jakarta.xml.bind-api</artifactId>
</dependency>

Apache POI / Tika Library
Another runtime issue surfaced during normal operation of the application:

WARNING: An illegal reflective access operation has occurred
 WARNING: Illegal reflective access by org.apache.poi.openxml4j.util.ZipSecureFile$1 (file:/Users/562320/.m2/repository/org/apache/tika/tika-app/1.15/tika-app-1.15.jar) to field java.io.FilterInputStream.in
 WARNING: Please consider reporting this to the maintainers of org.apache.poi.openxml4j.util.ZipSecureFile$1
 WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
 WARNING: All illegal access operations will be denied in a future release
 Jun 29, 2020 3:14:17 AM org.apache.catalina.core.StandardWrapperValve invoke
 SEVERE: Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is org.apache.poi.openxml4j.exceptions.OpenXML4JRuntimeException: Fail to save: an error occurs while saving the package : class org.apache.poi.openxml4j.util.ZipSecureFile$ThresholdInputStream cannot be cast to class java.util.zip.ZipFile$ZipFileInputStream (org.apache.poi.openxml4j.util.ZipSecureFile$ThresholdInputStream is in unnamed module of loader 'app'; java.util.zip.ZipFile$ZipFileInputStream is in module java.base of loader 'bootstrap')] with root cause
 java.lang.ClassCastException: class org.apache.poi.openxml4j.util.ZipSecureFile$ThresholdInputStream cannot be cast to class java.util.zip.ZipFile$ZipFileInputStream (org.apache.poi.openxml4j.util.ZipSecureFile$ThresholdInputStream is in unnamed module of loader 'app'; java.util.zip.ZipFile$ZipFileInputStream is in module java.base of loader 'bootstrap')
     at java.base/java.util.zip.ZipFile$ZipFileInflaterInputStream.available(ZipFile.java:480)

This issue was resolved by upgrading Apache Tika and Apache POI to their latest versions. I also had to move from the apache-commons-lang dependency to apache-commons-lang3.

Progress Bar implementation in Java for Terminal application

The “\r” character moves the cursor back to the first column of the current line. This works in terminals on Mac, Linux and Windows. To test it I wrote a quick hack, and it worked properly:

public class ProgressBar {

    public static void main(String[] args) {
        int size = 10;
        for (int i = 1; i < size; i++) {
            try {
                Thread.sleep(500);
                // "\r" returns the cursor to column 1 so the next print overwrites this bar
                System.out.print("|" + "=".repeat(i) + ">" + " ".repeat(size - i - 1) + "|\r");
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        // Final bar: same width between the pipes as the intermediate ones
        System.out.println("|" + "=".repeat(size) + "|");
    }
}

// Initial output
// |=>        |
// Final output
// |==========|

I got the desired output: a humble do-nothing progress bar on the terminal. To take the idea further I did some searching and found this repo on GitHub, which lets you implement beautiful progress bars for terminal-based Java applications. The basic concept is the same, but it provides much more functionality.