Author Archives: cyberaka

About cyberaka

I am an experienced Senior Solution Architect with a proven history of designing robust and highly available Java-based solutions. I am skilled in architecting, designing and developing scalable, highly available, fault-tolerant and concurrent systems that can serve high-volume traffic. I have hands-on experience in designing RESTful microservices architectures using Spring Boot, Spring Cloud, MongoDB, Java 8, Redis and Kafka. I like TDD (JUnit), BDD (Cucumber) and DDD based development as required for my projects. I have used AWS primarily for cloud-based deployment, and I like developing cloud-enabled POCs as a hobby in my spare time. I have designed and developed CI/CD pipelines using Jenkins and leveraged Docker and Kubernetes for containerizing and deploying some of my applications. I am highly experienced in creating high-performing technical teams from scratch. As an ex-entrepreneur I am very much involved in the business side of the IT industry. I love interacting with clients to understand their requirements and get the job done.

From One Big App to Many Small Ones: A Developer’s Guide to Containers

Picture this: you’ve built a successful web application that started small but has grown into something amazing. Users love it, your team has expanded, and everything seems great. But there’s a problem lurking beneath the surface. Every time you want to add a new feature or fix a bug, it takes forever. Deploying updates feels like performing surgery on a patient who’s wide awake. Sound familiar?

If you’re nodding your head, you’re dealing with what developers call a “monolith” – an application where everything is bundled together in one massive codebase. While monoliths work great when you’re starting out, they can become a real headache as your app grows. The good news? There’s a proven way to solve this problem using something called containers.

What’s the Problem with Big Applications?

Think of a monolithic application like a huge department store where everything is connected. The clothing section shares the same checkout system as electronics, the inventory system controls everything from shoes to smartphones, and if you want to renovate the toy section, you might accidentally break the jewelry department.

In software terms, this means:

  • When one part of your app breaks, it can bring down everything else
  • Adding new features requires testing the entire application
  • Scaling becomes expensive because you have to scale everything, even if you only need more power for one feature
  • Different teams end up stepping on each other’s toes

Enter Containers: Your App’s New Best Friend

Containers are like moving each department of that massive store into its own building. Each department (or service) can operate independently, but they can still communicate with each other when needed. If the toy store needs renovation, the electronics store keeps running without interruption.

In technical terms, a container packages your application code along with everything it needs to run – like a lunch box that contains not just your sandwich, but also the plate, napkin, and utensil you need to eat it.

The Step-by-Step Journey: From Chaos to Order

1. Take a Good, Hard Look at What You Have

Before you start tearing apart your application, you need to understand what you’re working with. This is like creating a detailed floor plan of that massive department store before you start moving things around.

Spend time examining your codebase to identify different functional areas. Most applications naturally group into sections like:

  • User accounts and login systems
  • Payment processing
  • Email notifications
  • Data reporting
  • Content management

Draw these relationships out on paper or in a diagramming tool. You’ll be surprised how much this simple exercise reveals about your application’s structure.

2. Put Your Entire App in a Container First

Here’s where most people make a mistake: they immediately try to break everything apart. Don’t do that. Instead, take your entire monolithic application and put it in a container first.

This is like moving your entire department store into a standardized building before you start separating departments. It solves a huge problem called “environment inconsistency” – the dreaded “it works on my computer but not on yours” syndrome.

When your app runs the same way on your laptop, your colleague’s computer, and your production servers, you eliminate countless headaches and mysterious bugs.
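
To make this concrete, here is a minimal sketch of that first step, assuming a Java monolith that builds to target/app.jar; the base image, image name and port are placeholders rather than recommendations:

# Hypothetical Dockerfile for a Java monolith packaged as target/app.jar
cat > Dockerfile <<'EOF'
FROM eclipse-temurin:17-jre
COPY target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
EOF

# Build one image containing the whole monolith and run it locally
docker build -t my-monolith:1.0 .
docker run -d -p 8080:8080 --name my-monolith my-monolith:1.0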

3. Pick Your First Target Carefully

Now comes the fun part: choosing which piece to extract first. This decision is crucial and should be strategic, not random.

Look for parts of your application that are:

  • Self-contained (they don’t depend heavily on other parts)
  • Relatively simple
  • Not critical to your core business logic

Great first candidates include:

  • Authentication systems (login/logout functionality)
  • Email notification services
  • File upload handlers
  • Search functionality

Successfully extracting your first service is like winning your first game – it builds confidence and teaches you the process for future extractions.

4. Bring in the Orchestra Conductor

As you create more containers, managing them manually becomes like trying to conduct a symphony orchestra by shouting instructions. You need a proper conductor, and in the container world, that’s Kubernetes.

Kubernetes is a platform that automatically handles:

  • Starting and stopping your containers
  • Distributing traffic between multiple copies of the same service
  • Restarting failed containers
  • Scaling services up or down based on demand

For beginners, consider starting with simpler alternatives like Docker Compose for development, then moving to managed Kubernetes services offered by cloud providers like Google Cloud, AWS, or Microsoft Azure.
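
As a rough sketch of what the conductor does for you, the following kubectl commands deploy a hypothetical email-service image, expose it, and scale it to three copies; the service name, registry URL and ports are placeholders:

# Deploy a container image as a managed set of pods
kubectl create deployment email-service --image=registry.example.com/email-service:1.0
# Put a load-balancing service in front of those pods
kubectl expose deployment email-service --port=80 --target-port=8080
# Ask for three copies; Kubernetes keeps that number running
kubectl scale deployment email-service --replicas=3
kubectl get pods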

5. Automate Everything from Day One

One of the biggest mistakes teams make is leaving deployment as a manual process. This is like insisting that every product in your store be moved by hand instead of using conveyor belts and automated systems.

Set up automated pipelines that:

  • Test your code automatically when you make changes
  • Build container images without human intervention
  • Deploy to testing environments instantly
  • Notify you if anything goes wrong

This automation eliminates human error and makes deployments so routine that they become boring – which is exactly what you want.
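
Conceptually, such a pipeline is just a handful of commands executed on every change. A hedged sketch, assuming a Maven build, a private registry at registry.example.com, and a Jenkins-provided GIT_COMMIT variable (all of these names are placeholders):

# Run the automated tests; a failure stops the pipeline
mvn test
# Build a container image tagged with the commit id and publish it
docker build -t registry.example.com/myapp:${GIT_COMMIT} .
docker push registry.example.com/myapp:${GIT_COMMIT}
# Roll the new image out to the testing environment
kubectl set image deployment/myapp myapp=registry.example.com/myapp:${GIT_COMMIT}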

6. Untangle the Database Web

Databases are often the trickiest part of breaking up a monolith. In our department store analogy, this is like having a single cash register system that every department has been modifying over the years.

The key principle is simple: each service should own its data. Instead of letting multiple services directly access the same database tables, establish clear boundaries. If Service A needs data from Service B, it should ask politely through an API rather than sneaking into Service B’s database.

You don’t necessarily need separate physical databases immediately, but you must enforce these ownership rules in your code. For shared data like user sessions, move them to dedicated systems like Redis that are designed for sharing.
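
For the shared-session case, here is a quick sketch of what moving sessions to Redis looks like; the key name and payload are made up for illustration:

# Store a session in Redis with a one-hour expiry instead of in a shared database table
redis-cli SET "session:abc123" '{"userId": 42, "cart": []}' EX 3600
# Any service that needs the session reads it back through Redis, not another service's tables
redis-cli GET "session:abc123"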

7. Make Your System Observable

When you had one big application, finding problems was like debugging issues in a single room. With multiple services, it’s like troubleshooting problems across an entire shopping mall. You need security cameras, intercoms, and monitoring systems everywhere.

Implement comprehensive observability from the start:

  • Logging: Ensure every service writes detailed logs about what it’s doing
  • Monitoring: Track metrics like response times, error rates, and resource usage
  • Health checks: Each service should be able to report whether it’s healthy
  • Distributed tracing: Follow requests as they travel between services

Tools like Prometheus for metrics collection, Grafana for dashboards, and Jaeger for tracing make this much easier than building everything from scratch.
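
Health checks in particular are cheap to add and easy to verify from the command line. For example, assuming a Spring Boot service with Actuator enabled (the port and path shown are the Actuator defaults):

# Ask the service whether it considers itself healthy
curl -s http://localhost:8080/actuator/health
# A healthy service typically answers: {"status":"UP"}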

8. Rinse and Repeat

With your first service successfully extracted and your tooling in place, you’re ready to continue the process. Each subsequent extraction becomes easier because:

  • You’ve learned the patterns and potential pitfalls
  • Your automation pipelines are already set up
  • Your team has gained confidence and experience
  • You have monitoring and observability systems in place

Gradually, your monolith shrinks while your collection of focused, independent services grows.

The Light at the End of the Tunnel

This transformation isn’t just about technology – it changes how your entire team works. Instead of everyone working on one massive codebase and stepping on each other’s toes, different teams can own different services. The frontend team can deploy their changes without waiting for the backend team to finish theirs. The payments team can scale their service during Black Friday without affecting the recommendation engine.

Common Pitfalls to Avoid

Don’t try to do everything at once: The temptation to rewrite everything from scratch is strong, but it’s usually a mistake. Incremental change is safer and more sustainable.

Don’t ignore the human element: This transformation affects your entire team’s workflow. Invest in training and make sure everyone understands the new processes.

Don’t forget about data: Plan your database separation strategy early. It’s often the most complex part of the entire process.

Don’t skip monitoring: In a distributed system, observability isn’t optional – it’s essential for maintaining sanity.

Is This Journey Worth It?

Absolutely, but only if you’re facing the problems that containers solve. If your current setup works fine and you’re not experiencing scaling or development velocity issues, there’s no rush to change.

However, if you’re struggling with slow deployments, difficulty scaling, or teams blocking each other’s progress, containers and microservices can be transformative. You’ll gain:

  • Faster development cycles
  • Independent scaling of different components
  • Better fault isolation (one broken service doesn’t kill everything)
  • Technology flexibility (different services can use different programming languages or databases)
  • Easier team organization and ownership

The journey from monolith to containers isn’t always smooth, but with careful planning and incremental execution, it’s entirely achievable. Every successful transformation starts with a single step, and every monolith has the potential to evolve into something better.

Your users will notice faster updates and more reliable service. Your developers will thank you for making their work more enjoyable and productive. And you’ll sleep better knowing that a problem in one part of your system won’t bring down the entire application.

The path forward is clear – it’s time to start containerizing.

Safeguarding Your Domain from Spam & Spoofing

Spam is something we all deal with. However, if you own the domain from which spam emails appear to originate, the consequences can be severe. Email spoofing is real, and every domain owner should take steps to ensure their domain is not exploited by spammers and scammers.

For business owners, it is crucial to implement basic security measures to prevent domain and email spoofing. Ensuring proper email authentication protocols are in place can help protect both your brand and your customers from malicious attacks.

A good writeup is available at the following links:

https://www.zoho.com/mail/help/adminconsole/spf-configuration.html

https://www.zoho.com/mail/help/adminconsole/dkim-configuration.html

https://www.zoho.com/mail/help/adminconsole/dmarc-policy.html

DKIM (DomainKeys Identified Mail)

DKIM is an email authentication method designed to detect forged sender addresses in email messages. It allows an organization to sign its outgoing emails with a cryptographic signature, which receiving mail servers can verify using the sender’s public key published in the domain’s DNS records.

How DKIM Works

  1. Signing Emails: The sending mail server generates a unique DKIM signature using a private key and embeds it in the email’s header.
  2. Publishing the Public Key: The domain owner publishes the corresponding public key as a TXT record in the domain’s DNS.
  3. Verifying Emails: The recipient’s mail server retrieves the public key from DNS and validates the email’s DKIM signature. If the signature is valid, the email is considered authentic.
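
You can see exactly what a receiving server sees by querying the selector's DNS record yourself; the selector "s1" and the domain "example.com" below are placeholders for your own values:

# Fetch the DKIM public key published for selector "s1"
dig TXT s1._domainkey.example.com +short
# Example response: "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3...<public key>..."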

SPF (Sender Policy Framework)

SPF is an email authentication protocol that helps prevent email spoofing by specifying which mail servers are authorized to send emails on behalf of a domain. It works by allowing domain owners to publish a TXT record in their DNS settings, listing the mail servers permitted to send emails using their domain.

How SPF Works

  1. DNS Record Setup: The domain owner publishes an SPF TXT record in their DNS settings, specifying allowed mail servers.
  2. Email Transmission: When an email is sent, the recipient’s server queries the sender’s domain for the SPF record.
  3. Verification: The recipient’s server checks if the sending server’s IP address matches the authorized list in the SPF record.
  4. Pass or Fail Decision: If the email comes from an authorized server, it is accepted. Otherwise, it may be rejected or marked as spam.
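
Checking an SPF record is a single DNS lookup; example.com and the include value below are placeholders, and your mail provider's documentation gives the exact string to publish:

# Fetch the SPF record for the domain
dig TXT example.com +short
# Example response: "v=spf1 include:spf.your-mail-provider.com ~all"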

DMARC (Domain-based Message Authentication, Reporting, and Conformance)

DMARC is an email security protocol that builds upon SPF and DKIM to prevent email spoofing. It provides domain owners with visibility into email activity and enforces policies to reject or quarantine unauthorized emails.

How DMARC Works

  1. Email Authentication: DMARC relies on SPF and DKIM to authenticate emails. The recipient server checks if the sender’s domain has valid SPF and/or DKIM signatures.
  2. Policy Enforcement: Based on the domain’s DMARC policy (none, quarantine, or reject), the receiving server determines how to handle unauthenticated emails:
    • p=none: The email is delivered normally, but reports are generated.
    • p=quarantine: Suspicious emails are sent to the spam folder.
    • p=reject: Unauthenticated emails are rejected outright.
  3. Reporting Mechanism: DMARC provides reports (rua for aggregate reports, ruf for forensic reports) to help domain owners monitor email authentication activity and detect unauthorized usage.
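
The DMARC policy lives in a TXT record on the _dmarc subdomain, so it is just as easy to inspect; the domain and report address below are placeholders:

# Fetch the DMARC policy for the domain
dig TXT _dmarc.example.com +short
# Example response: "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"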

Developer productivity tools for iPad

I have tried doing development on my iPad and I found some tools which I liked:
Python – Pythonista
Java – Jedona

I was able to write some interesting programs on the go, both on my iPhone and especially on my iPad. However, I have realised that for better productivity it is better to use a remote desktop solution to log into my Windows machine and a VNC solution to log into my MacBook.

Windows Mobile App (Previously called RD Client)
RealVNC Client

I have a static IP available for my home network; however, one can easily use a service like noip.com to get a static domain name working via the DDNS client built into your home router. If the router does not support a VPN server, I usually configure a port forward on the router so I can RDP into my Windows machine or VNC into my MacBook. For better security I have set up a VPN server on my router and make a VPN connection into my home network, effectively eliminating the need to open ports or do port forwarding.

Using the above approaches I no longer have to carry my laptops around: I do some work directly on the iPad and the rest via a remote connection to my computers back home.

To manage my personal Linux servers in the cloud I use Blink. Combined with Mosh, I have found Blink to be a solid solution for SSHing into my Linux boxes and getting my work done.

Blink
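
Connecting from Blink is the usual SSH or Mosh one-liner; the host below is a placeholder:

# Mosh keeps the session alive across network changes and iPad sleep/wake
mosh user@your-server.example.com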

This type of setup has greatly reduced the amount of hardware I need to carry around. With one iPad and an iPhone I can work on most of my hobby projects and also do some of my office work.

Converting Chrome HAR file to CSV

It is sometimes easier to export a Chrome HAR file into CSV format so that it can be opened in Excel. Once opened in Excel, it is really easy to do further analysis on rather large HAR dumps.

The following solution is based on this Link: https://stackoverflow.com/questions/31648078/how-to-import-har-file-to-excel

Install jq on Mac.

brew install jq

Run this command.

cat some_file.har | jq '[ "URL", "Time", "Wait time", "Status", "Body size","Content-Type", "Content-Encoding"],
    (.log.entries[] | [
        .request.url,
        .time,
        .timings.wait,
        .response.status,
        .response.content.size,
        .response.content.mimeType,
        .response.content.encoding
]) | @csv' | sed 's/\\"//g' | sed 's/"//g' > some_file.csv

This results in a CSV file which you can easily open in Excel.

To extract the path of the URL minus the file name, the following Excel function can be used in an added column:

=LEFT(A2,LEN(A2)-LEN(RIGHT(A2,LEN(A2)-FIND("@",SUBSTITUTE(A2,"/","@",LEN(A2)-LEN(SUBSTITUTE(A2,"/",""))),1))))

Viewing files in hex mode

Sometimes I need to look at files in their actual hexadecimal format. I use the combination of vi and hexdump to fulfill this requirement.

To simply view a file’s content in hex format, I use this command on my Mac.

hexdump -C <file_name>

This is typically enough for the job at hand. However, there is another trick: using hexdump from within vi. I follow these steps:

  • Start vi by typing “vi <file_name>” on the command line.
  • Type “:%!hexdump -C”
  • This will replace the contents of the vi screen with the hex view of the current file.
  • Save the buffer to a new file name (for example “:w <file_name>.hex”) if you want to keep the hex view; writing back to the original file would overwrite it with the hex dump.

Duplicating a Git repository

I was working in a personal Git repository and I wanted to make a duplicate copy of it under a different organisation. The following approach worked well for me:

Open Terminal / Command Prompt and Clone the existing source repo:
git clone --bare https://your_source_repository_url

Make sure you don’t skip the “--bare” portion of the command line.
You will see that a new folder is created from your source repository.


Move inside the newly created folder (a bare clone’s folder name ends with “.git”):
cd your_source_repository.git

Now we need to push the files from your source repository into your target repository:
git push --mirror https://target_repository_url

Make sure you don’t skip the “--mirror” portion of the command line. Also, this command has to be executed from inside the folder created by the bare clone of the source repository.

Localhost Tunnels

I have started using ngrok for setting up localhost tunnels directly from my laptop. The basic idea is to start a web server on localhost and use ngrok to set up a tunnel to the internet. This is a very easy way to test local code and get it validated by other teammates. I will be looking out for an enterprise offering of this wonderful tool.
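
The workflow is essentially two commands: run your app locally, then point ngrok at its port (3000 below is just an example):

# Expose the local web server running on port 3000 via a public ngrok URL
ngrok http 3000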

A very simple use case of ngrok is demonstrated in this video which is a tutorial for webhooks.

Bash Shell – Folder as tree

I like using the tree command on Linux; thankfully it has been ported to the Mac as well, and it is quite easy to get a tree representation of the files and folders inside a directory on a shell. An article on StackOverflow talks about it aptly. The original site is here.

In case you are using brew, then it is quite easy peasy as shown below:

brew install tree

The following commands generate a tree view:

Generates a tree of all files and folders in the current directory:

tree

Generates a tree containing only folders in the current directory:

tree -d

Generates a tree containing only folders, three levels deep:

tree -d -L 3 .

In case you want to use the vanilla shell without installing anything (it doesn’t draw a tree, though):

find . -maxdepth 3 -type d

Code Structure Analysis Tool

I received a dump of a Java codebase which had multiple modules, and I needed to analyse its structure. The following tool did quite a good job:

https://github.com/gdela/socomo

Basically the idea is to run it inside a Java project using a Maven command, and it creates an HTML file which denotes the high-level structure of the code.

Note: Stan4J is also a very good tool which does a similar job but allows deeper analysis (up to 500 classes only).