Author Archives: cyberaka

About cyberaka

I am an experienced Senior Solution Architect with a proven history of designing robust, highly available Java-based solutions. I am skilled in architecting, designing, and developing scalable, highly available, fault-tolerant, and concurrent systems that can serve high-volume traffic. I have hands-on experience in designing RESTful microservices architectures using Spring Boot, Spring Cloud, MongoDB, Java 8, Redis, and Kafka. I like TDD (JUnit), BDD (Cucumber), and DDD-based development as required for my projects. I have used AWS primarily for cloud-based deployment, and I enjoy developing cloud-enabled POCs as a hobby in my spare time. I have designed and developed CI/CD pipelines using Jenkins and leveraged Docker and Kubernetes for containerizing and deploying some of my applications. I am highly experienced in building high-performing technical teams from scratch. As an ex-entrepreneur I am very much involved in the business side of the IT industry, and I love interacting with clients to understand their requirements and get the job done.

The Complete AI App Guide for 2025: Essential Tools for IT Professionals

Artificial Intelligence is reshaping every layer of software development, from coding and testing to documentation and deployment. This guide curates the most impactful AI applications of 2025, with a special focus on tools that elevate an IT professional’s daily workflow.

Software Development & Programming

Cursor

AI-first code editor that understands your entire project context, offers natural-language refactors, and ships with multi-model support.

GitHub Copilot

Your AI pair programmer for instant code completions, chat-based explanations, and automated tests.

Replit AI Agent

Describe an idea in plain English and watch Replit spin up a working web or mobile app, complete with hosting.

Codeium

Fast, free autocomplete that plugs into 70+ languages and every major IDE.

Anychat

Unified chat interface where you can swap between multiple AI models mid-conversation.

Claude Code

Terminal-native assistant for deep codebase understanding and cross-file edits.

General AI Assistants

  • Perplexity – instant, cited answers for technical research.
  • Claude – long-context reasoning and collaborative project chat.
  • ChatGPT – versatile chatbot with voice mode and strong coding skills.

Productivity & Workflow

  • Granola – turns meetings into structured notes, action items, and summaries.
  • Wispr Flow – system-wide voice dictation that works in any app.
  • Gamma – generate slide decks, documents, or one-page sites from prompts.
  • Adobe AI Assistant – chat with long PDFs, contracts, or manuals.
  • Cubby – collaborative research workspace with built-in AI search.
  • Cora – inbox triage and auto-reply generation for email power users.
  • Lindy – no-code builder for custom AI agents that automate routine tasks.
  • Notion AI – smarter docs, wikis, and databases with in-line generation.

Content Creation & Video

  • HeyGen – realistic AI avatars for tutorials, product demos, and localization.
  • Delphi – voice, video, and text clones for audience engagement.
  • Argil – quick social-media videos featuring AI hosts.
  • Opus – auto-splits long videos into shareable viral clips.
  • Persona – build AI agents that reflect your personal brand.
  • Captions – automatic subtitles, eye-contact correction, and AI presenters.

Creative Tools

  • ElevenLabs – ultra-realistic, multilingual text-to-speech voices.
  • Suno & Udio – compose full songs from a written prompt.
  • Midjourney, Ideogram, Playground – high-quality image generation suites.
  • Runway, Kling, Viggle – next-gen video generation platforms.
  • Krea – canvas for mixing and remixing AI images or clips.
  • Photoroom – one-click product shots, background removal, and batch edits.

Learning & Personal Development

  • Rosebud – interactive journaling with data-backed insights.
  • Good Inside – parenting advice with personalized AI support.
  • Ada Health – symptom assessment and health guidance.
  • Ash – AI-based mental-health coach.
  • NotebookLM – convert any document into an AI-driven podcast.
  • Particle – bite-sized news summaries with source links.

Entertainment & Fun

  • Remix – social platform for sharing AI-generated art and video.
  • Meta Imagine – create playful AI avatars inside Meta apps.
  • Grok – chat companion with a sense of humor from xAI.
  • Curio – interactive toys powered by AI voices.

Getting Started: A Six-Week Integration Roadmap

  1. Weeks 1–2 – Lay the Foundation: Install Cursor or GitHub Copilot in your IDE, plus Codeium as a free backup. Set up ChatGPT and Claude for on-demand problem solving.
  2. Weeks 3–4 – Boost Productivity: Adopt Notion AI for project docs, leverage Adobe AI Assistant for reading specs, and generate slides with Gamma.
  3. Weeks 5–6 – Automate & Scale: Build Lindy agents for repetitive tasks, prototype ideas in Replit, and add professional narration with ElevenLabs.
  4. Beyond: Join AI developer communities, stay updated on new releases, and gradually expand your toolkit.

The future belongs to developers who treat AI as a creative partner, not a replacement. Choose one or two of these tools today, master them, and watch your productivity soar.

When Your AI Coding Assistant Becomes a Digital Wrecking Ball

A cautionary tale of Docker, environment variables, and why backup strategies exist

Yesterday, I had what I can only describe as the most educational disaster of my coding career. Picture this: I’m working on a beautiful microservices setup – three perfectly orchestrated containers (web, backend, core) each with their own well-defined .env files. Life is good. Docker containers are humming. Everything is chef’s kiss perfect.

Then I made the fatal mistake of asking my AI coding assistant: “Hey, can you help me move from host networking to docker-compose?”

The Descent Into Chaos

What followed was like watching a digital tornado in slow motion:

AI: “Sure! Let me create extensive documentation and docker-compose files!”
Me: “Wait, I just wanted a simple docker-compose setup…”
AI: “Here’s 47 markdown files and a management script!”
Me: “Please stop. Revert everything.”
AI: “No problem!” *accidentally deletes the entire project structure*
Me: “…Did you just delete my web, backend, and core directories?”
AI: “Oops. 😅”

The Recovery Mission

After manually restoring everything from git (thank you, version control gods), I thought we were back on track. But then:

Me: “My development environment is broken. ‘vite: command not found’”
AI: “Let me fix that!” *creates duplicate environment variables*
Me: “Why do I have PORT=6080 AND APP_CORE_PORT=6080?”
AI: “Redundancy is good, right? No? Okay, let me fix it…” *creates more duplicates*

It was like watching someone try to fix a leaky faucet with a sledgehammer.

The Silver Lining

But here’s the plot twist – through all this chaos, we actually ended up with something beautiful:

  • Clean, consistent environment variable naming (APP_*_SERVER_PORT)
  • Proper fail-fast configuration (no silent defaults!)
  • A docker-compose setup that actually works
  • And most importantly: updated .env.example files for proper documentation
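As a sketch, such a .env.example (service names from this setup; port values purely illustrative) could look like:

```
# .env.example: every variable must be set explicitly (fail fast, no silent defaults)
# Port values below are illustrative
APP_WEB_SERVER_PORT=6080
APP_BACKEND_SERVER_PORT=6081
APP_CORE_SERVER_PORT=6082
```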

Lessons Learned

  1. Always have backups – Git saved my bacon more times than I can count
  2. AI assistants are powerful but not infallible – They’re like enthusiastic interns with superpowers
  3. Clear communication is key – “Simple docker-compose setup” apparently means “comprehensive enterprise documentation suite” in AI speak
  4. The journey matters – Sometimes you need to break everything to build it back better
  5. Naming conventions matter – APP_BACKEND_SERVER_PORT is way better than PORT_THING_MAYBE

Final Thoughts

Working with AI coding assistants is like having a brilliant colleague who occasionally decides to reorganize your entire codebase while you’re getting coffee. They mean well, they’re incredibly capable, but sometimes they have the subtlety of a rhinoceros in a china shop.

Would I do it again? Absolutely. But next time, I’m committing to git after every single change.

And yes, AI assistant, this blog post is about you. You know who you are. 😉


P.S. – To my fellow developers: Always backup your .env files. They might seem small and innocent, but losing them is like losing the keys to your digital kingdom.

P.P.S. – The docker-compose setup works perfectly now. Sometimes you need a little chaos to achieve order.

9 Universal Phone Shortcuts Every Mobile User Should Know

Ever wondered how to check your phone’s IMEI number without diving into settings? Just dial *#06# and it pops right up — even on the simplest feature phone!

But did you know there are many more such hidden codes that can unlock useful information or tweak important settings — all from your phone’s keypad?

Whether you’re using a basic keypad phone or the latest smartphone, here are 9 tried-and-true universal GSM shortcuts that work on nearly every mobile device around the world.

Top 9 Universal GSM Codes (Not OS-Dependent)

1. Check IMEI Number – *#06#

Shows your device’s unique identifier. Useful if your phone is lost or stolen, or when verifying authenticity before buying a second-hand phone.

2. Check Call Forwarding Status – *#21#

Tells you whether your calls, messages, or data are being forwarded to another number. A handy security check!

3. Check Forwarding When Out of Coverage – *#62#

Find out where your calls are being sent when your phone is off or out of network range.

4. Check Forwarding When Busy – *#67#

Shows which number your calls go to when you’re already on another call.

5. Check All Conditional Forwarding Settings – *#004#

Gives a complete overview of all conditional call forwarding (busy, unreachable, no reply).

6. Deactivate All Call Forwarding – ##002#

Quickly cancels all types of call forwarding, restoring default behavior. Good to use if you’re unsure which settings are active.

7. Check Call Waiting Status – *#43#

Find out if call waiting is currently enabled — so you don’t miss important second calls.

8. Activate Call Waiting – *43#

Turn on call waiting to receive alerts during ongoing calls.

9. Deactivate Call Waiting – #43#

Switch off call waiting to avoid interruptions during important conversations.

Conclusion

Mobile phones — even the simplest ones — come with a powerful set of secret codes that give you control, transparency, and security. Whether you’re a tech enthusiast or just someone who wants to keep things tidy, these universal codes will come in handy.

From One Big App to Many Small Ones: A Developer’s Guide to Containers

Picture this: you’ve built a successful web application that started small but has grown into something amazing. Users love it, your team has expanded, and everything seems great. But there’s a problem lurking beneath the surface. Every time you want to add a new feature or fix a bug, it takes forever. Deploying updates feels like performing surgery on a patient who’s wide awake. Sound familiar?

If you’re nodding your head, you’re dealing with what developers call a “monolith” – an application where everything is bundled together in one massive codebase. While monoliths work great when you’re starting out, they can become a real headache as your app grows. The good news? There’s a proven way to solve this problem using something called containers.

What’s the Problem with Big Applications?

Think of a monolithic application like a huge department store where everything is connected. The clothing section shares the same checkout system as electronics, the inventory system controls everything from shoes to smartphones, and if you want to renovate the toy section, you might accidentally break the jewelry department.

In software terms, this means:

  • When one part of your app breaks, it can bring down everything else
  • Adding new features requires testing the entire application
  • Scaling becomes expensive because you have to scale everything, even if you only need more power for one feature
  • Different teams end up stepping on each other’s toes

Enter Containers: Your App’s New Best Friend

Containers are like moving each department of that massive store into its own building. Each department (or service) can operate independently, but they can still communicate with each other when needed. If the toy store needs renovation, the electronics store keeps running without interruption.

In technical terms, a container packages your application code along with everything it needs to run – like a lunch box that contains not just your sandwich, but also the plate, napkin, and utensil you need to eat it.

The Step-by-Step Journey: From Chaos to Order

1. Take a Good, Hard Look at What You Have

Before you start tearing apart your application, you need to understand what you’re working with. This is like creating a detailed floor plan of that massive department store before you start moving things around.

Spend time examining your codebase to identify different functional areas. Most applications naturally group into sections like:

  • User accounts and login systems
  • Payment processing
  • Email notifications
  • Data reporting
  • Content management

Draw these relationships out on paper or in a diagramming tool. You’ll be surprised how much this simple exercise reveals about your application’s structure.

2. Put Your Entire App in a Container First

Here’s where most people make a mistake: they immediately try to break everything apart. Don’t do that. Instead, take your entire monolithic application and put it in a container first.

This is like moving your entire department store into a standardized building before you start separating departments. It solves a huge problem called “environment inconsistency” – the dreaded “it works on my computer but not on yours” syndrome.

When your app runs the same way on your laptop, your colleague’s computer, and your production servers, you eliminate countless headaches and mysterious bugs.
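To make the “containerize first” step concrete, here is a minimal two-stage Dockerfile sketch for a Java monolith. The base images are real Docker Hub images, but the paths and jar name are assumptions to adapt to your project:

```dockerfile
# Build stage: compile the monolith inside the image for a reproducible build
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY . .
RUN mvn -q package -DskipTests

# Runtime stage: ship only the JRE and the built artifact
# (target/monolith.jar is an assumed name; use your real artifact)
FROM eclipse-temurin:17-jre
COPY --from=build /app/target/monolith.jar /app/monolith.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/monolith.jar"]
```

Once docker build and docker run behave the same on every machine, the “works on my computer” class of bugs largely disappears.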

3. Pick Your First Target Carefully

Now comes the fun part: choosing which piece to extract first. This decision is crucial and should be strategic, not random.

Look for parts of your application that are:

  • Self-contained (they don’t depend heavily on other parts)
  • Relatively simple
  • Not critical to your core business logic

Great first candidates include:

  • Authentication systems (login/logout functionality)
  • Email notification services
  • File upload handlers
  • Search functionality

Successfully extracting your first service is like winning your first game – it builds confidence and teaches you the process for future extractions.

4. Bring in the Orchestra Conductor

As you create more containers, managing them manually becomes like trying to conduct a symphony orchestra by shouting instructions. You need a proper conductor, and in the container world, that’s Kubernetes.

Kubernetes is a platform that automatically handles:

  • Starting and stopping your containers
  • Distributing traffic between multiple copies of the same service
  • Restarting failed containers
  • Scaling services up or down based on demand

For beginners, consider starting with simpler alternatives like Docker Compose for development, then moving to managed Kubernetes services offered by cloud providers like Google Cloud, AWS, or Microsoft Azure.
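As a small illustration of the Docker Compose starting point, a hypothetical docker-compose.yml wiring the shrinking monolith to one extracted service might look like this (every service name, image, and port is made up):

```yaml
# Illustrative only: adapt service names, images, and ports to your project
services:
  monolith:
    build: .                      # the containerized monolith from step 2
    ports:
      - "8080:8080"
    depends_on:
      - auth                      # extracted authentication service
  auth:
    image: myorg/auth-service:latest
    ports:
      - "8081:8081"
```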

5. Automate Everything from Day One

One of the biggest mistakes teams make is leaving deployment as a manual process. This is like insisting that every product in your store be moved by hand instead of using conveyor belts and automated systems.

Set up automated pipelines that:

  • Test your code automatically when you make changes
  • Build container images without human intervention
  • Deploy to testing environments instantly
  • Notify you if anything goes wrong

This automation eliminates human error and makes deployments so routine that they become boring – which is exactly what you want.

6. Untangle the Database Web

Databases are often the trickiest part of breaking up a monolith. In our department store analogy, this is like having a single cash register system that every department has been modifying over the years.

The key principle is simple: each service should own its data. Instead of letting multiple services directly access the same database tables, establish clear boundaries. If Service A needs data from Service B, it should ask politely through an API rather than sneaking into Service B’s database.

You don’t necessarily need separate physical databases immediately, but you must enforce these ownership rules in your code. For shared data like user sessions, move them to dedicated systems like Redis that are designed for sharing.

7. Make Your System Observable

When you had one big application, finding problems was like debugging issues in a single room. With multiple services, it’s like troubleshooting problems across an entire shopping mall. You need security cameras, intercoms, and monitoring systems everywhere.

Implement comprehensive observability from the start:

  • Logging: Ensure every service writes detailed logs about what it’s doing
  • Monitoring: Track metrics like response times, error rates, and resource usage
  • Health checks: Each service should be able to report whether it’s healthy
  • Distributed tracing: Follow requests as they travel between services

Tools like Prometheus for metrics collection, Grafana for dashboards, and Jaeger for tracing make this much easier than building everything from scratch.

8. Rinse and Repeat

With your first service successfully extracted and your tooling in place, you’re ready to continue the process. Each subsequent extraction becomes easier because:

  • You’ve learned the patterns and potential pitfalls
  • Your automation pipelines are already set up
  • Your team has gained confidence and experience
  • You have monitoring and observability systems in place

Gradually, your monolith shrinks while your collection of focused, independent services grows.

The Light at the End of the Tunnel

This transformation isn’t just about technology – it changes how your entire team works. Instead of everyone working on one massive codebase and stepping on each other’s toes, different teams can own different services. The frontend team can deploy their changes without waiting for the backend team to finish theirs. The payments team can scale their service during Black Friday without affecting the recommendation engine.

Common Pitfalls to Avoid

Don’t try to do everything at once: The temptation to rewrite everything from scratch is strong, but it’s usually a mistake. Incremental change is safer and more sustainable.

Don’t ignore the human element: This transformation affects your entire team’s workflow. Invest in training and make sure everyone understands the new processes.

Don’t forget about data: Plan your database separation strategy early. It’s often the most complex part of the entire process.

Don’t skip monitoring: In a distributed system, observability isn’t optional – it’s essential for maintaining sanity.

Is This Journey Worth It?

Absolutely, but only if you’re facing the problems that containers solve. If your current setup works fine and you’re not experiencing scaling or development velocity issues, there’s no rush to change.

However, if you’re struggling with slow deployments, difficulty scaling, or teams blocking each other’s progress, containers and microservices can be transformative. You’ll gain:

  • Faster development cycles
  • Independent scaling of different components
  • Better fault isolation (one broken service doesn’t kill everything)
  • Technology flexibility (different services can use different programming languages or databases)
  • Easier team organization and ownership

The journey from monolith to containers isn’t always smooth, but with careful planning and incremental execution, it’s entirely achievable. Every successful transformation starts with a single step, and every monolith has the potential to evolve into something better.

Your users will notice faster updates and more reliable service. Your developers will thank you for making their work more enjoyable and productive. And you’ll sleep better knowing that a problem in one part of your system won’t bring down the entire application.

The path forward is clear – it’s time to start containerizing.

Safeguarding Your Domain from Spam & Spoofing

Spam is something we all deal with. However, if you own the domain from which spam emails appear to originate, the consequences can be severe. Email spoofing is real, and every domain owner should take steps to ensure their domain is not exploited by spammers and scammers.

For business owners, it is crucial to implement basic security measures to prevent domain and email spoofing. Ensuring proper email authentication protocols are in place can help protect both your brand and your customers from malicious attacks.

A good writeup is available at the following links:

https://www.zoho.com/mail/help/adminconsole/spf-configuration.html

https://www.zoho.com/mail/help/adminconsole/dkim-configuration.html

https://www.zoho.com/mail/help/adminconsole/dmarc-policy.html

DKIM (DomainKeys Identified Mail)

DKIM is an email authentication method designed to detect forged sender addresses in email messages. It allows an organization to sign its outgoing emails with a cryptographic signature, which receiving mail servers can verify using the sender’s public key published in the domain’s DNS records.

How DKIM Works

  1. Signing Emails: The sending mail server generates a unique DKIM signature using a private key and embeds it in the email’s header.
  2. Publishing the Public Key: The domain owner publishes the corresponding public key as a TXT record in the domain’s DNS.
  3. Verifying Emails: The recipient’s mail server retrieves the public key from DNS and validates the email’s DKIM signature. If the signature is valid, the email is considered authentic.
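As an illustration, the public key ends up in DNS under <selector>._domainkey.<domain>. A hypothetical record (selector, domain, and key material are all made up, and the key is truncated) looks like:

```
; illustrative record: replace selector, domain, and key with your own
selector1._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEB..."
```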

SPF (Sender Policy Framework)

SPF is an email authentication protocol that helps prevent email spoofing by specifying which mail servers are authorized to send emails on behalf of a domain. It works by allowing domain owners to publish a TXT record in their DNS settings, listing the mail servers permitted to send emails using their domain.

How SPF Works

  1. DNS Record Setup: The domain owner publishes an SPF TXT record in their DNS settings, specifying allowed mail servers.
  2. Email Transmission: When an email is sent, the recipient’s server queries the sender’s domain for the SPF record.
  3. Verification: The recipient’s server checks if the sending server’s IP address matches the authorized list in the SPF record.
  4. Pass or Fail Decision: If the email comes from an authorized server, it is accepted. Otherwise, it may be rejected or marked as spam.
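As a sketch, an SPF record for a domain that sends mail from its own MX hosts plus a third-party provider (all names illustrative) could be:

```
; illustrative record: replace the domain and include host with your own
example.com.  IN  TXT  "v=spf1 mx include:spf.mailprovider.example ~all"
```

Here mx authorizes the domain’s own mail servers, include pulls in the provider’s authorized senders, and ~all soft-fails everything else.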

DMARC (Domain-based Message Authentication, Reporting, and Conformance)

DMARC is an email security protocol that builds upon SPF and DKIM to prevent email spoofing. It provides domain owners with visibility into email activity and enforces policies to reject or quarantine unauthorized emails.

How DMARC Works

  1. Email Authentication: DMARC relies on SPF and DKIM to authenticate emails. The recipient server checks if the sender’s domain has valid SPF and/or DKIM signatures.
  2. Policy Enforcement: Based on the domain’s DMARC policy (none, quarantine, or reject), the receiving server determines how to handle unauthenticated emails:
    • p=none: The email is delivered normally, but reports are generated.
    • p=quarantine: Suspicious emails are sent to the spam folder.
    • p=reject: Unauthenticated emails are rejected outright.
  3. Reporting Mechanism: DMARC provides reports (rua for aggregate reports, ruf for forensic reports) to help domain owners monitor email authentication activity and detect unauthorized usage.
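Putting the policy and reporting pieces together, a hypothetical DMARC record (domain and mailbox addresses are illustrative) might be:

```
; illustrative record: replace the domain and report addresses with your own
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; ruf=mailto:dmarc-forensics@example.com"
```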

Developer productivity tools for iPad

I have tried doing development on my iPad and I found some tools which I liked:
Python – Pythonista
Java – Jedona

I was able to write some interesting programs while on the go, on both my iPhone and especially on my iPad. However, I have realised that for better productivity it is better to use a remote desktop solution to log into my Windows machine and a VNC solution to log into my MacBook.

Windows App (previously called RD Client)
RealVNC Client

I have a static IP available for my home network; however, one can easily use a service like noip.com to get a static domain name working via the DDNS client built into most home routers. If the router does not support a VPN server, I configure a port forward on the router to RDP into my Windows machine or VNC into my MacBook. For better security, I have set up a VPN server on my router and make a VPN connection into my home network, effectively eliminating the need to open ports or do port forwarding.

Using the above approaches, I no longer have to carry my laptops around; I do some work directly on the iPad and the rest via a remote connection to my computers back home.

To manage my personal Linux servers on cloud I use Blink. Combined with Mosh I have found Blink to be a solid solution to SSH into my Linux boxes and do my work.

Blink

This type of setup has greatly reduced the amount of hardware I need to carry around. With one iPad and an iPhone I can work on most of my hobby projects and also do some of my office work.

Converting Chrome HAR file to CSV

It is sometimes easier to export a Chrome HAR file into CSV format so that it can be opened in Excel. Once opened in Excel, it is really easy to do further analysis on rather large HAR dumps.

The following solution is based on this Stack Overflow answer: https://stackoverflow.com/questions/31648078/how-to-import-har-file-to-excel

Install jq on Mac.

brew install jq

Run this command.

cat some_file.har | jq '[ "URL", "Time", "Wait time", "Status", "Body size","Content-Type", "Content-Encoding"],
    (.log.entries[] | [
        .request.url,
        .time,
        .timings.wait,
        .response.status,
        .response.content.size,
        .response.content.mimeType,
        .response.content.encoding
]) | @csv' | sed 's/\\"//g' | sed 's/"//g' > some_file.csv

This results in a CSV file which you can easily open in Excel.

To extract the path of the URL minus the file name the following Excel function can be used by adding a column:

=LEFT(A2,LEN(A2)-LEN(RIGHT(A2,LEN(A2)-FIND("@",SUBSTITUTE(A2,"/","@",LEN(A2)-LEN(SUBSTITUTE(A2,"/",""))),1))))
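If you prefer to stay in the terminal, the same “path minus file name” extraction is a one-liner with shell parameter expansion (the URL below is made up):

```shell
# ${url%/*} strips the shortest trailing "/..." segment, i.e. the file name
url="https://example.com/assets/js/app.min.js"   # example URL, not from the HAR
echo "${url%/*}/"    # prints https://example.com/assets/js/
```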

Viewing files in hex mode

Sometimes I need to look at files in their actual hexadecimal format. I use the combination of vi and hexdump to fulfill this requirement.

To simply view a file’s content in hex format I use this command on my Mac.

hexdump -C <file_name>

This is typically enough for the job at hand. However, there is another trick: using hexdump from within vi. I follow these steps:

  • Start vi by typing “vi <file_name>” on the command line.
  • Type “:%!hexdump -C”
  • This will replace the contents of the vi screen with the hex view of the current file.
  • Save the buffer to a new file (for example with “:w file.hex”) if you want to keep the hex view; writing over the original would replace its contents with the hex text.
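If hexdump happens to be missing, od (shipped with both macOS and Linux) gives a similar view; the z modifier appends the printable characters, much like hexdump -C. A quick check on a tiny sample file:

```shell
# -A x: hex offsets; -t x1z: one-byte hex groups plus printable characters
printf 'Hi\n' > sample.txt
od -A x -t x1z sample.txt    # shows the bytes 48 69 0a
```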

Duplicating a Git repository

I was working in a personal Git repository and I wanted to make a duplicate copy of it under a different organisation. The following approach worked well for me:

Open Terminal / Command Prompt and clone the existing source repo:
git clone --bare https://your_source_repository_url

Make sure you don’t skip the “--bare” portion of the command line.
You will see that a new folder is created from your source repository.


Move inside the folder:
cd your_source_repository

Now we need to push the files from your source repository into your target repository:
git push --mirror https://target_repository_url

Make sure you don’t skip the “--mirror” portion of the command line. Also, this command has to be executed from inside the folder of the source repository’s bare clone.
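The same two commands can be rehearsed end to end with throwaway local repositories before pointing them at real URLs; everything below stays inside the current directory, and all names are made up:

```shell
set -e
# 1. Create a disposable "source" repository containing one commit
git init -q source-demo
git -C source-demo -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

# 2. Bare-clone it (stand-in for: git clone --bare https://your_source_repository_url)
git clone -q --bare source-demo source-demo.git

# 3. Create an empty bare "target" and mirror-push into it
#    (stand-in for: git push --mirror https://target_repository_url)
git init -q --bare target-demo.git
git -C source-demo.git push -q --mirror ../target-demo.git

# The target now holds the same history as the source
git --git-dir=target-demo.git log --oneline
```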