Author Archives: cyberaka

About cyberaka

I am an experienced Senior Solution Architect with a proven history of designing robust and highly available Java based solutions. I am skilled in architecting, designing and developing scalable, highly available, fault tolerant and concurrent systems which can serve high volume traffic. I have hands-on experience in designing RESTful microservices architectures using Spring Boot, Spring Cloud, MongoDB, Java 8, Redis and Kafka. I like TDD (JUnit), BDD (Cucumber) and DDD based development as required for my projects. I have used AWS primarily for cloud based deployment, and I like developing cloud enabled POCs as a hobby in my spare time. I have designed and developed CI/CD pipelines using Jenkins and leveraged Docker and Kubernetes for containerizing and deploying some of my applications. I am highly experienced in creating high performing technical teams from scratch. As an ex-entrepreneur I am very much involved in the business side of the IT industry. I love interacting with clients to understand their requirements and get the job done.

Converting Chrome HAR file to CSV

It is sometimes easier to export a Chrome HAR file into CSV format so that it can be opened in Excel. Once opened in Excel, it is really easy to do further analysis of rather large HAR dumps.

The following solution is based on this link: https://stackoverflow.com/questions/31648078/how-to-import-har-file-to-excel

Install jq on Mac.

brew install jq

Run this command.

cat some_file.har | jq '[ "URL", "Time", "Wait time", "Status", "Body size","Content-Type", "Content-Encoding"],
    (.log.entries[] | [
        .request.url,
        .time,
        .timings.wait,
        .response.status,
        .response.content.size,
        .response.content.mimeType,
        .response.content.encoding
]) | @csv' | sed 's/\\"//g' | sed 's/"//g' > some_file.csv

This results in a CSV file which you can easily open in Excel.

To extract the path of the URL minus the file name, the following Excel function can be used in an added column:

=LEFT(A2,LEN(A2)-LEN(RIGHT(A2,LEN(A2)-FIND("@",SUBSTITUTE(A2,"/","@",LEN(A2)-LEN(SUBSTITUTE(A2,"/",""))),1))))
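For example, applied to a (hypothetical) URL like https://example.com/static/js/app.js in cell A2, the formula returns https://example.com/static/js/. It works by substituting the last “/” with a marker character to find its position, then keeping everything up to and including that last slash.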

Viewing files in hex mode

Sometimes I need to look at files in their actual hexadecimal format. I use the combination of vi and hexdump to fulfill this requirement.

To simply view a file’s content in hex format, I use this command on my Mac.

hexdump -C <file_name>
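
For example, piping a short string through it shows the offset, the raw bytes and their ASCII rendering side by side:

echo "Hi" | hexdump -C

This prints the bytes 48 69 0a, that is the two characters plus the trailing newline added by echo, alongside their ASCII view.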

This typically is enough for the job at hand. However, there is also another trick: using hexdump from within vi. I follow these steps:

  • Start vi by typing “vi <file_name>” on the command line.
  • Type “:%!hexdump -C”
  • This will replace the contents of the vi screen with the hex view of the current file.
  • Save the output to a new file if you want to keep it (for example “:w <file_name>.hex”); saving over the original file would replace its contents with the hex text. Alternatively, skip vi entirely as shown below.
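
If all you want is the hex view on disk, plain shell redirection achieves the same thing without opening vi (the .hex extension is just a convention I am assuming here):

hexdump -C <file_name> > <file_name>.hex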

Duplicating a Git repository

I was working in a personal Git repository and I wanted to make a duplicate copy of it under a different organisation. The following approach worked well for me:

Open Terminal / Command Prompt and clone the existing source repo:
git clone --bare https://your_source_repository_url

Make sure you don’t skip the “--bare” portion of the command line.
You will see that a new folder is created from your source repository.


Move inside the newly created folder (note that a bare clone’s folder name typically ends in .git):
cd your_source_repository.git

Now we need to push the files from your source repository into your target repository:
git push --mirror https://target_repository_url

Make sure you don’t skip the “--mirror” portion of the command line. Also, this command has to be executed from inside the bare clone created in the previous step.
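
Once the mirror push succeeds, the bare clone has served its purpose and can be deleted (folder name as above):

cd ..
rm -rf your_source_repository.git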

Localhost Tunnels

I have started using ngrok for setting up localhost tunnels directly from my laptop. The basic idea is to start a web server on localhost and use ngrok to set up a tunnel to the internet. This is a very easy way to test local code and get it validated by other teammates. I will be looking out for an enterprise offering of this wonderful tool.
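For example, assuming a local web server is already listening on port 8080 (the port here is just an assumption, use whatever your server binds to), a single command exposes it:

ngrok http 8080

ngrok then prints a public forwarding URL which tunnels to localhost:8080 and can be shared with teammates.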

A very simple use case of ngrok is demonstrated in this video, which is a tutorial for webhooks.

Bash Shell – Folder as tree

I like using the tree command in Linux; thankfully it has been ported to Mac as well, and it is quite easy to get a tree representation of the files and folders inside a directory on a shell. An article on Stack Overflow talks about it aptly. The original site is here.

In case you are using brew then it is quite easy peasy as shown below:

brew install tree

The following commands generate a tree view:

Generates a tree of all files and folders in the current directory:

tree

Generates a tree containing only folders in the current directory:

tree -d

Generates a tree containing only folders, up to 3 levels of depth:

tree -d -L 3 .

In case you want to use the vanilla shell without installing anything, the following works (it doesn’t generate a tree though):

find . -maxdepth 3 -type d

Code Structure Analysis Tool

I received a dump of a Java codebase which had multiple modules, and I needed to analyse its structure. The following tool did quite a good job:

https://github.com/gdela/socomo

Basically the idea is to run it inside a Java project using a Maven command, and it creates an HTML file which shows the high-level structure of the code.
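
From memory, the invocation is a single Maven goal run from the project root; I am quoting the goal name from memory, so verify it against the socomo README before relying on it:

mvn pl.gdela:socomo-maven:analyze

This should leave a socomo.html next to the pom.xml which can be opened directly in a browser.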

Note: Stan4J is also a very good tool which does a similar job but allows deeper analysis (up to 500 classes only).

Mongodump – Unrecognized field ‘snapshot’

I recently upgraded the Mongo instance on my laptop, and I was trying to take a backup of my Mongo database on a remote server. I tried using my usual mongodump commands but saw a strange error:

Failed: error reading collection: Failed to parse: { find: "Files", skip: 0, snapshot: true, $readPreference: { mode: "secondaryPreferred" }, $db: "app_db" }. Unrecognized field 'snapshot'.

On some googling I found out that this error can happen due to a version mismatch between the Mongo server and the Mongo client. This Stack Overflow article explains the problem and its solution as well.

In the end I simply had to add “--forceTableScan” to my mongodump command to get it to work properly.
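
For reference, the working command looked something like the following; the host and database names here are placeholders rather than my real ones:

mongodump --host remote_host --db app_db --forceTableScan --out ./backup

The --forceTableScan flag prevents mongodump from adding the snapshot option to its queries, which is exactly the field the server was refusing to parse.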

Throttling & Tuning Spring Boot

One of my Spring Boot projects was battling with an overloaded CPU and unresponsive or slow server responses at times when there was more traffic. I had explored and implemented caching, but my problem was with excessive connections coming in and the server itself becoming slow. I could have set up multiple instances and done some kind of auto-scaling, but given the limited budget and hardware I wanted to put hard limits on my Spring Boot app: how much traffic it can take in, and when it can give up gracefully (there is no shame in rejecting traffic with HTTP status 503 if the server infrastructure is overloaded).

I found a blog post from Netflix on how to tune Apache Tomcat, and another article on how to tune the Rest controller code itself to implement a rudimentary rate limiter. I was glad to find the RateLimiter implementation in the Google Guava library, which I ultimately ended up using (for now). However, I think the annotation-driven RateLimiter is also a very good solution which is certainly very powerful, and I will take it out for a spin sometime in the near future.

The basic lessons learnt from this exercise:

  • Tweak Tomcat and keep a watch on the acceptCount parameter, which effectively limits how much traffic reaches your Rest controller (see the sketch below).
  • Use a RateLimiter on your hot APIs (the ones with higher latency) and don’t let your application get abused beyond a limit.
  • Scale horizontally if the limits set above result in a lot of traffic getting rejected.
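
For the Tomcat part, the knobs live in application.properties. A minimal sketch, assuming Spring Boot 2.3+ property names (older versions expose the last one as server.tomcat.max-threads); the values are illustrative, not recommendations:

# Connections queued when all worker threads are busy; beyond this, new connections are refused.
server.tomcat.accept-count=50
# Hard cap on connections Tomcat will keep open at any one time.
server.tomcat.max-connections=2000
# Worker threads actually serving requests.
server.tomcat.threads.max=200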