Using SSH to clone a Git repository with multiple private keys

I prefer to use multiple private keys and avoid reusing the same private key across services. Recently I decided to switch from HTTPS-based Git clones of my Bitbucket repositories to SSH-based clones. So I created a new key pair, added the public key to bitbucket.org, and expected my git clone to work. But it didn’t.

git clone git@bitbucket.org:mybitbucketid/mygitrepository.git
Cloning into 'mygitrepository'...
conq: repository access denied.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

I knew I had to point the Git client to my private key, so I created the following entry in my ~/.ssh/config file.

Host bitbucketrepo
    HostName bitbucket.org
    IdentityFile ~/.ssh/bitbucket_private_key
    User git

Now I felt sure it would work. I tried again, and it still didn’t.

git clone bitbucketrepo:mybitbucketid/mygitrepository.git
Cloning into 'mygitrepository'...
conq: repository access denied.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

I was confused as to why it was not working. After further research I found this link, and I updated the entry in my ~/.ssh/config file to this.

Host bitbucketrepo
    HostName bitbucket.org
    IdentityFile ~/.ssh/bitbucket_private_key
    IdentitiesOnly yes
    User git

Now I tried again. This time it worked perfectly.

git clone bitbucketrepo:mybitbucketid/mygitrepository.git
Cloning into 'mygitrepository'...
warning: You appear to have cloned an empty repository.
Checking connectivity... done.

Well, the addition of the “IdentitiesOnly yes” line in my config file did the trick. By default, the SSH client offers the identity files with the default names (such as ~/.ssh/id_rsa) as well as any keys loaded into the ssh-agent, so one of those may be tried before the key named in IdentityFile, which in my case was ~/.ssh/bitbucket_private_key. With “IdentitiesOnly yes” I explicitly asked my SSH client to use only the identity file specified in the config and nothing else, and it worked like a charm.
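Since the whole point of this exercise is multiple keys, the same pattern extends naturally with one Host block per service (the github entry below is purely illustrative, not from my actual config):

Host bitbucketrepo
    HostName bitbucket.org
    IdentityFile ~/.ssh/bitbucket_private_key
    IdentitiesOnly yes
    User git

Host githubrepo
    HostName github.com
    IdentityFile ~/.ssh/github_private_key
    IdentitiesOnly yes
    User git

And to see exactly which key the client offers, a verbose test connection helps (Bitbucket responds to a successful ssh -T with a short message identifying the account):

ssh -vT bitbucketrepo

The -v output lists each identity as it is tried, which makes it easy to spot an unwanted ~/.ssh/id_rsa being offered ahead of the Bitbucket key.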

Awesome tmux

I have always found moving between tabbed interfaces cumbersome in my Mac terminal. Finally, while debugging 7 different log files sitting on 7 different servers, I felt enough was enough and started looking for a solution. Enter ‘tmux’.

tmux is a terminal multiplexer which supports multiple windows inside a single terminal session and allows me to create horizontal as well as vertical panes. So I can watch the logs sitting on different servers by tailing each one in its own pane, and still keep one pane dedicated to executing commands. I found a very good tutorial at this and this location.
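For reference, here are the handful of default key bindings that cover this workflow (all use the default Ctrl-b prefix; the session name and log path below are placeholders):

tmux new -s logs                # start a named session
# inside tmux, press Ctrl-b followed by:
#   %      split the current pane left/right
#   "      split the current pane top/bottom
#   arrow  move to the pane in that direction
#   d      detach (reattach later with: tmux attach -t logs)
ssh server1 'tail -f /var/log/app.log'    # one such tail per pane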

Now I don’t want to go back to the old way of having multiple tabs for multiple logs. Agreed, tabs in a terminal have their own place and usage, but for this particular case, where I am doing development and debugging multiple servers, I don’t have time for a tabbed interface; ‘tmux’ is the way to go.

Fixing “Write failed: Broken pipe” on Yosemite

After the latest update to my MacBook Pro I noticed that my SSH connections started dropping if I kept them idle for a few minutes. Each time the session terminated with the text “Write failed: Broken pipe”. The connection would hang for a long while before this error message was shown.

This was not the behavior before, so I suspected that a recent update might have changed some configuration, and I started looking around. I found an article which explained how to configure my MacBook.

Based on the inputs provided in the article I edited the file /etc/ssh_config using the following command:
sudo vi /etc/ssh_config

And changed/uncommented the following lines:
Host *
    ServerAliveInterval 60
    TCPKeepAlive yes
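The same keep-alive settings also work in the per-user ~/.ssh/config if you prefer not to touch the system-wide file with sudo (the ServerAliveCountMax line is my own addition; it caps how many unanswered keep-alive probes are tolerated before the client disconnects):

Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3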

I learnt another point from this article: during an SSH session, pressing “~” followed by “.” terminates the connection immediately. The escape character is only recognized at the beginning of a line, so if it doesn’t work, press Enter first and try again.
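The “~.” sequence is one of several client-side escapes; “~?” prints the full list:

~.    terminate the connection
~?    display a list of escape characters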

MySQL Slowdown With Large Inserts

On a plain vanilla Windows system with approximately 6 GB of RAM and a XAMPP-based MySQL installation, I found that as the number of inserts increased, the MySQL inserts became slower and slower. Ultimately it came down to one insert every 2 seconds!

This was totally unacceptable, so I looked around for a solution to this problem, and I found one here.

Based on the article above I started looking into the MySQL configuration and I found the following entries:

#innodb_log_arch_dir = "D:/xampp/mysql/data"
## You can set .._buffer_pool_size up to 50 - 80 %
## of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 16M
innodb_additional_mem_pool_size = 2M
## Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50

So my database is using the InnoDB storage engine, but the buffer pool size is only 16 MB, with an additional memory pool of just 2 MB. In my opinion that is far too little, keeping in mind the comments provided in the file.

## You can set .._buffer_pool_size up to 50 - 80 %
## of RAM but beware of setting memory usage too high

So on a 6 GB RAM system I could set close to 5 GB as the buffer pool size, but this is a development machine and I wanted the developer to have decent performance for other work as well. So I tweaked the configuration a little, and now it looks like this:

#innodb_log_arch_dir = "D:/xampp/mysql/data"
## You can set .._buffer_pool_size up to 50 - 80 %
## of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 1000M
innodb_additional_mem_pool_size = 250M
## Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 50M
innodb_log_buffer_size = 80M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50

At the moment the innodb_buffer_pool_size is 1000 MB with an additional innodb_additional_mem_pool_size of 250 MB.
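A quick way to confirm the running server actually picked up the new values (this assumes the XAMPP mysql client is on the PATH; note that the value is reported in bytes):

mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"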

I also increased innodb_log_file_size to 50 MB and innodb_log_buffer_size to 80 MB (I simply multiplied the default values by 10, rather than following the file’s suggestion of 25% of the buffer pool size).
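One caveat the comments in the file don’t mention (this comes from the MySQL documentation, not the article): on older MySQL versions InnoDB refuses to start if the existing redo log files don’t match the new innodb_log_file_size, so after a clean shutdown the old ib_logfile* files in the data directory need to be moved aside so they can be recreated at the new size:

move D:\xampp\mysql\data\ib_logfile0 D:\xampp\mysql\data\ib_logfile0.bak
move D:\xampp\mysql\data\ib_logfile1 D:\xampp\mysql\data\ib_logfile1.bak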

I started the inserts again into the same table, using the same program with no other changes, and I see a marked improvement in performance. The insert operation has not yet slowed down; I am able to insert 10-14 records per second.

Update after 3 hours:
I observed that after inserting close to 110000 records the inserts slowed down to 4-5 per second. I have read an article which summarizes how InnoDB actually works. I think I will be tweaking the configuration a little bit more.
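One candidate for that next tweak (a sketch of the trade-off, not something I have measured yet): innodb_flush_log_at_trx_commit is currently 1, which flushes the log to disk at every commit. Setting it to 2 flushes roughly once per second instead, which typically speeds up insert-heavy workloads at the cost of possibly losing up to a second of transactions in a crash:

innodb_flush_log_at_trx_commit = 2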

Searching for file containing keyword in Linux

grep fits the bill for all my requirements for efficiently searching files containing text on Linux and Mac. The following commands detail my use cases for grep.

Search for a pattern
grep -rnw 'folder' -e 'text'

-r stands for recursive.
-n prints the line number of each match.
-w stands for whole-word match.

Example: grep -rnw . -e 'import'
This searches recursively for whole-word matches of the text ‘import’ in the current directory.
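grep can also narrow the search to particular file types with an --include glob (the '*.py' pattern here is just an example):

grep -rnw . --include='*.py' -e 'import'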

Found this tip at stackoverflow.com.

Debugging HTTP Traffic in Mac

I use Fiddler extensively on Windows for debugging HTTP traffic in my web applications. However, Fiddler is not available on Mac. After some searching I found the following tools which can do the job:

Chrome
In Chrome just type “chrome://net-internals/#http2” in the address bar and you will be able to see the HTTP/2 sessions the browser has open. Not sure when it got added to Chrome, but it is a very simple yet powerful utility.

Charles Proxy
This is a Java-based commercial utility which offers almost Fiddler-like functionality. It works on Windows, Linux, and Mac, so I think it is a good deal for any developer.

Update: On using Charles Proxy I found it dead simple to use, and it fulfilled all the requirements I had on my Mac. The unlicensed version has a 30-minute session limit, which seems fair. It is completely worth the $50 price tag.