Using a Console Cable (USB TTL Serial Cable) with Raspberry Pi 3

A console cable provides terminal access to a Raspberry Pi from, say, a laptop host without requiring a monitor to be connected to the Pi's HDMI port. So when the Pi is moved to a new network, we can still reach its console over the UART cable, find the new IP address, and then ssh in.

See the steps on this page for setting up the console cable for the Raspberry Pi:

Overview | Adafruit's Raspberry Pi Lesson 5. Using a Console Cable | Adafruit Learning System

MS Project Demo - Amigo Chatbot

This is the demo of my MS project work - Amigo Chatbot - for multi-cloud operations management.

The project sources are on GitHub at:

The project presentation is at:

Wit.AI Java Client

Docker Swarm: An Introduction

Docker Swarm is an orchestrator for Docker containers. It can be used to deploy Docker containers in a cluster. The cluster has one or more managers and multiple workers. The Docker engine on the cluster nodes has a built-in swarm mode (as of the Docker 1.12 release) which can be used to register nodes to the swarm cluster; no separate service registry such as Consul is required, because swarm mode maintains cluster state internally. No separate swarm agent container needs to run on each cluster node anymore, as was required before the 1.12 release. The same Docker Compose file can be used to deploy an application locally on one host and in a swarm cluster, so no separate deployment files are needed for development and production modes. This is an advantage compared to Kubernetes, where the docker-compose file covers development on a single host only and a Kubernetes-specific deployment file is needed for deployment in a Kubernetes cluster in production. Once a cluster…

Apache Kafka Message Producer/Consumer

Message Producer for Kafka

Message Consumer for Kafka

Java HttpClient using Unirest Java Library

Dockerize Java Application
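
A minimal sketch of such a Dockerfile, assuming the application is packaged as a runnable fat jar at target/app.jar (the paths, port, and base image here are illustrative, not the article's actual setup):

```dockerfile
# Lightweight JRE-only base image (illustrative choice)
FROM openjdk:8-jre-alpine
WORKDIR /app
# Assumes the build produces a runnable fat jar at target/app.jar
COPY target/app.jar app.jar
# Port is illustrative; expose whatever the application listens on
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Build and run with docker build -t myapp . and docker run -p 8080:8080 myapp.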

Introduction to Microservices Summary

Excerpted from

Chris Richardson, the author of the ebook, has summarized each article very nicely. I am copying the summary sections from the seven articles into one place for easy reference.

Building complex applications is inherently difficult. A monolithic architecture only makes sense for simple, lightweight applications; you will end up in a world of pain if you use it for complex applications. The Microservices architecture pattern is the better choice for complex, evolving applications despite the drawbacks and implementation challenges. For most microservices‑based applications, it makes sense to implement an API Gateway, which acts as a single entry point into a system. The API Gateway is responsible for request routing, composition, and protocol translation. It provides each of the application’s clients with a custom API. The API Gateway can also mask failures in the backend services by returning cached or d…

Slackbot in Java


The following snippet is for a Slackbot that looks for an <@BOT_ID> prefix in the incoming messages it receives from Slack, across all channels (or direct messages).
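
As a rough sketch of that matching logic (the class and method names here are hypothetical, not the actual bot's code): a Slack mention arrives as the literal text <@BOT_ID> at the start of the message, so the bot can test for and strip that prefix:

```java
public class SlackMentionParser {

    /** True if the message is addressed to the bot, i.e. starts with <@botId>. */
    public static boolean isAddressedToBot(String message, String botId) {
        if (message == null || botId == null) {
            return false;
        }
        return message.trim().startsWith("<@" + botId + ">");
    }

    /** Strips the leading mention so the remaining command text can be processed. */
    public static String stripMention(String message, String botId) {
        String prefix = "<@" + botId + ">";
        String trimmed = message.trim();
        return trimmed.startsWith(prefix)
                ? trimmed.substring(prefix.length()).trim()
                : trimmed;
    }

    public static void main(String[] args) {
        System.out.println(isAddressedToBot("<@U123ABC> list clusters", "U123ABC")); // true
        System.out.println(stripMention("<@U123ABC> list clusters", "U123ABC"));     // list clusters
    }
}
```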

Getting Started with Apache Kafka in Docker

To run Apache Kafka in docker container:

Clone the kafka-docker repository and edit docker-compose.yml to provide the KAFKA_ADVERTISED_HOST_NAME and KAFKA_ZOOKEEPER_CONNECT host IPs. This is the IP of the host on which we are going to run the Docker containers for Kafka and Zookeeper.

Start a cluster in detached mode: docker-compose up -d. It will start 2 containers:
kafkadocker_kafka_1 - with Kafka running at 9092, mapped to 9092 of localhost
kafkadocker_zookeeper_1 - with Zookeeper running at 2181, mapped to 2181 of localhost

To start a cluster with 2 brokers: docker-compose scale kafka=2. You can use docker-compose ps to show the running instances. If you want to add more Kafka brokers, simply increase the value passed to docker-compose scale kafka=n. If you want to customise any Kafka parameters, simply add them as environment variables in docker-compose.yml. For example: to inc…
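
For reference, a minimal docker-compose.yml along these lines (a sketch assuming the wurstmeister/kafka-docker images; the host IP shown is a placeholder that must be replaced with your own):

```yaml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      # Placeholder: set to the IP of the host running these containers
      KAFKA_ADVERTISED_HOST_NAME: 192.168.99.100
      KAFKA_ZOOKEEPER_CONNECT: 192.168.99.100:2181
```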

Running AWS CLI using docker image

For my project I needed to execute AWS commands without installing awscli or using an SDK, and to do it in a cloud-provider-agnostic manner. The project itself deserves a separate article, but this piece, building awscli into a Docker image (using Alpine Linux, which is very lightweight) and executing it from Java code, seems quite useful by itself outside the context of my master's project (which, by the way, is a chatbot for cloud operations management named Amigo).

Docker hub registry entry for the awscli image:

Docker image with awscli installed. It uses Alpine Python 2.7, and hence the size of the image is approximately 115 MB.
This enables one to execute awscli commands without having to install awscli locally. Usage:

$ docker pull sjsucohort6/docker_awscli

Example - the command ecs list-clusters can be executed as:

$ docker run -it --rm -e AWS_DEFAULT_REGION='' -e AWS_ACCESS_KEY_ID='' -e AWS_SECRET_ACCESS_KE…

Kubernetes Basics

From Building Microservice Systems with Docker and Kubernetes by Ben Straub
Kubernetes:
- Runs docker containers
- Powerful label matching system for control/grouping/routing traffic
- Spans across hosts - converts a set of computers into one big one
- One master (that sends control commands to minions to execute) - multiple minions (that run docker containers)
- POD - set of docker containers (often just one), always on the same host. For each POD there is one IP address.
- Replication controller - manages the lifecycle of PODs which match the labels associated with the RC.
- Services - load balance traffic to PODs based on matching labels. For example, a Service (name = frontend) will route traffic to PODs with name = frontend. The PODs may be managed by different RCs, so traffic is routed by the Service named frontend to both old and new version PODs. Once the rollover to the new version is…
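
The label-matching idea can be sketched in plain Java (the class and method names are illustrative, not the Kubernetes API): a pod matches a selector when it carries every key=value pair the selector asks for, so a service selecting name=frontend routes to both old and new version pods:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class LabelSelectorDemo {

    /** A pod matches a selector if it carries every key=value pair in the selector. */
    public static boolean matches(Map<String, String> podLabels, Map<String, String> selector) {
        return selector.entrySet().stream()
                .allMatch(e -> e.getValue().equals(podLabels.get(e.getKey())));
    }

    /** Names of the pods a service with the given selector would route traffic to. */
    public static List<String> selectPods(Map<String, Map<String, String>> pods,
                                          Map<String, String> selector) {
        List<String> selected = new ArrayList<>();
        pods.forEach((name, labels) -> {
            if (matches(labels, selector)) {
                selected.add(name);
            }
        });
        return selected;
    }

    public static void main(String[] args) {
        // Two frontend pods at different versions (e.g. managed by different RCs)
        Map<String, Map<String, String>> pods = Map.of(
                "frontend-old", Map.of("name", "frontend", "version", "v1"),
                "frontend-new", Map.of("name", "frontend", "version", "v2"),
                "backend-1",    Map.of("name", "backend",  "version", "v1"));
        // Selecting name=frontend picks up both frontend pods, old and new
        System.out.println(selectPods(pods, Map.of("name", "frontend")));
    }
}
```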

Using dnsmasq on Mac OSX

Excerpted from

dnsmasq makes it easy to redirect development sites to localhost. This is very useful when working on a dev box and running docker containers.

Instead of having to edit the /etc/hosts file every time, we can suffix .dev to any name of our choosing and it will be mapped to the localhost address.

Following are the steps to get this working on Mac:
# update brew
➜  ~  brew up
➜  ~  brew install dnsmasq
➜  ~ cp $(brew list dnsmasq | grep /dnsmasq.conf.example$) /usr/local/etc/dnsmasq.conf
➜  ~ sudo cp $(brew list dnsmasq | grep /homebrew.mxcl.dnsmasq.plist$) /Library/LaunchDaemons/
➜  ~ sudo launchctl load /Library/LaunchDaemons/homebrew.mxcl.dnsmasq.plist
# edit dnsmasq.conf and add address=/dev/127.0.0.1 -- this will cause any address ending in .dev to be redirected to localhost.

➜  ~ vi /usr/local/etc/dnsmasq.conf
# restart dnsmasq
➜  ~ sudo launchctl stop homebrew.mxcl.dnsm…

Introduction to Docker

Notes from Introduction to Docker by Andrew Tork Baker
Two main parts of Docker - the Docker engine and Docker Hub.
Essential docker commands:
- docker run [options] image [command] [args…]
- docker run busybox /bin/echo "hello world"
- docker run -it ubuntu /bin/bash - gives an interactive shell
- docker run -p 8000:80 atbaker/nginx-example - serves at http://localhost:8000
- docker run -d -p 8000:80 atbaker/nginx-example - runs the container in the background (detached mode)
- docker run -d -p 8000:80 --name webserver atbaker/nginx-example - to name containers
- docker images - lists images; docker images -q will list all image ids alone
- docker ps - shows active/running containers
- docker ps -a - shows all containers (even ones we exited)
- docker ps -a -q - shows all container ids
- docker stop / docker start
- docker rm ; docker rm -f - to remove even a running container
- docker rm -f $(docker ps -a…

Apache Kafka - An Overview

Excerpted from:

Message Oriented Middleware (MOM) such as Apache Qpid, RabbitMQ, Microsoft Message Queue, and IBM MQ Series was used for exchanging messages across various components. While these products are good at implementing the publisher/subscriber (Pub/Sub) pattern, they are not specifically designed for dealing with large streams of data originating from thousands of publishers. Most MOM software has a broker that exposes the Advanced Message Queuing Protocol (AMQP) for asynchronous communication. Kafka is designed from the ground up to deal with millions of firehose-style events generated in rapid succession. It guarantees low-latency, “at-least-once” delivery of messages to consumers. Kafka also supports retention of data for offline consumers, which means that the data can be processed either in real time or in offline mode. Kafka is designed to be a distributed commit log. Much like relational databases, it can provide …

Book Notes: I love logs by Jay Kreps

This book is by Jay Kreps, a software architect at LinkedIn and the primary author of the Apache Kafka and Apache Samza open source software.

Following are some of the salient points from the book:

The use of logs in much of the rest of this book will be variations on the two uses in database internals:
- The log is used as a publish/subscribe mechanism to transmit data to other replicas.
- The log is used as a consistency mechanism to order the updates that are applied to multiple replicas.

So this is actually a very intuitive notion: if you feed two deterministic pieces of code the same input log, they will produce the same output in the same order. You can describe the state of each replica by a single number: the timestamp for the maximum log entry that it has processed. Two replicas at the same timestamp will be in the same state. Thus, this timestamp combined with the log uniquely captures the entire state of the replica. This gives a discrete, event-driven notion of time that, unlike the machi…
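
That replica argument can be illustrated with a toy sketch (ReplicatedCounter is an invented example, not from the book): two deterministic replicas fed the same log reach the same state, and the index of the last applied entry fully describes where each replica is:

```java
import java.util.List;

/** Toy deterministic replica: its whole state is a running sum of log entries. */
public class ReplicatedCounter {
    private long state = 0;
    private int lastApplied = -1; // index of the last log entry applied (the "timestamp")

    /** Apply any log entries this replica has not yet processed, in log order. */
    public void apply(List<Long> log) {
        for (int i = lastApplied + 1; i < log.size(); i++) {
            state += log.get(i);
            lastApplied = i;
        }
    }

    public long state()      { return state; }
    public int lastApplied() { return lastApplied; }

    public static void main(String[] args) {
        List<Long> log = List.of(5L, -2L, 7L);
        ReplicatedCounter r1 = new ReplicatedCounter();
        ReplicatedCounter r2 = new ReplicatedCounter();
        r1.apply(log);
        r2.apply(log); // same input log => same output state, in the same order
        System.out.println(r1.state() == r2.state()); // true
        System.out.println(r1.lastApplied());         // 2
    }
}
```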