Tuesday, May 16, 2017

Using a Console Cable (USB TTL Serial Cable) with Raspberry Pi 3

A console cable provides terminal access to the Raspberry Pi from a host machine (say, a laptop) without connecting a monitor to the Pi's HDMI port. So when the Pi is moved to a new network, we can still reach its console over the UART cable and find its new IP address (and then ssh in over the network if needed).



Please see the steps on the page below for setting up a console cable for the Raspberry Pi.



Overview | Adafruit's Raspberry Pi Lesson 5. Using a Console Cable | Adafruit Learning System
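
Once the cable's USB serial driver is installed on the host, the console can typically be opened with a terminal program such as screen. A minimal sketch (the device name below is a common macOS example and varies by OS and driver; on Linux it is often /dev/ttyUSB0):

$ screen /dev/cu.usbserial 115200

Note that on the Pi 3 the PL011 UART is used by Bluetooth, so the serial console on the GPIO header may first need to be enabled via raspi-config or enable_uart=1 in /boot/config.txt.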




MS Project Demo - Amigo Chatbot





This is a demo of my MS project work - Amigo Chatbot - a chatbot for multi-cloud operations management.



The project sources are on GitHub at: https://github.com/sjsucohort6/amigo-chatbot



The project presentation is at: https://github.com/sjsucohort6/amigo-chatbot/blob/master/amigo.pptx






Saturday, March 11, 2017

Wit.AI Java Client


Docker Swarm: An Introduction

Docker Swarm is an orchestrator for Docker containers.

  1. It can be used to deploy Docker containers across a cluster.
  2. The cluster has one or more managers and multiple workers.
  3. The Docker engine on each cluster node has swarm mode built in (as of the Docker 1.12 release), which is used to register nodes with the swarm cluster; no separate service registry is required, since swarm mode keeps its own internal Raft-based state store. No separate swarm agent container needs to run on each cluster node anymore, as was required before the 1.12 release.
  4. The same Docker Compose file can be used to deploy an application locally on one host and in a swarm cluster, so there are no separate deployment files for development and production. This is an advantage over Kubernetes, where a docker-compose file covers development on a single host only and a Kubernetes-specific deployment file is needed to deploy to a Kubernetes cluster in production.
  5. Once a cluster is provisioned, Docker Swarm decides which node each service container gets deployed on.
  6. Docker Swarm also ensures that the number of container replicas stays constant at any time. If a node goes down or a container crashes, it starts a new container for the application service so that the total number of replicas remains the same as originally configured. This is similar to what an Amazon Auto Scaling Group (ASG) does. (A command sketch follows this list.)
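
A minimal sketch of the swarm-mode CLI (IPs, tokens, and service names below are illustrative):

# On the manager node: create the swarm (prints the worker join command).
$ docker swarm init --advertise-addr <MANAGER_IP>

# On each worker node: join using the token printed above.
$ docker swarm join --token <TOKEN> <MANAGER_IP>:2377

# Deploy a service with 3 replicas; swarm schedules the containers and
# restarts them on failure to keep the replica count at 3.
$ docker service create --name web --replicas 3 -p 80:80 nginx

# With Docker 1.13+, the same compose file (format v3) can be deployed
# to the swarm as a stack:
$ docker stack deploy -c docker-compose.yml myapp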

Tuesday, February 28, 2017

Introduction to Microservices Summary

Excerpted from https://www.nginx.com/blog/introduction-to-microservices/

Chris Richardson, the author of the ebook, has very nicely summarized each article. I am copy-pasting the summary sections from the seven articles and putting them in one place for easy reference.

  1. Building complex applications is inherently difficult. A Monolithic architecture only makes sense for simple, lightweight applications. You will end up in a world of pain if you use it for complex applications. The Microservices architecture pattern is the better choice for complex, evolving applications despite the drawbacks and implementation challenges.
  2. For most microservices‑based applications, it makes sense to implement an API Gateway, which acts as a single entry point into a system. The API Gateway is responsible for request routing, composition, and protocol translation. It provides each of the application’s clients with a custom API. The API Gateway can also mask failures in the backend services by returning cached or default data.
  3. Microservices must communicate using an inter‑process communication mechanism. When designing how your services will communicate, you need to consider various issues: how services interact, how to specify the API for each service, how to evolve the APIs, and how to handle partial failure. There are two kinds of IPC mechanisms that microservices can use, asynchronous messaging and synchronous request/response.
  4. In a microservices application, the set of running service instances changes dynamically. Instances have dynamically assigned network locations. Consequently, in order for a client to make a request to a service it must use a service‑discovery mechanism.
  5. A key part of service discovery is the service registry. The service registry is a database of available service instances. The service registry provides a management API and a query API. Service instances are registered with and deregistered from the service registry using the management API. The query API is used by system components to discover available service instances.
  6. There are two main service‑discovery patterns: client-side discovery and server-side discovery. In systems that use client‑side service discovery, clients query the service registry, select an available instance, and make a request. In systems that use server‑side discovery, clients make requests via a router, which queries the service registry and forwards the request to an available instance.
  7. There are two main ways that service instances are registered with and deregistered from the service registry. One option is for service instances to register themselves with the service registry, the self‑registration pattern. The other option is for some other system component to handle the registration and deregistration on behalf of the service, the third‑party registration pattern.
  8. In some deployment environments you need to set up your own service‑discovery infrastructure using a service registry such as Netflix Eureka, etcd, or Apache ZooKeeper. In other deployment environments, service discovery is built in. For example, Kubernetes and Marathon handle service instance registration and deregistration. They also run a proxy on each cluster host that plays the role of server‑side discovery router.
  9. In a microservices architecture, each microservice has its own private datastore. Different microservices might use different SQL and NoSQL databases. While this database architecture has significant benefits, it creates some distributed data management challenges. The first challenge is how to implement business transactions that maintain consistency across multiple services. The second challenge is how to implement queries that retrieve data from multiple services.
  10. For many applications, the solution is to use an event‑driven architecture. One challenge with implementing an event‑driven architecture is how to atomically update state and how to publish events. There are a few ways to accomplish this, including using the database as a message queue, transaction log mining, and event sourcing.
  11. Deploying a microservices application is challenging. There are tens or even hundreds of services written in a variety of languages and frameworks. Each one is a mini‑application with its own specific deployment, resource, scaling, and monitoring requirements. There are several microservice deployment patterns including Service Instance per Virtual Machine and Service Instance per Container. Another intriguing option for deploying microservices is AWS Lambda, a serverless approach.
  12. The process of migrating an existing application into microservices is a form of application modernization. You should not move to microservices by rewriting your application from scratch. Instead, you should incrementally refactor your application into a set of microservices. There are three strategies you can use: 
    1. implement new functionality as microservices; 
    2. split the presentation components from the business and data access components; and 
    3. convert existing modules in the monolith into services. 
Over time the number of microservices will grow, and the agility and velocity of your development team will increase.
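
To make the client-side discovery pattern from points 6 and 7 concrete, here is a minimal Java sketch; the ServiceRegistry interface is hypothetical and stands in for a real registry client such as Eureka's or Consul's:

import java.util.List;
import java.util.Random;

// Hypothetical registry client; a real implementation would wrap
// Eureka, Consul, etcd, or ZooKeeper.
interface ServiceRegistry {
    List<String> lookup(String serviceName); // returns "host:port" entries
}

public class ClientSideDiscovery {
    private final ServiceRegistry registry;
    private final Random random = new Random();

    public ClientSideDiscovery(ServiceRegistry registry) {
        this.registry = registry;
    }

    // Client-side discovery: query the registry, pick an available
    // instance (random load balancing here), and build the request URL.
    public String resolve(String serviceName, String path) {
        List<String> instances = registry.lookup(serviceName);
        if (instances.isEmpty()) {
            throw new IllegalStateException("No instances of " + serviceName);
        }
        String instance = instances.get(random.nextInt(instances.size()));
        return "http://" + instance + path;
    }
}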

Sunday, February 26, 2017

Slackbot in Java

Using https://github.com/Ullink/simple-slack-api

The following snippet is for a slackbot that looks for the <@BOT_ID> prefix in incoming messages across all channels (or direct messages) that it receives from Slack.
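
A minimal sketch of such a bot, using simple-slack-api's websocket session (the token is a placeholder):

import com.ullink.slack.simpleslackapi.SlackSession;
import com.ullink.slack.simpleslackapi.events.SlackMessagePosted;
import com.ullink.slack.simpleslackapi.impl.SlackSessionFactory;

public class AmigoBot {
    public static void main(String[] args) throws Exception {
        // Connect with the bot user's API token (placeholder).
        SlackSession session =
                SlackSessionFactory.createWebSocketSlackSession("xoxb-YOUR-BOT-TOKEN");
        session.connect();

        // The bot's own user id, used to detect the <@BOT_ID> prefix.
        String botMention = "<@" + session.sessionPersona().getId() + ">";

        session.addMessagePostedListener((SlackMessagePosted event, SlackSession s) -> {
            String message = event.getMessageContent();
            // Ignore the bot's own messages and anything not addressed to it.
            if (event.getSender().getId().equals(s.sessionPersona().getId())
                    || message == null || !message.startsWith(botMention)) {
                return;
            }
            String command = message.substring(botMention.length()).trim();
            s.sendMessage(event.getChannel(), "You said: " + command);
        });

        // Keep the main thread alive while the websocket listener runs.
        Thread.currentThread().join();
    }
}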



Getting Started with Apache Kafka in Docker

To run Apache Kafka in a Docker container:

  • git clone https://github.com/wurstmeister/kafka-docker.git
  • Edit docker-compose.yml to provide the KAFKA_ADVERTISED_HOST_NAME and KAFKA_ZOOKEEPER_CONNECT host IPs. This is the IP of the host on which the Docker containers for Kafka and ZooKeeper will run.

KAFKA_ADVERTISED_HOST_NAME: 192.168.86.89
KAFKA_ZOOKEEPER_CONNECT: 192.168.86.89:2181
  • Start a cluster in detached mode:
docker-compose up -d

It will start 2 containers:
kafkadocker_kafka_1 - with Kafka listening on port 9092, mapped to port 9092 on localhost
kafkadocker_zookeeper_1 - with ZooKeeper listening on port 2181, mapped to port 2181 on localhost

To start a cluster with 2 brokers:

docker-compose scale kafka=2

You can use docker-compose ps to show the running instances. If you want to add more Kafka brokers, simply increase the value passed to docker-compose scale kafka=n.

If you want to customise any Kafka parameters, simply add them as environment variables in docker-compose.yml. For example:
    • to increase the message.max.bytes parameter, add KAFKA_MESSAGE_MAX_BYTES: 2000000 to the environment section.
    • to turn off automatic topic creation, set KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'false'
  • To interact with the Kafka container, use the Kafka shell:
    $ ./start-kafka-shell.sh [DOCKER_HOST_IP] [ZOOKEEPER_HOST:ZOOKEEPER_PORT]
Example: $ ./start-kafka-shell.sh 192.168.86.89 192.168.86.89:2181
  • Testing:
    • To test your setup, start a shell, create a topic and start a producer:
$ $KAFKA_HOME/bin/kafka-topics.sh --create --topic "topic" \
    --partitions 4 --zookeeper $ZK --replication-factor 2

$ $KAFKA_HOME/bin/kafka-topics.sh --describe --topic "topic" --zookeeper $ZK 

$ $KAFKA_HOME/bin/kafka-console-producer.sh --topic="topic" \
--broker-list=`broker-list.sh`
    • Start another shell and start a consumer:
$ $KAFKA_HOME/bin/kafka-console-consumer.sh --topic=topic --zookeeper=$ZK
    • Type a message in the producer shell and it should be published to the consumer's shell.
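
Beyond the console scripts, the same cluster can be exercised from Java with the kafka-clients library. A minimal producer sketch, assuming the broker is reachable at the advertised host/port from docker-compose.yml above:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Advertised host/port from docker-compose.yml above.
        props.put("bootstrap.servers", "192.168.86.89:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        // Publish to the "topic" topic created in the shell above;
        // close() flushes any buffered records.
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("topic", "key1", "hello from java"));
        }
    }
}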