Saturday, December 12, 2015

MongoDB for Java Developer Course

I successfully completed the MongoDB for Java Developers course on 12/09/2015.

The course completion certificate is at: http://university.mongodb.com/course_completion/24540ba216594c40ac7f3686d79a1046
(this pdf file will be stored in MongoDB GridFS).

I obtained a grade of 77%; the passing grade for certification was 65%.

The course spanned three months, from October to December 2015, and the following was the syllabus:

Week 1 - Introduction - Overview, Design Goals, the Mongo Shell, JSON Intro, Installing Tools, Overview of Blog Project. Maven, Spark and Freemarker Intro
Week 2 - CRUD - Mongo Shell, Query Operators, Update Operators and a Few Commands
Week 3 - Schema Design - Patterns, Case Studies & Tradeoffs
Week 4 - Using Indexes, Monitoring And Understanding Performance. Performance In Sharded Environments
Week 5 - Aggregation Framework - Goals, The Use Of The Pipeline, Comparison With SQL Facilities.
Week 6 - Application Engineering - Drivers, Impact Of Replication And Sharding On Design And Development.
Week 7 - Case Studies - Interview with Jon Hoffman, Foursquare, and interview with Ryan Bubinski, Codecademy
Week 8 - Final Exam 

I am glad to have invested the time in this nicely made course by MongoDB; it is a good tool to have under your belt as a developer. I did 3 projects of my MSSE 1st semester using MongoDB and hosted it on a VirtualBox VM, a Docker container, an AWS EC2 instance, and of course my Mac, and it was always an easy-to-install, very reliable DB. The programming interface using Morphia (a document-object mapper) is very intuitive and simple. I did struggle a bit to keep the entity objects the same across the backend modules (between the REST endpoint and the DAO, and between the REST endpoint and the UI) because the _id is a BSON type, but I later found a simpler solution (see my Morphia and Dropwizard integration post). Doing this course gave me good confidence in using the DB for my projects.

Thursday, December 10, 2015

Understanding Docker

LXC (Linux Containers)
The 2 unique traits of containers are:
  1. Namespaces - inspect with: ls -al /proc/1341/ns - isolate:
    1. Filesystem and mount points (mnt)
    2. Process space (pid)
    3. Network (net)
      1. Network devices
      2. Routes
      3. Firewall rules
    4. Hostname (uts)
    5. Inter-process communication (ipc)

  2. Control groups - cat /proc/cgroups
    1. Manage what portion of the resources each container can consume
      1. Certain CPUs
      2. Etc - check notes



/container --- is where the containers live.

Under each container:
  1. Separate namespace for containers - rootfs - represents the / (root) file system
    1. Which has /bin, /boot etc.
  2. /container/mycontainer1/rootfs/tmp/newfile - is seen within container mycontainer1 as /tmp/newfile
  3. So the host can copy a file between 2 containers.
  4. Within a container (once logged in) we can only go up to / and cannot cd .. beyond it. So one container cannot see the files of another container.
  5. Separate namespaces for processes -
    1. Each lxc-start process will spawn an lxc-init process within the container; it will have pid=1 within the container but not on the host.
  6. Network namespace -

  1. System or OS containers vs. application containers:
    1. OS/system containers - have a full OS and are close to a VM. Generally long-running containers. (lxc-start -n o17cont1)
    2. App containers - are temporary; they execute one command or process and die, and are given access to parts of the host's root file system. Similar to a JVM sandbox, OSGi, etc. (lxc-execute -n myappcont -- ps -A) After executing the specified command or app, the container exits (see the sketch below).
  2. If a container messes up the kernel it can impact the host OS, but in a VM the host kernel is not impacted.
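
A minimal sketch of both styles from the host (container names are hypothetical; template availability varies by distro):

# OS/system container: create from a template, then start it long-running
lxc-create -n o17cont1 -t ubuntu
lxc-start -n o17cont1 -d

# Application container: runs a single command and then exits
lxc-execute -n myappcont -- ps -A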



Docker - packages an application together with all the dependencies it requires and puts them into a container.
  1. Docker Engine - runs docker containers (similar to LXC application containers)
  2. Docker Hub registry - SaaS operated by Docker Inc. containing docker images; users must register. It archives hundreds of ready-made container images, and the community can upload their own images.

Busybox - the most useful Unix commands all packaged as one application, so instead of ps you need to type busybox ps.


Magnum - container as a service (openstack)

Adv of container -
  1. Quick startup
  2. Deterministic packaging - on any target host
  3. Portability - just like JVM (all application which work within docker can be installed on any OS with docker container support)
  4. Much smaller than VM image

Disadv of container -
  1. Less secure than VM
  2. Less isolation


  1. Concept: Docker uses LXC (Linux containers) and so is native to Linux. On a PC or Mac, we need to install Docker Toolbox, which uses Oracle VirtualBox to install a small Linux VM and uses that to create docker containers.
  2. PODA - Package once, deploy anywhere.
  3. Docker helps package an application and all its dependencies into one image. Image creation is defined in a Dockerfile (http://docs.docker.com/engine/reference/builder/). Docker can parse this file and build an image. This file can include other existing images; Docker will first try to find the images locally and, if not found, searches the remote Docker registry. (A minimal Dockerfile sketch appears after this list.)
  4. Docker has 3 main stages:
    1. Build  - builds a dockerfile to create an image (images are all versioned and have a SHA id like git commit id).
    2. Run - runs an image to create a container
    3. Ship - posts the image to a repository (push/pull).
  5. Important to understand: a docker container should run only one service. Anytime we have multiple JVM processes being started within our application, think of separating those into multiple containers so each process has its own container. The containers can then be linked.
  6. Docker Compose - is a tool to build applications comprising several containers. A typical example is:
    1. Run 4 instances of mongodb containers - say 3 are part of a replica set and 1 runs the mongos router.
    2. Run 2 instances of application server - say WebLogic or WildFly.
    3. Run 1 instance of Apache HTTP proxy load balancer fronting the application servers.
    4. Run 2 instances of web server hosting the web UI.
    5. Run 1 instance of Apache HTTP proxy load balancer fronting the web servers.
  7. The above set of containers can be defined in a docker-compose YAML file or a set of YAML definition files. So when we say docker-compose up -d it will create the required images and run the containers (see the sketches below).
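
To make items 3 and 6 above concrete, here is a minimal Dockerfile sketch (the base image and jar name are my assumptions, not from the actual project):

FROM java:8
COPY target/esp.jar /opt/esp.jar
EXPOSE 8080
CMD ["java", "-jar", "/opt/esp.jar"]

And a minimal docker-compose.yml (2015-era v1 format) for a two-container subset of the stack described above; the service names are hypothetical:

mongodb:
  image: mongo
web:
  build: .
  links:
    - mongodb:mongo
  ports:
    - "8080:8080"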

  1. Docker hub:
You can login using:
$ docker login
User: xyz
Password: abc
Email: myself@me.com

  1. Install Docker Toolbox (for Mac or PC only). It installs docker-machine, which can be used to create a Tiny Core Linux VM in which the docker daemon listens on port 2375 and provides a REST service; the docker client is a REST CLI client to the docker daemon process in the docker host VM. Docker Toolbox uses Oracle VirtualBox to create the docker host Linux VM.
  2. Docker commands:
    1. docker build --tag sjsucohort6/esp . --- will build an image.
    2. docker push [image] -- can push the image to Docker Hub.
    3. docker pull [image] -- an image is analogous to a commit id.
    4. Before issuing docker pull you must be logged in to Docker Hub (same as GitHub).
    5. docker pull will get [image]:latest (the latest version).

  1. Open a shell and type:
    1. docker-machine create --driver virtualbox --virtualbox-memory 8096 default -> this will create a Linux VM named default using VirtualBox. On Mac, the VM is created in ~/.docker/machine/machines/default.
    2. Once the VM is created we can ssh to it by name (default in this case).
    3. docker-machine ls -> should show the new machine as running.
    4. docker-machine env default -> will give the environment settings to export:
      1. eval "$(docker-machine env default)"
  2. docker-machine ssh default --> this will log you in to the docker host VM.


docker pull mongo
docker run -d --name mongodb mongo -- starts mongodb

docker ps

$ docker run -it --link mongodb:mongo --name mongoclient mongo sh -- creates a new container, mongoclient, to run the mongo client. This uses the mongo image to launch a client but does not start mongod; instead it brings up a shell prompt, and from there we can connect to the mongo host.

# cat /etc/hosts
# mongo --host=mongo

$ docker build -f ./Dockerfile --tag sjsucohort6/esp .
$ docker run -it --link mongodb:mongo --name studentreg sjsucohort6/esp

When we create a link this way, docker injects environment variables named MONGO_PORT_27017_TCP_ADDR (the linked container's IP address) and MONGO_PORT_27017_TCP_PORT (the port exposed by that container) into the container, so the application can use these to connect to the DB service in the linked container.
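
For example, a Java client could pick these up as follows (a minimal sketch, assuming the MongoDB Java driver 3.x and the mongo link alias used above):

import com.mongodb.MongoClient;

public class MongoFromEnv {
    public static void main(String[] args) {
        // Injected by docker for a container started with --link mongodb:mongo
        String host = System.getenv("MONGO_PORT_27017_TCP_ADDR");
        int port = Integer.parseInt(System.getenv("MONGO_PORT_27017_TCP_PORT"));
        MongoClient client = new MongoClient(host, port);
        for (String name : client.listDatabaseNames()) { // sanity-check the connection
            System.out.println(name);
        }
        client.close();
    }
}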

In Docker 1.9.0 and later, docker has virtual networking support between containers, so all we need to know is the name of the container to connect to; docker resolves the named container to its IP address.
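
A minimal sketch of the newer approach (network and container names are hypothetical):

docker network create mynet
docker run -d --net=mynet --name mongodb mongo
docker run -it --net=mynet --name mongoclient mongo mongo --host mongodb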


$ sudo docker exec -it loving_heisenberg bash
The above command will launch a bash shell in the running container named loving_heisenberg.

Wednesday, November 25, 2015

Setting up Dropwizard

Generate a self-signed certificate:
  • $ keytool -genkey -alias selfsigned -keyalg RSA -keystore keystore.jks -keysize 2048
Export the certificate (.crt) from the keystore:
  • $ keytool -export -alias selfsigned -file selfsigned.crt -keystore keystore.jks

Import that certificate into cacerts, the default Java truststore. You may need to do this as root, or with sudo. Go to the $JAVA_HOME/jre/lib/security directory, and run:
$ keytool -import -trustcacerts -alias selfsigned -file $SSL_FOLDER/selfsigned.crt -keystore cacerts

Put the following config in the Dropwizard config.yml:

config.yml:
server:
  adminMinThreads: 1
  adminMaxThreads: 64
  adminContextPath: /admin
  applicationContextPath: /
  applicationConnectors:
    - type: http
      port: 8080
    - type: https
      port: 8443
      keyStorePath: ./keystore.jks
      keyStorePassword: password
      trustStorePath: $JAVA_HOME/jre/lib/security/cacerts
      trustStorePassword: changeit
      certAlias: selfsigned
  adminConnectors:
    - type: http
      port: 8081
    - type: https
      port: 8444
      keyStorePath: ./keystore.jks
      keyStorePassword: password
      keyStoreType: JKS
      validateCerts: false


To have cache control headers returned on every response, add the following filter:

import java.io.IOException;
import java.util.Date;
import javax.servlet.*;
import javax.servlet.http.HttpServletResponse;

public class CacheControlFilter implements Filter {

    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {

        HttpServletResponse resp = (HttpServletResponse) response;

        // Add whatever headers you want here
        resp.setHeader("Cache-Control", "public, max-age=500000");
        // Expires must be an HTTP date, so use setDateHeader rather than a raw millis string
        resp.setDateHeader("Expires", new Date().getTime() + 500000);

        chain.doFilter(request, response);
    }

    public void destroy() {}

    public void init(FilterConfig arg0) throws ServletException {}
}

And then in your Application#run():

environment.servlets().addFilter("CacheControlFilter", new CacheControlFilter())
        .addMappingForUrlPatterns(EnumSet.of(DispatcherType.REQUEST), true, "/*");

Similarly, for CORS headers add a filter:

import java.io.IOException;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerResponseContext;
import javax.ws.rs.container.ContainerResponseFilter;

public class CORSFilter implements ContainerResponseFilter {

    @Override
    public void filter(ContainerRequestContext request,
                       ContainerResponseContext response) throws IOException {
        response.getHeaders().add("Access-Control-Allow-Origin", "*");
        response.getHeaders().add("Access-Control-Allow-Headers",
                "origin, content-type, accept, authorization");
        response.getHeaders().add("Access-Control-Allow-Credentials", "true");
        response.getHeaders().add("Access-Control-Allow-Methods",
                "GET, POST, PUT, DELETE, OPTIONS, HEAD");
    }
}

and initialize it as:
environment.jersey().register(new CORSFilter());

Sunday, October 11, 2015

Setting up OpenStack python SDK on Oracle Enterprise Linux 6

To set up openstacksdk on Oracle VM Server 3.3.3 or Oracle Enterprise Linux 6:

Oracle Enterprise Linux 6 comes with python 2.6.6 by default.

OpenStackSDK requires at least python 2.7 or 3.3.

Following are the steps to upgrade local python to 2.7.10 and 3.5.0 (the latest as of this writing) and install openstacksdk for development.

It is assumed that OpenStack is hosted on the same host where the test program is being executed.

$ sudo su -
# cd /etc/yum.repos.d
# rm -f *
# yum clean all
# yum makecache
# yum install gcc

The above step may take quite some time and is only required to install gcc. If one already has gcc installed then skip to the following step.


# yum groupinstall "Development tools"
# yum install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel
# cd /usr/src

Download the Python source tarballs, and get ez_setup.py to install easy_install for the new interpreters (at the time of writing, ez_setup.py was available from bootstrap.pypa.io):

# wget https://www.python.org/ftp/python/2.7.10/Python-2.7.10.tgz
# wget https://www.python.org/ftp/python/3.5.0/Python-3.5.0.tgz
# wget https://bootstrap.pypa.io/ez_setup.py

# tar xzf Python-2.7.10.tgz
# tar xzf Python-3.5.0.tgz


Install Python-2.7.10
# cd Python-2.7.10
# ./configure --prefix=/usr/local --enable-unicode=ucs4 --enable-shared LDFLAGS="-Wl,-rpath /usr/local/lib"
# make && make altinstall
# python2.7 -V
# python2.7 ez_setup.py
# easy_install-2.7 pip


OR

Install Python 3.5.0
# cd Python-3.5.0
# ./configure --prefix=/usr/local --enable-shared LDFLAGS="-Wl,-rpath /usr/local/lib"
# make && make altinstall
# python3.5 -V
# python3.5 ez_setup.py
# easy_install-3.5 pip


Setup Virtualenv for your project:
# cd /files/project
# pip2.7 install virtualenv
# virtualenv -p /usr/local/bin/python2.7 venv

OR 
# python3.5 -m venv venv

Activate virtualenv
# source venv/bin/activate

Now install the SDK:
# pip2.7 install openstacksdk
OR
# pip3.5 install openstacksdk

Run the following test program (test.py) as:
# python2.7 ./test.py
OR
# python3.5 ./test.py

from openstack import connection
from openstack import utils
import logging
import sys

# Enable requests logging
logger = logging.getLogger('requests')
formatter = logging.Formatter(
    '%(asctime)s %(levelname)s: %(name)s %(message)s')
console = logging.StreamHandler(sys.stdout)
console.setFormatter(formatter)
logger.setLevel(logging.DEBUG)
logger.addHandler(console)

# Enable logging to console
utils.enable_logging(debug=True, stream=sys.stdout)


# Get connection
conn = connection.Connection(auth_url="http://localhost:5000/v2.0",
                             project_name="admin",
                             username="admin",
                             password="password")


# Get network API
network = conn.network.find_network("watshnet")
if network is None:
    network = conn.network.create_network(name="watshnet")
else:
    print(network)

If everything went fine so far, you are all set to begin writing your extensions for OpenStack. Examples using the SDK are available here - https://github.com/stackforge/python-openstacksdk/tree/master/examples

Friday, October 02, 2015

Integrating Morphia and Dropwizard

Recently I had the pleasure of using MongoDB's document-object mapper framework, Morphia, and the REST web service framework, Dropwizard (it is more than just a REST framework, but that is its primary purpose).

Dropwizard comes with the following essential thirdparty libraries:

  1. Jackson - for JSON parsing
  2. Jetty - as an embedded lightweight Java EE web container
  3. Jersey - JAX-RS 2.0 implementation
  4. and some more... 
Morphia provides annotations for POJO model classes so they can be persisted in MongoDB, and it is a nice framework that simplifies the development of a MongoDB client application. At the same time, we can use the same POJO model classes in the REST endpoint implementations, so there is no need for separate DTO and entity classes. The same POJO model classes are annotated as entities (using Morphia's annotations) and as DTOs (using Jackson's annotations).

A simple example for such a model class is given below:


@Entity(value = "students", noClassnameStored = true, concern = "SAFE")
public class Student extends BaseModel {
    @Id
    private String id = new ObjectId().toHexString();

    @Reference
    private User user;

    @Reference
    private List courseRefs;

    Date lastUpdated = new Date();

    @PrePersist
    void prePersist() {
        lastUpdated = new Date();
    }

    public Student(User user) {
        this.user = user;
        this.courseRefs = new ArrayList<>();
    }

    public Student() {
    }

    @JsonProperty
    public Date getLastUpdated() {
        return lastUpdated;
    }

    public void setLastUpdated(Date lastUpdated) {
        this.lastUpdated = lastUpdated;
    }

    @JsonProperty
    public List getCourseRefs() {
        return courseRefs;
    }

    public void setCourseRefs(List courseRefs) {
        this.courseRefs = courseRefs;
    }

    @JsonProperty
    public User getUser() {
        return user;
    }

    public void setUser(User user) {
        this.user = user;
    }

    @JsonProperty
    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id != null ? new ObjectId(id).toHexString() : new ObjectId().toHexString();
    }

    @Override
    public String toString() {
        return "Student{" +
                "id=" + id +
                ", user=" + user +
                ", courseRefs=" + courseRefs +
                ", lastUpdated=" + lastUpdated +
                '}';
    }
}

Morphia annotations are:
@Entity
@Id
@Reference
@PrePersist

Jackson annotations used are:
@JsonProperty

Additionally, with Dropwizard we also get bean validation through the Hibernate Validator framework, which can be used in the POJO above to annotate fields in the model with @NotNull, @Length, etc.
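
For instance (a sketch; the DTO class and constraints are hypothetical):

import javax.validation.constraints.NotNull;
import org.hibernate.validator.constraints.Length;

public class CourseDto {
    @NotNull                     // name must be present in the request payload
    private String name;

    @Length(min = 5, max = 254)  // bounds enforced by Hibernate Validator
    private String description;
}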

One important learning from this integration was to map the MongoDB _id (an ObjectId BSON type) to a String id in the model, set to the hex string value of the _id field. Getting this to work without frameworks like MongoJack or Katharsis keeps the code simple to comprehend and still meets the requirements.
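
The mapping boils down to this round trip (a minimal sketch using the driver's org.bson.types.ObjectId):

import org.bson.types.ObjectId;

public class IdMappingDemo {
    public static void main(String[] args) {
        ObjectId oid = new ObjectId();          // what MongoDB stores in _id
        String id = oid.toHexString();          // what the JSON/REST layer sees
        ObjectId roundTrip = new ObjectId(id);  // reconstructed for queries
        System.out.println(oid.equals(roundTrip)); // prints true
    }
}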

Using VNC with Ubuntu on VirtualBox


  1. Create an Ubuntu 64 bit VM on VirtualBox with following specs:
    1. CPU - 1
    2. Memory - 1GB
    3. HDD - 20GB (VDI, Dynamic allocation)
    4. Network - NAT with Port forwarding - TCP, 5900 (on local host) forwarded to 5900 (on VM)
  2. Download Ubuntu Desktop 15.04 and install it on the VM.
  3. Once installation completes, search for the Desktop Sharing applet.
  4. Enable sharing and provide a password.
  5. Now launch a VNC viewer and specify the IP of the host on which VirtualBox is installed, with port 5900.
    1. If connecting from the same host where VirtualBox is installed, you may use localhost:5900.
  6. It will prompt for the password, then connect.


Tuesday, September 22, 2015

AngularJS Introduction


AngularJS 1.5

A directive is a marker on an HTML tag that tells Angular to run or reference some JavaScript code.

HTML page:
<html ng-app="studentModule">
<head>
    <link rel="stylesheet" type="text/css" href="css/bootstrap.min.css" />
    <script type="text/javascript" src="js/angular.min.js"></script>
    <script type="text/javascript" src="js/app.js"></script>
</head>
<body class="container" ng-controller="StudentController as s">
    <ul class="list group">
        <li class="list group item" ng-repeat="student in s.students">
            <h3>
                {{student.firstName}} {{student.lastName}}
                <em class="pull-right">{{student.emailId}}</em>
            </h3>
        </li>
    </ul>
</body>
</html>
Javascript:
(function() {
  var app = angular.module('studentModule', []);

  app.controller('StudentController', ['$http', function($http){
    var store = this;
    store.students = [];

    $http.get("/api/students").success(function(data) {
        store.students = data;
    });
  }]);
})();

Here ng-controller is the directive.

Modules: where we write pieces of our Angular application and define dependencies for our app. Modules can use (depend on) other modules.

var app = angular.module('store', []) -- defines a new module named 'store' with no dependencies (hence the empty array) -- this goes into a separate app.js

-- The ng-app directive will run the module named "store" when the document loads. ng-app also makes the entire HTML page an Angular application, which means the contents within can now use the features of the Angular framework - like writing expressions such as:

I am {{4 + 6 }}

Will translate to:

I am 10


Similarly we can have string expressions: {{ "hello" + " world" }}

We can apply filters in an expression by using a pipe: the output of the expression becomes the input of the filter, producing a filtered result:
{{ product.price | currency }}

Or in general: {{ data* | filter: options* }}

Common filters: date, uppercase, lowercase, limitTo, orderBy


Sample 1:

Controllers: where we define our app's behavior.

// Wrap the javascript in a closure (function(){})();
(function() {
  var app = angular.module('store', []);

  // Define our controller; the anonymous function is what gets called when our controller is invoked.
  app.controller('StoreController', function() {
    // set property - product - to the data.
    this.product = gem;
  });

  // data
  var gem = {
    name: 'Dodecahedron',
    price: 2.95,
    description: '...'
  };
})();
The purpose of the controller is to get the JSON data (from the gem object in this case) over to the HTML page block where ng-controller is defined. This way we can get JSON data into a client-side JavaScript object in some way (say by querying a REST endpoint) and then populate the div block in the HTML page with the data. This is facilitated by the controller's property ("product" in this case), which is set to the JSON JavaScript object (gem in this case).

<!DOCTYPE html>
<html lang="en" ng-app="store">
<head>
    <meta charset="UTF-8">
    <title></title>
    <link rel="stylesheet" type="text/css" href="css/bootstrap.min.css"/>
</head>
<body>
    <div ng-controller="StoreController as store">
        <h1>{{store.product.name}}</h1>
        <h2>{{store.product.price}}</h2>
        <p>{{store.product.description}}</p>
    </div>
    <script type="text/javascript" src="js/angular.min.js"></script>
    <script type="text/javascript" src="js/app.js"></script>
</body>
</html>
Built-in validations for common input types:
email, url, number (max, min), etc.

  Directives

Directives:
<html ng-app="studentModule">
Attach an application module to a page.
<body class="container" ng-controller="StudentController">
Attach a controller function to the page.
<div class="product row" ng-repeat="student in students">
Repeat a section of the page for each item in an array.
ng-show/ng-hide
Display or hide a section based on an expression.
ng-src
ng-click
ng-class
Sets the given class name when the specified condition is true:
ng-class="{active: tab === 1}"
ng-model - binds form elements to properties -- this is how we create a live preview (see the snippet after this list).
ng-submit - specifies the method to invoke when the form is submitted.

ng-include - includes an HTML snippet.
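
A minimal live-preview sketch with ng-model (review.title is a hypothetical model property; the heading updates as you type):

<form>
    <input type="text" ng-model="review.title" placeholder="Title"/>
</form>
<h3>Preview: {{review.title}}</h3>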



Custom directives - make HTML more readable. Something similar can be achieved with the ng-include directive, by pulling a duplicated snippet of HTML in from another file, but custom directives act like new tags - see the sketch below.
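
A minimal sketch of an element directive (the module, directive, and template names are hypothetical):

(function() {
  var app = angular.module('products-directives', []);

  // Usable as a new tag: <product-title></product-title>
  app.directive('productTitle', function() {
    return {
      restrict: 'E',                      // E = element directive
      templateUrl: 'product-title.html'   // HTML snippet rendered in place of the tag
    };
  });
})();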

 Modules

App.js - the root module "store" is defined here; it is referred to in our single-page app's HTML page (with the ng-app directive). It will have dependencies on all other sub-modules.

var app = angular.module("store", ["products-directives"])

Products.js - can be another module, consisting of custom directives (which may also encapsulate controllers) pertaining to products in the store.

..
And so on; there could be other modules referred to by the products module or by the main store module.

Services
Services give controllers additional functionality.
  1. Fetching JSON from webservice - $http
  2. Logging message for Javascript console - $log
  3. Filtering an array - $filter

$http

$http({ method: 'GET', url: '/products.json' });
Or
$http.get('/products.json', {params: {apiKey: 'myApiKey'… other query params}});

Returns a promise object: .success() or .error()


$http.post('/api/students', {param: 'value'});

$http.delete('/api/students');

