Friday, February 24, 2017

Running AWS CLI using docker image

For my master's project I needed to execute AWS commands without installing the AWS CLI or using an SDK, and in a cloud-provider-agnostic manner. The project itself deserves a separate article (it is a chatbot for cloud operations management named Amigo), but this piece of it, running the AWS CLI from a Docker image (based on Alpine Linux, which is very lightweight) and invoking it from Java code, seems useful on its own outside the context of the project.

Docker hub registry entry for the awscli image:
https://hub.docker.com/r/sjsucohort6/docker_awscli/

This is a Docker image with the AWS CLI installed. It is based on the Alpine Python 2.7 image, so the image size is approximately 115 MB.
This enables one to execute awscli commands without having to install awscli locally.
Usage:
$ docker pull sjsucohort6/docker_awscli
Example command - ecs list-clusters can be executed as:
$ docker run -it --rm -e AWS_DEFAULT_REGION='' -e AWS_ACCESS_KEY_ID='' -e AWS_SECRET_ACCESS_KEY='' --entrypoint aws sjsucohort6/docker_awscli:latest ecs list-clusters
The Java code below uses Spotify's docker-client library to pull the image from Docker Hub and execute the command.

import com.spotify.docker.client.DefaultDockerClient;
import com.spotify.docker.client.DockerClient;
import com.spotify.docker.client.LogStream;
import com.spotify.docker.client.exceptions.DockerException;
import com.spotify.docker.client.messages.ContainerConfig;
import com.spotify.docker.client.messages.ContainerCreation;
import com.spotify.docker.client.messages.RegistryAuth;
import edu.sjsu.amigo.cp.tasks.api.CommandExecutionException;

import java.util.ArrayList;
import java.util.List;
import java.util.logging.Logger;

import static com.spotify.docker.client.DockerClient.LogsParam.stderr;
import static com.spotify.docker.client.DockerClient.LogsParam.stdout;

/** 
* Docker command executor task. 
* 
* @author rwatsh on 2/14/17. 
*/
public class DockerTask {
    private static final Logger logger = Logger.getLogger(DockerTask.class.getName());
    /**
     * Executes a containerized command on AWS.
     */
    public String execute(String dockerImage, List<String> envList, List<String> commandList, String entryPoint) throws CommandExecutionException {
        final DockerClient dockerClient = new DefaultDockerClient("unix:///var/run/docker.sock");
        String response = null;
        try {
            pullImage(dockerClient, dockerImage);

            final ContainerConfig containerConfig = ContainerConfig.builder()
                    .image(dockerImage)
                    .env(envList)
                    .entrypoint(entryPoint)
                    .cmd(commandList)
                    .build();

            final ContainerCreation container = dockerClient.createContainer(containerConfig);
            final String containerId = container.id();
            dockerClient.startContainer(containerId);

            // Wait for the container to exit.
            // If we don't wait, docker.logs() might return an empty string because the
            // container cmd hasn't run yet.
            dockerClient.waitContainer(containerId);

            final String log;
            try (LogStream logs = dockerClient.logs(containerId, stdout(), stderr())) {
                log = logs.readFully();
            }
            logger.info(log);
            response = log;
            dockerClient.removeContainer(containerId);

        } catch (DockerException | InterruptedException e) {
            throw new CommandExecutionException(e.getMessage());
        }
        return response;
    }

    private void pullImage(DockerClient docker, String imageRepoName) throws DockerException, InterruptedException {
        final RegistryAuth registryAuth = RegistryAuth.builder()
                .email(System.getenv("AUTH_EMAIL"))
                .username(System.getenv("AUTH_USERNAME"))
                .password(System.getenv("AUTH_PASSWORD"))
                .build();

        final int statusCode = docker.auth(registryAuth);
        logger.info("Docker registry auth status: " + statusCode);
        docker.pull(imageRepoName, registryAuth);
    }

    public static void main(String[] args) throws CommandExecutionException {
        DockerTask t = new DockerTask();
        List<String> envList = new ArrayList<>();
        envList.add("AWS_DEFAULT_REGION=" + System.getenv("AWS_DEFAULT_REGION"));
        envList.add("AWS_ACCESS_KEY_ID=" + System.getenv("AWS_ACCESS_KEY_ID"));
        envList.add("AWS_SECRET_ACCESS_KEY=" + System.getenv("AWS_SECRET_ACCESS_KEY"));
        List<String> cmdList = new ArrayList<>();
        cmdList.add("s3");
        cmdList.add("ls");
        t.execute("sjsucohort6/docker_awscli:latest", envList, cmdList, "aws");
    }
}

Kubernetes Basics

From Building Microservice Systems with Docker and Kubernetes by Ben Straub

  • Kubernetes
    • Runs docker containers
    • Powerful Label matching system for control/grouping/routing traffic
    • Spans across hosts - converts a set of computers into one big one
    • One master (that sends control commands to minions to execute) - multiple minions (that run docker containers)
    • Pod - a set of Docker containers (often just one) that always run on the same host. Each pod gets one IP address.
    • Replication controller (RC) - manages the lifecycle of pods that match the labels associated with the RC.
    • Services - load-balance traffic to pods based on matching labels. For example, a Service with name = frontend will route traffic to pods with name = frontend. The pods may be managed by different RCs. (A toy illustration of this label matching appears after this section.)
  • Traffic is routed by the Service named frontend to both old- and new-version pods during a rolling update. Once the rollover to the new version is complete, traffic continues to be routed to the pods labeled frontend (now the new version, 124 in the example), and the old-version pods and their RC are eventually deleted, all without any downtime.
  • Every service gets a DNS entry matching its name, for example ServiceA, ServiceB, etc. A pod looks up a service by name and communicates with the pods behind that service through the service.
  • Service can have ingress port configured to receive inbound traffic. Say port 8000 on ServiceA is opened which will map to a port (say 37654) on every minion.
  • Setting up Kubernetes Cluster in AWS:

    • Identity and Access Management (IAM) -
      • Create user and generate creds/download
      • Attach policy
    • Get awscli and install it
    • Download kubernetes - https://github.com/kubernetes/kubernetes/releases/ and unpack it.
    • Open cluster/aws/config-default.sh, edit as needed to change the size of the kubernetes cluster
    • Run: KUBERNETES_PROVIDER=aws cluster/kube-up.sh
      • Created new VPC 172.20.0.*
      • 5 EC2 instances (t2.micro) = 1 master + 4 minions with public IPs
      • ASG for minions
      • SSH Keys for direct access
        • ~/.ssh/kube_aws_rsa
      • Kubectl is configured
        • ~/.kube/config
    • KUBERNETES_PROVIDER=aws cluster/kube-down.sh - to delete all aws resources
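To make the Service label matching described above concrete, here is a toy Java sketch (not Kubernetes client code) of equality-based selection: a Service's selector matches any pod whose labels contain all of the selector's key/value pairs, which is why both old- and new-version frontend pods keep receiving traffic during a rollover. The pod names, labels, and version numbers below are made up for illustration.

import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class LabelMatchSketch {

    /** Equality-based selection: a pod matches if its labels contain every key/value pair in the selector. */
    static boolean matches(Map<String, String> selector, Map<String, String> podLabels) {
        return podLabels.entrySet().containsAll(selector.entrySet());
    }

    public static void main(String[] args) {
        Map<String, String> serviceSelector = new HashMap<>();
        serviceSelector.put("name", "frontend");      // the Service selects name=frontend

        Map<String, String> oldPod = new HashMap<>();
        oldPod.put("name", "frontend");
        oldPod.put("version", "123");

        Map<String, String> newPod = new HashMap<>();
        newPod.put("name", "frontend");
        newPod.put("version", "124");

        Map<String, String> backendPod = new HashMap<>();
        backendPod.put("name", "backend");

        for (Map<String, String> pod : Arrays.asList(oldPod, newPod, backendPod)) {
            System.out.println(pod + " matches frontend selector: " + matches(serviceSelector, pod));
        }
        // Both frontend pods (old and new version) would receive traffic; the backend pod would not.
    }
}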

Using dnsmasq on Mac OSX

Excerpted from https://passingcuriosity.com/2013/dnsmasq-dev-osx/

dnsmasq makes it easy to redirect development sites to localhost. This is very useful when working on a dev box and running docker containers.

Instead of having to edit the /etc/hosts file every time, we can suffix .dev to any name of our choosing (for example, database.dev or frontend.dev) and it will be mapped to the localhost address 127.0.0.1.

Following are the steps to get this working on Mac:
# update brew
➜  ~  brew up
➜  ~  brew install dnsmasq
➜  ~ cp $(brew list dnsmasq | grep /dnsmasq.conf.example$) /usr/local/etc/dnsmasq.conf
➜  ~ sudo cp $(brew list dnsmasq | grep /homebrew.mxcl.dnsmasq.plist$) /Library/LaunchDaemons/
Password:
➜  ~ sudo launchctl load /Library/LaunchDaemons/homebrew.mxcl.dnsmasq.plist
# edit dnsmasq.conf and add address=/dev/127.0.0.1 -- this will cause any address ending in .dev to be redirected to localhost.

➜  ~ vi /usr/local/etc/dnsmasq.conf
# restart dnsmasq
➜  ~ sudo launchctl stop homebrew.mxcl.dnsmasq
➜  ~ sudo launchctl start homebrew.mxcl.dnsmasq

➜  ~ dig testing.testing.one.two.three.dev @127.0.0.1

# add a new nameresolver to Mac
➜  ~ sudo mkdir -p /etc/resolver
➜  ~ sudo tee /etc/resolver/dev >/dev/null <<EOF
nameserver 127.0.0.1
EOF
# test
➜  ~ ping -c 1 this.is.a.test.dev
PING this.is.a.test.dev (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.044 ms

--- this.is.a.test.dev ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.044/0.044/0.044/0.000 ms
➜  ~ ping -c 1 iam.the.walrus.dev
PING iam.the.walrus.dev (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.041 ms

--- iam.the.walrus.dev ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.041/0.041/0.041/0.000 ms

# and that we did not break the nameresolver for non-dev sites
➜  ~ ping -c 1 www.google.com
PING www.google.com (216.58.194.196): 56 data bytes
64 bytes from 216.58.194.196: icmp_seq=0 ttl=54 time=54.026 ms

--- www.google.com ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 54.026/54.026/54.026/0.000 ms

Introduction to Docker

Notes from Introduction to Docker by Andrew Tork Baker

  1. Two main parts of Docker - the Docker Engine and Docker Hub
  2. Essential Docker commands:
  • docker run [options] image [command] [args…]
    • docker run busybox /bin/echo "hello world"
    • docker run -it ubuntu /bin/bash - gives interactive shell
    • docker run -p 8000:80 atbaker/nginx-example
    • docker run -d -p 8000:80 atbaker/nginx-example - runs container in background (detached mode)
    • docker run -d -p 8000:80 --name webserver atbaker/nginx-example - to name containers
  • docker images
    • docker images -q - will list all image ids alone
  • docker ps - shows active/running containers
    • docker ps -a - shows all containers (even ones we have exited)
    • docker ps -a -q - shows all container ids.
  • docker stop
  • docker start
  • docker rm
    • docker rm -f <containerId> - to remove even a running container
    • docker rm -f $(docker ps -a -q) - will remove all containers from the system, running or not
Here containerId can be given in short form: the first 4 unique characters of the container's SHA id.
  • docker logs webserver - inspect the logs
    • docker logs -f webserver -- follows the logs on the container
  • docker attach webserver - attaches to a container running in detached mode; the disadvantage is that if you attach and then exit the container's shell, the container exits too. So if you only need to check the logs, use docker logs instead.
  • docker port webserver 80 -- will show the mapping of port 80 on the container to a port on the docker host
  • docker diff webserver - shows what has changed on the filesystem of the container since we started it
  • docker cp webserver:/usr/local/nginx/html/index.html . - will copy index.html from the container to the local directory
  • docker inspect webserver -- low-level info on the container (environment variables, hostname of the container, etc.)
  • docker history atbaker/nginx-example - will show when each layer in the image was applied to the image
  • docker search postgres - will search the docker hub registry for all postgres images
  • Postgres:
    • docker pull postgres:latest - will pull down each layer in the image to the local host
    • docker run -p 5432:5432 postgres
    • psql -U postgres -h localhost - will connect to postgres DB container at port 5432 (need to install psql for this)
  • Redis:
    • docker pull atbaker/redis-example
    • docker run -it atbaker/redis-example /bin/bash - to run an interactive shell
    • Change the file /usr/src/custom-redis.conf - uncomment requirepass line, exit the shell
    • docker commit -m "message" <containerId> <imageName> - to save the changes made to the container as a new image
    • docker commit -m "setting password" <containerId> redis-passwd
    • docker run -p 6379:6379 redis-passwd redis-server /usr/src/custom-redis.conf
    • redis-cli -h localhost -p 6379
    • Auth foobared
    • Set foo bar
    • Get foo
  • Once image is created you can push it to the docker hub account:
    • docker login - log in to docker hub
    • docker tag redis-passwd sjsucohort6/redis
    • docker push sjsucohort6/redis - by default it will tag the remote image as latest
    • docker push sjsucohort6/redis:ver1 - will tag remote image as ver1
  • Mongodb:
    • git clone https://github.com/atbaker/mongo-example.git
    • docker build -t mongodb . - assuming the current working dir has a Dockerfile
    • docker run -P mongodb - the -P option maps a random port on localhost to the exposed port (27017) of the mongodb container
    • docker ps - find which port on localhost is mapped to the mongodb port
    • mongo localhost:32768 -- connect to the mongodb assuming here that local port 32768 was mapped to 27017 of mongodb


TODO - cover Dockerfiles and Docker Compose in a later post.

Monday, February 20, 2017

Apache Kafka - An Overview

Excerpted from: https://thenewstack.io/apache-kafka-primer/

  1. Message Oriented Middleware (MOM) such as Apache Qpid, RabbitMQ, Microsoft Message Queue, and IBM MQ Series were used for exchanging messages across various components. While these products are good at implementing the publisher/subscriber pattern (Pub/Sub), they are not specifically designed for dealing with large streams of data originating from thousands of publishers. Most MOM products have a broker that exposes the Advanced Message Queuing Protocol (AMQP) for asynchronous communication.
  2. Kafka is designed from the ground up to deal with millions of firehose-style events generated in rapid succession. It guarantees low latency, “at-least-once”, delivery of messages to consumers. Kafka also supports retention of data for offline consumers, which means that the data can be processed either in real-time or in offline mode.
  3. Kafka is designed to be a distributed commit log. Much like relational databases, it can provide a durable record of all transactions that can be played back to recover the state of a system. 
  4. Kafka provides redundancy, which ensures high availability of data even when one of the servers faces disruption.
  5. Multiple event sources can concurrently send data to a Kafka cluster, which reliably delivers it to multiple destinations.
  6. Key concepts:
  • Message - Each message is a key/value pair. Irrespective of the data type, Kafka always converts messages into byte arrays.
  • Producers - or publisher clients that produce data
  • Consumers - are subscribers or readers that read the data. Unlike subscribers in MOM, Kafka consumers are stateful, which means they are responsible for remembering the cursor position, which is called an offset. The consumer is also a client of the Kafka cluster. Each consumer may belong to a consumer group, which is introduced below.
    The fundamental difference between MOM and Kafka is that the clients will never receive messages automatically. They have to explicitly ask for a message when they are ready to handle it.
  • Topics - logical collection of messages. Data sent by producers are stored in topics. Consumers subscribe to a specific topic that they are interested in.
  • Partition - Each topic is split into one or more partitions. They are like shards, and Kafka may use the message key to automatically group similar messages into a partition. This scheme enables Kafka to dynamically scale the messaging infrastructure. Partitions are redundantly distributed across the Kafka cluster. Messages are written to one partition but copied to at least two more partitions maintained on different brokers within the cluster.
  • Consumer groups - consumers belong to at least one consumer group, which is typically associated with a topic. Each consumer within the group is mapped to one or more partitions of the topic. Kafka will guarantee that a message is only read by a single consumer in the group. Each consumer will read from a partition while tracking the offset. If a consumer that belongs to a specific consumer group goes offline, Kafka can assign the partition to an existing consumer. Similarly, when a new consumer joins the group, it balances the association of partitions with the available consumers.
    It is possible for multiple consumer groups to subscribe to the same topic. For example, in the IoT use case, a consumer group might receive messages for real-time processing through an Apache Storm cluster. A different consumer group may also receive messages from the same topic for storing them in HBase for batch processing.
    The concept of partitions and consumer groups allows horizontal scalability of the system.
  • Broker - Each Kafka instance belonging to a cluster is called a broker. Its primary responsibilities are receiving messages from producers, assigning offsets, and committing the messages to disk. Based on the underlying hardware, each broker can easily handle thousands of partitions and millions of messages per second.
    The partitions in a topic may be distributed across multiple brokers. This redundancy ensures the high availability of messages.
  • Cluster - A collection of Kafka brokers forms the cluster. One of the brokers in the cluster is designated as the controller, which is responsible for handling administrative operations as well as assigning partitions to other brokers. The controller also keeps track of broker failures.
  • Zookeeper - Kafka uses Apache ZooKeeper as its distributed configuration store. It forms the backbone of the Kafka cluster and continuously monitors the health of the brokers. When new brokers are added to the cluster, ZooKeeper starts utilizing them by creating topics and partitions on them.
Kafka in docker - https://github.com/spotify/docker-kafka
http://docs.confluent.io/3.0.0/quickstart.html#quickstart 
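To tie the above concepts together (topics, keyed messages, partitions, offsets, and consumer groups), here is a minimal sketch using the Kafka Java client (the org.apache.kafka:kafka-clients library). The broker address localhost:9092, the topic name events, the group id, and the key/value contents are assumptions made purely for illustration.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Collections;
import java.util.Properties;

public class KafkaSketch {
    public static void main(String[] args) {
        // Producer: messages with the same key end up in the same partition,
        // so ordering is preserved per key.
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092");
        p.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        p.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(p)) {
            producer.send(new ProducerRecord<>("events", "device-42", "temp=21.5"));
        }

        // Consumer: joins a consumer group; Kafka assigns it one or more partitions
        // of the topic and the client tracks an offset per partition.
        Properties c = new Properties();
        c.put("bootstrap.servers", "localhost:9092");
        c.put("group.id", "events-processors");
        c.put("auto.offset.reset", "earliest");
        c.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        c.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(c)) {
            consumer.subscribe(Collections.singletonList("events"));
            ConsumerRecords<String, String> records = consumer.poll(1000); // timeout in ms (0.10.x-era API)
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("partition=%d offset=%d key=%s value=%s%n",
                        r.partition(), r.offset(), r.key(), r.value());
            }
        }
    }
}

Because the record is sent with a key ("device-42"), all records with that key land in the same partition and keep their relative order; each consumer in the group "events-processors" is assigned some of the topic's partitions and tracks its own offset in each.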

Book Notes: I love logs by Jay Kreps


This book is by Jay Kreps who is a Software architect at LinkedIn and primary author of Apache Kafka and Apache Samza open source software.

Following are some of the salient points from the book:
  1. The use of logs in much of the rest of this book will be variations on the two uses in database internals: 
    1. The log is used as a publish/subscribe mechanism to transmit data to other replicas 
    2. The log is used as a consistency mechanism to order the updates that are applied to multiple replicas
  2. So this is actually a very intuitive notion: if you feed two deterministic pieces of code the same input log, they will produce the same output in the same order.
  3. You can describe the state of each replica by a single number: the timestamp for the maximum log entry that it has processed. Two replicas at the same time will be in the same state. Thus, this timestamp combined with the log uniquely capture the entire state of the replica. This gives a discrete, event-driven notion of time that, unlike the machine’s local clocks, is easily comparable between different machines.
  4. In each case, the usefulness of the log comes from the simple function that the log provides: producing a persistent, replayable record of history.
  5. Surprisingly, at the core of the previously mentioned log uses is the ability to have many machines play back history at their own rates in a deterministic manner.
  6. It’s worth noting the obvious: without a reliable and complete data flow, a Hadoop cluster is little more than a very expensive and difficult-to-assemble space heater.
  7. Take all of the organization’s data and put it into a central log for real-time subscription.
  8. Each logical data source can be modeled as its own log. A data source could be an application that logs events (such as clicks or page views), or a database table that logs modifications. Each subscribing system reads from this log as quickly as it can, applies each new record to its own store, and advances its position in the log. Subscribers could be any kind of data system: a cache, Hadoop, another database in another site, a search system, and so on.
  9. The log also acts as a buffer that makes data production asynchronous from data consumption.
  10. Of particular importance: the destination system only knows about the log and does not know any details of the system of origin.
  11. use the term “log” here instead of “messaging system” or “pub sub” because it is much more specific about semantics and a much closer description of what you need in a practical implementation to support data replication.
  12. You can think of the log as acting as a kind of messaging system with durability guarantees and strong ordering semantics.
  13. we needed to isolate each consumer from the source of the data. The consumer should ideally integrate with just a single data repository that would give her access to everything.
  14. Amazon has offered a service that is very similar to Kafka called Kinesis - it is the piping that connects all their distributed systems — DynamoDB, RedShift, S3 — as well as the basis for distributed stream processing using EC2.
  15. Google has followed with a data stream and processing framework, and Microsoft has started to move in the same direction with their Azure Service Bus offering.
  16. ETL is really two things. First, it is an extraction and data cleanup process, essentially liberating data locked up in a variety of systems in the organization and removing any system-specific nonsense. Secondly, that data is restructured for data warehousing queries (that is, made to fit the type system of a relational database, forced into a star or snowflake schema, perhaps broken up into a high performance column format, and so on). Conflating these two roles is a problem. The clean, integrated repository of data should also be available in real time for low-latency processing, and for indexing in other real-time storage systems.
  17. A better approach is to have a central pipeline, the log, with a well-defined API for adding data. The responsibility of integrating with this pipeline and providing a clean, well-structured data feed lies with the producer of this data feed. This means that as part of their system design and implementation, they must consider the problem of getting data out and into a well-structured form for delivery to the central pipeline.
  18. benefit of this architecture: it enables decoupled, event-driven systems.
  19. In order to allow horizontal scaling, we chop up our log into partitions
    1. Each partition is a totally ordered log, but there is no global ordering between partitions. 
    2. The writer controls the assignment of the messages to a particular partition, with most users choosing to partition by some kind of key (such as a user ID). Partitioning allows log appends to occur without coordination between shards, and allows the throughput of the system to scale linearly with the Kafka cluster size while still maintaining ordering within the sharding key.
    3. Each partition is replicated across a configurable number of replicas, each of which has an identical copy of the partition’s log. At any time, a single partition will act as the leader; if the leader fails, one of the replicas will take over as leader.
    4. each partition is order preserving, and Kafka guarantees that appends to a particular partition from a single sender will be delivered in the order they are sent.
  20. Kafka uses a simple binary format that is maintained between in-memory log, on-disk log, and in-network data transfers.
  21. It turns out that “log” is another word for “stream” and logs are at the heart of stream processing - see stream processing as something much broader: infrastructure for continuous data processing.
  22. Data collected in batch is naturally processed in batch. When data is collected continuously, it is naturally processed continuously.
  23. many data transfer processes still depend on taking periodic dumps and bulk transfer and integration. The only natural way to process a bulk dump is with a batch process. As these processes are replaced with continuous feeds, we naturally start to move towards continuous processing to smooth out the processing resources needed and reduce latency.
  24. This means that a stream processing system produces output at a user-controlled frequency instead of waiting for the “end” of the data set to be reached.
  25. The final use of the log is arguably the most important, and that is to provide buffering and isolation to the individual processes.
  26. The log acts as a very, very large buffer that allows the process to be restarted or fail without slowing down other parts of the processing graph. This means that a consumer can come down entirely for long periods of time without impacting any of the upstream graph; as long as it is able to catch up when it restarts, everything else is unaffected.
  27. An interesting application of this kind of log-oriented data modeling is the Lambda Architecture. This is an idea introduced by Nathan Marz, who wrote a widely read blog post describing an approach to combining stream processing with offline processing.
  28. For event data, Kafka supports retaining a window of data. The window can be defined in terms of either time (days) or space (GBs), and most people just stick with the one week default retention.
  29. Instead of simply throwing away the old log entirely, we garbage-collect obsolete records from the tail of the log. Any record in the tail of the log that has a more recent update is eligible for this kind of cleanup. By doing this, we still guarantee that the log contains a complete backup of the source system, but now we can no longer recreate all previous states of the source system, only the more recent ones. We call this feature log compaction.
  30. data infrastructure could be unbundled into a collection of services and application-facing system API.
    1. Zookeeper handles much of the system coordination (perhaps with a bit of help from higher-level abstractions like Helix or Curator).
    2. Mesos and YARN handle process virtualization and resource management.
    3. Embedded libraries like Lucene, RocksDB, and LMDB do indexing.
    4. Netty, Jetty, and higher-level wrappers like Finagle and rest.li handle remote communication.
    5. Avro, Protocol Buffers, Thrift, and umpteen zillion other libraries handle serialization. 
    6. Kafka and BookKeeper provide a backing log.
  31. If you stack these things in a pile and squint a bit, it starts to look like a LEGO version of distributed data system engineering. You can piece these ingredients together to create a vast array of possible systems.
  32. Here are some things a log can do: 
    1. Handle data consistency (whether eventual or immediate) by sequencing concurrent updates to nodes 
    2. Provide data replication between nodes 
    3. Provide “commit” semantics to the writer (such as acknowledging only when your write is guaranteed not to be lost) 
    4. Provide the external data subscription feed from the system 
    5. Provide the capability to restore failed replicas that lost their data or bootstrap new replicas 
    6. Handle rebalancing of data between nodes
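Points 1 through 5 above (deterministic replay, and the offset as a complete description of replica state) can be illustrated with a small, self-contained Java sketch. This is only a toy model with made-up key/value entries, not Kafka or any real replication code.

import java.util.HashMap;
import java.util.Map;

public class LogReplaySketch {

    /** A replica is fully described by its table plus the offset of the last log entry applied. */
    static class Replica {
        final Map<String, String> table = new HashMap<>();
        int offset = -1;

        void apply(String[][] log, int upTo) {
            for (int i = offset + 1; i <= upTo; i++) {
                table.put(log[i][0], log[i][1]); // deterministic update
                offset = i;
            }
        }
    }

    public static void main(String[] args) {
        // The log: an ordered, append-only sequence of key/value updates.
        String[][] log = {
                {"user:1", "alice"},
                {"user:2", "bob"},
                {"user:1", "alice-updated"}
        };

        Replica a = new Replica();
        Replica b = new Replica();

        a.apply(log, 2); // replica A consumes the whole log
        b.apply(log, 1); // replica B lags behind at offset 1
        b.apply(log, 2); // ...and catches up at its own pace

        // Same input log, same order => identical state; the offset alone describes how
        // far each replica has progressed.
        System.out.println(a.table.equals(b.table) && a.offset == b.offset); // prints true
    }
}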

Thursday, November 17, 2016

Swift 3 by example

//: Playground - noun: a place where people can play

import UIKit

// Mutable variable
var str = "Hello, playground"

// Array
var myarr : [String] = ["Watsh", "Rajneesh"]


// Dictionary
var mydict : [String:String] = ["tic": "tac", "ping": "pong"]

// constants
let conststr = "My Const String"
let countup = ["one", "two"]
let mapNameToParkingSpace = ["Alice": 10, "Rajneesh": 12]

// subscript of array
let elem = countup[1]


// Initializers
let emptyString = String()
let emptyArr = [Int]()

let emptySet = Set<Float>()

let defaultNum = Int()

let defaultBool = Bool()

let defaultFloat = Float()


// initializing a set
let availableRoomsSet = Set([1,2,3, 4])


// Properties
emptyArr.count
availableRoomsSet.count
conststr.isEmpty


// Optionals - can be value of the said datatype or nil
var anOptionalFloat: Float?
var anOptionalArrayOfStrings: [String]?
var anOptionalArrayOfOptionalStrings: [String?]?

var reading1: Float?
var reading2: Float?

reading1 = 1.0
reading2 = 2.0

// to use the value we need to unwrap the optional

// 1. Forced unwrapping - used when sure that the value will not be nil
// if the reading2 assignment is commented out, this will crash at runtime

let avgReading = (reading1! + reading2!)/2

// 2. Optional binding - safe to use always
//  if reading2 assignment is commented out, will go to else block
if let r1 = reading1,
    let r2 = reading2 {
    let avg = (r1 + r2)/2
} else {
    print("reading nil")
}

// Subscripting dictionary
let nameByParkingSpace = [13: "Alice", 27: "Bob"]
let space13Assignee: String? = nameByParkingSpace[13]
let space42Assignee: String? = nameByParkingSpace[42]

if let space13Assignee = nameByParkingSpace[13] {
    print("Key 13 is assigned in the dictionary!")
}


// looping
let range = 0 ..< countup.count

for i in range {
    let string = countup[i]
    // Use 'string'
}


for string in countup {
    // Use 'string'
}

// return a tuple of index and value in array
for (i, string) in countup.enumerated() {
    // (0, "one"), (1, "two")
}


// iterating dictionary
for (space, name) in nameByParkingSpace {
    let permit = "Space \(space): \(name)"
    print(permit)
}

// enum
enum PieType {
    case Apple
    case Cherry
    case Pecan
}

let favoritePie = PieType.Apple

let name: String

// switch-case with enum
// no break required as only the code within the matching case is executed so break is implicit
// fallthrough behavior can be set
switch favoritePie {
case .Apple:
    name = "Apple"
case .Cherry:
    name = "Cherry"
case .Pecan:
    name = "Pecan"
}

let osxVersion: Int = 7
switch osxVersion {
case 0...8:
    print("A big cat")
case 9:
    print("Mavericks")
case 10:
    print("Yosemite")
default:
    print("Greetings, people of the future! What's new in 10.\(osxVersion)?")
}


// Value associated with enum
enum PieTypeInt: Int {
    case Apple = 0
    case Cherry
    case Pecan
}


Friday, November 11, 2016

Understanding Bitcoin technology


A short summary of what I have understood so far about Bitcoin technology:
  1. It is a digital cryptocurrency - uses asymmetric encryption (private/public key pairs) during transactions and hashing (SHA256) to generate bitcoin addresses (which is analogous to a bank account number)
  2. The main author of this technology and initial software is Satoshi Nakamoto but no one knows him/her/them. Satoshi released the bitcoin software in Jan 2009. 
  3. It is decentralized - like gold and unlike paper currency which is centrally managed by governments of countries - so it is also called Digital Gold.
  4. An upper limit is set on the number of bitcoins that will ever be mined or generated: 21 million bitcoins (BTC or XBT), a cap that will not be fully reached until around the year 2140. At the time of this writing the total is around 16 million (https://blockchain.info/charts/total-bitcoins). Because the upper limit is fixed, the value of a bitcoin tends to keep increasing as time goes by; this is deflation. With paper currency, governments print more money to offset deficits, which reduces the value of existing money - that is inflation. Bitcoin, like gold, is deflationary and its value keeps increasing, unlike paper currency whose value decreases due to inflation.
  5. The smallest unit of bitcoin is 0.00000001 BTC, which is called a "satoshi". There is also the microbitcoin or "bit" (0.000001 BTC) and the millibitcoin (0.001 BTC). So we can go very granular and make transactions with very small fractions like 0.000345 BTC.
  6. Value of 1BTC at the time of this writing is $717 approx.
  7. It uses a peer-to-peer (P2P) network of computers. 
  8. A set of transactions is grouped into a structure called a "block" and added to the end of the "blockchain". The blockchain is like a ledger of all transactions ever done using bitcoins. All nodes running the bitcoin software get a copy of this ledger (initially it takes quite a while to download the blockchain data). After the initial download, new transactions are added to the blockchain incrementally.
  9. Roughly every 10 minutes, blocks of transactions are synced to all nodes in the bitcoin network, but only the one node (a "miner") that wins a lottery gets to extend the blockchain by adding the block of transactions it has assembled. This makes those transactions permanent in the blockchain, which is then synced to the thousands of miner nodes in the P2P network (so no single central node maintains the ledger, and it is virtually impossible to delete a transaction once recorded). The idea is that every 10 minutes only one node in the P2P network wins the chance to add a block to the blockchain.
  10. If there happens to be a collision (two nodes win the lottery at about the same time), only one of the two chains will ultimately hold: within the next 10 minutes the next winning node adds its block on top of one of the competing chains, that longest blockchain remains the source of truth, and the other branch becomes orphaned. This protocol, in which nodes in a distributed computing environment arrive at a consensus and choose a winner every 10 minutes, is said to solve the Byzantine Generals Problem, which was earlier deemed unsolvable.
  11. The miners are rewarded in 2 ways - 
    1. Each transaction has a fee associated with it that is earned as a reward by the miner that wins the lottery. It is generally a small value (0.0002 BTC at the time of this writing in the Bitcoin Core wallet program).
    2. When bitcoin was introduced in January 2009, the miner that won the lottery was rewarded with 50 BTC roughly every 10 minutes. After about 4 years this block reward was halved to 25 BTC, then 4 years later to 12.5 BTC, and so on. As the block reward keeps shrinking, transaction fees become an increasingly important part of the miner's reward. The reward is reduced over time to ensure that the total number of "mined" or generated bitcoins never exceeds 21 million BTC.
  12. The incentive to "find" a block (akin to mining and finding gold) is why many people have invested in hardware that can do bitcoin mining on the P2P network: the miner that finds a block is compensated by the fees of all the transactions that make up that block, plus the newly minted bitcoins, which also go to the miner that found the block.
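Point 1 above mentions SHA-256 hashing. The following Java sketch uses java.security.MessageDigest just to illustrate the hash function itself; real Bitcoin address generation and mining involve additional hashing and encoding steps and difficulty targets, so the inputs and usage here are purely illustrative.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Sha256Sketch {

    static String sha256Hex(String input) throws NoSuchAlgorithmException {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        byte[] hash = digest.digest(input.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b)); // render each byte as two hex chars
        }
        return hex.toString();
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        // The same input always yields the same digest; a one-character change
        // produces a completely different one.
        System.out.println(sha256Hex("pay alice 0.001 BTC"));
        System.out.println(sha256Hex("pay alice 0.002 BTC"));
    }
}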

Wednesday, November 09, 2016

git clone shows all files as deleted or changed

When git clone shows all files as deleted or changed then do the following:

Ensure the following property is set; otherwise Windows will not be able to handle file path names longer than 260 characters, which results in odd behavior from git.

git config --system core.longpaths true

git reset --hard origin/[your_branch] -- this will reset the local git repo index to match the remote branch.

Monday, October 17, 2016

Good Health numbers

HDL (good cholesterol) > 50
LDL (bad cholesterol) < 100 (best is < 50 - that is centenarian type)

Total cholesterol (HDL + LDL) < 200

High cholesterol means heart attack risk (#1 killer)
Obesity, diabetes, high blood pressure and smoking are 4 other risk factors.

       CHOLESTEROL
           ------------------
Cholesterol ---   <  200
HDL  ---  40  ---  60
LDL  ---    <  100
VLDL --     <  30
Triglycerides --   <  150
----------------------------

         CHOLESTEROL
         ----------------
Borderline --200 -- 239
High ----    >  240
V.High --    >  250
----------------------------

            LDL
           ------
Borderline --130 ---159
High ---  160  ---  189
V.High --  > 190
----------------------------

           TRIGLYCERIDES
           -----------------
Borderline - 150 -- 199
High --   200  ---  499
V.High --     >   500
----------------------------

       
        PLATELETS COUNT
       ----------------------
1.50  Lac  ----  4.50 Lac
----------------------------

              BLOOD
             -----------
Vitamin-D --  50   ----  80
Uric Acid --  3.50  ---  7.20
----------------------------

            KIDNEY
           ----------
Urea  ---   17   ---   43
Calcium --  8.80  --  10.60
Sodium --  136  ---  146
Protein  --   6.40  ---  8.30
----------------------------


           HIGH BP
          ----------
120/80 --  Normal
130/85 --Normal  (Control)
140/90 --  High
150/95 --  V.High
----------------------------

         LOW BP
        ---------
120/80 --  Normal
110/75 --  Normal  (Control)
100/70 --  Low
90/65 --   V.Low
----------------------------

              SUGAR
             ---------
Glucose (F) --  70  ---  100
(12 hrs Fasting)
Glucose (PP) --  70  --- 140
(2 hrs after eating)
Glucose (R) --  70  ---  140
(After 2 hrs)
----------------------------
    
             HAEMOGLOBIN
            -------------------
Male --  13  ---  17
Female --  11 ---  15
RBC Count  -- 4.50 -- 5.50
                           (million)
----------------------------

           PULSE
          --------
72  per minute (standard)
60 --- 80 p.m. (Normal)
40 -- 180  p.m.(abnormal)
----------------------------

          TEMPERATURE
          -----------------
98.4 F    (Normal)
99.0 F Above  (Fever)

Please help your Relatives, Friends by sharing this information....

Heart Attacks And Drinking Warm Water:

This is a very good article, not only about warm water after your meal, but about heart attacks. The Chinese and Japanese drink hot tea with their meals, not cold water; maybe it is time we adopt their drinking habit while eating. For those who like to drink cold water, this article is applicable to you. It is very harmful to have a cold drink/water during a meal, because the cold water will solidify the oily food that you have just consumed. It will slow down digestion. Once this 'sludge' reacts with the acid, it will break down and be absorbed by the intestine faster than the solid food. It will line the intestine. Very soon, this will turn into fats and lead to cancer. It is best to drink hot soup or warm water after a meal.

French fries and burgers are the biggest enemies of heart health. A coke after that gives more power to this demon. Avoid them for the sake of your heart and your health.


Drink one glass of warm water just when you are about to go to bed to avoid clotting of the blood at night to avoid heart attacks or strokes.