
SIX sparkling features of Apache Spark!

What is Apache Spark? Why is there such a buzz about it? If you are in the Big Data analytics business, should you really care about Spark? I hope this post helps answer some of these questions that may have been coming to your mind.

Apache Spark is a powerful open-source processing engine for Hadoop data, built around speed, ease of use, and sophisticated analytics. It was originally developed in UC Berkeley’s AMPLab and later moved to Apache. Apache Spark is essentially a parallel data processing framework that can work with Apache Hadoop to make it extremely easy to develop fast Big Data applications combining batch, streaming, and interactive analytics on all your data.

Let’s go through some of the features that really make it stand out in the Big Data world!

  1. Lightning-Fast Processing

When it comes to Big Data processing, speed always matters; we want to process our huge data as fast as possible. Spark enables applications in Hadoop clusters to run up to 100x faster in memory, and up to 10x faster even when running on disk. Spark makes this possible by reducing the number of reads and writes to disk: it stores intermediate processing data in memory. It uses the concept of a Resilient Distributed Dataset (RDD), which allows it to transparently keep data in memory and persist it to disk only when needed. This eliminates most of the disk reads and writes, the main time-consuming factors in data processing.
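
To make the RDD idea concrete, below is a minimal sketch using Spark's Java API (the app name and input path are made up for illustration). After cache(), the first action computes the RDD and keeps it in memory, so the second count() avoids re-reading the file from disk:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class RddCacheSketch {
	public static void main(String[] args) {
		SparkConf conf = new SparkConf().setAppName("RddCacheSketch").setMaster("local");
		JavaSparkContext sc = new JavaSparkContext(conf);

		// Hypothetical input path; point this at your own data.
		JavaRDD<String> lines = sc.textFile("hdfs://localhost:9000/data/events.log");

		// Mark the RDD to be kept in memory once computed.
		lines.cache();

		System.out.println("count 1: " + lines.count()); // reads from disk, then caches
		System.out.println("count 2: " + lines.count()); // served from memory

		sc.stop();
	}
}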

(Spark performance over Hadoop. Image courtesy: Cloudera. Visit this link to see how Jai & Matei explain the delightful experience Spark gives its developers.)

  2. Ease of Use as it Supports Multiple Languages

Spark lets you quickly write applications in Java, Scala, or Python, so developers can create and run applications in a programming language they already know. It comes with a built-in set of over 80 high-level operators, and we can use it interactively to query data from within the shell, too.
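
As a small taste of those operators, here is a hedged word-count sketch in Java 8 (the input file name is hypothetical, and the exact flatMap signature differs slightly across Spark versions; older releases expect an Iterable, newer ones an Iterator):

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import scala.Tuple2;

public class WordCountSketch {
	public static void main(String[] args) {
		JavaSparkContext sc = new JavaSparkContext(
				new SparkConf().setAppName("WordCountSketch").setMaster("local"));

		JavaRDD<String> lines = sc.textFile("input.txt"); // hypothetical file

		// flatMap, mapToPair and reduceByKey are three of the 80+ operators.
		JavaPairRDD<String, Integer> counts = lines
				.flatMap(line -> Arrays.asList(line.split(" ")))
				.mapToPair(word -> new Tuple2<>(word, 1))
				.reduceByKey((a, b) -> a + b);

		counts.collect().forEach(t -> System.out.println(t._1() + ": " + t._2()));

		sc.stop();
	}
}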

  3. Support for Sophisticated Analytics

In addition to simple “map” and “reduce” operations, Spark supports SQL queries, streaming data, and complex analytics such as machine learning and graph algorithms out-of-the-box. Not only that, users can combine all these capabilities seamlessly in a single workflow.

  4. Real-Time Stream Processing

Spark can handle real-time streaming. MapReduce mainly processes data that has already been stored, whereas Spark can also manipulate data in real time using Spark Streaming. That is not to ignore the other frameworks whose integration with Hadoop can handle streaming as well.

Here is what Cloudera says about Spark Streaming’s abilities:

  • Easy: Built on Spark’s lightweight yet powerful APIs, Spark Streaming lets you rapidly develop streaming applications
  • Fault tolerant: Unlike other streaming solutions (e.g. Storm), Spark Streaming recovers lost work and delivers exactly-once semantics out of the box with no extra code or configuration
  • Integrated: Reuse the same code for batch and stream processing, even joining streaming data to historical data

(Streaming performance over Storm. Image courtesy: Cloudera.com)
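
To give a flavor of the API, below is a minimal Spark Streaming sketch in Java. It assumes text lines arriving on a local socket (the host and port are invented for illustration) and reuses the same operator style as batch code:

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StreamingSketch {
	public static void main(String[] args) throws InterruptedException {
		// local[2]: one thread for the receiver, one for processing.
		SparkConf conf = new SparkConf().setAppName("StreamingSketch").setMaster("local[2]");

		// Process the stream in one-second micro-batches.
		JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));

		// Hypothetical source: lines arriving on a local socket.
		JavaReceiverInputDStream<String> lines = jssc.socketTextStream("localhost", 9999);

		// The familiar RDD-style operators apply to each micro-batch.
		lines.filter(line -> line.contains("ERROR")).print();

		jssc.start();
		jssc.awaitTermination();
	}
}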

  5. Ability to Integrate with Hadoop and Existing Hadoop Data

Spark can run independently, but it can also run on Hadoop 2’s YARN cluster manager and read any existing Hadoop data. That’s a BIG advantage! It can read from any Hadoop data source, for example HBase or HDFS. This makes Spark suitable for migrating existing pure Hadoop applications, provided the application’s use case really suits Spark. Since Spark makes heavy use of immutability, not every scenario may be suitable for migration.

  6. Active and Expanding Community

Apache Spark is built by a wide set of developers from over 50 companies. The project started in 2009, and more than 250 developers have already contributed to Spark! It has active mailing lists and a JIRA for issue tracking.

Here is a useful link to start with:

If you want to learn the basics of Apache Spark, my previous post will help you. It has a link to a training video that explains Spark in a simple way.


Poll: Which JavaScript framework will you choose for your Single Page Application (SPA)?

A single-page application (SPA) is a web application or website that fits on a single web page, with the goal of providing a smoother user experience similar to a desktop application. We use sophisticated JavaScript libraries to develop them, and they typically rely on REST web services for the server side. Below is a simple poll about SPAs.

Apache Spark: A promising framework for Big Data world!

Apache Spark™ is an open-source data analytics cluster computing framework originally developed in the AMPLab at UC Berkeley. It is a fast and general engine for large-scale data processing. You can think of it as an engine that expands the range of computing workloads Hadoop can handle, while also increasing performance by using in-memory storage during execution. It is a standalone project, but it is designed to work with, and on top of, the Hadoop Distributed File System.

The ecosystem of Spark projects. Source: Databricks

Below is a very useful training video link for Spark beginners by Intellipaat (copyrighted to Intellipaat and shared through their YouTube channel). I hope it helps you get an initial idea about Spark!

Step by Step Guide to create a sample CRUD Java application using MongoDB and Spring Data for MongoDB.

MongoDB is a scalable, high-performance, open source NoSQL database. Instead of storing data in tables as is done in a “classical” relational database, it stores structured data as JSON-like documents with dynamic schemas. This post contains steps to create a sample application using MongoDB and Spring Data for MongoDB.
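
For illustration only, a single document in such a database might look like the following made-up example (it foreshadows the Tree objects we will store later):

{
	"_id" : "1",
	"name" : "Apple Tree",
	"category" : "Fruit",
	"age" : 10
}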

Spring Data for MongoDB

‘Spring Data for MongoDB’ provides a familiar Spring-based programming model for NoSQL data stores. It offers many features that make Java developers’ lives simpler when working with MongoDB. The MongoTemplate helper class increases productivity when performing common Mongo operations and includes integrated object mapping between documents and POJOs. As usual, it translates exceptions into Spring’s portable data access exception hierarchy. The Java-based Query, Criteria, and Update DSLs are very useful for writing everything in Java. It also provides cross-store persistence: support for JPA entities with fields transparently persisted and retrieved using MongoDB.

You can download it from here: Download

Installing MongoDB in just 5 steps!

No other place on the internet explains the installation more clearly than the official installation reference. The following are the steps I followed for my installation.

1. Download the latest production release of MongoDB from the MongoDB downloads page.

2. Unzip it to any convenient location, for example:

C:\MongoDB.

3. MongoDB requires a data folder to store its files. The default location for the MongoDB data directory is C:\data\db, but we can use any folder for storing data. I wanted to keep it inside the same MongoDB folder, so I created a folder at the path below.

C:\mongodb\data\db

4. That’s it! Go to the C:\mongodb\bin folder and run mongod.exe with the data path:

C:\mongodb\bin\mongod.exe --dbpath C:\mongodb\data\db

If your path includes spaces, enclose the entire path in double quotes, for example:

C:\mongodb\bin\mongod.exe --dbpath "C:\mongodb\data\db storage place"


5. To connect to MongoDB, go to its bin folder and run mongo.exe. This mongo shell will connect to the database running on the localhost interface and port 27017 by default. If you want to run MongoDB as a Windows service, please see here.

C:\mongodb\bin\mongo.exe


Okay, this part is done. Let it run there. Now we can create a small Java application with Spring Data.

Creating an application with Spring Data (Another 5 more steps!)

We need the JARs below to create this sample project. As a nature lover and a go-green person, I named the project “NatureStore”! Using it, we are going to “save” some “Trees” into the DB!

Step 1: Create a simple domain object.

The @Document annotation identifies a domain object to be persisted to MongoDB, and the @Id annotation marks its identifier.

package com.orangeslate.naturestore.domain;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

@Document
public class Tree {

	@Id
	private String id;

	private String name;

	private String category;

	private int age;

	public Tree(String id, String name, int age) {
		this.id = id;
		this.name = name;
		this.age = age;
	}

	public String getId() {
		return id;
	}

	public void setId(String id) {
		this.id = id;
	}

	public String getName() {
		return name;
	}

	public void setName(String name) {
		this.name = name;
	}

	public String getCategory() {
		return category;
	}

	public void setCategory(String category) {
		this.category = category;
	}

	public int getAge() {
		return age;
	}

	public void setAge(int age) {
		this.age = age;
	}

	@Override
	public String toString() {
		return "Person [id=" + id + ", name=" + name + ", age=" + age
				+ ", category=" + category + "]";
	}
}

Step2: Create a simple Interface.

I created a simple interface with CRUD methods. I have also included createCollection and dropCollection in the same interface.

package com.orangeslate.naturestore.repository;

import java.util.List;

import com.mongodb.WriteResult;

public interface Repository<T> {

	public List<T> getAllObjects();

	public void saveObject(T object);

	public T getObject(String id);

	public WriteResult updateObject(String id, String name);

	public void deleteObject(String id);

	public void createCollection();

	public void dropCollection();
}

Step 3: Create an implementation class specifically for the Tree domain object. It also handles creating and dropping the MongoDB collection.

package com.orangeslate.naturestore.repository;

import java.util.List;

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;

import com.mongodb.WriteResult;
import com.orangeslate.naturestore.domain.Tree;

public class NatureRepositoryImpl implements Repository<Tree> {

	MongoTemplate mongoTemplate;

	public void setMongoTemplate(MongoTemplate mongoTemplate) {
		this.mongoTemplate = mongoTemplate;
	}

	/**
	 * Get all trees.
	 */
	public List<Tree> getAllObjects() {
		return mongoTemplate.findAll(Tree.class);
	}

	/**
	 * Saves a {@link Tree}.
	 */
	public void saveObject(Tree tree) {
		mongoTemplate.insert(tree);
	}

	/**
	 * Gets a {@link Tree} for a particular id.
	 */
	public Tree getObject(String id) {
		return mongoTemplate.findOne(new Query(Criteria.where("id").is(id)),
				Tree.class);
	}

	/**
	 * Updates a {@link Tree} name for a particular id.
	 */
	public WriteResult updateObject(String id, String name) {
		return mongoTemplate.updateFirst(
				new Query(Criteria.where("id").is(id)),
				Update.update("name", name), Tree.class);
	}

	/**
	 * Delete a {@link Tree} for a particular id.
	 */
	public void deleteObject(String id) {
		mongoTemplate
				.remove(new Query(Criteria.where("id").is(id)), Tree.class);
	}

	/**
	 * Creates the {@link Tree} collection if it does not already exist.
	 */
	public void createCollection() {
		if (!mongoTemplate.collectionExists(Tree.class)) {
			mongoTemplate.createCollection(Tree.class);
		}
	}

	/**
	 * Drops the {@link Tree} collection if it already exists.
	 */
	public void dropCollection() {
		if (mongoTemplate.collectionExists(Tree.class)) {
			mongoTemplate.dropCollection(Tree.class);
		}
	}
}

Step 4: Creating the Spring context.

Declare all the Spring beans and MongoDB objects in a Spring context file; let’s call it applicationContext.xml. Note that we have not created a database named “nature” yet: MongoDB will create it once we save our first data.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:context="http://www.springframework.org/schema/context"
	xsi:schemaLocation="http://www.springframework.org/schema/beans

http://www.springframework.org/schema/beans/spring-beans-3.0.xsd


http://www.springframework.org/schema/context

        http://www.springframework.org/schema/context/spring-context-3.0.xsd">

	<bean id="natureRepository"
		class="com.orangeslate.naturestore.repository.NatureRepositoryImpl">
		<property name="mongoTemplate" ref="mongoTemplate" />
	</bean>

	<bean id="mongoTemplate" class="org.springframework.data.mongodb.core.MongoTemplate">
		<constructor-arg name="mongo" ref="mongo" />
		<constructor-arg name="databaseName" value="nature" />
	</bean>

	<!-- Factory bean that creates the Mongo instance -->
	<bean id="mongo" class="org.springframework.data.mongodb.core.MongoFactoryBean">
		<property name="host" value="localhost" />
		<property name="port" value="27017" />
	</bean>

	<!-- Activate annotation configured components -->
	<context:annotation-config />

	<!-- Scan components for annotations within the configured package -->
	<context:component-scan base-package="com.orangeslate.naturestore">
		<context:exclude-filter type="annotation"
			expression="org.springframework.context.annotation.Configuration" />
	</context:component-scan>

</beans>

Step 5: Creating a Test class

Here I have created a simple test class that initializes the context using ClassPathXmlApplicationContext.

package com.orangeslate.naturestore.test;

import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

import com.orangeslate.naturestore.domain.Tree;
import com.orangeslate.naturestore.repository.NatureRepositoryImpl;
import com.orangeslate.naturestore.repository.Repository;

public class MongoTest {

	public static void main(String[] args) {

		ConfigurableApplicationContext context = new ClassPathXmlApplicationContext(
				"classpath:/spring/applicationContext.xml");

		Repository<Tree> repository = context.getBean(NatureRepositoryImpl.class);

		// cleanup collection before insertion
		repository.dropCollection();

		// create collection
		repository.createCollection();

		repository.saveObject(new Tree("1", "Apple Tree", 10));

		System.out.println("1. " + repository.getAllObjects());

		repository.saveObject(new Tree("2", "Orange Tree", 3));

		System.out.println("2. " + repository.getAllObjects());

		System.out.println("Tree with id 1" + repository.getObject("1"));

		repository.updateObject("1", "Peach Tree");

		System.out.println("3. " + repository.getAllObjects());

		repository.deleteObject("2");

		System.out.println("4. " + repository.getAllObjects());
	}
}

Let’s run it as a Java application; we see the output below. The first method saves “Apple Tree” into the database. The second saves “Orange Tree” as well. The third demonstrates finding an object by its id. The fourth updates an existing object’s name to “Peach Tree”. And finally, the last method deletes the second object from the DB.

1. [Tree [id=1, name=Apple Tree, age=10, category=null]]
2. [Tree [id=1, name=Apple Tree, age=10, category=null], Tree [id=2, name=Orange Tree, age=3, category=null]]
Tree with id 1: Tree [id=1, name=Apple Tree, age=10, category=null]
3. [Tree [id=1, name=Peach Tree, age=10, category=null], Tree [id=2, name=Orange Tree, age=3, category=null]]
4. [Tree [id=1, name=Peach Tree, age=10, category=null]]

NOTE: You can download all of this code from GitHub!

11 OPEN Document-Oriented Databases that come under the NoSQL DB Category!

A document-oriented database is designed for storing, retrieving, and managing document-oriented, or semi-structured, data. Document-oriented databases are one of the main categories of NoSQL databases. The central concept of a document-oriented database is the notion of a Document. While each document-oriented database implementation differs on the details of this definition, in general they all assume that documents encapsulate and encode data (or information) in some standard format(s) or encoding(s). Encodings in use include XML, YAML, JSON, and BSON, as well as binary forms like PDF and Microsoft Office documents (Word, Excel, and so on).

  • MongoDB: MongoDB is a collection-oriented, schema-free document database. Data is grouped into sets called ‘collections’. Each collection has a unique name in the database and can contain an unlimited number of documents. Collections are analogous to tables in an RDBMS, except that they don’t have any defined schema.

It stores data in BSON (“Binary Serialized dOcument Notation”) format: a structured collection of key-value pairs, where keys are strings and values are any of a rich set of data types, including arrays and documents.

Home: http://www.mongodb.org/
Quick Start: http://www.mongodb.org/display/DOCS/Quickstart
Download: http://www.mongodb.org/downloads

  • CouchDB: CouchDB is a document database server, accessible via a RESTful JSON API. It is ad-hoc and schema-free with a flat address space. It is queryable and indexable, featuring a table-oriented reporting engine that uses JavaScript as a query language. A CouchDB document is an object that consists of named fields. Field values may be strings, numbers, dates, or even ordered lists and associative maps.

Home: http://couchdb.apache.org/
Quick Start: http://couchdb.apache.org/docs/intro.html
Download: http://couchdb.apache.org/downloads.html

  • Terrastore: Terrastore is a modern document store which provides advanced scalability and elasticity features without sacrificing consistency. It is based on Terracotta, so it relies on an industry-proven, fast clustering technology.

Home: http://code.google.com/p/terrastore/
Quick Start: http://code.google.com/p/terrastore/wiki/Documentation
Download: http://code.google.com/p/terrastore/downloads/list

  • RavenDB: Raven is a .NET LINQ-enabled document database, focused on providing a high-performance, schema-less, flexible, and scalable NoSQL data store for the .NET and Windows platforms.
    Raven stores any JSON document inside the database. It is a schema-less database where you can define indexes using C#’s LINQ syntax.

Home: http://ravendb.net/
Quick Start: http://ravendb.net/tutorials
Download: http://ravendb.net/download

  • OrientDB: OrientDB is an open-source NoSQL database management system written in Java. Even though it is a document-based database, relationships are managed as in graph databases, with direct connections between records. It supports schema-less, schema-full, and schema-mixed modes. It has a strong security profiling system based on users and roles, and supports SQL as a query language.

Home: http://www.orientechnologies.com/
Quick Start: http://code.google.com/p/orient/wiki/Tutorials
Download: http://code.google.com/p/orient/wiki/Download

  • ThruDB: Thrudb is a set of simple services built on top of the Apache Thrift framework that provides indexing and document storage services for building and scaling websites. Its purpose is to offer web developers flexible, fast, and easy-to-use services that can enhance or replace traditional data storage and access layers.
    It supports multiple storage backends such as BerkeleyDB, disk, and MySQL, and also has Memcache and Spread integration.

Home: http://code.google.com/p/thrudb/
Quick Start: http://thrudb.googlecode.com/svn/trunk/doc/Thrudb.pdf
Download: http://code.google.com/p/thrudb/source/checkout

  • SisoDB: SisoDb is a document-oriented DB provider for SQL Server written in C#. It lets you store object graphs of POCOs (plain old CLR objects) without having to configure any mappings. Each entity is treated as an aggregate root and gets separate tables created on the fly.

Home: http://www.sisodb.com
Quick Start: http://www.sisodb.com/Wiki
Download: https://github.com/danielwertheim/SisoDb-Provider/

  • RaptorDB: RaptorDB is an extremely small and fast embedded NoSQL persisted-dictionary database using B+tree or MurMur hash indexing. It was primarily designed to store JSON data (see its author’s fastJSON implementation), but it can store any type of data you give it.

Home: http://www.codeproject.com/KB/database/RaptorDB.aspx
Quick Start: http://www.codeproject.com/KB/database/RaptorDB.aspx
Download: http://www.codeproject.com/KB/database/RaptorDB.aspx

  • CloudKit: CloudKit provides schema-free, auto-versioned, RESTful JSON storage with optional OpenID and OAuth support, including OAuth Discovery.

Home: http://getcloudkit.com/
Quick Start: http://getcloudkit.com/api/
Download: https://github.com/jcrosby/cloudkit

  • Persevere: Persevere is an open-source set of tools for persistence and distributed computing using intuitive standards-based JSON interfaces: HTTP REST, JSON-RPC, JSONPath, and REST Channels. The core of the Persevere project is the Persevere server, which includes a Persevere JavaScript client, but the standards-based interface is intended to be used with any framework or client.

Home: http://code.google.com/p/persevere-framework/
Quick Start: http://code.google.com/p/persevere-framework/w/list
Download: http://code.google.com/p/persevere-framework/downloads/list

  • Jackrabbit: The Apache Jackrabbit™ content repository is a fully conforming implementation of the Content Repository for Java Technology API (JCR, specified in JSR 170 and 283). A content repository is a hierarchical content store with support for structured and unstructured content, full text search, versioning, transactions, observation, and more.

Home: http://jackrabbit.apache.org
Quick Start: http://jackrabbit.apache.org/getting-started-with-apache-jackrabbit.html
Download: http://jackrabbit.apache.org/downloads.html

Conclusion:
Document databases store and retrieve documents; the basic atomic storage unit is a document. As always, your requirements drive the decision: think about your data-access patterns and use cases to create a smart document model. When your domain model can be split and partitioned across documents, a document database will suit you well. For example, for blog software, a CMS, or a wiki, a document DB works extremely well. At the same time, a non-relational database is not better than a relational one in cases where your data has many relations and needs normalization.

Also check the following Stack Overflow link covering the pros and cons of document-based vs. relational databases:
http://stackoverflow.com/questions/337344/pros-cons-of-document-based-databases-vs-relational-databases

Wink – A framework for RESTful web services from Apache

Apache Wink 1.0 is a complete Java-based solution for implementing and consuming REST-based web services. The goal of the Wink framework is to provide a reusable and extendable set of classes and interfaces that can serve as a foundation on which a developer can efficiently construct applications.

Taken from the Apache Wink official site: Click Here

Wink consists of a Server module for developing REST services, and of a Client module for consuming REST services. It cleanly separates the low-level protocol aspects from the application aspects. Therefore, in order to implement and consume REST Web Services the developer only needs to focus on the application business logic and not on the low-level technical details.

REST Web Service design structure

The Wink Server module is a complete implementation of the JAX-RS v1.0 specification. On top of this implementation, the Wink Server module provides a set of additional features that were designed to facilitate the development of RESTful Web services.
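
As a flavor of what the server module hosts, here is a minimal sketch of a plain JAX-RS resource (the path and message are invented for illustration). Because the Wink Server module is a complete JAX-RS v1.0 implementation, a resource like this needs no Wink-specific code:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

// A standard JAX-RS resource; Wink can serve it as-is.
@Path("/greeting")
public class GreetingResource {

	// Handles GET /greeting and returns a plain-text message.
	@GET
	@Produces("text/plain")
	public String greeting() {
		return "Hello from a JAX-RS resource!";
	}
}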

The Wink Client module is a Java based framework that provides functionality for communicating with RESTful Web services. The framework is built on top of the JDK HttpURLConnection and adds essential features that facilitate the development of such client applications.

How to create a RESTful service using Wink? [Coming soon - next post :-) ]

Story of an Old Carpenter

“Your life today is the result of your attitudes and choices in the past. Your life tomorrow will be the result of your attitudes and the choices you make today.”

This is a story of an elderly carpenter who had been working for a contractor for the past 53 years. He had built many beautiful houses, but now that he was getting old, he wanted to retire and lead a leisurely life with his family. So he went to the contractor and told him about his plan to retire. The contractor felt sad at the prospect of losing a good worker but agreed to the plan, because the carpenter had indeed become too fragile for the tough building work. As a last request, though, he asked the old carpenter to construct just one last house.
The old man agreed and started working, but his heart was no longer in his work; he had lost his motivation. So he resorted to shoddy workmanship and constructed the house half-heartedly. After the house was built, the contractor came to visit his employee’s last piece of work. After inspecting the house, he handed the front-door keys to the carpenter and said, “This is your new house. My gift to you.” The carpenter was shocked and upset. Had he known that he was building his own house, he would have done a better job! Now he would have to live in a house that was not worth living in.
Think of yourself as the carpenter. You work hard every day, but are you giving your best? We put the least effort into work we don’t like or have no interest in. Later, we are shocked at the situation we have created for ourselves and try to figure out why we didn’t do it differently.
Enjoy your tasks and carry out your responsibilities with pleasure, not with pain. “Life is a do-it-yourself project.” Do your job enthusiastically and with devotion, and a positive outcome and a pleasing life will certainly come your way.