A collection of Big Data books from Packt Publishing


I found that Packt Publishing has a few great books on Big Data, and here is a collection of the ones I found very useful.

Packt is giving its readers a chance to dive into its comprehensive catalog of over 2000 books and videos for the next 7 days with its LevelUp program:


Packt is offering all of its eBooks and videos at just $10 each or less.

The more EXP customers want to gain, the more they save:

  • Any 1 or 2 eBooks/Videos – $10 each
  • Any 3 to 5 eBooks/Videos – $8 each
  • Any 6 or more eBooks/Videos – $6 each

More information is available at bit.ly/Yj6oWq  |  bit.ly/1yu4679

For more information, please visit: www.packtpub.com/packt/offers/levelup

Big Data $1B Club – Top 20 Players


Here is a list of the top players in the Big Data world with direct or indirect influence over billion-dollar (or larger) Big Data projects (not in any particular order):

  1. Microsoft
  2. Google
  3. Amazon
  4. IBM
  5. HP
  6. Oracle
  7. VMware
  8. Teradata
  9. EMC
  10. Facebook
  11. GE
  12. Intel
  13. Cloudera
  14. SAS
  15. 10gen
  16. SAP
  17. Hortonworks
  18. MapR
  19. Palantir
  20. Splunk

The list is based on each company's direct or indirect involvement in Big Data, whether or not through a direct product. All of the above companies are involved in Big Data projects worth a billion dollars or more …

PostgreSQL – Tips and Tricks


Login:

$ psql <dbname> -U <user_name>

After logging in at the PostgreSQL console:

  •  Exit:
    • dbname=# \q
  • List all tables:
    • dbname=# \dt
  • Info about a specific table:
    • dbname=# \d+ <table_name>

Listing all rows where a column value is null:

  • perspectivedb=# select * from cluster_config_property where key is null;

Deleting all rows where a column value is null:

  • perspectivedb=# delete from cluster_config_property where key is null;

Backing up a single table:

  • This is done from the regular shell prompt (not from within psql)
  • $ pg_dump -t <table_name> <db_name> -U <user_name> > <target_file_name>.sql

Restoring a single table back into database:
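
The restore step mirrors the backup; a minimal sketch, assuming the table was dumped to a plain SQL file with pg_dump as above (the database and file names below are whatever you used in the dump step):

  • This is also done from the regular shell prompt (not from within psql)
  • $ psql <db_name> -U <user_name> -f <target_file_name>.sql
  • If the table already exists in the target database, drop it first (or create the dump with pg_dump --clean)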

Watch Spark Summit 2014 on UStream


You can check out the Spark Summit 2014 agenda here: http://spark-summit.org/2014/agenda

Ustream Sessions:

http://www.ustream.tv/channel/spark-summit-2014

Please register yourself at the summit site to get more detailed information.

Keywords: Spark Summit, Hadoop, Spark

Accessing Remote Hadoop Server using Hadoop API or Tools from local machine (Example: Hortonworks HDP Sandbox VM)


Sometimes you may need to access the Hadoop runtime from a machine where Hadoop services are not running. In this process you will set up password-less SSH access to the Hadoop machine from your local machine; once that is ready, you can use the Hadoop API to access the Hadoop cluster, or run Hadoop commands directly from the local machine by passing the proper Hadoop configuration.

Starting Hortonworks HDP 1.3 and/or 2.1 VM

You can use these instructions with any VM running Hadoop, or you can download the HDP 1.3 or 2.1 images from the link below:

http://hortonworks.com/products/hortonworks-sandbox/#install

Now start your VM and make sure your Hadoop cluster is up and running. Once your VM is up and running, you will see its IP address and hostname on the VM console screen, typically in the 192.168.21.xxx range.


Accessing Hortonworks HDP 1.3 and/or 2.1 from browser:

Using the IP address provided, you can check the Hadoop server status on port 8000 as below:

HDP 1.3 – http://192.168.21.187:8000/about/

HDP 2.1 – http://192.168.21.186:8000/about/

The UI looks the same for both HDP 1.3 and HDP 2.1.

Now, from your host machine, you can try to SSH into either VM using the username root and the password hadoop, as below:

$ ssh root@192.168.21.187

The authenticity of host '192.168.21.187 (192.168.21.187)' can't be established.
RSA key fingerprint is b2:c0:9a:4b:10:b4:0f:c0:a0:da:7c:47:60:84:f5:dc.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.21.187' (RSA) to the list of known hosts.
root@192.168.21.187's password: hadoop
Last login: Thu Jun 5 03:55:17 2014

Now we will add password-less SSH access to these VMs; there are two options:

Option 1: You already have an SSH key created earlier and want to reuse it here:

In this option, we will first make sure we have an RSA key for SSH sessions on our local machine and then use it for password-less SSH access:

  1. In your home folder (/Users/<yourname>), go to the folder named .ssh
  2. Identify the file id_rsa.pub (/Users/avkashchauhan/.ssh/id_rsa.pub); you will see a long key string there
  3. Also identify the file authorized_keys there (i.e. /Users/avkashchauhan/.ssh/authorized_keys); you will see one or more long key strings there
  4. Check the contents of id_rsa.pub and make sure this key is also present in the authorized_keys file along with the other keys (if any)
  5. Now copy the key string from the id_rsa.pub file
  6. SSH to your HDP machine using the username and password as in the previous step
  7. Go to the /root/.ssh folder
  8. You will find an authorized_keys file there; open it in an editor and append the key you copied in step #5
  9. Save the authorized_keys file (a one-command shortcut for steps 5–9 is shown after the note below)
  10. In the same VM you will also find an id_rsa.pub file; copy its contents as well
  11. Exit the HDP VM
  12. On your host machine, append the key from the HDP VM to the authorized_keys file you checked in step #3 and save it
  13. Now try logging into the HDP VM as below:

ssh root@192.168.21.187

Last login: Thu Jun 5 06:35:31 2014 from 192.168.21.1

Note: You will see that a password is not needed this time, as password-less SSH is working.
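
Steps 5–9 of Option 1 can also be collapsed into a single command run from the host machine. This is only a sketch: it assumes the same VM IP and root account as above, and that /root/.ssh/authorized_keys already exists on the VM (you will be asked for the root password one last time):

$ cat ~/.ssh/id_rsa.pub | ssh root@192.168.21.187 'cat >> /root/.ssh/authorized_keys'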

Option 2: You haven't created an SSH key on your local machine and will do everything from scratch:

In this option we will first create an SSH key and then use it exactly as in Option #1.

  • Log into your host machine and open a terminal
  • For example, your home folder will be /Users/<username>
  • Create a folder named .ssh inside your home folder (if it does not already exist)
  • Now go inside the .ssh folder and run the following command:

$ ssh-keygen -C 'SSH Access Key' -t rsa

Enter file in which to save the key (/home/avkashchauhan/.ssh/id_rsa): ENTER

Enter passphrase (empty for no passphrase): ENTER

Enter same passphrase again: ENTER

  • You will see that the id_rsa and id_rsa.pub files are created. Now we will append the contents of id_rsa.pub to the authorized_keys file; if it does not exist yet, the same command will create it. In both cases the command is as below:

$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

  • In the above step, the contents of id_rsa.pub are appended to authorized_keys.
  • Now we will set proper permissions for keys and folders as below:

$ chmod 700 $HOME && chmod 700 ~/.ssh && chmod 600 ~/.ssh/*

  • Finally, follow Option #1 to add each machine's id_rsa.pub key to the other machine's authorized_keys file so that password-less SSH works in both directions.
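
Where it is available on your host machine, ssh-copy-id wraps the key-append step into one command; a sketch assuming the same VM IP as above:

$ ssh-copy-id root@192.168.21.187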


Migrating the Hadoop configuration from the remote machine to the local machine:

To get this working, we need the Hadoop configuration files from the HDP server on the local machine. To do this, just copy the configuration files from the HDP server as below:

HDP 1.3:

Create a folder named hdp13 in your working folder and use the scp command to copy the configuration files over password-less SSH as below:

$ scp -r root@192.168.21.187:/etc/hadoop/conf.empty/ ~/hdp13

HDP 2.1:

Create a folder named hdp21 in your working folder and use the scp command to copy the configuration files over password-less SSH as below:

$ scp -r root@192.168.21.186:/etc/hadoop/conf/ ~/hdp21
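
As an optional sanity check, list the copied folder and confirm that the usual Hadoop configuration files (core-site.xml, hdfs-site.xml, hadoop-env.sh, etc.) are present; the exact contents may vary slightly between HDP versions:

$ ls ~/hdp21/conf/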

Adding the correct JAVA_HOME to the imported Hadoop configuration (hadoop-env.sh)

Now go to your hdp13 or hdp21 folder and edit the hadoop-env.sh file with the correct JAVA_HOME as below:

# The java implementation to use. Required.
# export JAVA_HOME=/usr/jdk/jdk1.6.0_31  
export JAVA_HOME=`/usr/libexec/java_home -v 1.7`
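
On OS X you can run the same helper on its own to confirm it resolves to an installed JDK (a quick check; the printed path will vary with your Java installation):

$ /usr/libexec/java_home -v 1.7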

Adding the HDP hostnames to the local machine's hosts file:

Now you need to add the Hortonworks HDP hostnames to your local machine's hosts file. On Mac OS X, edit the /private/etc/hosts file to add the following:

#HDP 2.1
192.168.21.186 sandbox.hortonworks.com
#HDP 1.3
192.168.21.187 sandbox

Once added, make sure you can ping the hosts by name as below:

$ ping sandbox

PING sandbox (192.168.21.187): 56 data bytes
64 bytes from 192.168.21.187: icmp_seq=0 ttl=64 time=0.461 ms

And for HDP 2.1

$ ping sandbox.hortonworks.com
PING sandbox.hortonworks.com (192.168.21.186): 56 data bytes
64 bytes from 192.168.21.186: icmp_seq=0 ttl=64 time=0.420 ms
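
With the hosts entries in place, the status page from earlier should also be reachable by hostname; a quick check that only fetches the response headers (assuming the HDP 2.1 sandbox is running):

$ curl -I http://sandbox.hortonworks.com:8000/about/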

Accessing the remote Hadoop runtime with Hadoop commands (or the API) from the local machine:

Now, using the local machine's Hadoop runtime, you can connect to Hadoop on the HDP VM as below:

HDP 1.3

$ ./hadoop --config /Users/avkashchauhan/hdp13/conf.empty fs -ls /
Found 4 items
drwxr-xr-x - hdfs hdfs 0 2013-05-30 10:34 /apps
drwx------ - mapred hdfs 0 2014-06-05 03:54 /mapred
drwxrwxrwx - hdfs hdfs 0 2014-06-05 06:19 /tmp
drwxr-xr-x - hdfs hdfs 0 2013-06-10 14:39 /user

HDP 2.1

$ ./hadoop --config /Users/avkashchauhan/hdp21/conf fs -ls /
Found 6 items
drwxrwxrwx - yarn hadoop 0 2014-04-21 07:21 /app-logs
drwxr-xr-x - hdfs hdfs 0 2014-04-21 07:23 /apps
drwxr-xr-x - mapred hdfs 0 2014-04-21 07:16 /mapred
drwxr-xr-x - hdfs hdfs 0 2014-04-21 07:16 /mr-history
drwxrwxrwx - hdfs hdfs 0 2014-05-23 11:35 /tmp
drwxr-xr-x - hdfs hdfs 0 2014-05-23 11:35 /user
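
For the command line, instead of passing --config on every invocation, you can point the HADOOP_CONF_DIR environment variable at the imported configuration; a minimal sketch, assuming the HDP 2.1 path used above:

$ export HADOOP_CONF_DIR=/Users/avkashchauhan/hdp21/conf
$ ./hadoop fs -ls /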

If you are using the Hadoop API, you can pass the path of these configuration files to the API and get access to the Hadoop runtime.

Apache Ambari 1.6.0 with Blueprints support is released


What is an Ambari Blueprint?

Ambari Blueprint allows an operator to instantiate a Hadoop cluster quickly—and reuse the blueprint to replicate cluster instances elsewhere, for example, as development and test clusters, staging clusters, performance testing clusters, or co-located clusters.
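
Blueprints are driven through the Ambari REST API. The sketch below is illustrative only: the blueprint name, host group, component list, and FQDN are placeholders, the JSON is trimmed to a skeleton, and it assumes Ambari is reachable on port 8080 with the default admin/admin credentials (see the Ambari wiki for the full blueprint schema):

# Register a blueprint (skeleton; a real blueprint lists all components per host group)
$ curl -u admin:admin -H 'X-Requested-By: ambari' -X POST http://<ambari-host>:8080/api/v1/blueprints/my-blueprint -d '{"Blueprints": {"stack_name": "HDP", "stack_version": "2.1"}, "host_groups": [{"name": "host_group_1", "cardinality": "1", "components": [{"name": "NAMENODE"}]}]}'

# Create a cluster from the blueprint by mapping host groups to real hosts
$ curl -u admin:admin -H 'X-Requested-By: ambari' -X POST http://<ambari-host>:8080/api/v1/clusters/MyCluster -d '{"blueprint": "my-blueprint", "host_groups": [{"name": "host_group_1", "hosts": [{"fqdn": "sandbox.hortonworks.com"}]}]}'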

Release URL: http://www.apache.org/dyn/closer.cgi/ambari/ambari-1.6.0

Ambari now supports PostgreSQL:

Ambari now extends database support for Ambari DB, Hive and Oozie to include PostgreSQL. This means that Ambari now provides support for the key databases used in enterprises today: PostgreSQL, MySQL and Oracle. The PostgreSQL configuration choice is reflected in the Ambari database support matrix.

Content Source: http://hortonworks.com/blog/apache-ambari-1-6-0-released-blueprints