hadoop NullPointerException
NickName:emiljho Ask DateTime:2011-03-31T02:52:42


I was trying to set up a multi-node Hadoop cluster across two computers, following Michael Noll's tutorial.

When I tried to format the HDFS, it showed a NullPointerException.

hadoop@psycho-O:~/project/hadoop-0.20.2$ bin/start-dfs.sh
starting namenode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-psycho-O.out
slave: bash: line 0: cd: /home/hadoop/project/hadoop-0.20.2/bin/..: No such file or directory
slave: bash: /home/hadoop/project/hadoop-0.20.2/bin/hadoop-daemon.sh: No such file or directory
master: starting datanode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-psycho-O.out
master: starting secondarynamenode, logging to /home/hadoop/project/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-psycho-O.out
master: Exception in thread "main" java.lang.NullPointerException
master:     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
master:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
master:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)
master:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:131)
master:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:115)
master:     at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)
hadoop@psycho-O:~/project/hadoop-0.20.2$ 

I don't know what is causing this. Please help me figure out the problem. I am new to this topic, so please make your answer as non-technical as possible. :)

If more information is needed, kindly tell me.


Copyright notice: content by "emiljho", reproduced under the CC BY-SA 4.0 license with a link to the original source and this disclaimer.
Link to original article:https://stackoverflow.com/questions/5490805/hadoop-nullpointerexception

Answers
DarKeViLzAc 2012-03-30T02:14:36

    master:     at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)
    master:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)
    master:     at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)

It seems that your secondary namenode has trouble connecting to the primary namenode, which the system requires, since checkpointing needs to be done. So I would guess something is wrong with your network configuration, including:

${HADOOP_HOME}/conf/core-site.xml, which contains something like this:

    <!-- Put site-specific property overrides in this file. -->
    <configuration>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/app/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
      </property>

      <property>
        <name>fs.default.name</name>
        <value>hdfs://master:54310</value>
        <description>The name of the default file system. A URI whose
        scheme and authority determine the FileSystem implementation. The
        uri's scheme determines the config property (fs.SCHEME.impl) naming
        the FileSystem implementation class. The uri's authority is used to
        determine the host, port, etc. for a filesystem.</description>
      </property>
    </configuration>

and /etc/hosts. This file is a slippery slope: be careful that each IP's alias name is consistent with the hostname of the machine at that IP.

    127.0.0.1       localhost
    127.0.1.1       zac

    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters

    192.168.1.153   master   # pay attention to these two!!!
    192.168.99.146  slave1
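A minimal sketch of the consistency check this answer describes: extract the namenode host from the fs.default.name URI and confirm it is the name you expect every node to resolve. The config content and the "master" hostname are taken from the example above; a throwaway temp file stands in for the real ${HADOOP_HOME}/conf/core-site.xml.

```shell
# Write a stand-in core-site.xml (on a real cluster, point at the actual file)
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:54310</value>
  </property>
</configuration>
EOF

# Pull the namenode host out of the hdfs:// URI
NN_HOST=$(grep -o 'hdfs://[^:<]*' "$CONF" | sed 's|hdfs://||')
echo "namenode host: $NN_HOST"
# prints: namenode host: master

# On a real cluster you would now verify that every node resolves this
# host to the master's address, e.g.:  getent hosts "$NN_HOST"
```

If the host printed here does not match an entry in /etc/hosts (or resolves to 127.0.1.1 instead of the master's LAN address), the secondary namenode cannot reach the primary, which matches the stack trace above.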


tuxdna 2014-04-17T05:47:23

Apparently the defaults are not correct, so you have to add them yourself as described in this post:

http://amitava1.blogspot.in/2010/01/hadoop-0201-null-pointer-exception-on.html

It worked for me.


user5393067 2015-12-09T15:06:08

It seems you have not installed Hadoop on your datanode (slave) at all, or you have installed it at the wrong path. The correct path in your case should be /home/hadoop/project/hadoop-0.20.2/


Thomas Jungblut 2011-03-30T19:08:29

Your bash scripts seem not to have execute rights, or don't exist at all:

    slave: bash: line 0: cd: /home/hadoop/project/hadoop-0.20.2/bin/..: No such file or directory
    slave: bash: /home/hadoop/project/hadoop-0.20.2/bin/hadoop-daemon.sh: No such file or directory
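A minimal sketch of the check this error message implies: does $HADOOP_HOME/bin/hadoop-daemon.sh exist and is it executable? A throwaway directory is built here for demonstration; on a real cluster you would set HADOOP_HOME to /home/hadoop/project/hadoop-0.20.2 and run the same test over ssh on each slave.

```shell
# Build a stand-in install tree (replace with the real HADOOP_HOME in practice)
HADOOP_HOME=$(mktemp -d)/hadoop-0.20.2
mkdir -p "$HADOOP_HOME/bin"
printf '#!/bin/sh\necho daemon\n' > "$HADOOP_HOME/bin/hadoop-daemon.sh"
chmod +x "$HADOOP_HOME/bin/hadoop-daemon.sh"

# The actual check: present AND executable
if [ -x "$HADOOP_HOME/bin/hadoop-daemon.sh" ]; then
  echo "OK: hadoop-daemon.sh present and executable"
else
  echo "MISSING: install Hadoop at the same path on every node"
fi
# prints: OK: hadoop-daemon.sh present and executable
```

The start scripts assume an identical install path on master and slaves, so a "No such file or directory" from a slave usually means Hadoop was never unpacked there, or was unpacked somewhere else.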

