I'm currently configuring Hadoop on a server running CentOS. When I run start-dfs.sh or stop-dfs.sh, I get the following error:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
I'm running Hadoop 2.2.0.
Searching online brought up this link: http://balanceandbreath.blogspot.ca/2013/01/utilnativecodeloader-unable-to-load.html
However, the contents of the /native/ directory on Hadoop 2.x appear to be different, so I'm not sure what to do.
I've also added these two environment variables in hadoop-env.sh:
export HADOOP_OPTS="$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/"
export HADOOP_COMMON_LIB_NATIVE_DIR="/usr/local/hadoop/lib/native/"
Any ideas?
For those installing Hadoop on OSX via Homebrew, follow these steps, substituting the path and Hadoop version where appropriate:
wget http://www.eu.apache.org/dist/hadoop/common/hadoop-2.7.1/hadoop-2.7.1-src.tar.gz
tar xvf hadoop-2.7.1-src.tar.gz
cd hadoop-2.7.1-src
mvn package -Pdist,native -DskipTests -Dtar
mv lib /usr/local/Cellar/hadoop/2.7.1/
Then update hadoop-env.sh:
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc= -Djava.library.path=/usr/local/Cellar/hadoop/2.7.1/lib/native"
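To check that the freshly built native libraries are actually being picked up, Hadoop ships a checknative helper (a verification step I'm adding here, not part of the original answer):
hadoop checknative -a
# each native component (hadoop, zlib, snappy, ...) should now report "true"
# along with the path to the library under /usr/local/Cellar/hadoop/2.7.1/lib/native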
In addition to the accepted answer from @zhutoulala, here is an update to make it work with the latest stable version (2.8) on ARMHF platforms (Raspberry Pi 3 Model B).
First, I can confirm that you must recompile the native libraries to 64-bit ARM; the other answers here based on setting environment variables will not work. As indicated in the Hadoop documentation, the pre-built native libraries are 32-bit.
The high-level steps given in the first link (http://www.ercoppa.org/posts/how-to-compile-apache-hadoop-on-ubuntu-linux.html) are correct.
At this URL (http://www.instructables.com/id/Native-Hadoop-260-Build-on-Pi/) you can get more details specific to the Raspberry Pi, but not for Hadoop version 2.8.
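Before starting, the native build also needs the usual toolchain on Raspbian. A minimal sketch, assuming a Debian-style package set (the exact package list is my assumption, not taken from the linked guides):
sudo apt-get update
sudo apt-get install build-essential cmake maven zlib1g-dev libssl-dev
# plus a JDK (Hadoop 2.8 builds with Java 7 or 8) and the protobuf 2.5 compiler described below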
Here are my recommendations for Hadoop 2.8:
There is still no protobuf package on the latest Raspbian, so you must compile it yourself, and the version must be exactly protobuf 2.5 (https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.gz); a typical from-source build is sketched just below.
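A sketch of the usual autotools build for protobuf 2.5.0 (install prefix and use of sudo are assumptions about your setup):
wget https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.gz
tar xzf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure --prefix=/usr/local
make
sudo make install
sudo ldconfig
protoc --version   # should report libprotoc 2.5.0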
The CMake file patching method must be changed, and the files to patch are not the same. Unfortunately, there is no accepted patch on JIRA specific to 2.8. At this URL (https://issues.apache.org/jira/browse/HADOOP-9320) you must copy and paste the patch proposed by Andreas Muttscheller on your namenode:
:hadoop-2.8.0-src/hadoop-common-project/hadoop-common $ touch HADOOP-9320-v2.8.patch
:hadoop-2.8.0-src/hadoop-common-project/hadoop-common $ vim HADOOP-9320-v2.8.patch
#copy and paste proposed patch given here : https://issues.apache.org/jira/browse/HADOOP-9320?focusedCommentId=16018862&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16018862
:hadoop-2.8.0-src/hadoop-common-project/hadoop-common $ patch < HADOOP-9320-v2.8.patch
patching file HadoopCommon.cmake
patching file HadoopJNI.cmake
:hadoop-2.8.0-src/hadoop-common-project/hadoop-common $ cd ../..
:hadoop-2.8.0-src $ sudo mvn package -Pdist,native -DskipTests -Dtar
Once the build succeeds:
:hadoop-2.8.0-src/hadoop-dist/target/hadoop-2.8.0/lib/native $ tar -cvf nativelibs.tar *
Replace the contents of the lib/native directory of your Hadoop installation with the contents of this archive. The warning message when running Hadoop should then disappear.
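For example, after copying the archive over to the target machine, unpacking it in place could look like this (paths are illustrative; adjust $HADOOP_HOME to your installation):
cd $HADOOP_HOME/lib/native
tar -xvf /path/to/nativelibs.tar
# restart the Hadoop daemons so the new libraries are picked up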
This answer is a mix of @chromeeagle's analysis and this link (Nan-Xiao).
For those for whom the other solutions did not work at all, please follow these steps:
Edit the file $HADOOP_HOME/etc/hadoop/log4j.properties (credits to @chromeeagle). Add the following line at the end:
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=DEBUG
Launch your spark/pyspark shell. You will see additional log information regarding the native library not loading. In my case I had the following error:
Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path
To fix this specific problem, add the Hadoop native library path to the LD_LIBRARY_PATH environment variable in your user's profile:
export LD_LIBRARY_PATH="$HADOOP_HOME/lib/native:$LD_LIBRARY_PATH"
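To make this permanent, one option is to append the export to your shell profile; a minimal sketch assuming bash and that HADOOP_HOME is already set there:
echo 'export LD_LIBRARY_PATH="$HADOOP_HOME/lib/native:$LD_LIBRARY_PATH"' >> ~/.bashrc
source ~/.bashrc
# re-launch the spark/pyspark shell; with the DEBUG logger enabled above,
# the "Failed to load native-hadoop" error should no longer appear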
Hope this helps. I've hit this problem on a couple of Hadoop installations and it worked on both.