Unable to load native-hadoop library for your platform
Environment:
ubuntu-linux 16.04
spark-2.3.1-bin-hadoop2.7
hadoop-2.7.7
Possible causes:
1. The .so file was built for the wrong architecture
Check with:
file libhadoop.so.1.0.0
libhadoop.so.1.0.0: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=c08a9ec9d1c3cf9bccf3ea87ed51d077b5651b1a, not stripped
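This warning often appears when the prebuilt libhadoop.so does not match the host architecture (e.g. a 32-bit library on a 64-bit OS). A minimal sketch to compare the two, assuming the library path used in this article (adjust to your install):

```shell
# Compare the native library's architecture against the host's.
# NATIVE_LIB is an example path following this article's layout.
NATIVE_LIB="${HADOOP_HOME:-$HOME/bigdata/hadoop-2.7.7}/lib/native/libhadoop.so.1.0.0"
file "$NATIVE_LIB"   # should report "ELF 64-bit ... x86-64" on a 64-bit host
uname -m             # should report "x86_64" to match
```

If `file` reports "ELF 32-bit" while `uname -m` prints x86_64, the library must be replaced or recompiled for 64-bit.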
2. ldd shows missing links
Check with:
(python2.7) appleyuchi@ubuntu:~/bigdata/hadoop-2.7.7/lib/native$ ldd libhdfs.so.0.0.0
    linux-vdso.so.1 =>  (0x00007fff80b18000)
    libjvm.so => /home/appleyuchi/Java/jdk1.8.0_131/jre/lib/amd64/server/libjvm.so (0x00007febd438b000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007febd4187000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007febd3f6a000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007febd3ba0000)
    /lib64/ld-linux-x86-64.so.2 (0x00007febd558d000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007febd3897000)
(python2.7) appleyuchi@ubuntu:~/bigdata/hadoop-2.7.7/lib/native$ ldd libhdfs.so
    linux-vdso.so.1 =>  (0x00007ffc710e4000)
    libjvm.so => /home/appleyuchi/Java/jdk1.8.0_131/jre/lib/amd64/server/libjvm.so (0x00007f744d082000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f744ce7e000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f744cc61000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f744c897000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f744e284000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f744c58e000)
Make sure nothing on the right-hand side of any => reads "not found".
If something does, add the missing library's directory to LD_LIBRARY_PATH in .bashrc.
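For example, if ldd had reported libjvm.so as "not found", appending the JVM's server directory to LD_LIBRARY_PATH in ~/.bashrc would resolve it. A sketch using this article's paths (yours may differ):

```shell
# Append to ~/.bashrc, then run `source ~/.bashrc`.
# Paths mirror this article's environment; adjust to your own install.
export JAVA_HOME=$HOME/Java/jdk1.8.0_131
export LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64/server:$LD_LIBRARY_PATH

# Re-run the check; ideally this grep prints nothing:
# ldd ~/bigdata/hadoop-2.7.7/lib/native/libhdfs.so | grep "not found"
```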
3. If you want to compile the native library yourself and need to install protobuf first, see these links:
https://blog.csdn.net/blue_it/article/details/53996216
https://blog.csdn.net/appleyuchi/article/details/81667992
Spark reports "Unable to load native-hadoop library for your platform"
Modify spark-env.sh as follows:
export HADOOP_HOME=~/bigdata/hadoop-2.7.7
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
#export SPARK_MASTER_IP=master
export SPARK_LOCAL_IP=127.0.0.1
export HIVE_HOME=~/bigdata/apache-hive-3.0.0-bin
export SPARK_CLASSPATH=$HIVE_HOME/lib/*:$SPARK_CLASSPATH
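To confirm the change took effect, one approach (a sketch; it assumes $SPARK_HOME points at the spark-2.3.1-bin-hadoop2.7 directory) is to run a throwaway non-interactive shell session and grep the startup log for the warning, which is emitted by Hadoop's NativeCodeLoader class:

```shell
# Count occurrences of the warning in a non-interactive spark-shell run.
# A count of 0 means the native library loaded successfully.
echo 'sc.stop()' | "$SPARK_HOME"/bin/spark-shell --master local 2>&1 \
  | grep -c "Unable to load native-hadoop"
```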
Hadoop startup reports "Unable to load native-hadoop library for your platform"
Modify hadoop-env.sh. Do not bother editing .bashrc or /etc/profile for this; it has no effect here.
export HADOOP_HOME=~/bigdata/hadoop-2.7.7
export JAVA_HOME=~/Java/jdk1.8.0_131
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib/native"
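After restarting Hadoop, the standard `hadoop checknative` CLI command reports whether each native component was found; `-a` checks all of them:

```shell
# Verify the native libraries are picked up (run after restarting Hadoop).
"$HADOOP_HOME"/bin/hadoop checknative -a
# On success the first line reports something like (path is illustrative):
# hadoop:  true /home/<user>/bigdata/hadoop-2.7.7/lib/native/libhadoop.so.1.0.0
```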