Deploying PySpark for Remote Access with Anaconda3-5.1.0 (Python 3.6.4)
Reference: http://ihoge.cn/2018/anacondaPyspark.html
Preface
My first attempt used this combination of versions:
jdk8
hadoop2.6.5
spark2.1
scala2.12.4
Anaconda3-5.1.0
The result was an endless stream of errors, and no amount of configuration tweaking fixed them.
After losing a whole day to this, I finally discovered the problem was version incompatibility! A reminder to myself to always take component version compatibility seriously. The critical pairing here is Spark and Anaconda: to support Python 3, use as recent a Spark as possible. The combination that finally worked:
jdk8
hadoop2.7.5
spark2.3.0
scala2.11.12
Anaconda3-5.1.0
1. Install an Ubuntu 16.04 VM in VMware

```
sudo apt-get update
sudo apt-get install vim
sudo apt-get install openssh-server

# Configure passwordless SSH login
ssh localhost
ssh-keygen -t rsa                                # press Enter at every prompt
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

# Add each node's IP
sudo vi /etc/hosts
    192.168.221.132 master
    192.168.221.133 slave1
    192.168.221.134 slave2

sudo vi /etc/hostname
    master
```

2. Configure profile environment variables
```
# Java
export JAVA_HOME=/home/hadoop/jdk1.8.0_161
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jar

# Hadoop
export HADOOP_HOME=/home/hadoop/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# Scala
export SCALA_HOME=/home/hadoop/scala
export PATH=$PATH:$SCALA_HOME/bin

# Anaconda
export PATH=/home/hadoop/anaconda3/bin:$PATH
export PYSPARK_DRIVER_PYTHON=/home/hadoop/anaconda3/bin/jupyter
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
export PYSPARK_PYTHON=/home/hadoop/anaconda3/bin/python

# Spark
export SPARK_HOME=/home/hadoop/spark
export PATH=$PATH:$SPARK_HOME/bin
```

3. Hadoop: six configuration files
```
# hadoop-env.sh
export JAVA_HOME=/home/hadoop/jdk1.8.0_161
```

```
# core-site.xml
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
```

```
# hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/hadoop/tmp/dfs/data</value>
  </property>
</configuration>
```

```
# mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>
```

```
# yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```

```
# slaves
slave1
slave2
```

4. Spark: two configuration files
```
# spark-env.sh

# Java
export JAVA_HOME=/home/hadoop/jdk1.8.0_161

# Scala
export SCALA_HOME=/home/hadoop/scala

# Hadoop
export HADOOP_HOME=/home/hadoop/hadoop
export HADOOP_CONF_DIR=/home/hadoop/hadoop/etc/hadoop
export YARN_CONF_DIR=/home/hadoop/hadoop/etc/hadoop

# Spark
export SPARK_HOME=/home/hadoop/spark
export SPARK_LOCAL_DIRS=/home/hadoop/spark
export SPARK_DIST_CLASSPATH=$(/home/hadoop/hadoop/bin/hadoop classpath)
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=1g
export SPARK_MASTER_IP=master
export SPARK_LIBRARY_PATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$HADOOP_HOME/lib/native
```

```
# slaves
slave1
slave2
```

5. Distribute and extract the archives
```
# Copy the packages to the master node
scp jdk-8u161-linux-x64.tar hadoop@master:~
scp Anaconda3-5.1.0-Linux-x86_64.sh hadoop@master:~
scp -r hadoop/ hadoop@master:~
scp -r scala/ hadoop@master:~
scp -r spark/ hadoop@master:~

tar -xvf jdk-8u161-linux-x64.tar -C ./
source ~/.profile
```

Check the JDK, Hadoop, and Scala versions one by one.

```
# Start spark-shell against the cluster and check the processes with jps
spark-shell --master spark://master:7077 --executor-memory 512m --total-executor-cores 2
```

6. Install Anaconda
```
bash Anaconda3-5.1.0-Linux-x86_64.sh -b

# Generate and edit jupyter_notebook_config.py
jupyter notebook --generate-config
vim ~/.jupyter/jupyter_notebook_config.py
```

```
c = get_config()
c.IPKernelApp.pylab = 'inline'
c.NotebookApp.ip = '*'
c.NotebookApp.open_browser = False
c.NotebookApp.password = u''
c.NotebookApp.port = 8888
```

7. Shut down, clone two new nodes, and reconfigure them
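If you want to set `c.NotebookApp.password` instead of leaving it empty, it expects a hashed passphrase, normally produced by calling `notebook.auth.passwd()` and pasting its output. Purely as an illustration of the `algorithm:salt:digest` format that field uses, here is a stdlib-only sketch (the helper name `make_notebook_password` is mine, not part of Jupyter; in practice just use `notebook.auth.passwd()`):

```python
import hashlib
import random

def make_notebook_password(passphrase, algorithm="sha1"):
    """Build a hash in the 'algorithm:salt:digest' form that
    jupyter_notebook_config.py accepts for c.NotebookApp.password.
    Illustrative sketch of what notebook.auth.passwd() produces."""
    salt = "%012x" % random.getrandbits(48)          # 12 hex chars of salt
    h = hashlib.new(algorithm)
    h.update(passphrase.encode("utf-8") + salt.encode("ascii"))
    return ":".join((algorithm, salt, h.hexdigest()))

hashed = make_notebook_password("my-secret")
print(hashed)  # e.g. sha1:<12-char salt>:<40-char digest>
```

Paste the resulting string into the config as `c.NotebookApp.password = u'sha1:...'`.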
```
sudo vi /etc/hostname
sudo vi /etc/hosts
```

8. Test the PySpark cluster remotely
```
# On the server, start the cluster
start-all.sh                 # Hadoop
spark/sbin/start-all.sh      # Spark
```

Once the Hadoop and Spark processes all show up normally, launch pyspark.

1. Local mode:

```
pyspark
```

2. Standalone mode:

```
MASTER=spark://master:7077 pyspark --num-executors 1 --total-executor-cores 3 --executor-memory 512m
```

Then open 192.168.221.132:8888 in a browser on the remote client.
頁面打開后需要輸入驗(yàn)證信息(第一次驗(yàn)證即可):
輸入上圖token后面的字符串和用戶密碼
Type `sc` to test the SparkContext.
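Beyond checking that `sc` prints a SparkContext, it is worth confirming the cluster actually executes work. A minimal smoke test to run in a notebook cell against the cluster started above (a sketch; it assumes the notebook's driver is connected to the standalone master):

```python
# Run in a notebook cell once `sc` prints a SparkContext.
rdd = sc.parallelize(range(1, 101), 2)   # tiny RDD split across 2 partitions
print(rdd.sum())                         # 5050 if the executors ran the job
print(sc.master)                         # spark://master:7077 in standalone mode
```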
With that, the PySpark remote deployment with Anaconda3-5.1.0 (Python 3.6.4) is complete.
參考:http://ihoge.cn/2018/anacondaPyspark.html
總結(jié)
以上是生活随笔為你收集整理的使用aconda3-5.1.0(Python3.6.4) 搭建pyspark远程部署的全部內(nèi)容,希望文章能夠幫你解決所遇到的問題。
- 上一篇: PCA主成分分析+SVM实现人脸识别
- 下一篇: hive集成spark和mysql