Installing a Spark Cluster
- Spark installation layout
| Node    | Master | Worker | Worker |
|---------|--------|--------|--------|
| c7node1 | ★      |        |        |
| c7node2 |        | ★      |        |
| c7node3 |        |        | ★      |
- Upload and extract the installation package
```bash
# Upload the installation package spark-2.3.1-bin-hadoop2.7.tgz, then extract it
tar -zxvf ./spark-2.3.1-bin-hadoop2.7.tgz -C /software/ --no-same-owner
rm -rf ./spark-2.3.1-bin-hadoop2.7.tgz

# Create the Spark symlink (run from /software so the relative link resolves)
cd /software/
ln -sf ./spark-2.3.1-bin-hadoop2.7/ spark
```
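A quick, optional sanity check that the archive extracted and the symlink resolves; `--version` is a standard `spark-submit` flag and does not start any daemons:

```bash
# The symlink should point at the extracted directory
ls -l /software/spark
# Printing the version exercises the launch scripts without starting a cluster
/software/spark/bin/spark-submit --version
```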
- Configure Spark
```bash
# Configure the Worker nodes
cp /software/spark/conf/slaves.template /software/spark/conf/slaves
vim /software/spark/conf/slaves
# add the worker nodes:
#   c7node2
#   c7node3

# Configure the Master node
cp /software/spark/conf/spark-env.sh.template /software/spark/conf/spark-env.sh
vim /software/spark/conf/spark-env.sh
# add the following settings:
export SPARK_MASTER_HOST=c7node1
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=2
export SPARK_WORKER_MEMORY=3g

# Send the configured Spark directory to the other nodes,
# then create the symlink on those nodes as well
# on c7node1:
scp -r /software/spark-2.3.1-bin-hadoop2.7/ c7node2:/software/
scp -r /software/spark-2.3.1-bin-hadoop2.7/ c7node3:/software/
# on c7node2 and c7node3 (run in /software):
ln -sf ./spark-2.3.1-bin-hadoop2.7/ spark
```
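The copy-and-link steps above can also be scripted; a minimal sketch, assuming passwordless SSH from c7node1 to the other nodes:

```bash
# Run on c7node1: push the configured directory and recreate the symlink
for node in c7node2 c7node3; do
  scp -r /software/spark-2.3.1-bin-hadoop2.7/ "${node}":/software/
  ssh "${node}" "cd /software && ln -sf ./spark-2.3.1-bin-hadoop2.7/ spark"
done
```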
- Start Spark
```bash
# On c7node1, start the Spark cluster with /software/spark/sbin/start-all.sh
cd /software/spark/sbin/
./start-all.sh
```
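To confirm the cluster came up, check the JVM processes on each node and the Master's web UI (8080 is the standalone Master's default UI port):

```bash
# On c7node1 a Master process should be listed; on c7node2/c7node3 a Worker
jps
# The Master web UI is then reachable at http://c7node1:8080
```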
- Set up a client node for submitting Spark jobs
Simply copy the Spark installation directory, unchanged, to a fresh node; here that node is c7node4.
```bash
# on c7node1:
scp -r /software/spark-2.3.1-bin-hadoop2.7/ c7node4:/software/
# on c7node4 (run in /software):
ln -sf ./spark-2.3.1-bin-hadoop2.7/ spark
```
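A smoke test from the new client against the standalone Master, using the SparkPi example bundled with the distribution (the examples jar name below matches the 2.3.1/Scala 2.11 build; adjust it if your package differs):

```bash
/software/spark/bin/spark-submit \
  --master spark://c7node1:7077 \
  --class org.apache.spark.examples.SparkPi \
  /software/spark/examples/jars/spark-examples_2.11-2.3.1.jar 100
```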
- Configure Spark to run on YARN
```bash
# On c7node4, go to /software/spark/conf
vim /software/spark/conf/spark-env.sh
# add:
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
```
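With `HADOOP_CONF_DIR` set, the same example job can be sent to YARN instead of the standalone Master; this sketch assumes HDFS and YARN are already running on the cluster:

```bash
/software/spark/bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  /software/spark/examples/jars/spark-examples_2.11-2.3.1.jar 100
```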
Summary

At this point the standalone cluster is running: the Master on c7node1 and Workers on c7node2 and c7node3, with c7node4 serving as a submission client that can target either the standalone Master or YARN.