Spark Getting Started (4): Submitting a Project from IDEA to a Remote Spark Cluster
1. Dependency configuration
Declare the Scala and Spark dependencies together. The version number after the underscore in a Spark artifact ID must match the first two components of the Scala version: here Scala is 2.11.1, so the artifacts are spark-core_2.11 and spark-sql_2.11.
pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.mk</groupId>
    <artifactId>spark-test</artifactId>
    <version>1.0</version>
    <name>spark-test</name>
    <url>http://spark.mk.com</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <scala.version>2.11.1</scala.version>
        <spark.version>2.4.4</spark.version>
        <hadoop.version>2.6.0</hadoop.version>
    </properties>

    <dependencies>
        <!-- Scala dependency -->
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <!-- Spark dependencies -->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <pluginManagement>
            <plugins>
                <plugin>
                    <artifactId>maven-clean-plugin</artifactId>
                    <version>3.1.0</version>
                </plugin>
                <plugin>
                    <artifactId>maven-resources-plugin</artifactId>
                    <version>3.0.2</version>
                </plugin>
                <plugin>
                    <artifactId>maven-compiler-plugin</artifactId>
                    <version>3.8.0</version>
                </plugin>
                <plugin>
                    <artifactId>maven-surefire-plugin</artifactId>
                    <version>2.22.1</version>
                </plugin>
                <plugin>
                    <artifactId>maven-jar-plugin</artifactId>
                    <version>3.0.2</version>
                </plugin>
            </plugins>
        </pluginManagement>
    </build>
</project>
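If you are unsure which Scala version actually ends up on the classpath, a quick runtime check helps. Below is a minimal sketch (the class name is made up for illustration; versionNumberString is part of the Scala standard library, called here through the Scala object's MODULE$ field, which is accessible from plain Java):

public class ScalaVersionCheck {
    public static void main(String[] args) {
        // Prints e.g. "2.11.1"; its "2.11" prefix must match the _2.11
        // suffix of spark-core_2.11 and spark-sql_2.11 above.
        System.out.println("Scala on classpath: "
                + scala.util.Properties$.MODULE$.versionNumberString());
    }
}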
2. The Pi example
The Scala Pi example, rewritten in Java. It estimates π by Monte Carlo sampling: random points are drawn in the square [-1, 1] × [-1, 1], and the fraction that lands inside the unit circle approaches π/4, hence π ≈ 4 · count / n.
package com.mk;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;

import java.util.ArrayList;
import java.util.List;

public class App {
    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf();
        // On the Windows development machine, point the driver at the remote cluster.
        // (When the jar is submitted on the cluster itself, none of this is needed.)
        if (System.getProperty("os.name").toLowerCase().contains("win")) {
            // sparkConf.setMaster("local[2]"); // alternative: simulate Spark locally
            sparkConf.setMaster("spark://hadoop01:7077,hadoop02:7077,hadoop03:7077");
            // Local IP of this machine; it must be mutually reachable with the
            // Spark cluster, e.g. on the same LAN.
            sparkConf.set("spark.driver.host", "192.168.150.1");
            // Jar produced by the project build; Spark ships it to the executors.
            sparkConf.setJars(new String[]{".\\out\\artifacts\\spark_test\\spark-test.jar"});
        }

        SparkSession session = SparkSession.builder()
                .appName("Pi")
                .config(sparkConf)
                .getOrCreate();

        int slices = 2;
        int n = (int) Math.min(100_000L * slices, Integer.MAX_VALUE);
        JavaSparkContext sparkContext = new JavaSparkContext(session.sparkContext());
        List<Integer> list = new ArrayList<>(n);
        for (int i = 0; i < n; i++)
            list.add(i + 1);

        int count = sparkContext.parallelize(list, slices).map(v -> {
            // Sample a random point in the square [-1, 1] x [-1, 1].
            double x = Math.random() * 2 - 1;
            double y = Math.random() * 2 - 1;
            // 1 if the point falls inside the unit circle, else 0.
            return (x * x + y * y < 1) ? 1 : 0;
        }).reduce((Integer a, Integer b) -> a + b);

        // inside / total ~ pi/4, hence the factor of 4.
        System.out.println("PI: " + 4.0 * count / n);
        session.stop();
    }
}
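One practical note: the path passed to setJars must point at a jar that has already been built (in IDEA: Build → Build Artifacts), because Spark uploads that jar to the executors; a missing or stale jar typically surfaces later as ClassNotFoundException on the workers. As a sketch, a small guard can fail fast instead (the helper class and method here are illustrative, not a Spark API):

import java.io.File;
import org.apache.spark.SparkConf;

public class JarGuard {
    // Attaches the application jar to the conf, failing fast if it is missing.
    static SparkConf withAppJar(SparkConf conf, String path) {
        File appJar = new File(path);
        if (!appJar.exists())
            throw new IllegalStateException(
                    "Build the artifact first: " + appJar.getAbsolutePath());
        return conf.setJars(new String[]{appJar.getPath()});
    }
}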
三、直接在idea本地運(yùn)行
輸出PI
4. Limitations
Note: the local IP of the machine running the project must be mutually reachable with the Spark cluster, e.g. both on the same LAN. If the driver and the cluster are not on the same network, the submission fails and the job keeps retrying without ever exiting.