
Big Data: Setting Up Spark

极目馆主

Preface:

"Spark setup" is a topic many readers care about and want to learn more about, so this article gathers the relevant material on it in one place. Hopefully you find it useful.

I. Setup

1. Extract the archive

tar -zxvf spark-3.0.0-bin-hadoop3.2.tgz -C /opt/module
cd /opt/module
mv spark-3.0.0-bin-hadoop3.2 spark-local
2. Local mode
bin/spark-shell

1) Submit a job locally

bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master local[2] \
./examples/jars/spark-examples_2.12-3.0.0.jar \
10
3. Standalone mode

1) Rename slaves.template to slaves and add the worker hostname:
bigdata
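
The rename itself is not shown above; a minimal sketch, assuming the spark-local conf directory used elsewhere in this article:

cd /opt/module/spark-local/conf
mv slaves.template slaves
vim slaves        # set its contents to the single line: bigdata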
2) Rename spark-env.sh.template to spark-env.sh and add:
export JAVA_HOME=/opt/module/jdk1.8.0_212
SPARK_MASTER_HOST=bigdata
SPARK_MASTER_PORT=7077
3) Start the cluster
sbin/start-all.sh
4) Check the web UI
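
The standalone master web UI listens on port 8080 by default (the high-availability step later in this article moves it to 8989). Assuming the bigdata hostname used above, a quick check from the command line:

curl -s http://bigdata:8080 | head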

5) Submit a job

bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://bigdata:7077 \
./examples/jars/spark-examples_2.12-3.0.0.jar \
10
6) Configure the history server

1) Rename spark-defaults.conf.template to spark-defaults.conf

mv spark-defaults.conf.template spark-defaults.conf

2) Edit spark-defaults.conf and configure the event log storage path

spark.eventLog.enabled     true
spark.eventLog.dir         hdfs://linux1:8020/directory

Note: the Hadoop cluster must be running, and the directory path must already exist on HDFS.

sbin/start-dfs.sh
hadoop fs -mkdir /directory

3) Edit spark-env.sh and add the logging configuration

export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18080 -Dspark.history.fs.logDirectory=hdfs://linux1:8020/directory -Dspark.history.retainedApplications=30"

4) Start the cluster and the history server

sbin/start-all.sh
sbin/start-history-server.sh

5) Submit a job

bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://bigdata:7077 \
./examples/jars/spark-examples_2.12-3.0.0.jar \
10

6) View the history server (its web UI listens on port 18080, as configured above)
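
A quick check, assuming the history server was started on the bigdata node:

curl -s http://bigdata:18080 | head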

7) Configure high availability

Prerequisites: stop Spark and start ZooKeeper; a sketch of the commands is below.
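
A minimal sketch, assuming Spark was running from /opt/module/spark-local and a standard ZooKeeper installation with zkServer.sh on the PATH (the ZooKeeper command is an assumption, not from the original article):

/opt/module/spark-local/sbin/stop-all.sh
zkServer.sh start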

vim spark-env.sh

Comment out the following:

#SPARK_MASTER_HOST=bigdata
#SPARK_MASTER_PORT=7077

Add the following (the master web UI defaults to port 8080, which may conflict with ZooKeeper, so it is changed to 8989 here; any custom port works, just note it when visiting the monitoring UI):

SPARK_MASTER_WEBUI_PORT=8989
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=bigdata -Dspark.deploy.zookeeper.dir=/spark"

Then start the cluster:

sbin/start-all.sh
sbin/start-master.sh

Submit a job:

bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://bigdata:7077 \
./examples/jars/spark-examples_2.12-3.0.0.jar \
10
4. YARN mode

1) Edit yarn-site.xml
<!-- Whether a thread checks the physical memory used by each task; if a task exceeds its allocation it is killed. Default is true. -->
<property>
     <name>yarn.nodemanager.pmem-check-enabled</name>
     <value>false</value>
</property>
<!-- Whether a thread checks the virtual memory used by each task; if a task exceeds its allocation it is killed. Default is true. -->
<property>
     <name>yarn.nodemanager.vmem-check-enabled</name>
     <value>false</value>
</property>
2) Edit spark-env.sh
cd /opt/module/spark-local/conf
mv spark-env.sh.template spark-env.sh

Then add to spark-env.sh:

export JAVA_HOME=/opt/module/jdk1.8.0_212
YARN_CONF_DIR=/opt/module/hadoop-3.1.3/etc/hadoop
3) Start the HDFS and YARN clusters

4) Submit a job
bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn \
--deploy-mode cluster \
./examples/jars/spark-examples_2.12-3.0.0.jar \
10
5) Configure the history server
cp spark-defaults.conf.template spark-defaults.conf

Then add to spark-defaults.conf:

spark.eventLog.enabled          true
spark.eventLog.dir              hdfs://bigdata:9820/spark-directory

Note: the Hadoop cluster must be running, and the directory must already exist on HDFS.

 hadoop fs -mkdir /spark-directory
6) Edit spark-env.sh and add the logging configuration
export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=18080 -Dspark.history.fs.logDirectory=hdfs://bigdata:9820/spark-directory -Dspark.history.retainedApplications=30"
7) Edit spark-defaults.conf
spark.yarn.historyServer.address=bigdata:18080
spark.history.ui.port=18080
8) Resubmit the job
bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn \
--deploy-mode client \
./examples/jars/spark-examples_2.12-3.0.0.jar \
10
9) Check the web UI
bigdata:8088
10) Configure high availability
vim /opt/module/spark-local/conf/spark-env.sh
SPARK_MASTER_WEBUI_PORT=8989
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=bigdata -Dspark.deploy.zookeeper.dir=/spark"
cp slaves.template slaves
vim slaves

Add the worker hostname to slaves:

bigdata

Note: before starting Spark, start ZooKeeper, HDFS, and YARN; a sketch of the commands is below.
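
A minimal sketch of the start order, assuming a standard ZooKeeper installation (zkServer.sh on the PATH is an assumption) and the Hadoop 3.1.3 install path used in this article:

zkServer.sh start
/opt/module/hadoop-3.1.3/sbin/start-dfs.sh
/opt/module/hadoop-3.1.3/sbin/start-yarn.sh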

5. Startup script

vim /home/bigdata/bin/mysparkservices.sh
#!/bin/bash
if [ $# -lt 1 ]
then
  echo "Input Args Error....."
  exit
fi
for i in bigdata
do
case $1 in
start)
  echo "==================START $i Spark cluster==================="
  ssh $i /opt/module/spark-local/sbin/start-all.sh
  echo "==================START $i Spark history server==================="
  ssh $i /opt/module/spark-local/sbin/start-history-server.sh
  echo "==================START $i Spark thriftserver (HiveServer2)==================="
  ssh $i /opt/module/spark-local/sbin/start-thriftserver.sh
;;
stop)
  echo "==================STOP $i Spark cluster==================="
  ssh $i /opt/module/spark-local/sbin/stop-all.sh
  echo "==================STOP $i Spark history server==================="
  ssh $i /opt/module/spark-local/sbin/stop-history-server.sh
  echo "==================STOP $i Spark thriftserver (HiveServer2)==================="
  ssh $i /opt/module/spark-local/sbin/stop-thriftserver.sh
;;
*)
  echo "Input Args Error....."
  exit
;;
esac
done
# Make it executable
chmod +x mysparkservices.sh
# Start
sh mysparkservices.sh start
# Stop
sh mysparkservices.sh stop
6. Hive on Spark

1) Prepare the environment

Start the Hive metastore:

hive --service metastore 2>&1 >> /opt/module/hive/logs/metastore.log &

Start the Spark thriftserver (note: this is equivalent to starting HiveServer2):

sh /opt/module/spark-local/sbin/start-thriftserver.sh

2) Setup

1) Copy hive-site.xml to Spark's conf directory
cp /opt/module/hive/conf/hive-site.xml /opt/module/spark-local/conf

Edit hive-site.xml in Spark's conf directory to enable dynamic partitioning:

vim /opt/module/spark-local/conf/hive-site.xml

Add the following property:

<property>
    <name>hive.exec.dynamic.partition.mode</name>
    <value>nonstrict</value>
</property>
2) Add the MySQL driver and the LZO dependency to /opt/module/spark/jars
cp /opt/module/hive/lib/mysql-connector-java-5.1.37.jar /opt/module/spark/jars/
cp /opt/module/hadoop-3.1.3/share/hadoop/common/hadoop-lzo-0.4.20.jar /opt/module/spark/jars/
3) Configure spark-defaults.conf
# Use YARN as the Spark master
spark.master=yarn
# Record Spark event logs
spark.eventLog.enabled=true
# Storage path for Spark event logs
spark.eventLog.dir=hdfs://bigdata:9820/spark_historylog
# Spark history server address
spark.yarn.historyServer.address=bigdata:18080
# Path the history server reads event logs from
spark.history.fs.logDirectory=hdfs://bigdata:9820/spark_historylog
# Enable Spark SQL adaptive query execution
spark.sql.adaptive.enabled=true
# Adaptively coalesce reduce-stage partitions in Spark SQL
spark.sql.adaptive.coalescePartitions.enabled=true
# Use Hive's Parquet serialization/deserialization for compatibility with Hive
spark.sql.hive.convertMetastoreParquet=false
# Write Parquet in the legacy format for compatibility with Hive
spark.sql.parquet.writeLegacyFormat=true
# Work around SPARK-21725
spark.hadoop.fs.hdfs.impl.disable.cache=true
# Relax Spark SQL store-assignment checking for compatibility with Hive
spark.sql.storeAssignmentPolicy=LEGACY
4) Configure spark-env.sh
YARN_CONF_DIR=/opt/module/hadoop-3.1.3/etc/hadoop
5) Increase the ApplicationMaster resource fraction
vim /opt/module/hadoop-3.1.3/etc/hadoop/capacity-scheduler.xml

<property>
    <name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
    <value>0.8</value>
</property>
6) Start the cluster
sh /home/bigdata/bin/mysparkservices.sh start
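
To verify that the thriftserver is accepting connections (the article notes it is equivalent to HiveServer2), you could connect with the beeline client shipped with Spark; the default thrift port 10000 and the bigdata hostname are assumptions here:

/opt/module/spark-local/bin/beeline -u jdbc:hive2://bigdata:10000 -e "show databases;"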
II. Usage

Tags: #spark setup