DSS Deployment - Complete Guide
Contents
- DSS deployment workflow
- Part 1: Background
- Part 2: Prepare the VM and initialize the environment
- 1. Prepare the virtual machine
- 2. Initialize the environment
- Disable the firewall
- Disable SELinux
- Disable swap
- Set the hostname according to the plan
- Add hosts entries on the master
- Pass bridged IPv4 traffic to the iptables chains
- Time synchronization
- Install the following software
- 3. Prepare the following packages
- Part 3: Create the hadoop user
- Part 4: Configure the JDK
- Uninstall the old JDK
- Step 1: Check whether a JDK is already installed
- Step 2: Uninstall the installed JDKs
- Step 3: Verify that no JDK remains
- Actual session log
- Install the new JDK
- (1) Download the Oracle JDK: jdk-8u261-linux-x64.tar.gz
- (2) Extract jdk-8u261-linux-x64.tar.gz into /opt/modules
- (3) Add environment variables
- (4) Run java -version again to confirm the installation
- Part 5: Deploy Scala
- Part 6: Install MySQL 5.7.25
- Part 7: Install Python 3
- Install dependencies
- Part 8: nginx (DSS installs it automatically)
- Install the required environment
- Download from the official site
- Extract
- Configure
- Compile and install
- Start and stop nginx
- Restart nginx
- ==Set up the nginx service (required by DSS)==
- ==Add the conf.d directory (required by DSS)==
- Start on boot
- Completely remove nginx
- Part 9: Install Hadoop (pseudo-distributed)
- Extract the Hadoop files
- Configure Hadoop
- 1. Configure Hadoop environment variables
- 2. Set JAVA_HOME in hadoop-env.sh, mapred-env.sh and yarn-env.sh
- 3. Configure core-site.xml
- Configure, format and start HDFS
- 1. Configure hdfs-site.xml
- 2. Format HDFS
- 3. Start the NameNode
- 4. Start the DataNode
- 5. Start the SecondaryNameNode
- 6. Check with jps; listed processes mean startup succeeded
- 7. Test creating directories and uploading/downloading files on HDFS
- Configure and start YARN
- 1. Configure mapred-site.xml
- 2. Configure yarn-site.xml
- 3. Start the ResourceManager
- 4. Start the NodeManager
- 5. Check that they started
- 6. YARN web UI
- Run a MapReduce job
- 1. Create a test input file
- 2. Run the WordCount MapReduce job
- 3. Inspect the output directory
- Understanding Hadoop's modules
- 1. HDFS
- 2. YARN
- 3. MapReduce
- Enable the history server
- About the history server
- View job execution history on the web
- 1. Run a MapReduce task
- 2. While the job runs
- 3. View job history
- 4. About log aggregation
- 5. Enable log aggregation
- 6. Test log aggregation
- Part 10: Install and deploy Hive
- Install Hive
- Configure hive-site.xml
- Configure hive-log4j2.properties for error logs
- Modify hive-env.sh
- Initialize the Hive metastore
- Handle the Driver exception
- Start Hive
- Part 11: Deploy Spark on YARN
- Configuration
- Session log
- spark-sql -e "show databases"
- Part 12: One-click DSS installation
- 1. Environment preparation
- a. Install base software
- b. Create users
- c. Installation preparation
- d. Modify the configuration
- e. Modify the database configuration
- f. Modify the wedatasphere-dss-web-1.0.1-dist configuration
- 2. Installation and use
- 1. Run the install script
- 2. Installation steps
- 3. Verify the installation
- 4. Start the services
- (1) Start the services
- (2) Check that startup succeeded
- (3) Open in Chrome
- (4) Stop the services
- (5) After a successful install there are 6 DSS services and 8 Linkis services
- 5. Install log: install.sh
- 6. Startup script: start-all.sh
- 7. Log notes
- 3. Related URLs
- Part 13: Help
- 1. Creating, deleting and changing symlinks
- 1. Create a symlink
- 2. Delete
- 3. Change
- 2. The sudo command and the /etc/sudoers file
Prepare the KVM virtual machine
Environment preparation: base software install [telnet, tar, sed, dos2unix, yum, zip, unzip, expect, net-tools, ping, curl]
Create users
Configure the JDK: JDK 1.8.0_141 or later
Configure Scala: scala
Install MySQL: MySQL (5.5+)
Install Python: python2 (python3 requires changing the related DSS configuration; python2 is recommended)
Install nginx: nginx
Install hadoop 2.7.2
Install hive 2.3.3
Install spark 2.0
Installation preparation [linkis, dss, dss-web]; modify the configuration and the database configuration
Run the install script, follow the installation steps, verify the install, start the services
DSS Deployment Workflow
GitHub: linjie_830914
This document guides users through installing and deploying DataSphereStudio (DSS) so they can quickly get started with it and understand its core features.
Part 1: Background
Our in-house big-data middle-platform product helps users quickly collect and organize data, build data warehouses, and provide data services and data-asset management. It involves many big-data components, each with its own API, which raises the learning cost for developers and makes maintenance harder.
We therefore deploy and study DSS, and provide services to customers on top of it.
Part 2: Prepare the VM and Initialize the Environment
1. Prepare the virtual machine
First create the virtual disk file with qemu-img:
qemu-img create -f qcow2 -o size=50G,preallocation=metadata CentOS7.qcow2
Install the VM:
virt-install --name=kvmdss --virt-type=kvm --vcpus=4 --ram=10240 --location=/home/kvm/iso/CentOS-7.2-x86_64-Minimal-1511.iso --disk path=/home/kvm/img/kvmdss.img,size=50,format=qcow2 --network bridge=virbr0 --graphics=none --extra-args='console=ttyS0' --force
Option notes:
--name: VM name
--memory / --ram: memory (default unit: MB)
--disk: the virtual disk file; format sets the disk format, bus=virtio selects paravirtualized I/O, cache sets the disk cache mode (e.g. writeback)
--network: the network to attach; a VM cannot start without one. bridge names the bridge device; model=virtio uses a paravirtualized NIC for better performance
--graphics: how the console is accessed. The original notes suggest VNC; the command above instead uses --graphics=none with a serial console
--noautoconsole: do not pop up the installer console; a graphical console can be opened later with virt-viewer
Check whether the machine supports virtualization: grep -i 'vmx\|svm' /proc/cpuinfo
vmx indicates an Intel CPU, svm an AMD CPU.
VM operations
- Enter the console: virsh console kvmdss
- List VMs: virsh list --all
- Start: virsh start kvmdss
- Reboot: virsh reboot kvmdss
- Suspend: virsh suspend kvmdss
- Resume: virsh resume kvmdss
- Shut down: virsh shutdown kvmdss
- Force stop: virsh destroy kvmdss
- Start a VM on host boot: virsh autostart feng01
- Disable autostart: virsh autostart --disable feng01
- Suspend a VM (like Windows sleep): virsh suspend feng01
- Resume a suspended VM: virsh resume feng01
Clone a VM:
Shut the source VM down before cloning. To clone feng02 from the feng01 machine, the relevant options are:
--original feng01: the clone source
--name feng02: the name of the cloned machine
--file /kvm_data/feng02.img: where to put the disk file
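The options above belong to the virt-clone tool, but the full invocation is not shown. A minimal sketch, assuming the example VM names and image path above (this is a dry run that only prints the command, since cloning requires a libvirt host):

```shell
# Dry-run sketch: print the virt-clone command assembled from the options above.
# The source VM (feng01) must be shut down before an actual clone.
src=feng01
dst=feng02
img=/kvm_data/${dst}.img
echo "virt-clone --original ${src} --name ${dst} --file ${img}"
```

On a real host you would run the printed command directly instead of echoing it.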
Snapshot operations
- Create a snapshot: virsh snapshot-create-as kvmdss kvmdss-image
- List snapshots: virsh snapshot-list kvmdss
- Delete a snapshot: virsh snapshot-delete kvmdss kvmdss-image
- Revert to a snapshot: virsh snapshot-revert kvmdss kvmdss-image
- Show the current snapshot: virsh snapshot-current kvmdss
2. Initialize the environment
Disable the firewall:
systemctl stop firewalld && systemctl disable firewalld
Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary
Disable swap:
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
Set the hostname according to the plan:
hostnamectl set-hostname <hostname>
Add hosts entries on the master:
cat >> /etc/hosts << EOF
192.168.100.61 k8s-master1
192.168.100.62 k8s-node1
192.168.100.63 k8s-node2
192.168.100.64 k8s-master2
EOF
Pass bridged IPv4 traffic to the iptables chains:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply
Time synchronization:
yum install ntpdate -y && ntpdate time.windows.com
Install the following software (zip is required; the official list is missing the zip package):
yum install -y wget vim telnet tar sed dos2unix zip unzip expect net-tools ping curl
3. Prepare the following packages
jdk, scala, mysql, python2, nginx, hadoop2.7.2, hive2.3.3, spark2.0
Download link: https://pan.baidu.com/s/1ydHvk3jc_hAozbbQvBT2Wg, extraction code: ojn9
Part 3: Create the hadoop User
1. Create an ordinary user named hadoop:
[root@bigdata-senior01 ~]# useradd hadoop
[root@bigdata-senior01 ~]# passwd hadoop
2. Give the hadoop user sudo privileges.
Note: if the root user cannot modify the sudoers file, first add write permission for root manually.
Grant sudo to the hadoop user:
[root@bigdata-senior01 ~]# vim /etc/sudoers
Set the permissions. In a learning environment the hadoop user can be given broad rights, but in production always restrict ordinary users' privileges.
root ALL=(ALL) ALL
hadoop ALL=(root) NOPASSWD:ALL
3. Switch to the hadoop user:
[root@bigdata-senior01 ~]# su - hadoop
[hadoop@bigdata-senior01 ~]$
4. Create the directories that will hold the Hadoop files:
[hadoop@bigdata-senior01 ~]$ sudo mkdir /opt/{modules,data}
5. Make the hadoop user the owner of the Hadoop directories.
If the directories holding Hadoop are not owned by hadoop, permission problems can appear while Hadoop runs, so change the owner to hadoop (for example: sudo chown -R hadoop:hadoop /opt/modules /opt/data).
Part 4: Configure the JDK
Reference material
Note: on Hadoop machines the JDK should preferably be the Oracle JDK; otherwise you may hit problems, such as the jps command being missing.
If another JDK version is installed, uninstall it first.
Uninstall the old JDK
Step 1: Check whether a JDK is already installed
#rpm -qa|grep java
or
#rpm -qa|grep jdk
or
#rpm -qa|grep gcj
Step 2: Uninstall the installed JDKs
#rpm -e --nodeps java-1.8.0-openjdk-1.8.0.131-11.b12.el7.x86_64
#rpm -e --nodeps java-1.7.0-openjdk-1.7.0.141-2.6.10.5.el7.x86_64
#rpm -e --nodeps java-1.8.0-openjdk-headless-1.8.0.131-11.b12.el7.x86_64
#rpm -e --nodeps java-1.7.0-openjdk-headless-1.8.0.131-11.b12.el7.x86_64
Step 3: Verify that no JDK remains
#rpm -qa|grep java
#java -version
No output means the uninstall is complete.
Actual session log
[root@localhost ~]# java -version
openjdk version "1.8.0_262"
OpenJDK Runtime Environment (build 1.8.0_262-b10)
OpenJDK 64-Bit Server VM (build 25.262-b10, mixed mode)
[root@localhost ~]# rpm -qa|grep jdk
java-1.8.0-openjdk-headless-1.8.0.262.b10-1.el7.x86_64
copy-jdk-configs-3.3-10.el7_5.noarch
java-1.7.0-openjdk-headless-1.7.0.261-2.6.22.2.el7_8.x86_64
java-1.8.0-openjdk-1.8.0.262.b10-1.el7.x86_64
java-1.7.0-openjdk-1.7.0.261-2.6.22.2.el7_8.x86_64
[root@localhost ~]# rpm -e --nodeps java-1.8.0-openjdk-headless-1.8.0.262.b10-1.el7.x86_64 copy-jdk-configs-3.3-10.el7_5.noarch java-1.7.0-openjdk-headless-1.7.0.261-2.6.22.2.el7_8.x86_64 java-1.8.0-openjdk-1.8.0.262.b10-1.el7.x86_64 java-1.7.0-openjdk-1.7.0.261-2.6.22.2.el7_8.x86_64
[root@localhost ~]# rpm -qa|grep jdk
[root@localhost ~]#
Install the new JDK
(1) Download the Oracle JDK: jdk-8u261-linux-x64.tar.gz
Java Archive Downloads - Java SE 8 (oracle.com)
(2) Extract jdk-8u261-linux-x64.tar.gz into /opt/modules:
mkdir -p /opt/modules
sudo tar -zxvf jdk-8u261-linux-x64.tar.gz -C /opt/modules
(3) Add environment variables
Set the JAVA_HOME environment variable by appending to /etc/profile:
sudo vim /etc/profile
export JAVA_HOME="/opt/modules/jdk1.8.0_261"
export PATH=$JAVA_HOME/bin:$PATH
After editing, run source /etc/profile.
(4) Run java -version again to confirm the installation:
[root@localhost jdk1.8.0_261]# java -version
java version "1.8.0_261"
Java(TM) SE Runtime Environment (build 1.8.0_261-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.261-b12, mixed mode)
[root@localhost hadoop]#
Part 5: Deploy Scala
[hadoop@bigdata-senior01 modules]$ pwd
/opt/modules
[hadoop@bigdata-senior01 modules]$ tar xf scala-2.12.7.tgz
[hadoop@bigdata-senior01 modules]$ ll
total 1478576
drwxrwxr-x 10 hadoop hadoop 184 Jan 4 22:41 apache-hive-2.3.3
-rw-r--r-- 1 hadoop hadoop 232229830 Jan 4 21:41 apache-hive-2.3.3-bin.tar.gz
drwxr-xr-x 10 hadoop hadoop 182 Jan 4 22:14 hadoop-2.8.5
-rw-r--r-- 1 hadoop hadoop 246543928 Jan 4 21:41 hadoop-2.8.5.tar.gz
drwxr-xr-x 8 hadoop hadoop 273 Jun 17 2020 jdk1.8.0_261
-rw-r--r-- 1 hadoop hadoop 143111803 Jan 4 21:41 jdk-8u261-linux-x64.tar.gz
drwxr-xr-x 2 root root 6 Jan 4 22:29 mysql-5.7.25-linux-glibc2.12-x86_64
-rw-r--r-- 1 root root 644862820 Jan 4 22:27 mysql-5.7.25-linux-glibc2.12-x86_64.tar.gz
-rw-r--r-- 1 hadoop hadoop 1006904 Jan 4 21:41 mysql-connector-java-5.1.49.jar
drwxrwxr-x 6 hadoop hadoop 50 Sep 27 2018 scala-2.12.7
-rw-r--r-- 1 hadoop hadoop 20415505 Jan 4 21:41 scala-2.12.7.tgz
-rw-r--r-- 1 hadoop hadoop 225875602 Jan 4 21:41 spark-2.3.2-bin-hadoop2.7.tgz
[hadoop@bigdata-senior01 scala-2.12.7]$ pwd
/opt/modules/scala-2.12.7
Add to /etc/profile:
export SCALA_HOME="/opt/modules/scala-2.12.7"
export PATH=$SCALA_HOME/bin:$PATH
[hadoop@bigdata-senior01 scala-2.12.7]$ sudo vim /etc/profile
[hadoop@bigdata-senior01 scala-2.12.7]$ source /etc/profile
[hadoop@bigdata-senior01 scala-2.12.7]$ echo $SCALA_HOME
/opt/modules/scala-2.12.7
[hadoop@bigdata-senior01 scala-2.12.7]$ scala
Welcome to Scala 2.12.7 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_261).
Type in expressions for evaluation. Or try :help.
scala>
Reference
linux CentOS7 安裝scala: https://blog.csdn.net/weixin_33955681/article/details/92958527
Part 6: Install MySQL 5.7.25
1. Remove the MariaDB database bundled with CentOS to avoid conflicts:
rpm -qa|grep mariadb
rpm -e mariadb-libs --nodeps
2. Install the libaio library:
yum -y install libaio
3. Download and extract mysql-5.7.25:
cd /opt/modules/
wget https://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.25-linux-glibc2.12-x86_64.tar.gz
tar xzvf mysql-5.7.25-linux-glibc2.12-x86_64.tar.gz
4. Check whether a mysql user and group already exist:
cat /etc/passwd|grep mysql
cat /etc/group|grep mysql
# If they exist, remove them:
userdel -r mysql
5. Create the mysql user and group:
groupadd mysql
useradd -r -g mysql mysql
6. Make mysql a non-login user:
usermod -s /sbin/nologin mysql
7. Create the basedir, datadir and pid file:
mkdir /opt/mysql
mkdir /opt/mysql/data
mv mysql-5.7.25-linux-glibc2.12-x86_64/* /opt/mysql/
touch /opt/mysql/mysqld.pid
chown -R mysql:mysql /opt/mysql
8. Create the log file:
touch /var/log/mysqld.log
chown mysql:mysql /var/log/mysqld.log
9. Create the socket file:
touch /tmp/mysql.sock
chown mysql:mysql /tmp/mysql.sock
10. Create the configuration file (vim /etc/my.cnf) with the following content:
[mysqld]
character-set-server=utf8
user=mysql
port=3306
basedir=/opt/mysql
datadir=/opt/mysql/data
socket=/tmp/mysql.sock
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/opt/mysql/mysqld.pid
[client]
port=3306
socket=/tmp/mysql.sock
11. Initialize:
cd /opt/mysql/bin/
./mysqld --defaults-file=/etc/my.cnf --initialize --user=mysql
On success the output looks like below; record the temporary password.
[root@bigdata-senior01 modules]# mv mysql-5.7.25-linux-glibc2.12-x86_64/* /opt/mysql/
[root@bigdata-senior01 modules]# touch /opt/mysql/mysqld.pid
[root@bigdata-senior01 modules]# chown -R mysql:mysql /opt/mysql
[root@bigdata-senior01 modules]# touch /var/log/mysqld.log
[root@bigdata-senior01 modules]# chown mysql:mysql /var/log/mysqld.log
[root@bigdata-senior01 modules]# touch /tmp/mysql.sock
[root@bigdata-senior01 modules]# chown mysql:mysql /tmp/mysql.sock
[root@bigdata-senior01 modules]# vim /etc/my.cnf
[root@bigdata-senior01 modules]# cd /opt/mysql/bin/
[root@bigdata-senior01 bin]# ./mysqld --defaults-file=/etc/my.cnf --initialize --user=mysql
2022-01-05T06:30:34.747800Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2022-01-05T06:30:35.045935Z 0 [Warning] InnoDB: New log files created, LSN=45790
2022-01-05T06:30:35.085211Z 0 [Warning] InnoDB: Creating foreign key constraint system tables.
2022-01-05T06:30:35.167573Z 0 [Warning] No existing UUID has been found, so we assume that this is the first time that this server has been started. Generating a new UUID: fd58c915-6df0-11ec-96b4-000c297b38d9.
2022-01-05T06:30:35.179666Z 0 [Warning] Gtid table is not ready to be used. Table 'mysql.gtid_executed' cannot be opened.
2022-01-05T06:30:35.185087Z 1 [Note] A temporary password is generated for root@localhost: &:yE0ZgoexP1
[root@bigdata-senior01 bin]#
Temporary password: &:yE0ZgoexP1
KVM VM password:
Lkp>amgj>3Ys
12. Enable start on boot
Copy the startup script into place:
cp ../support-files/mysql.server /etc/rc.d/init.d/mysqld
Make the mysqld control script executable:
chmod +x /etc/rc.d/init.d/mysqld
Register mysqld as a system service:
chkconfig --add mysqld
Check that the mysqld service is registered:
chkconfig --list mysqld
The command prints the usual chkconfig runlevel listing.
You can now use the service command to start and stop MySQL.
PS: to remove the service again:
chkconfig --del mysqld
13. Start the mysqld service:
service mysqld start
14. Configure environment variables
Edit /etc/profile and add:
export PATH=$PATH:/opt/mysql/bin
Run the following to apply it:
source /etc/profile
15. Log in to MySQL (with the generated password):
mysql -uroot -p'Lkp>amgj>3Ys'
Change the root password:
mysql> alter user "root"@"localhost" identified by "abcd123";
Flush privileges:
mysql> flush privileges;
Exit MySQL and log back in with the new password.
16. Add a remote-login account
By default only the root account may log in to MySQL locally. To connect from other machines you must either allow root to connect remotely or add an account that may; for safety, adding a new account is preferable.
mysql> grant all privileges on *.* to "root"@"%" identified by "abcd123" with grant option;
17. Open MySQL's port 3306 in the firewall for external access:
firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --reload
Parameter notes:
--zone: the zone; a network zone defines the trust level of a network connection.
--add-port: port and protocol, in the form port/protocol, where the protocol is tcp or udp.
--permanent: persist across reboots; without it, access to the port fails after a restart.
18. Restart MySQL:
[root@test ~]# systemctl restart mysqld.service
19. Common errors
1. MySQL 5.7 initial-password error: ERROR 1820 (HY000): You must reset your password using ALTER USER statement before
2. Password-change error: ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corres
3. ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this st
4. Changing MySQL's default port on Linux CentOS
5. MySQL 5.7 password-policy error: ERROR 1193 (HY000): Unknown system variable 'validate_password
6. Starting, stopping, restarting and checking the MySQL service on CentOS 7
7. Installing MySQL on CentOS 7 and changing the initial password
Reference: 在Linux上安裝Python3 - lemon鋒 - 博客園 (cnblogs.com)
Part 7: Install Python 3
DSS uses python2 by default; using python3 requires changing the related DSS configuration, so python2 is recommended. Installing python2 is similar to the steps below.
Download pip (for python3):
curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
Download pip (for python2):
curl https://bootstrap.pypa.io/pip/2.7/get-pip.py -o get-pip.py
Install pip:
python get-pip.py
Upgrade pip:
pip install --upgrade pip
Install matplotlib:
python -m pip install -i https://pypi.tuna.tsinghua.edu.cn/simple matplotlib
pip install matplotlib
Install dependencies
Commands:
# Install the build dependencies
yum -y install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gdbm-devel db4-devel libpcap-devel xz-devel
# Download the Python source
wget https://www.python.org/ftp/python/3.7.1/Python-3.7.1.tgz
tar -zxvf Python-3.7.1.tgz
yum install gcc
# Versions from 3.7 on need an extra package, libffi-devel
yum install libffi-devel -y
cd Python-3.7.1
./configure --prefix=/usr/local/python3
Compile: make
On success, install: make install
Check the python3.7 interpreter: /usr/local/python3/bin/python3.7
Create symlinks for python3 and pip3 (this can break yum; see the end of this document):
ln -s /usr/local/python3/bin/python3 /usr/bin/python3
ln -s /usr/local/python3/bin/pip3 /usr/bin/pip3
Add /usr/local/python3/bin to PATH. Edit ~/.bash_profile and append:
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin:/usr/local/python3/bin
export PATH
Then apply it: source ~/.bash_profile
Check that python3 and pip3 work:
python3 -V
pip3 -V
For python2:
yum -y install epel-release
yum -y install python-pip
Part 8: nginx (DSS installs it automatically)
Installing Nginx on CentOS 7
Source: https://www.cnblogs.com/liujuncm5/p/6713784.html
Install the required environment
nginx is written in C and is best run on Linux (a Windows build also exists); this guide uses CentOS 7.
1. gcc
nginx is compiled from source downloaded from the official site, which requires a gcc toolchain; install it if missing (e.g. yum install -y gcc-c++).
2. PCRE and pcre-devel
PCRE (Perl Compatible Regular Expressions) is a Perl-compatible regular-expression library. nginx's http module uses pcre to parse regular expressions, so pcre must be installed; pcre-devel is the development library built on pcre, which nginx also needs (e.g. yum install -y pcre pcre-devel).
3. zlib
zlib provides many compression and decompression methods; nginx uses zlib to gzip HTTP response bodies, so install zlib on CentOS (e.g. yum install -y zlib zlib-devel).
4. OpenSSL
OpenSSL is a robust secure-sockets-layer cryptography library covering the main cryptographic algorithms, common key and certificate management, and the SSL protocol, with many utility programs for testing and other purposes.
nginx supports not only http but also https (http over SSL), so OpenSSL must be installed on CentOS (e.g. yum install -y openssl openssl-devel).
Download from the official site
1. Download the .tar.gz package directly: https://nginx.org/en/download.html
2. Or (recommended) use wget; if wget is missing, install it first with yum install wget.
wget -c https://nginx.org/download/nginx-1.21.6.tar.gz
(The original tutorial used 1.12.0, the stable version at the time; the commands here download 1.21.6.)
Extract
Again just commands:
tar -zxvf nginx-1.21.6.tar.gz
cd /opt/modules/nginx-1.21.6
Configure
In recent nginx versions you do not need to configure anything; the defaults are fine. You can still customize the directories if you want.
1. Use the default configuration:
./configure
2. Custom configuration (not recommended):
./configure \
--prefix=/usr/local/nginx \
--conf-path=/usr/local/nginx/conf/nginx.conf \
--pid-path=/usr/local/nginx/conf/nginx.pid \
--lock-path=/var/lock/nginx.lock \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--with-http_gzip_static_module \
--http-client-body-temp-path=/var/temp/nginx/client \
--http-proxy-temp-path=/var/temp/nginx/proxy \
--http-fastcgi-temp-path=/var/temp/nginx/fastcgi \
--http-uwsgi-temp-path=/var/temp/nginx/uwsgi \
--http-scgi-temp-path=/var/temp/nginx/scgi
Note: the temp-file paths point into /var/temp/nginx, so create the temp and nginx directories under /var first.
Compile and install
make
make install
Find the install path:
whereis nginx
Start and stop nginx
cd /usr/local/nginx/sbin/
./nginx
./nginx -s stop
./nginx -s quit
./nginx -s reload
If startup reports that port 80 is already in use:
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
Fix: install the net-tools package, then find and stop whatever occupies the port:
yum install net-tools
./nginx -s quit: stops nginx after the in-flight requests have finished.
./nginx -s stop: equivalent to looking up the nginx PID and killing the process.
Find nginx processes:
ps aux|grep nginx
Restart nginx
1. Stop then start (recommended):
Restarting nginx is just stopping then starting, i.e. run the stop command followed by the start command:
./nginx -s quit
./nginx
2. Reload the configuration file:
After nginx.conf has been modified, the changes normally take effect on restart; with -s reload the new configuration is applied without stopping nginx first:
./nginx -s reload
After a successful start you can see the nginx welcome page in a browser.
Set up the nginx service (required by DSS)
vim /usr/lib/systemd/system/nginx.service
vim /usr/local/nginx/logs/nginx.pid
mkdir -p /etc/nginx/conf.d
The nginx.service script:
[Unit]
Description=nginx - high performance web server
Documentation=http://nginx.org/en/docs/
After=network.target remote-fs.target nss-lookup.target
[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStartPre=/usr/local/nginx/sbin/nginx -t -c /usr/local/nginx/conf/nginx.conf
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
Enable and start the service:
systemctl enable nginx.service
systemctl start nginx.service
Add the conf.d directory (required by DSS)
Create the /etc/nginx/conf.d directory.
Then add one line inside the http block of the existing /etc/nginx/nginx.conf:
include /etc/nginx/conf.d/*.conf;
Reference: 增加nginx虛擬主機配置文件(conf.d) - 與f - 博客園 (cnblogs.com)
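To illustrate what the include line picks up, here is a minimal sketch that writes an example vhost file into a conf.d directory. The directory path, port and root below are placeholders for illustration (DSS's installer writes its own configuration into /etc/nginx/conf.d; this demo writes under /tmp so it can be run safely):

```shell
# Create a demo conf.d directory and drop in a minimal example vhost.
# Every *.conf file in the real /etc/nginx/conf.d is pulled in by the
# "include /etc/nginx/conf.d/*.conf;" line added to the http block above.
mkdir -p /tmp/nginx-demo/conf.d
cat > /tmp/nginx-demo/conf.d/example.conf << 'EOF'
server {
    listen 8099;
    server_name localhost;
    location / {
        root /usr/share/nginx/html;
    }
}
EOF
```

After adding a file like this under the real conf.d, reload nginx (./nginx -s reload) for it to take effect.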
Start on boot
Add the startup command to rc.local:
vi /etc/rc.local
Add one line: /usr/local/nginx/sbin/nginx
Create the symlink (used when installing DSS):
ln -s /usr/local/nginx /etc/nginx
Set execute permission:
chmod 755 /etc/rc.local
At this point nginx is installed and the start, stop and restart operations all work; it has also been registered as a system service above.
Reference: CentOS7安裝Nginx - boonya - 博客園 (cnblogs.com)
Completely remove nginx
When first setting up nginx it is easy to misconfigure it and get all kinds of error codes. When you cannot make sense of an error, or cannot be bothered to debug it, the simplest approach is to uninstall and reinstall. Here is how to remove nginx completely.
Detailed steps:
1. Stop nginx:
/usr/local/nginx/sbin/nginx -s stop
If you do not know the install path, find the nginx PID with ps and kill that PID.
2. Find every file under / whose name contains nginx:
find / -name nginx
3. Delete the files from the nginx install with rm -rf.
Note: the global search usually returns many related files, but the prefixes are mostly identical; the differing suffixes can be replaced with * for quick deletion.
[root@qll251 ~]# rm -rf /usr/local/sbin/nginx
[root@qll251 ~]# rm -rf /usr/local/nginx
[root@qll251 ~]# rm -rf /usr/src/nginx-1.11.1
[root@qll251 ~]# rm -rf /var/spool/mail/nginx
4. Other settings
If nginx was set to start on boot, two more steps may be needed:
chkconfig nginx off
rm -rf /etc/init.d/nginx
After deleting everything, nginx can be reinstalled.
Source: CSDN blogger 「開源Linux」, CC 4.0 BY-SA. Original: https://blog.csdn.net/weixin_38889300/article/details/106682750
Part 9: Install Hadoop (pseudo-distributed)
Extract the Hadoop files
1. Copy hadoop-2.7.2.tar.gz into /opt/modules.
2. Extract hadoop-2.7.2.tar.gz.
Configure Hadoop
1. Configure Hadoop environment variables
[hadoop@bigdata-senior01 hadoop]# vim /etc/profile
Append:
export HADOOP_HOME="/opt/modules/hadoop-2.7.2"
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
Run source /etc/profile to apply the configuration.
Verify the HADOOP_HOME variable:
echo $HADOOP_HOME
2. Set the JAVA_HOME parameter in hadoop-env.sh, mapred-env.sh and yarn-env.sh
[hadoop@bigdata-senior01 ~]$ sudo vim ${HADOOP_HOME}/etc/hadoop/hadoop-env.sh
The JDK from Part 4 must already be installed. Set JAVA_HOME to:
# The java implementation to use.
export JAVA_HOME=/opt/modules/jdk1.8.0_261
# Location of Hadoop.
export HADOOP_HOME=/opt/modules/hadoop-2.7.2
3. Configure core-site.xml
sudo vim ${HADOOP_HOME}/etc/hadoop/core-site.xml
(1) First make the hostname change permanent.
For a permanent change, edit the configuration file /etc/sysconfig/network:
[root@bigdata-senior01 ~] vim /etc/sysconfig/network
In the file set:
NETWORKING=yes  # enable networking
HOSTNAME=dss    # hostname
Then configure the hosts file:
[root@bigdata-senior01 ~] vim /etc/hosts
Add the entry: 192.168.100.20 bigdata-senior01.chybinmy.com
Then set the fs.defaultFS parameter, which is the address of HDFS.
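The fs.defaultFS value itself is not shown above. A sketch of the core-site.xml fragment, written to a temp file for illustration; the hostname follows the hosts entry above, and port 8020 is an assumption (HDFS's customary NameNode RPC port), so adjust both to your setup:

```shell
# Sketch only: the fs.defaultFS property, written to a demo file rather than
# the real ${HADOOP_HOME}/etc/hadoop/core-site.xml.
cat > /tmp/core-site-demo.xml << 'EOF'
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://bigdata-senior01.chybinmy.com:8020</value>
</property>
EOF
```

In a real setup this property goes inside the <configuration> element of core-site.xml.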
(2) hadoop.tmp.dir is Hadoop's temporary directory. The HDFS NameNode, for example, stores its data there by default, and the *-default.xml default configuration files show many settings that depend on hadoop.tmp.dir. Its default value is /tmp/hadoop-${user.name}, which means the NameNode keeps the HDFS metadata under /tmp; since the operating system clears /tmp on reboot, the NameNode metadata would be lost, which is a very serious problem, so this path must be changed.
Create the temporary directory:
sudo mkdir -p /opt/data/tmp
Change the owner of the temporary directory to hadoop:
[hadoop@bigdata-senior01 hadoop-2.7.2]$ sudo chown hadoop:hadoop -R /opt/data/tmp
Set hadoop.tmp.dir:
<property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/data/tmp</value>
</property>
1、 配置hdfs-site.xml
[hadoop@bigdata-senior01 hadoop-2.7.2]$ vim ${HADOOP_HOME}/etc/hadoop/hdfs-site.xml<property> <name>dfs.replication</name> <value>1</value> </property>dfs.replication配置的是HDFS存儲時的備份數量,因為這里是偽分布式環境只有一個節點,所以這里設置為1。
2、 格式化HDFS
先編輯 ~/.bash_profile 配置文件,增加 Hadoop 相關用戶環境變量,配置后就不需要到hadoop的路徑下執行相關hdfs命令了
添加后bash_profile文件完整內容如下:
同樣地,環境變量配置完之后,記得執行 source ~/.bash_profile 命令使環境變量生效。
格式化是對HDFS這個分布式文件系統中的DataNode進行分塊,統計所有分塊后的初始元數據的存儲在NameNode中。
格式化后,查看core-site.xml里hadoop.tmp.dir(本例是/opt/data目錄)指定的目錄下是否有了dfs目錄,如果有,說明格式化成功。
注意:
1.格式化時,這里注意hadoop.tmp.dir目錄的權限問題,應該hadoop普通用戶有讀寫權限才行,可以將/opt/data的所有者改為hadoop。
2.查看NameNode格式化后的目錄。
[hadoop@bigdata-senior01 ~]$ ll /opt/data/tmp/dfs/name/current
fsimage是NameNode元數據在內存滿了后,持久化保存到的文件。
fsimage*.md5 是校驗文件,用于校驗fsimage的完整性。
seen_txid 是hadoop的版本
vession文件里保存:
namespaceID:NameNode的唯一ID。
clusterID:集群ID,NameNode和DataNode的集群ID應該一致,表明是一個集群。
3. Start the NameNode
[hadoop@bigdata-senior01 hadoop-2.7.2]$ ${HADOOP_HOME}/sbin/hadoop-daemon.sh start namenode
starting namenode, logging to /opt/modules/hadoop-2.7.2/logs/hadoop-hadoop-namenode-bigdata-senior01.chybinmy.com.out
4. Start the DataNode
[hadoop@bigdata-senior01 hadoop-2.7.2]$ ${HADOOP_HOME}/sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /opt/modules/hadoop-2.7.2/logs/hadoop-hadoop-datanode-bigdata-senior01.chybinmy.com.out
5. Start the SecondaryNameNode
[hadoop@bigdata-senior01 hadoop-2.7.2]$ ${HADOOP_HOME}/sbin/hadoop-daemon.sh start secondarynamenode
starting secondarynamenode, logging to /opt/modules/hadoop-2.7.2/logs/hadoop-hadoop-secondarynamenode-bigdata-senior01.chybinmy.com.out
6. Check with jps; if the processes are listed, startup succeeded.
[hadoop@bigdata-senior01 hadoop-2.7.2]$ jps
3034 NameNode
3233 Jps
3193 SecondaryNameNode
3110 DataNode
You can now open localhost:50070 (SELinux must be disabled for the host to reach a service inside the VMware Workstation VM).
7. Test creating directories and uploading/downloading files on HDFS
Create a directory on HDFS:
[hadoop@bigdata-senior01 hadoop-2.7.2]$ ${HADOOP_HOME}/bin/hdfs dfs -mkdir /demo1
Upload a local file to HDFS:
[hadoop@bigdata-senior01 hadoop-2.7.2]$ ${HADOOP_HOME}/bin/hdfs dfs -put ${HADOOP_HOME}/etc/hadoop/core-site.xml /demo1
Read a file on HDFS:
[hadoop@bigdata-senior01 hadoop-2.7.2]$ ${HADOOP_HOME}/bin/hdfs dfs -cat /demo1/core-site.xml
Download a file from HDFS to the local machine:
${HADOOP_HOME}/bin/hdfs dfs -get /demo1/core-site.xml
Configure and start YARN
1. Configure mapred-site.xml
There is no mapred-site.xml by default, only the template mapred-site.xml.template; copy the template to create mapred-site.xml.
cd /opt/modules/hadoop-2.7.2
cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
Add the following configuration:
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
This makes MapReduce run on the YARN framework.
2. Configure yarn-site.xml
Add the following configuration:
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>dss</value>
</property>
yarn.nodemanager.aux-services sets YARN's shuffle service, here MapReduce's default shuffle implementation.
yarn.resourcemanager.hostname names the node that runs the ResourceManager.
3. Start the ResourceManager
[hadoop@bigdata-senior01 hadoop-2.7.2]$ ${HADOOP_HOME}/sbin/yarn-daemon.sh start resourcemanager
4. Start the NodeManager
[hadoop@bigdata-senior01 hadoop-2.7.2]$ ${HADOOP_HOME}/sbin/yarn-daemon.sh start nodemanager
5. Check that they started
[hadoop@bigdata-senior01 hadoop-2.7.2]$ jps
3034 NameNode
4439 NodeManager
4197 ResourceManager
4543 Jps
3193 SecondaryNameNode
3110 DataNode
ResourceManager and NodeManager are now running.
6. YARN web UI
You can now open localhost:8088 (again, SELinux must be disabled for the host to reach services inside the VMware Workstation VM).
YARN's web UI listens on port 8088; it can be viewed at http://192.168.100.10:8088/.
Run a MapReduce job
Hadoop's share directory ships jar files containing small MapReduce examples, located at share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar; you can run them to try out the freshly built Hadoop platform. Here we run the classic WordCount example.
1. Create a test input file
Create the input directory:
[hadoop@bigdata-senior01 hadoop-2.7.2]$ bin/hdfs dfs -mkdir -p /wordcountdemo/input
Create the source file:
Create a file wc.input in the local /opt/data directory containing some sample words.
Upload wc.input to the HDFS directory /wordcountdemo/input:
[hadoop@bigdata-senior01 hadoop-2.7.2]$ bin/hdfs dfs -put /opt/data/wc.input /wordcountdemo/input
2. Run the WordCount MapReduce job
[hadoop@localhost hadoop-2.7.2]$ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /wordcountdemo/input /wordcountdemo/output
22/01/04 22:20:05 INFO client.RMProxy: Connecting to ResourceManager at bigdata-senior01.chybinmy.com/192.168.100.20:8032
22/01/04 22:20:06 INFO input.FileInputFormat: Total input files to process : 1
22/01/04 22:20:07 INFO mapreduce.JobSubmitter: number of splits:1
22/01/04 22:20:07 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1641363392087_0001
22/01/04 22:20:08 INFO impl.YarnClientImpl: Submitted application application_1641363392087_0001
22/01/04 22:20:08 INFO mapreduce.Job: The url to track the job: http://bigdata-senior01.chybinmy.com:8088/proxy/application_1641363392087_0001/
22/01/04 22:20:08 INFO mapreduce.Job: Running job: job_1641363392087_0001
22/01/04 22:20:14 INFO mapreduce.Job: Job job_1641363392087_0001 running in uber mode : false
22/01/04 22:20:14 INFO mapreduce.Job: map 0% reduce 0%
22/01/04 22:20:19 INFO mapreduce.Job: map 100% reduce 0%
22/01/04 22:20:24 INFO mapreduce.Job: map 100% reduce 100%
22/01/04 22:20:25 INFO mapreduce.Job: Job job_1641363392087_0001 completed successfully
22/01/04 22:20:25 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=94
FILE: Number of bytes written=316115
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=206
HDFS: Number of bytes written=60
HDFS: Number of read operations=6
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=2534
Total time spent by all reduces in occupied slots (ms)=2356
Total time spent by all map tasks (ms)=2534
Total time spent by all reduce tasks (ms)=2356
Total vcore-milliseconds taken by all map tasks=2534
Total vcore-milliseconds taken by all reduce tasks=2356
Total megabyte-milliseconds taken by all map tasks=2594816
Total megabyte-milliseconds taken by all reduce tasks=2412544
Map-Reduce Framework
Map input records=4
Map output records=11
Map output bytes=115
Map output materialized bytes=94
Input split bytes=135
Combine input records=11
Combine output records=7
Reduce input groups=7
Reduce shuffle bytes=94
Reduce input records=7
Reduce output records=7
Spilled Records=14
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=121
CPU time spent (ms)=1100
Physical memory (bytes) snapshot=426364928
Virtual memory (bytes) snapshot=4207398912
Total committed heap usage (bytes)=298844160
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=71
File Output Format Counters
Bytes Written=60
[hadoop@localhost hadoop-2.7.2]$ bin/hdfs dfs -ls /wordcountdemo/output
Found 2 items
-rw-r--r-- 1 hadoop supergroup 0 2022-01-04 22:20 /wordcountdemo/output/_SUCCESS
-rw-r--r-- 1 hadoop supergroup 60 2022-01-04 22:20 /wordcountdemo/output/part-r-00000
[hadoop@localhost hadoop-2.7.2]$
3. Inspect the output directory
[hadoop@bigdata-senior01 hadoop-2.7.2]$ bin/hdfs dfs -ls /wordcountdemo/output
-rw-r--r-- 1 hadoop supergroup 0 2016-07-05 05:12 /wordcountdemo/output/_SUCCESS
-rw-r--r-- 1 hadoop supergroup 60 2016-07-05 05:12 /wordcountdemo/output/part-r-00000
The output directory holds two files. _SUCCESS is an empty file; its presence means the job succeeded.
part-r-00000 is the result file; the -r- marks it as produced by the Reduce phase. A MapReduce program may run without a reduce phase, but always has a map phase; without reduce, the marker here is -m- instead.
Each reduce produces one file beginning with part-r-.
View the output file:
bin/hdfs dfs -cat /wordcountdemo/output/part-r-00000
The results are sorted by key.
Stop Hadoop
Understanding Hadoop's modules
1. HDFS
HDFS handles big-data storage: by splitting large files into blocks and storing them distributively, it overcomes the limits of a single server's disks and solves the problem of files too large for one machine. HDFS is a relatively independent module; it can serve YARN and also other modules such as HBase.
2. YARN
YARN is a general resource-coordination and task-scheduling framework, created to solve the overloaded JobTracker in Hadoop 1.x's MapReduce, among other problems.
YARN is generic: besides MapReduce it can also run Spark, Storm and other compute frameworks.
3. MapReduce
MapReduce is a compute framework that offers one way of processing data: distributed processing through a Map phase and a Reduce phase. It only suits offline big-data processing and is unsuitable for applications with strict real-time requirements.
Enable the history server
About the history server
With Hadoop's history server enabled, the web UI shows detailed information about jobs run on YARN. You can look up the records of finished MapReduce jobs, such as how many maps and reduces they used, the submission time, the start time, the completion time, and so on.
Enable the history server:
[hadoop@bigdata-senior01 hadoop-2.7.2]$ sbin/mr-jobhistory-daemon.sh start historyserver
Once started, the history server's web UI is available at:
http://bigdata-senior01.chybinmy.com:19888/
View job execution history on the web
1. Run a MapReduce task
[hadoop@bigdata-senior01 hadoop-2.7.2]$ bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /wordcountdemo/input /wordcountdemo/output1
2. While the job runs
3. View job history
The history server's web port defaults to 19888; open the web UI to browse the history.
However, at the bottom of a job's page, following the Map and Reduce count links into a task's detail page, the detailed logs of an individual map or reduce cannot be viewed yet, because the log-aggregation service has not been enabled.
Enable log aggregation
4. About log aggregation
MapReduce runs across machines, and the logs it produces while running live on each machine. To view the run logs of all machines in one place, the logs are collected centrally onto HDFS; that process is log aggregation.
5. Enable log aggregation
Configure the log-aggregation feature:
Hadoop does not enable log aggregation by default; enable it in the yarn-site.xml file.
yarn.log-aggregation-enable: whether to enable log aggregation.
yarn.log-aggregation.retain-seconds: how long to keep the logs, in seconds.
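The two yarn-site.xml properties just named can be sketched as follows; the fragment is written to a temp file for illustration, and the retention value of 106800 seconds is an assumption, not a value from this document:

```shell
# Sketch only: the log-aggregation properties, written to a demo file rather
# than the real ${HADOOP_HOME}/etc/hadoop/yarn-site.xml.
cat > /tmp/yarn-log-agg-demo.xml << 'EOF'
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>106800</value>
</property>
EOF
```

In a real setup these properties go inside the <configuration> element of yarn-site.xml, after which YARN must be restarted.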
Distribute the configuration file to the other nodes.
Restart the YARN daemons:
[hadoop@bigdata-senior01 hadoop-2.7.2]$ sbin/stop-yarn.sh
[hadoop@bigdata-senior01 hadoop-2.7.2]$ sbin/start-yarn.sh
Restart the HistoryServer process:
[hadoop@bigdata-senior01 hadoop-2.7.2]$ sbin/mr-jobhistory-daemon.sh stop historyserver
[hadoop@bigdata-senior01 hadoop-2.7.2]$ sbin/mr-jobhistory-daemon.sh start historyserver
6. Test log aggregation
Run a demo MapReduce job so that it produces logs:
bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar wordcount /input /output1
View the logs:
After the job runs, the logs of each map and reduce can be viewed on the history server's web pages.
Hadoop 2.7.2 is required (for other Hadoop versions, Linkis must be compiled yourself); the install machine must be able to run hdfs dfs -ls /:
[hadoop@bigdata-senior01 ~]$ hdfs dfs -ls /
Found 3 items
drwxr-xr-x - hadoop supergroup 0 2022-01-04 22:14 /demo1
drwx------ - hadoop supergroup 0 2022-01-04 22:58 /tmp
drwxr-xr-x - hadoop supergroup 0 2022-01-04 22:22 /wordcountdemo
Part 10: Install and Deploy Hive
Install Hive
Install Apache Hive 2.3.3.
Download and extract:
wget http://archive.apache.org/dist/hive/hive-2.3.3/apache-hive-2.3.3-bin.tar.gz
tar -zxvf apache-hive-2.3.3-bin.tar.gz -C /opt/modules
# Rename the directory
$ mv apache-hive-2.3.3-bin apache-hive-2.3.3
==Configure Hive's environment variables (as root)==
vi /etc/profile
export HIVE_HOME="/opt/modules/apache-hive-2.3.3"
export HIVE_CONF_DIR="/opt/modules/apache-hive-2.3.3/conf"
export PATH=$HIVE_HOME/bin:$PATH
Source: CSDN blogger 「數據的星辰大海」, CC 4.0 BY-SA. Original: https://blog.csdn.net/qq_37554565/article/details/90477492
Configure hive-site.xml
//enter the hive/conf directory
$ cd apache-hive-2.3.3/conf/
//copy hive-default.xml.template and rename it hive-site.xml
$ cp hive-default.xml.template hive-site.xml
3. Set the metastore database driver: javax.jdo.option.ConnectionDriverName;
4. Set the metastore database user name: javax.jdo.option.ConnectionUserName;
5. Set the metastore database login password: javax.jdo.option.ConnectionPassword;
6. Set the Hive warehouse directory (the concrete storage path on HDFS): hive.metastore.warehouse.dir;
7. Configure the other paths;
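Items 3–6 above correspond to hive-site.xml properties roughly as follows; a sketch in which the driver class, user name, password, and warehouse path are placeholder values that must match your own MySQL metastore:

```shell
# Sketch: the hive-site.xml properties listed above.
# com.mysql.jdbc.Driver, hive, hive_password and /user/hive/warehouse
# are placeholder values for your own metastore setup.
cat > /tmp/hive-site-snippet.xml <<'EOF'
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive_password</value>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
EOF
grep -c '<name>' /tmp/hive-site-snippet.xml   # prints 4: all four properties written
```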
Configure hive-log4j2.properties (error logs)
Copy hive-log4j2.properties.template and name it hive-log4j2.properties:
//list the conf directory
$ ll
//copy hive-log4j2.properties.template and name it hive-log4j2.properties
$ cp hive-log4j2.properties.template hive-log4j2.properties
//edit hive-log4j2.properties
$ vi hive-log4j2.properties
//press i to enter insert mode, then set the log output directory:
property.hive.log.dir = /opt/tmp/hive/operation_logs
//press Esc, then type :wq to save and quit
Edit hive-log4j2.properties to configure Hive's logging.
Set the parameter above (if the logs directory does not exist, create it under the Hive root directory).
Modify hive-env.sh
cp hive-env.sh.template hive-env.sh
Because Hive uses Hadoop, the Hadoop installation path must be specified in the hive-env.sh file:
vim hive-env.sh
Add the following lines to the opened file:
export JAVA_HOME=/opt/modules/jdk1.8.0_261/
export HADOOP_HOME=/opt/modules/hadoop-2.7.2/
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HIVE_HOME=/opt/modules/apache-hive-2.3.3
export HIVE_CONF_DIR=$HIVE_HOME/conf
export HIVE_AUX_JARS_PATH=$HIVE_HOME/lib
Initialize the Hive metastore
Handle the Driver exception
Run the initialization again; when it prints schemaTool completed, the initialization succeeded.
3. Log in to MySQL and inspect the initialized hive database.
Start Hive
1. Find Hadoop's core-site.xml and add the following configuration:
[hadoop@bigdata-senior01 hadoop]$ pwd
/opt/modules/hadoop-2.8.5/etc/hadoop
[hadoop@bigdata-senior01 hadoop]$ vim core-site.xml

<property>
    <name>hadoop.proxyuser.hadoop.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hadoop.groups</name>
    <value>*</value>
</property>
2. Hive can be accessed in two ways:
1). beeline, which only supports operating from the local machine.
Start it as follows:
$ bin/beeline -u jdbc:hive2://127.0.0.1:10000 -n hadoop
-n : proxy user
-u : connection URL
2). hiveserver2, which can serve calls from other machines.
Start it as follows:
$ bin/hiveserver2
Check whether the port is open:
netstat -ant | grep 10000

[hadoop@bigdata-senior01 apache-hive-2.3.3]$ pwd
/opt/modules/apache-hive-2.3.3
[hadoop@bigdata-senior01 apache-hive-2.3.3]$ bin/hiveserver2
which: no hbase in (/opt/modules/hadoop-2.8.5/bin:/opt/modules/hadoop-2.8.5/sbin:/opt/modules/jdk1.8.0_261/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/mysql/bin:/home/hadoop/.local/bin:/home/hadoop/bin:/opt/modules/jdk1.8.0_261//bin:/opt/modules/hadoop-2.8.5//bin:/opt/modules/hadoop-2.8.5//sbin)
2022-01-04 22:58:41: Starting HiveServer2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/modules/apache-hive-2.3.3/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/modules/hadoop-2.8.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
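HiveServer2 can take a while to bind port 10000 after launch, so instead of polling netstat by hand, a small helper can wait for the port to start listening. A sketch only; it is bash-specific, since it relies on the /dev/tcp pseudo-device:

```shell
# Sketch: wait until a TCP port accepts connections (bash-only, uses /dev/tcp).
# Returns 0 once the port is open, 1 after $tries failed attempts.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-10}
  for _ in $(seq "$tries"); do
    # the subshell opens (and implicitly closes) a connection to host:port
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}
```

For example, `wait_for_port 127.0.0.1 10000 30` returns as soon as HiveServer2 is accepting connections, or fails after 30 attempts.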
Hive (2.3.3; for other Hive versions you must compile Linkis yourself). The machine it is installed on must be able to run the hive -e "show databases" command.
References
Cluster setup: installing apache-hive-2.3.4
Installing Apache Hive-2.3.3
https://blog.csdn.net/weixin_33955681/article/details/92958527
Part 11: Spark on Yarn deployment
Configuration
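The manual spark-env.sh and slaves edits detailed below can be condensed into a script. A minimal sketch, with /tmp/spark-conf-demo standing in for $SPARK_HOME/conf and the same values used in this guide:

```shell
# Sketch: write the spark-env.sh and slaves files described in this part.
# /tmp/spark-conf-demo stands in for $SPARK_HOME/conf.
SPARK_CONF=/tmp/spark-conf-demo
mkdir -p "$SPARK_CONF"
cat > "$SPARK_CONF/spark-env.sh" <<'EOF'
export JAVA_HOME=/opt/modules/jdk1.8.0_261
export SPARK_HOME=/opt/modules/spark-2.3.2-bin-hadoop2.7
# IP and port of the Spark master node
export SPARK_MASTER_IP=hadoop
export SPARK_MASTER_PORT=7077
EOF
# pseudo-distributed setup: the only worker is localhost
echo localhost > "$SPARK_CONF/slaves"
grep -q 'SPARK_MASTER_PORT=7077' "$SPARK_CONF/spark-env.sh" && echo "spark-env.sh written"
```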
tar xf spark-2.3.2-bin-hadoop2.7.tgz
cd /opt/modules/spark-2.3.2-bin-hadoop2.7/conf

Create the config files from their templates:
cp spark-env.sh.template spark-env.sh
cp slaves.template slaves

vi slaves        # add: localhost

vi spark-env.sh
export JAVA_HOME=/opt/modules/jdk1.8.0_261
export SPARK_HOME=/opt/modules/spark-2.3.2-bin-hadoop2.7
# IP of the Spark master node
export SPARK_MASTER_IP=hadoop
# Port of the Spark master node
export SPARK_MASTER_PORT=7077

Start Spark:
cd /opt/modules/spark-2.3.2-bin-hadoop2.7/sbin
./start-all.sh

# vim /etc/profile
# Add the Spark environment variables: export them and put $SPARK_HOME/bin on PATH
# source /etc/profile
export SPARK_HOME="/opt/modules/spark-2.3.2-bin-hadoop2.7"
export SPARK_MASTER_IP=master
export SPARK_EXECUTOR_MEMORY=1G
export PATH=$SPARK_HOME/bin:$PATH

Operation record:
[hadoop@bigdata-senior01 ~]$ cd /opt/modules/ [hadoop@bigdata-senior01 modules]$ ll total 1478576 drwxrwxr-x 10 hadoop hadoop 184 Jan 4 22:41 apache-hive-2.3.3 -rw-r--r-- 1 hadoop hadoop 232229830 Jan 4 21:41 apache-hive-2.3.3-bin.tar.gz drwxr-xr-x 10 hadoop hadoop 182 Jan 4 22:14 hadoop-2.8.5 -rw-r--r-- 1 hadoop hadoop 246543928 Jan 4 21:41 hadoop-2.8.5.tar.gz drwxr-xr-x 8 hadoop hadoop 273 Jun 17 2020 jdk1.8.0_261 -rw-r--r-- 1 hadoop hadoop 143111803 Jan 4 21:41 jdk-8u261-linux-x64.tar.gz drwxr-xr-x 2 root root 6 Jan 4 22:29 mysql-5.7.25-linux-glibc2.12-x86_64 -rw-r--r-- 1 root root 644862820 Jan 4 22:27 mysql-5.7.25-linux-glibc2.12-x86_64.tar.gz -rw-r--r-- 1 hadoop hadoop 1006904 Jan 4 21:41 mysql-connector-java-5.1.49.jar -rw-r--r-- 1 hadoop hadoop 20415505 Jan 4 21:41 scala-2.12.7.tgz -rw-r--r-- 1 hadoop hadoop 225875602 Jan 4 21:41 spark-2.3.2-bin-hadoop2.7.tgz [hadoop@bigdata-senior01 modules]$ pwd /opt/modules [hadoop@bigdata-senior01 modules]$ tar xf scala-2.12.7.tgz [hadoop@bigdata-senior01 modules]$ ll total 1478576 drwxrwxr-x 10 hadoop hadoop 184 Jan 4 22:41 apache-hive-2.3.3 -rw-r--r-- 1 hadoop hadoop 232229830 Jan 4 21:41 apache-hive-2.3.3-bin.tar.gz drwxr-xr-x 10 hadoop hadoop 182 Jan 4 22:14 hadoop-2.8.5 -rw-r--r-- 1 hadoop hadoop 246543928 Jan 4 21:41 hadoop-2.8.5.tar.gz drwxr-xr-x 8 hadoop hadoop 273 Jun 17 2020 jdk1.8.0_261 -rw-r--r-- 1 hadoop hadoop 143111803 Jan 4 21:41 jdk-8u261-linux-x64.tar.gz drwxr-xr-x 2 root root 6 Jan 4 22:29 mysql-5.7.25-linux-glibc2.12-x86_64 -rw-r--r-- 1 root root 644862820 Jan 4 22:27 mysql-5.7.25-linux-glibc2.12-x86_64.tar.gz -rw-r--r-- 1 hadoop hadoop 1006904 Jan 4 21:41 mysql-connector-java-5.1.49.jar drwxrwxr-x 6 hadoop hadoop 50 Sep 27 2018 scala-2.12.7 -rw-r--r-- 1 hadoop hadoop 20415505 Jan 4 21:41 scala-2.12.7.tgz -rw-r--r-- 1 hadoop hadoop 225875602 Jan 4 21:41 spark-2.3.2-bin-hadoop2.7.tgz [hadoop@bigdata-senior01 modules]$ [hadoop@bigdata-senior01 modules]$ cd scala-2.12.7/ [hadoop@bigdata-senior01 
scala-2.12.7]$ ll total 0 drwxrwxr-x 2 hadoop hadoop 162 Sep 27 2018 bin drwxrwxr-x 4 hadoop hadoop 86 Sep 27 2018 doc drwxrwxr-x 2 hadoop hadoop 244 Sep 27 2018 lib drwxrwxr-x 3 hadoop hadoop 18 Sep 27 2018 man [hadoop@bigdata-senior01 scala-2.12.7]$ pwd /opt/modules/scala-2.12.7 [hadoop@bigdata-senior01 scala-2.12.7]$ [hadoop@bigdata-senior01 scala-2.12.7]$ vim /etc/profile [hadoop@bigdata-senior01 scala-2.12.7]$ sudo vim /etc/profile [hadoop@bigdata-senior01 scala-2.12.7]$ source /etc/profile [hadoop@bigdata-senior01 scala-2.12.7]$ echo $SCALA_HOME /opt/modules/scala-2.12.7 [hadoop@bigdata-senior01 scala-2.12.7]$ scala Welcome to Scala 2.12.7 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_261). Type in expressions for evaluation. Or try :help.scala>scala> [hadoop@bigdata-senior01 scala-2.12.7]$ [hadoop@bigdata-senior01 scala-2.12.7]$ [hadoop@bigdata-senior01 scala-2.12.7]$ [hadoop@bigdata-senior01 scala-2.12.7]$ [hadoop@bigdata-senior01 scala-2.12.7]$ [hadoop@bigdata-senior01 scala-2.12.7]$ cd ../ [hadoop@bigdata-senior01 modules]$ ll total 1478576 drwxrwxr-x 10 hadoop hadoop 184 Jan 4 22:41 apache-hive-2.3.3 -rw-r--r-- 1 hadoop hadoop 232229830 Jan 4 21:41 apache-hive-2.3.3-bin.tar.gz drwxr-xr-x 10 hadoop hadoop 182 Jan 4 22:14 hadoop-2.8.5 -rw-r--r-- 1 hadoop hadoop 246543928 Jan 4 21:41 hadoop-2.8.5.tar.gz drwxr-xr-x 8 hadoop hadoop 273 Jun 17 2020 jdk1.8.0_261 -rw-r--r-- 1 hadoop hadoop 143111803 Jan 4 21:41 jdk-8u261-linux-x64.tar.gz drwxr-xr-x 2 root root 6 Jan 4 22:29 mysql-5.7.25-linux-glibc2.12-x86_64 -rw-r--r-- 1 root root 644862820 Jan 4 22:27 mysql-5.7.25-linux-glibc2.12-x86_64.tar.gz -rw-r--r-- 1 hadoop hadoop 1006904 Jan 4 21:41 mysql-connector-java-5.1.49.jar drwxrwxr-x 6 hadoop hadoop 50 Sep 27 2018 scala-2.12.7 -rw-r--r-- 1 hadoop hadoop 20415505 Jan 4 21:41 scala-2.12.7.tgz -rw-r--r-- 1 hadoop hadoop 225875602 Jan 4 21:41 spark-2.3.2-bin-hadoop2.7.tgz [hadoop@bigdata-senior01 modules]$ tar xf spark-2.3.2-bin-hadoop2.7.tgz 
[hadoop@bigdata-senior01 modules]$ ll total 1478576 drwxrwxr-x 10 hadoop hadoop 184 Jan 4 22:41 apache-hive-2.3.3 -rw-r--r-- 1 hadoop hadoop 232229830 Jan 4 21:41 apache-hive-2.3.3-bin.tar.gz drwxr-xr-x 10 hadoop hadoop 182 Jan 4 22:14 hadoop-2.8.5 -rw-r--r-- 1 hadoop hadoop 246543928 Jan 4 21:41 hadoop-2.8.5.tar.gz drwxr-xr-x 8 hadoop hadoop 273 Jun 17 2020 jdk1.8.0_261 -rw-r--r-- 1 hadoop hadoop 143111803 Jan 4 21:41 jdk-8u261-linux-x64.tar.gz drwxr-xr-x 2 root root 6 Jan 4 22:29 mysql-5.7.25-linux-glibc2.12-x86_64 -rw-r--r-- 1 root root 644862820 Jan 4 22:27 mysql-5.7.25-linux-glibc2.12-x86_64.tar.gz -rw-r--r-- 1 hadoop hadoop 1006904 Jan 4 21:41 mysql-connector-java-5.1.49.jar drwxrwxr-x 6 hadoop hadoop 50 Sep 27 2018 scala-2.12.7 -rw-r--r-- 1 hadoop hadoop 20415505 Jan 4 21:41 scala-2.12.7.tgz drwxrwxr-x 13 hadoop hadoop 211 Sep 16 2018 spark-2.3.2-bin-hadoop2.7 -rw-r--r-- 1 hadoop hadoop 225875602 Jan 4 21:41 spark-2.3.2-bin-hadoop2.7.tgz [hadoop@bigdata-senior01 modules]$ cd spark-2.3.2-bin-hadoop2.7/ [hadoop@bigdata-senior01 spark-2.3.2-bin-hadoop2.7]$ pwd /opt/modules/spark-2.3.2-bin-hadoop2.7 [hadoop@bigdata-senior01 spark-2.3.2-bin-hadoop2.7]$ [hadoop@bigdata-senior01 spark-2.3.2-bin-hadoop2.7]$ cd /opt/modules/spark-2.3.2-bin-hadoop2.7/conf [hadoop@bigdata-senior01 conf]$ cp spark-env.sh.template spark-env.sh [hadoop@bigdata-senior01 conf]$ cp slaves.template slaves [hadoop@bigdata-senior01 conf]$ vim slaves [hadoop@bigdata-senior01 conf]$ vi spark-env.sh [hadoop@bigdata-senior01 conf]$ cd ../ [hadoop@bigdata-senior01 spark-2.3.2-bin-hadoop2.7]$ ll total 84 drwxrwxr-x 2 hadoop hadoop 4096 Sep 16 2018 bin drwxrwxr-x 2 hadoop hadoop 264 Jan 4 23:33 conf drwxrwxr-x 5 hadoop hadoop 50 Sep 16 2018 data drwxrwxr-x 4 hadoop hadoop 29 Sep 16 2018 examples drwxrwxr-x 2 hadoop hadoop 12288 Sep 16 2018 jars drwxrwxr-x 3 hadoop hadoop 25 Sep 16 2018 kubernetes -rw-rw-r-- 1 hadoop hadoop 18045 Sep 16 2018 LICENSE drwxrwxr-x 2 hadoop hadoop 4096 Sep 16 2018 licenses 
-rw-rw-r-- 1 hadoop hadoop 26366 Sep 16 2018 NOTICE drwxrwxr-x 8 hadoop hadoop 240 Sep 16 2018 python drwxrwxr-x 3 hadoop hadoop 17 Sep 16 2018 R -rw-rw-r-- 1 hadoop hadoop 3809 Sep 16 2018 README.md -rw-rw-r-- 1 hadoop hadoop 164 Sep 16 2018 RELEASE drwxrwxr-x 2 hadoop hadoop 4096 Sep 16 2018 sbin drwxrwxr-x 2 hadoop hadoop 42 Sep 16 2018 yarn [hadoop@bigdata-senior01 spark-2.3.2-bin-hadoop2.7]$ cd conf [hadoop@bigdata-senior01 conf]$ echo ${JAVA_HOME} /opt/modules/jdk1.8.0_261 [hadoop@bigdata-senior01 conf]$ [hadoop@bigdata-senior01 conf]$ pwd /opt/modules/spark-2.3.2-bin-hadoop2.7/conf [hadoop@bigdata-senior01 conf]$ [hadoop@bigdata-senior01 conf]$ vim spark-env.sh [hadoop@bigdata-senior01 conf]$ cd ../ [hadoop@bigdata-senior01 spark-2.3.2-bin-hadoop2.7]$ ll total 84 drwxrwxr-x 2 hadoop hadoop 4096 Sep 16 2018 bin drwxrwxr-x 2 hadoop hadoop 264 Jan 4 23:35 conf drwxrwxr-x 5 hadoop hadoop 50 Sep 16 2018 data drwxrwxr-x 4 hadoop hadoop 29 Sep 16 2018 examples drwxrwxr-x 2 hadoop hadoop 12288 Sep 16 2018 jars drwxrwxr-x 3 hadoop hadoop 25 Sep 16 2018 kubernetes -rw-rw-r-- 1 hadoop hadoop 18045 Sep 16 2018 LICENSE drwxrwxr-x 2 hadoop hadoop 4096 Sep 16 2018 licenses -rw-rw-r-- 1 hadoop hadoop 26366 Sep 16 2018 NOTICE drwxrwxr-x 8 hadoop hadoop 240 Sep 16 2018 python drwxrwxr-x 3 hadoop hadoop 17 Sep 16 2018 R -rw-rw-r-- 1 hadoop hadoop 3809 Sep 16 2018 README.md -rw-rw-r-- 1 hadoop hadoop 164 Sep 16 2018 RELEASE drwxrwxr-x 2 hadoop hadoop 4096 Sep 16 2018 sbin drwxrwxr-x 2 hadoop hadoop 42 Sep 16 2018 yarn [hadoop@bigdata-senior01 spark-2.3.2-bin-hadoop2.7]$ cd sbin [hadoop@bigdata-senior01 sbin]$ ./start-all.sh starting org.apache.spark.deploy.master.Master, logging to /opt/modules/spark-2.3.2-bin-hadoop2.7/logs/spark-hadoop- org.apache.spark.deploy.master.Master-1-bigdata-senior01.chybinmy.com.out localhost: Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts. 
hadoop@localhost's password: hadoop@localhost's password: localhost: Permission denied, please try again.hadoop@localhost's password: localhost: Permission denied, please try again.localhost: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password). [hadoop@bigdata-senior01 sbin]$ [hadoop@bigdata-senior01 sbin]$ [hadoop@bigdata-senior01 sbin]$ jps 18898 NodeManager 96757 Jps 17144 DataNode 18600 ResourceManager 16939 NameNode 21277 JobHistoryServer 17342 SecondaryNameNode 95983 Master [hadoop@bigdata-senior01 sbin]$ vim /etc/profile [hadoop@bigdata-senior01 sbin]$ sudo vim /etc/profile [hadoop@bigdata-senior01 sbin]$ source /etc/profile [hadoop@bigdata-senior01 sbin]$ ./start-all.sh org.apache.spark.deploy.master.Master running as process 95983. Stop it first. hadoop@localhost's password: localhost: starting org.apache.spark.deploy.worker.Worker, logging to /opt/modules/spark-2.3.2-bin-hadoop2.7/logs/sp ark-hadoop-org.apache.spark.deploy.worker.Worker-1-bigdata-senior01.chybinmy.com.out [hadoop@bigdata-senior01 sbin]$ cat /opt/modules/spark-2.3.2-bin-hadoop2.7/logs/spark-hadoop-org.apache.spark.deploy .worker.Worker-1-bigdata-senior01.chybinmy.com.out Spark Command: /opt/modules/jdk1.8.0_261/bin/java -cp /opt/modules/spark-2.3.2-bin-hadoop2.7/conf/:/opt/modules/spar k-2.3.2-bin-hadoop2.7/jars/* -Xmx1g org.apache.spark.deploy.worker.Worker --webui-port 8081 spark://bigdata-senior01 .chybinmy.com:7077 ======================================== 2022-01-04 23:42:32 INFO Worker:2612 - Started daemon with process name: 98596@bigdata-senior01.chybinmy.com 2022-01-04 23:42:32 INFO SignalUtils:54 - Registered signal handler for TERM 2022-01-04 23:42:32 INFO SignalUtils:54 - Registered signal handler for HUP 2022-01-04 23:42:32 INFO SignalUtils:54 - Registered signal handler for INT 2022-01-04 23:42:33 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... 
using buil tin-java classes where applicable 2022-01-04 23:42:33 INFO SecurityManager:54 - Changing view acls to: hadoop 2022-01-04 23:42:33 INFO SecurityManager:54 - Changing modify acls to: hadoop 2022-01-04 23:42:33 INFO SecurityManager:54 - Changing view acls groups to: 2022-01-04 23:42:33 INFO SecurityManager:54 - Changing modify acls groups to: 2022-01-04 23:42:33 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users wi th view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set() 2022-01-04 23:42:34 INFO Utils:54 - Successfully started service 'sparkWorker' on port 46351. 2022-01-04 23:42:34 INFO Worker:54 - Starting Spark worker 192.168.100.20:46351 with 2 cores, 2.7 GB RAM 2022-01-04 23:42:34 INFO Worker:54 - Running Spark version 2.3.2 2022-01-04 23:42:34 INFO Worker:54 - Spark home: /opt/modules/spark-2.3.2-bin-hadoop2.7 2022-01-04 23:42:35 INFO log:192 - Logging initialized @4458ms 2022-01-04 23:42:35 INFO Server:351 - jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown 2022-01-04 23:42:35 INFO Server:419 - Started @4642ms 2022-01-04 23:42:35 INFO AbstractConnector:278 - Started ServerConnector@6de6ec17{HTTP/1.1,[http/1.1]}{0.0.0.0:8081 } 2022-01-04 23:42:35 INFO Utils:54 - Successfully started service 'WorkerUI' on port 8081. 
2022-01-04 23:42:35 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@6cd8d218{/logPage,null,AVAILABLE,@Spark}
2022-01-04 23:42:35 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@265b9144{/logPage/json,null,AVAILABLE,@Spark}
2022-01-04 23:42:35 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@460a8f36{/,null,AVAILABLE,@Spark}
2022-01-04 23:42:35 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@732d2fc5{/json,null,AVAILABLE,@Spark}
2022-01-04 23:42:35 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@9864155{/static,null,AVAILABLE,@Spark}
2022-01-04 23:42:35 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@a5fd0ba{/log,null,AVAILABLE,@Spark}
2022-01-04 23:42:35 INFO WorkerWebUI:54 - Bound WorkerWebUI to 0.0.0.0, and started at http://bigdata-senior01.chybinmy.com:8081
2022-01-04 23:42:35 INFO Worker:54 - Connecting to master bigdata-senior01.chybinmy.com:7077...
2022-01-04 23:42:35 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1c2b0b0e{/metrics/json,null,AVAILABLE,@Spark}
2022-01-04 23:42:35 INFO TransportClientFactory:267 - Successfully created connection to bigdata-senior01.chybinmy.com/192.168.100.20:7077 after 156 ms (0 ms spent in bootstraps)
2022-01-04 23:42:36 INFO Worker:54 - Successfully registered with master spark://bigdata-senior01.chybinmy.com:7077
[hadoop@bigdata-senior01 sbin]$

Access URLs:
http://192.168.100.20:8080/
http://192.168.100.20:8081/
Spark (all versions 2.0 and above are supported). The machine it is installed on must be able to run the spark-sql -e "show databases" command.
$ [hadoop@bigdata-senior01 sbin]$ spark-sql -e "show databases" 2022-01-04 23:49:13 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2022-01-04 23:49:14 INFO HiveMetaStore:589 - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore 2022-01-04 23:49:14 INFO ObjectStore:289 - ObjectStore, initialize called 2022-01-04 23:49:15 INFO Persistence:77 - Property hive.metastore.integral.jdo.pushdown unknown - will be ignored 2022-01-04 23:49:15 INFO Persistence:77 - Property datanucleus.cache.level2 unknown - will be ignored 2022-01-04 23:49:18 INFO ObjectStore:370 - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" 2022-01-04 23:49:20 INFO Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 2022-01-04 23:49:20 INFO Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 2022-01-04 23:49:21 INFO Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 2022-01-04 23:49:21 INFO Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 2022-01-04 23:49:21 INFO MetaStoreDirectSql:139 - Using direct SQL, underlying DB is DERBY 2022-01-04 23:49:21 INFO ObjectStore:272 - Initialized ObjectStore 2022-01-04 23:49:21 WARN ObjectStore:6666 - Version information not found in metastore. 
hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 2022-01-04 23:49:21 WARN ObjectStore:568 - Failed to get database default, returning NoSuchObjectException 2022-01-04 23:49:22 INFO HiveMetaStore:663 - Added admin role in metastore 2022-01-04 23:49:22 INFO HiveMetaStore:672 - Added public role in metastore 2022-01-04 23:49:22 INFO HiveMetaStore:712 - No user is added in admin role, since config is empty 2022-01-04 23:49:22 INFO HiveMetaStore:746 - 0: get_all_databases 2022-01-04 23:49:22 INFO audit:371 - ugi=hadoop ip=unknown-ip-addr cmd=get_all_databases 2022-01-04 23:49:22 INFO HiveMetaStore:746 - 0: get_functions: db=default pat=* 2022-01-04 23:49:22 INFO audit:371 - ugi=hadoop ip=unknown-ip-addr cmd=get_functions: db=default pat=* 2022-01-04 23:49:22 INFO Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table. 2022-01-04 23:49:23 INFO SessionState:641 - Created HDFS directory: /tmp/hive/hadoop 2022-01-04 23:49:23 INFO SessionState:641 - Created local directory: /tmp/17c2283a-5ee4-41a0-88b7-54b3ec12cd9f_resources 2022-01-04 23:49:23 INFO SessionState:641 - Created HDFS directory: /tmp/hive/hadoop/17c2283a-5ee4-41a0-88b7-54b3ec12cd9f 2022-01-04 23:49:23 INFO SessionState:641 - Created local directory: /tmp/hadoop/17c2283a-5ee4-41a0-88b7-54b3ec12cd9f 2022-01-04 23:49:23 INFO SessionState:641 - Created HDFS directory: /tmp/hive/hadoop/17c2283a-5ee4-41a0-88b7-54b3ec12cd9f/_tmp_space.db 2022-01-04 23:49:23 INFO SparkContext:54 - Running Spark version 2.3.2 2022-01-04 23:49:23 INFO SparkContext:54 - Submitted application: SparkSQL::192.168.100.20 2022-01-04 23:49:23 INFO SecurityManager:54 - Changing view acls to: hadoop 2022-01-04 23:49:23 INFO SecurityManager:54 - Changing modify acls to: hadoop 2022-01-04 23:49:23 INFO SecurityManager:54 - Changing view acls groups to: 2022-01-04 23:49:23 INFO SecurityManager:54 - Changing modify acls 
groups to: 2022-01-04 23:49:23 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set() 2022-01-04 23:49:24 INFO Utils:54 - Successfully started service 'sparkDriver' on port 37135. 2022-01-04 23:49:24 INFO SparkEnv:54 - Registering MapOutputTracker 2022-01-04 23:49:24 INFO SparkEnv:54 - Registering BlockManagerMaster 2022-01-04 23:49:24 INFO BlockManagerMasterEndpoint:54 - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 2022-01-04 23:49:24 INFO BlockManagerMasterEndpoint:54 - BlockManagerMasterEndpoint up 2022-01-04 23:49:24 INFO DiskBlockManager:54 - Created local directory at /tmp/blockmgr-e68cfdc9-adda-400b-beac-f4441481481c 2022-01-04 23:49:24 INFO MemoryStore:54 - MemoryStore started with capacity 366.3 MB 2022-01-04 23:49:24 INFO SparkEnv:54 - Registering OutputCommitCoordinator 2022-01-04 23:49:24 INFO log:192 - Logging initialized @14831ms 2022-01-04 23:49:25 INFO Server:351 - jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown 2022-01-04 23:49:25 INFO Server:419 - Started @15163ms 2022-01-04 23:49:25 INFO AbstractConnector:278 - Started ServerConnector@7d0cd23c{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2022-01-04 23:49:25 INFO Utils:54 - Successfully started service 'SparkUI' on port 4040. 
2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@130cfc47{/jobs,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4584304{/jobs/json,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@51888019{/jobs/job,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@19b5214b{/jobs/job/json,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5fb3111a{/stages,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4aaecabd{/stages/json,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@23bd0c81{/stages/stage,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@6889f56f{/stages/stage/json,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@231b35fb{/stages/pool,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@26da1ba2{/stages/pool/json,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3820cfe{/storage,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2407a36c{/storage/json,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5ec9eefa{/storage/rdd,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@28b8f98a{/storage/rdd/json,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3b4ef59f{/environment,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started 
o.s.j.s.ServletContextHandler@22cb3d59{/environment/json,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@33e4b9c4{/executors,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5cff729b{/executors/json,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@10d18696{/executors/threadDump,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@6b8b5020{/executors/threadDump/json,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@7d37ee0c{/static,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@67f946c3{/,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@21b51e59{/api,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4f664bee{/jobs/job/kill,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@76563ae7{/stages/stage/kill,null,AVAILABLE,@Spark} 2022-01-04 23:49:25 INFO SparkUI:54 - Bound SparkUI to 0.0.0.0, and started at http://bigdata-senior01.chybinmy.com:4040 2022-01-04 23:49:25 INFO Executor:54 - Starting executor ID driver on host localhost 2022-01-04 23:49:25 INFO Utils:54 - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 43887. 
2022-01-04 23:49:25 INFO NettyBlockTransferService:54 - Server created on bigdata-senior01.chybinmy.com:43887 2022-01-04 23:49:25 INFO BlockManager:54 - Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy 2022-01-04 23:49:25 INFO BlockManagerMaster:54 - Registering BlockManager BlockManagerId(driver, bigdata-senior01.chybinmy.com, 43887, None) 2022-01-04 23:49:25 INFO BlockManagerMasterEndpoint:54 - Registering block manager bigdata-senior01.chybinmy.com:43887 with 366.3 MB RAM, BlockManagerId(driver, bigdata-senior01.chybinmy.com, 43887, None) 2022-01-04 23:49:25 INFO BlockManagerMaster:54 - Registered BlockManager BlockManagerId(driver, bigdata-senior01.chybinmy.com, 43887, None) 2022-01-04 23:49:25 INFO BlockManager:54 - Initialized BlockManager: BlockManagerId(driver, bigdata-senior01.chybinmy.com, 43887, None) 2022-01-04 23:49:26 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@35ac9ebd{/metrics/json,null,AVAILABLE,@Spark} 2022-01-04 23:49:26 INFO SharedState:54 - Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/opt/modules/spark-2.3.2-bin-hadoop2.7/sbin/spark-warehouse'). 2022-01-04 23:49:26 INFO SharedState:54 - Warehouse path is 'file:/opt/modules/spark-2.3.2-bin-hadoop2.7/sbin/spark-warehouse'. 
2022-01-04 23:49:26 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@120350eb{/SQL,null,AVAILABLE,@Spark} 2022-01-04 23:49:26 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2ccc9681{/SQL/json,null,AVAILABLE,@Spark} 2022-01-04 23:49:26 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@aa752bb{/SQL/execution,null,AVAILABLE,@Spark} 2022-01-04 23:49:26 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@77fc19cf{/SQL/execution/json,null,AVAILABLE,@Spark} 2022-01-04 23:49:26 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2e45a357{/static/sql,null,AVAILABLE,@Spark} 2022-01-04 23:49:26 INFO HiveUtils:54 - Initializing HiveMetastoreConnection version 1.2.1 using Spark classes. 2022-01-04 23:49:26 INFO HiveClientImpl:54 - Warehouse location for Hive client (version 1.2.2) is file:/opt/modules/spark-2.3.2-bin-hadoop2.7/sbin/spark-warehouse 2022-01-04 23:49:26 INFO metastore:291 - Mestastore configuration hive.metastore.warehouse.dir changed from /user/hive/warehouse to file:/opt/modules/spark-2.3.2-bin-hadoop2.7/sbin/spark-warehouse 2022-01-04 23:49:26 INFO HiveMetaStore:746 - 0: Shutting down the object store... 2022-01-04 23:49:26 INFO audit:371 - ugi=hadoop ip=unknown-ip-addr cmd=Shutting down the object store... 2022-01-04 23:49:26 INFO HiveMetaStore:746 - 0: Metastore shutdown complete. 2022-01-04 23:49:26 INFO audit:371 - ugi=hadoop ip=unknown-ip-addr cmd=Metastore shutdown complete. 
2022-01-04 23:49:26 INFO HiveMetaStore:746 - 0: get_database: default 2022-01-04 23:49:26 INFO audit:371 - ugi=hadoop ip=unknown-ip-addr cmd=get_database: default 2022-01-04 23:49:26 INFO HiveMetaStore:589 - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore 2022-01-04 23:49:26 INFO ObjectStore:289 - ObjectStore, initialize called 2022-01-04 23:49:26 INFO Query:77 - Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing 2022-01-04 23:49:26 INFO MetaStoreDirectSql:139 - Using direct SQL, underlying DB is DERBY 2022-01-04 23:49:26 INFO ObjectStore:272 - Initialized ObjectStore 2022-01-04 23:49:28 INFO StateStoreCoordinatorRef:54 - Registered StateStoreCoordinator endpoint 2022-01-04 23:49:28 INFO HiveMetaStore:746 - 0: get_database: global_temp 2022-01-04 23:49:28 INFO audit:371 - ugi=hadoop ip=unknown-ip-addr cmd=get_database: global_temp 2022-01-04 23:49:28 WARN ObjectStore:568 - Failed to get database global_temp, returning NoSuchObjectException 2022-01-04 23:49:32 INFO HiveMetaStore:746 - 0: get_databases: * 2022-01-04 23:49:32 INFO audit:371 - ugi=hadoop ip=unknown-ip-addr cmd=get_databases: * 2022-01-04 23:49:33 INFO CodeGenerator:54 - Code generated in 524.895015 ms default Time taken: 5.645 seconds, Fetched 1 row(s) 2022-01-04 23:49:33 INFO SparkSQLCLIDriver:951 - Time taken: 5.645 seconds, Fetched 1 row(s) 2022-01-04 23:49:33 INFO AbstractConnector:318 - Stopped Spark@7d0cd23c{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2022-01-04 23:49:33 INFO SparkUI:54 - Stopped Spark web UI at http://bigdata-senior01.chybinmy.com:4040 2022-01-04 23:49:33 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped! 
2022-01-04 23:49:33 INFO MemoryStore:54 - MemoryStore cleared
2022-01-04 23:49:33 INFO BlockManager:54 - BlockManager stopped
2022-01-04 23:49:33 INFO BlockManagerMaster:54 - BlockManagerMaster stopped
2022-01-04 23:49:33 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!
2022-01-04 23:49:33 INFO SparkContext:54 - Successfully stopped SparkContext
2022-01-04 23:49:33 INFO ShutdownHookManager:54 - Shutdown hook called
2022-01-04 23:49:33 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-437b36ba-6b7b-4a31-be54-d724731cca35
2022-01-04 23:49:33 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-ef9ab382-9d00-4c59-a73f-f565e761bb45
[hadoop@bigdata-senior01 sbin]$

Reference: Spark installation and configuration on CentOS 7
spark-sql -e "show databases"
[hadoop@dss sbin]$ spark-sql -e "show databases" 2022-03-03 06:29:44 WARN Utils:66 - Your hostname, dss resolves to a loopback address: 127.0.0.1; using 192.168.122.67 instead (on interface eth0) 2022-03-03 06:29:44 WARN Utils:66 - Set SPARK_LOCAL_IP if you need to bind to another address 2022-03-03 06:29:45 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 2022-03-03 06:29:46 INFO HiveMetaStore:589 - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore 2022-03-03 06:29:46 INFO ObjectStore:289 - ObjectStore, initialize called 2022-03-03 06:29:46 INFO Persistence:77 - Property hive.metastore.integral.jdo.pushdown unknown - will be ignored 2022-03-03 06:29:46 INFO Persistence:77 - Property datanucleus.cache.level2 unknown - will be ignored 2022-03-03 06:29:55 INFO ObjectStore:370 - Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order" 2022-03-03 06:29:57 INFO Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 2022-03-03 06:29:57 INFO Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 2022-03-03 06:30:03 INFO Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table. 2022-03-03 06:30:03 INFO Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table. 2022-03-03 06:30:05 INFO MetaStoreDirectSql:139 - Using direct SQL, underlying DB is DERBY 2022-03-03 06:30:05 INFO ObjectStore:272 - Initialized ObjectStore 2022-03-03 06:30:06 WARN ObjectStore:6666 - Version information not found in metastore. 
hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0 2022-03-03 06:30:06 WARN ObjectStore:568 - Failed to get database default, returning NoSuchObjectException 2022-03-03 06:30:07 INFO HiveMetaStore:663 - Added admin role in metastore 2022-03-03 06:30:07 INFO HiveMetaStore:672 - Added public role in metastore 2022-03-03 06:30:07 INFO HiveMetaStore:712 - No user is added in admin role, since config is empty 2022-03-03 06:30:07 INFO HiveMetaStore:746 - 0: get_all_databases 2022-03-03 06:30:07 INFO audit:371 - ugi=hadoop ip=unknown-ip-addr cmd=get_all_databases 2022-03-03 06:30:07 INFO HiveMetaStore:746 - 0: get_functions: db=default pat=* 2022-03-03 06:30:07 INFO audit:371 - ugi=hadoop ip=unknown-ip-addr cmd=get_functions: db=default pat=* 2022-03-03 06:30:07 INFO Datastore:77 - The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table. 2022-03-03 06:30:08 INFO SessionState:641 - Created HDFS directory: /tmp/hive/hadoop 2022-03-03 06:30:08 INFO SessionState:641 - Created local directory: /tmp/d26dff0e-0e55-4623-af6a-a189a8d13689_resources 2022-03-03 06:30:08 INFO SessionState:641 - Created HDFS directory: /tmp/hive/hadoop/d26dff0e-0e55-4623-af6a-a189a8d13689 2022-03-03 06:30:08 INFO SessionState:641 - Created local directory: /tmp/hadoop/d26dff0e-0e55-4623-af6a-a189a8d13689 2022-03-03 06:30:08 INFO SessionState:641 - Created HDFS directory: /tmp/hive/hadoop/d26dff0e-0e55-4623-af6a-a189a8d13689/_tmp_space.db 2022-03-03 06:30:08 INFO SparkContext:54 - Running Spark version 2.3.2 2022-03-03 06:30:08 INFO SparkContext:54 - Submitted application: SparkSQL::192.168.122.67 2022-03-03 06:30:09 INFO SecurityManager:54 - Changing view acls to: hadoop 2022-03-03 06:30:09 INFO SecurityManager:54 - Changing modify acls to: hadoop 2022-03-03 06:30:09 INFO SecurityManager:54 - Changing view acls groups to: 2022-03-03 06:30:09 INFO SecurityManager:54 - Changing modify acls 
groups to: 2022-03-03 06:30:09 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); groups with view permissions: Set(); users with modify permissions: Set(hadoop); groups with modify permissions: Set() 2022-03-03 06:30:09 INFO Utils:54 - Successfully started service 'sparkDriver' on port 37582. 2022-03-03 06:30:09 INFO SparkEnv:54 - Registering MapOutputTracker 2022-03-03 06:30:09 INFO SparkEnv:54 - Registering BlockManagerMaster 2022-03-03 06:30:09 INFO BlockManagerMasterEndpoint:54 - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information 2022-03-03 06:30:09 INFO BlockManagerMasterEndpoint:54 - BlockManagerMasterEndpoint up 2022-03-03 06:30:09 INFO DiskBlockManager:54 - Created local directory at /tmp/blockmgr-56268e6d-f9da-4ba0-813d-8a9f0396a3ed 2022-03-03 06:30:09 INFO MemoryStore:54 - MemoryStore started with capacity 366.3 MB 2022-03-03 06:30:09 INFO SparkEnv:54 - Registering OutputCommitCoordinator 2022-03-03 06:30:09 INFO log:192 - Logging initialized @26333ms 2022-03-03 06:30:09 INFO Server:351 - jetty-9.3.z-SNAPSHOT, build timestamp: unknown, git hash: unknown 2022-03-03 06:30:09 INFO Server:419 - Started @26464ms 2022-03-03 06:30:09 INFO AbstractConnector:278 - Started ServerConnector@231cdda8{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} 2022-03-03 06:30:09 INFO Utils:54 - Successfully started service 'SparkUI' on port 4040. 
2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@544e3679{/jobs,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3e14d390{/jobs/json,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5eb87338{/jobs/job,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@31ab1e67{/jobs/job/json,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@29bbc391{/stages,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3487442d{/stages/json,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@530ee28b{/stages/stage,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@43c7fe8a{/stages/stage/json,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@67f946c3{/stages/pool,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@21b51e59{/stages/pool/json,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1785d194{/storage,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@6b4a4e40{/storage/json,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@46a8c2b4{/storage/rdd,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4f664bee{/storage/rdd/json,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@76563ae7{/environment,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started 
o.s.j.s.ServletContextHandler@4fd74223{/environment/json,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4fea840f{/executors,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@32ae8f27{/executors/json,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@75e80a97{/executors/threadDump,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5b8853{/executors/threadDump/json,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1b8aaeab{/static,null,AVAILABLE,@Spark} 2022-03-03 06:30:09 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5bfc79cb{/,null,AVAILABLE,@Spark} 2022-03-03 06:30:10 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@27ec8754{/api,null,AVAILABLE,@Spark} 2022-03-03 06:30:10 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@62f9c790{/jobs/job/kill,null,AVAILABLE,@Spark} 2022-03-03 06:30:10 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@21e5f0b6{/stages/stage/kill,null,AVAILABLE,@Spark} 2022-03-03 06:30:10 INFO SparkUI:54 - Bound SparkUI to 0.0.0.0, and started at http://192.168.122.67:4040 2022-03-03 06:30:10 INFO Executor:54 - Starting executor ID driver on host localhost 2022-03-03 06:30:10 INFO Utils:54 - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 49364. 
2022-03-03 06:30:10 INFO NettyBlockTransferService:54 - Server created on 192.168.122.67:49364 2022-03-03 06:30:10 INFO BlockManager:54 - Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy 2022-03-03 06:30:10 INFO BlockManagerMaster:54 - Registering BlockManager BlockManagerId(driver, 192.168.122.67, 49364, None) 2022-03-03 06:30:10 INFO BlockManagerMasterEndpoint:54 - Registering block manager 192.168.122.67:49364 with 366.3 MB RAM, BlockManagerId(driver, 192.168.122.67, 49364, None) 2022-03-03 06:30:10 INFO BlockManagerMaster:54 - Registered BlockManager BlockManagerId(driver, 192.168.122.67, 49364, None) 2022-03-03 06:30:10 INFO BlockManager:54 - Initialized BlockManager: BlockManagerId(driver, 192.168.122.67, 49364, None) 2022-03-03 06:30:10 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@537b3b2e{/metrics/json,null,AVAILABLE,@Spark} 2022-03-03 06:30:10 INFO SharedState:54 - Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/opt/modules/spark-2.3.2-bin-hadoop2.7/sbin/spark-warehouse'). 2022-03-03 06:30:10 INFO SharedState:54 - Warehouse path is 'file:/opt/modules/spark-2.3.2-bin-hadoop2.7/sbin/spark-warehouse'. 2022-03-03 06:30:10 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@27bc1d44{/SQL,null,AVAILABLE,@Spark} 2022-03-03 06:30:10 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1af677f8{/SQL/json,null,AVAILABLE,@Spark} 2022-03-03 06:30:10 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@34cd65ac{/SQL/execution,null,AVAILABLE,@Spark} 2022-03-03 06:30:10 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@61911947{/SQL/execution/json,null,AVAILABLE,@Spark} 2022-03-03 06:30:10 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5940b14e{/static/sql,null,AVAILABLE,@Spark} 2022-03-03 06:30:10 INFO HiveUtils:54 - Initializing HiveMetastoreConnection version 1.2.1 using Spark classes. 
2022-03-03 06:30:10 INFO HiveClientImpl:54 - Warehouse location for Hive client (version 1.2.2) is file:/opt/modules/spark-2.3.2-bin-hadoop2.7/sbin/spark-warehouse 2022-03-03 06:30:10 INFO metastore:291 - Mestastore configuration hive.metastore.warehouse.dir changed from /user/hive/warehouse to file:/opt/modules/spark-2.3.2-bin-hadoop2.7/sbin/spark-warehouse 2022-03-03 06:30:10 INFO HiveMetaStore:746 - 0: Shutting down the object store... 2022-03-03 06:30:10 INFO audit:371 - ugi=hadoop ip=unknown-ip-addr cmd=Shutting down the object store... 2022-03-03 06:30:10 INFO HiveMetaStore:746 - 0: Metastore shutdown complete. 2022-03-03 06:30:10 INFO audit:371 - ugi=hadoop ip=unknown-ip-addr cmd=Metastore shutdown complete. 2022-03-03 06:30:10 INFO HiveMetaStore:746 - 0: get_database: default 2022-03-03 06:30:10 INFO audit:371 - ugi=hadoop ip=unknown-ip-addr cmd=get_database: default 2022-03-03 06:30:10 INFO HiveMetaStore:589 - 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore 2022-03-03 06:30:10 INFO ObjectStore:289 - ObjectStore, initialize called 2022-03-03 06:30:10 INFO Query:77 - Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing 2022-03-03 06:30:10 INFO MetaStoreDirectSql:139 - Using direct SQL, underlying DB is DERBY 2022-03-03 06:30:10 INFO ObjectStore:272 - Initialized ObjectStore 2022-03-03 06:30:11 INFO StateStoreCoordinatorRef:54 - Registered StateStoreCoordinator endpoint 2022-03-03 06:30:11 INFO HiveMetaStore:746 - 0: get_database: global_temp 2022-03-03 06:30:11 INFO audit:371 - ugi=hadoop ip=unknown-ip-addr cmd=get_database: global_temp 2022-03-03 06:30:11 WARN ObjectStore:568 - Failed to get database global_temp, returning NoSuchObjectException 2022-03-03 06:30:14 INFO HiveMetaStore:746 - 0: get_databases: * 2022-03-03 06:30:14 INFO audit:371 - ugi=hadoop ip=unknown-ip-addr cmd=get_databases: * 2022-03-03 06:30:14 INFO CodeGenerator:54 - Code generated in 
271.767143 ms
default
Time taken: 2.991 seconds, Fetched 1 row(s)
2022-03-03 06:30:14 INFO SparkSQLCLIDriver:951 - Time taken: 2.991 seconds, Fetched 1 row(s)
2022-03-03 06:30:14 INFO AbstractConnector:318 - Stopped Spark@231cdda8{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2022-03-03 06:30:14 INFO SparkUI:54 - Stopped Spark web UI at http://192.168.122.67:4040
2022-03-03 06:30:14 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!
2022-03-03 06:30:14 INFO MemoryStore:54 - MemoryStore cleared
2022-03-03 06:30:14 INFO BlockManager:54 - BlockManager stopped
2022-03-03 06:30:14 INFO BlockManagerMaster:54 - BlockManagerMaster stopped
2022-03-03 06:30:14 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!
2022-03-03 06:30:14 INFO SparkContext:54 - Successfully stopped SparkContext
2022-03-03 06:30:14 INFO ShutdownHookManager:54 - Shutdown hook called
2022-03-03 06:30:14 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-f46e0008-5a10-405d-8a2a-c5406427ef4d
2022-03-03 06:30:14 INFO ShutdownHookManager:54 - Deleting directory /tmp/spark-5d80c67d-8631-4340-ad2d-e112802b93a2
[hadoop@dss sbin]$

Part 12: One-Click DSS Installation
Beforehand: supplement the nginx configuration, verify that yum works, and prepare the configuration files.
Linkis uses Python by default; installing Python 2 is recommended.
In the web package's install.sh, since nginx was installed locally, remove the nginx-installation and firewall-handling parts of the script.
In conf/config.sh of the bundle, change the nginx port to something other than 8088; it must not conflict with YARN.
I. Environment Preparation Before Use
a. Basic software installation
Command-line tools required by Linkis (before the formal installation, the script automatically checks whether these commands are available; if a command is missing, it tries to install it automatically, and if that fails you must manually install the following basic shell tools):
- telnet tar sed dos2unix mysql yum unzip expect
- MySQL (5.5+)
- JDK (1.8.0_141 or later)
- Python(2.7)
- Nginx
- Hadoop (2.7.2; other Hadoop versions require recompiling Linkis). The install machine must be able to run the hdfs dfs -ls / command.
- Hive (2.3.3; other Hive versions require recompiling Linkis). The install machine must be able to run the hive -e "show databases" command.
- Spark (any version 2.0 or later). The install machine must be able to run the spark-sql -e "show databases" command.
Tips:
If this is your first time installing Hadoop, see: Hadoop standalone deployment for a single machine, or Hadoop distributed deployment for a cluster.
If this is your first time installing Hive, see: Hive quick installation and deployment.
If this is your first time installing Spark, see: Spark on Yarn deployment for the On Yarn mode.
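Before moving on, the three engine commands called out in the requirements above can be probed in one go. This is a hedged helper sketch, not part of the official installer; it assumes hdfs, hive, and spark-sql are on the deployment user's PATH:

```shell
#!/usr/bin/env bash
# Probe the three engine commands the Linkis/DSS docs require to succeed.

check() {
  # check <label> <command...>: run the probe quietly and report pass/fail
  local label=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "OK $label"
  else
    echo "FAIL $label (command: $*)"
  fi
}

check "HDFS reachable"   hdfs dfs -ls /
check "Hive usable"      hive -e "show databases"
check "Spark SQL usable" spark-sql -e "show databases"
```

Run it as the deployment user; any FAIL line means the matching section earlier in this guide needs revisiting.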
b. Create a user
For example: the deployment user is the hadoop account (it does not have to be the hadoop user, but deploying with Hadoop's superuser is recommended; this is only an example).
c. Installation preparation
Compile them yourself, or download the install packages from each component's release page:
- wedatasphere-linkis-x.x.x-dist.tar.gz
- wedatasphere-dss-x.x.x-dist.tar.gz
- wedatasphere-dss-web-x.x.x-dist.zip
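Since release version numbers vary, here is a small hedged sketch for unpacking whichever versions you downloaded into the current install directory (globs instead of hard-coded versions; file names as in the list above):

```shell
#!/usr/bin/env bash
# Unpack the three DSS/Linkis release artifacts found in the current
# directory. Version numbers differ between releases, so globs are used.
set -eu

unpack_artifacts() {
  local f
  for f in wedatasphere-linkis-*-dist.tar.gz wedatasphere-dss-*-dist.tar.gz; do
    if [ -e "$f" ]; then tar -zxf "$f"; fi   # skip globs that matched nothing
  done
  for f in wedatasphere-dss-web-*-dist.zip; do
    if [ -e "$f" ]; then unzip -oq "$f"; fi  # -o overwrites quietly on re-runs
  done
}

unpack_artifacts
```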
d. Modify the configuration
Open conf/config.sh and modify the configuration parameters as needed:
vim conf/config.sh
The parameters are explained below.
Note: DSS_WEB_PORT must not conflict with the YARN REST URL port (8088); change it to 8099 or another free port.
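A quick hedged way to confirm the chosen DSS web port will not collide with YARN's 8088 before running the installer (assumes the ss tool from iproute2; 8099 is just the suggested default):

```shell
#!/usr/bin/env bash
# Check that the DSS web port chosen in conf/config.sh is free and is not
# YARN's default REST port 8088.

port_in_use() {
  # true if some local process is already listening on TCP port $1
  ss -ltn 2>/dev/null | awk '{print $4}' | grep -q ":$1\$"
}

DSS_WEB_PORT=${DSS_WEB_PORT:-8099}
if [ "$DSS_WEB_PORT" = "8088" ]; then
  echo "DSS_WEB_PORT must not be 8088 (YARN REST port)"
elif port_in_use "$DSS_WEB_PORT"; then
  echo "Port $DSS_WEB_PORT is already taken, pick another DSS_WEB_PORT"
else
  echo "DSS_WEB_PORT=$DSS_WEB_PORT looks free"
fi
```

If the script reports the port as taken, update DSS_WEB_PORT in conf/config.sh before installing.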
#################### Basic settings for the one-click install ####################
# Deployment user; defaults to the current login user. Do not change unless necessary.
# deployUser=hadoop
# Do not change unless necessary.
# LINKIS_VERSION=1.0.2
### DSS Web; no change needed for a local install
#DSS_NGINX_IP=127.0.0.1
#DSS_WEB_PORT=8099
# Do not change unless necessary.
#DSS_VERSION=1.0.0
## Java heap size per service. With less than 8G of RAM, 128M is recommended;
## with 16G, at least 256M; for a really smooth experience, give the machine at least 32G.
export SERVER_HEAP_SIZE="128M"

##################### Linkis configuration: begin #####################
### Uncommented parameters must be configured; commented ones may be changed as needed.
### DSS workspace directory
WORKSPACE_USER_ROOT_PATH=file:///tmp/linkis/
### User HDFS root path
HDFS_USER_ROOT_PATH=hdfs:///tmp/linkis
### Result-set path: a file or hdfs path
RESULT_SET_ROOT_PATH=hdfs:///tmp/linkis
### Path to store started engines and engine logs, must be local
ENGINECONN_ROOT_PATH=/appcom/tmp
#ENTRANCE_CONFIG_LOG_PATH=hdfs:///tmp/linkis/
### ==Hadoop configuration-file path; MUST be configured==
HADOOP_CONF_DIR=/appcom/config/hadoop-config
### HIVE CONF DIR
HIVE_CONF_DIR=/appcom/config/hive-config
### SPARK CONF DIR
SPARK_CONF_DIR=/appcom/config/spark-config
# for install
#LINKIS_PUBLIC_MODULE=lib/linkis-commons/public-module
## YARN REST URL
YARN_RESTFUL_URL=http://127.0.0.1:8088
## Engine versions; defaults are used if left unset
#SPARK_VERSION=2.4.3
#HIVE_VERSION=1.2.1
#PYTHON_VERSION=python2
## LDAP is for enterprise authorization, if you just want to have a try, ignore it.
#LDAP_URL=ldap://localhost:1389/
#LDAP_BASEDN=dc=webank,dc=com
#LDAP_USER_NAME_FORMAT=cn=%s@xxx.com,OU=xxx,DC=xxx,DC=com

# Microservices Service Registration Discovery Center
#LINKIS_EUREKA_INSTALL_IP=127.0.0.1
#LINKIS_EUREKA_PORT=20303
#LINKIS_EUREKA_PREFER_IP=true

### Gateway install information
#LINKIS_GATEWAY_INSTALL_IP=127.0.0.1
#LINKIS_GATEWAY_PORT=9001
### ApplicationManager
#LINKIS_MANAGER_INSTALL_IP=127.0.0.1
#LINKIS_MANAGER_PORT=9101
### EngineManager
#LINKIS_ENGINECONNMANAGER_INSTALL_IP=127.0.0.1
#LINKIS_ENGINECONNMANAGER_PORT=9102
### EnginePluginServer
#LINKIS_ENGINECONN_PLUGIN_SERVER_INSTALL_IP=127.0.0.1
#LINKIS_ENGINECONN_PLUGIN_SERVER_PORT=9103
### LinkisEntrance
#LINKIS_ENTRANCE_INSTALL_IP=127.0.0.1
#LINKIS_ENTRANCE_PORT=9104
### publicservice
#LINKIS_PUBLICSERVICE_INSTALL_IP=127.0.0.1
#LINKIS_PUBLICSERVICE_PORT=9105
### cs
#LINKIS_CS_INSTALL_IP=127.0.0.1
#LINKIS_CS_PORT=9108
##################### Linkis configuration: end #####################

####################### DSS configuration: begin #######################
### Uncommented parameters must be configured; commented ones may be changed as needed.
# Temporary ZIP packages published to Schedulis are stored here
WDS_SCHEDULER_PATH=file:///appcom/tmp/wds/scheduler
### This service is used to provide dss-framework-project-server capability.
#DSS_FRAMEWORK_PROJECT_SERVER_INSTALL_IP=127.0.0.1
#DSS_FRAMEWORK_PROJECT_SERVER_PORT=9002
### This service is used to provide dss-framework-orchestrator-server capability.
#DSS_FRAMEWORK_ORCHESTRATOR_SERVER_INSTALL_IP=127.0.0.1
#DSS_FRAMEWORK_ORCHESTRATOR_SERVER_PORT=9003
### This service is used to provide dss-apiservice-server capability.
#DSS_APISERVICE_SERVER_INSTALL_IP=127.0.0.1
#DSS_APISERVICE_SERVER_PORT=9004
### This service is used to provide dss-workflow-server capability.
#DSS_WORKFLOW_SERVER_INSTALL_IP=127.0.0.1
#DSS_WORKFLOW_SERVER_PORT=9005
### dss-flow-Execution-Entrance
### This service is used to provide flow execution capability.
#DSS_FLOW_EXECUTION_SERVER_INSTALL_IP=127.0.0.1
#DSS_FLOW_EXECUTION_SERVER_PORT=9006
### This service is used to provide dss-datapipe-server capability.
#DSS_DATAPIPE_SERVER_INSTALL_IP=127.0.0.1
#DSS_DATAPIPE_SERVER_PORT=9008
## sendemail settings; they only affect the send-mail node in DSS workflows
EMAIL_HOST=smtp.163.com
EMAIL_PORT=25
EMAIL_USERNAME=xxx@163.com
EMAIL_PASSWORD=xxxxx
EMAIL_PROTOCOL=smtp
####################### DSS configuration: end #######################

The following paths need to be configured:
###HADOOP CONF DIR #/appcom/config/hadoop-config
HADOOP_CONF_DIR=/opt/modules/hadoop-2.7.2/etc/hadoop/
###HIVE CONF DIR #/appcom/config/hive-config
HIVE_CONF_DIR=/opt/modules/apache-hive-2.3.3/conf
###SPARK CONF DIR #/appcom/config/spark-config
SPARK_CONF_DIR=/opt/modules/spark-2.3.2-bin-hadoop2.7/conf
e. Modify the database configuration
Make sure the installation machine can reach the configured database; otherwise the DDL and DML imports will fail.
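The installer later performs a "telnet check for your MYSQL"; the same reachability test can be run up front. A hedged sketch using bash's /dev/tcp (host and port defaults mirror the sample db.sh values):

```shell
#!/usr/bin/env bash
# Verify the MySQL host/port from conf/db.sh accepts TCP connections
# before running bin/install.sh.

reachable() {
  # bash's /dev/tcp pseudo-device opens a TCP connection; no telnet required
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

MYSQL_HOST=${MYSQL_HOST:-127.0.0.1}
MYSQL_PORT=${MYSQL_PORT:-3306}
if reachable "$MYSQL_HOST" "$MYSQL_PORT"; then
  echo "MySQL at $MYSQL_HOST:$MYSQL_PORT is reachable"
else
  echo "Cannot reach $MYSQL_HOST:$MYSQL_PORT -- fix conf/db.sh first"
fi
```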
vi conf/db.sh

### DSS database configuration
MYSQL_HOST=127.0.0.1
MYSQL_PORT=3306
MYSQL_DB=dss
MYSQL_USER=root
MYSQL_PASSWORD=asdf1234

## Hive metastore database configuration, used by Linkis to read Hive metadata
HIVE_HOST=127.0.0.1
HIVE_PORT=3306
HIVE_DB=hive
HIVE_USER=root
HIVE_PASSWORD=asdf1234

f. Modify the wedatasphere-dss-web-1.0.1-dist configuration
The following part of install.sh needs to be adjusted:
centos7(){
    # Is nginx installed?
    #sudo rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
    # Adjustment 1: the yum-installed nginx lacks the /etc/nginx/conf.d/ directory,
    # so nginx was installed manually instead; see Part 8
    #sudo yum install -y nginx
    #echo "Nginx installed successfully"
    # configure nginx
    dssConf
    # work around the 0.0.0.0:8888 problem
    yum -y install policycoreutils-python
    semanage port -a -t http_port_t -p tcp $dss_port
    # open the front-end port
    # Adjustment 2: [not needed for testing if the local firewall is already off]
    #firewall-cmd --zone=public --add-port=$dss_port/tcp --permanent
    # Adjustment 3: restart the firewall
    # [not needed for testing if the local firewall is already off]
    #firewall-cmd --reload
    # start nginx
    systemctl restart nginx
    # adjust the SELinux setting
    sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
    # make it take effect immediately
    setenforce 0
}

II. Installation and Usage
1. Run the installation script:
sh bin/install.sh
# to watch exactly which step is executing:
sh -v bin/install.sh

2. Installation steps
- The installation script checks the various environment commands; if any are missing, install them as prompted. The following are mandatory (already installed during environment preparation):
  yum java mysql zip unzip expect telnet tar sed dos2unix nginx
- During installation, the script asks whether to initialize the database and import metadata; both Linkis and DSS will ask.
  For a first installation you must answer yes.
3. Checking whether the installation succeeded:
Check the log information printed to the console.
If there are error messages, inspect them for the specific cause of the failure.
4. Start the services
(1) Start the services:
In the installation directory, run the following command to start all services:
sh bin/start-all.sh
If startup produces error messages, inspect them for the specific cause. After startup, the microservices run communication checks against each other, which helps locate abnormal logs and their causes.
(2) Check whether startup succeeded
You can check the startup status of each Linkis & DSS back-end microservice on the Eureka page. (Screenshot: https://github.com/WeBankFinTech/DataSphereStudio-Doc/raw/main/zh_CN/Images/%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2/DSS%E5%8D%95%E6%9C%BA%E9%83%A8%E7%BD%B2%E6%96%87%E6%A1%A3/eureka.png)
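Instead of the web page, Eureka's REST registry can also be queried directly. A hedged sketch; the port assumes the default LINKIS_EUREKA_PORT=20303 from conf/config.sh, and /eureka/apps is the standard Spring Cloud Eureka registry endpoint:

```shell
#!/usr/bin/env bash
# List the service names registered in Eureka via its REST API.

parse_apps() {
  # pull every <name>...</name> out of the registry XML, one per line
  grep -o '<name>[^<]*</name>' | sed -e 's/<name>//' -e 's#</name>##' | sort -u
}

EUREKA=${EUREKA:-http://127.0.0.1:20303}
curl -s -m 5 "$EUREKA/eureka/apps" | parse_apps
```

A healthy deployment should list the 8 Linkis service names plus the DSS ones.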
(3) Access with Google Chrome:
Use Google Chrome to open the following front-end address:
http://DSS_NGINX_IP:DSS_WEB_PORT
The startup log prints this address. At login, the administrator username and password both default to the deployment user name; if the deployment user is hadoop, the admin username/password is hadoop/hadoop.
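The same check can be scripted; a hedged sketch that asks nginx for a status code (the IP and port are example values for DSS_NGINX_IP/DSS_WEB_PORT):

```shell
#!/usr/bin/env bash
# Probe the DSS front-end the way a browser would: an HTTP 200/302 from
# nginx means the web layer is up.

http_code() {
  # prints the HTTP status code, or 000 if the connection fails
  curl -s -o /dev/null -m 5 -w '%{http_code}' "http://$1:$2/"
}

code=$(http_code 127.0.0.1 8099)
echo "DSS web returned HTTP $code"
```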
(4) Stop the services:
In the installation directory, run the following command to stop all services:
sh bin/stop-all.sh
Use the following commands to judge whether the stop succeeded; if it failed, end the processes with kill:
ps -ef | grep DSS
ps -ef | grep Linkis
ps -ef | grep eureka

(5) After a successful installation there are 6 DSS services and 8 Linkis services
LINKIS-CG-ENGINECONNMANAGER
LINKIS-CG-ENGINEPLUGIN
LINKIS-CG-ENTRANCE
LINKIS-CG-LINKISMANAGER
LINKIS-MG-EUREKA
LINKIS-MG-GATEWAY
LINKIS-PS-CS
LINKIS-PS-PUBLICSERVICE
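When stopping, the three ps | grep checks from step (4) can be wrapped into one loop; a hedged sketch that also shows how stragglers could be force-killed:

```shell
#!/usr/bin/env bash
# Find processes left over after stop-all.sh and print them per pattern.

leftover() {
  # list processes matching the pattern, excluding the grep itself
  ps -ef | grep -iE "$1" | grep -v grep
}

for pat in DSS Linkis eureka; do
  if leftover "$pat" >/dev/null; then
    echo "Still running ($pat):"
    leftover "$pat"
    # to force-kill: leftover "$pat" | awk '{print $2}' | xargs -r kill -9
  fi
done
```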
5. Installation log (install.sh)
[hadoop@dss dssLinksFamilyMeals]$ sh bin/install.sh######################################################################################## ######################################################################################## Welcome to DSS & Linkis Deployment Service! Suitable for Linkis and DSS first installation, please be sure the environment is ready. ######################################################################################## ########################################################################################It is recommended to use 5G memory. Each service is set to 256M, with a minimum of 128M. The default configuration is 256M. If you need to modify it, please modify conf/config.shAre you sure you have installed the database? If installed, enter 1, otherwise enter 0Please input the choice:1Do you need to check the installation environment? Enter 1 if necessary, otherwise enter 0Please input the choice:0######################################################################## ###################### Start to install Linkis ######################### ######################################################################## Start to unzip linkis package. Succeed to + Unzip linkis package to /opt/modules/dssLinksFamilyMeals/linkis-pre-install. Start to replace linkis field value. End to replace linkis field value. <-----start to check used cmd----> <-----end to check used cmd----> Succeed to + check env step1:load config Succeed to + load config Do you want to clear Linkis table information in the database?1: Do not execute table-building statements2: Dangerous! 
Clear all data and rebuild the tablesother: exitPlease input the choice:2 You chose Rebuild the table create hdfs directory and local directory Succeed to + create file:///tmp/linkis/ directory Succeed to + create hdfs:///tmp/linkis directory Succeed to + create hdfs:///tmp/linkis directory rm: cannot remove ‘/opt/modules/dssLinksFamilyMeals/linkis-bak’: No such file or directory mv /opt/modules/dssLinksFamilyMeals/linkis /opt/modules/dssLinksFamilyMeals/linkis-bak create dir LINKIS_HOME: /opt/modules/dssLinksFamilyMeals/linkis Succeed to + Create the dir of /opt/modules/dssLinksFamilyMeals/linkis Start to cp /opt/modules/dssLinksFamilyMeals/linkis-pre-install/linkis-package to /opt/modules/dssLinksFamilyMeals/linkis. Succeed to + cp /opt/modules/dssLinksFamilyMeals/linkis-pre-install/linkis-package to /opt/modules/dssLinksFamilyMeals/linkis mysql: [Warning] Using a password on the command line interface can be insecure. mysql: [Warning] Using a password on the command line interface can be insecure. Succeed to + source linkis_ddl.sql mysql: [Warning] Using a password on the command line interface can be insecure. 
+-----------------+ | @label_id := id | +-----------------+ | 1 | +-----------------+ +-----------------+ | @label_id := id | +-----------------+ | 2 | +-----------------+ +-----------------+ | @label_id := id | +-----------------+ | 3 | +-----------------+ +-----------------+ | @label_id := id | +-----------------+ | 4 | +-----------------+ +-----------------+ | @label_id := id | +-----------------+ | 11 | +-----------------+ +-----------------+ | @label_id := id | +-----------------+ | 12 | +-----------------+ +-----------------+ | @label_id := id | +-----------------+ | 13 | +-----------------+ +-----------------+ | @label_id := id | +-----------------+ | 14 | +-----------------+ +-----------------+ | @label_id := id | +-----------------+ | 15 | +-----------------+ +-----------------+ | @label_id := id | +-----------------+ | 16 | +-----------------+ +-----------------+ | @label_id := id | +-----------------+ | 17 | +-----------------+ +-----------------+ | @label_id := id | +-----------------+ | 18 | +-----------------+ +-----------------+ | @label_id := id | +-----------------+ | 19 | +-----------------+ Succeed to + source linkis_dml.sql Rebuild the table Update config... update conf /opt/modules/dssLinksFamilyMeals/linkis/conf/linkis.properties update conf /opt/modules/dssLinksFamilyMeals/linkis/conf/linkis-mg-gateway.properties update conf /opt/modules/dssLinksFamilyMeals/linkis/conf/linkis-ps-publicservice.properties Congratulations! You have installed Linkis 1.0.3 successfully, please use sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-start-all.sh to start it! 
Your default account password ishadoop/a61035488 Succeed to + install Linkis######################################################################### ###################### Start to install DSS Service ##################### ######################################################################### Succeed to + Create the dir of /opt/modules/dssLinksFamilyMeals/dss-pre-install Start to unzip dss server package. Succeed to + Unzip dss server package to /opt/modules/dssLinksFamilyMeals/dss-pre-install Start to replace dss field value. End to replace dss field value. java version "1.8.0_261" Java(TM) SE Runtime Environment (build 1.8.0_261-b12) Java HotSpot(TM) 64-Bit Server VM (build 25.261-b12, mixed mode) Succeed to + execute java --version step1:load config Do you want to clear Dss table information in the database?1: Do not execute table-building statements2: Dangerous! Clear all data and rebuild the tables.Please input the choice:2 You chose Rebuild the table Simple installation mode java version "1.8.0_261" Java(TM) SE Runtime Environment (build 1.8.0_261-b12) Java HotSpot(TM) 64-Bit Server VM (build 25.261-b12, mixed mode) Succeed to + execute java --version telnet check for your MYSQL, if you wait for a long time,may be your MYSQL does not prepared MYSQL is OK. mysql: [Warning] Using a password on the command line interface can be insecure. Succeed to + source dss_ddl.sql mysql: [Warning] Using a password on the command line interface can be insecure. 
+---------------------------------+ | @dss_appconn_orchestratorId:=id | +---------------------------------+ | 2 | +---------------------------------+ +-----------------------------+ | @dss_appconn_workflowId:=id | +-----------------------------+ | 3 | +-----------------------------+ +---------------------------------+ | @dss_appconn_eventcheckerId:=id | +---------------------------------+ | 5 | +---------------------------------+ +--------------------------------+ | @dss_appconn_datacheckerId:=id | +--------------------------------+ | 6 | +--------------------------------+ Succeed to + source dss_dml_real.sql Rebuild the table step2:update config rm: cannot remove ‘/opt/modules/dssLinksFamilyMeals/dss-bak’: No such file or directory mv /opt/modules/dssLinksFamilyMeals/dss /opt/modules/dssLinksFamilyMeals/dss-bak create dir SERVER_HOME: /opt/modules/dssLinksFamilyMeals/dss Succeed to + Create the dir of /opt/modules/dssLinksFamilyMeals/dss Succeed to + subsitution /opt/modules/dssLinksFamilyMeals/dss/conf/dss-framework-project-server.properties Succeed to + subsitution /opt/modules/dssLinksFamilyMeals/dss/conf/dss-framework-orchestrator-server.properties Succeed to + subsitution /opt/modules/dssLinksFamilyMeals/dss/conf/dss-apiservice-server.properties Succeed to + subsitution /opt/modules/dssLinksFamilyMeals/dss/conf/dss-datapipe-server.properties Succeed to + subsitution /opt/modules/dssLinksFamilyMeals/dss/conf/dss-flow-execution-server.properties Succeed to + subsitution /opt/modules/dssLinksFamilyMeals/dss/conf/dss-workflow-server.properties Congratulations! You have installed DSS 1.0.1 successfully, please use sbin/dss-start-all.sh to start it! 
Succeed to + install DSS Service
###########################################################################
###################### Start to install DSS & Linkis Web ##################
###########################################################################
Succeed to + Create the dir of /opt/modules/dssLinksFamilyMeals/web
Start to unzip dss web package.
Succeed to + Unzip dss web package to /opt/modules/dssLinksFamilyMeals/web
Start to replace dss web field value.
End to replace dss web field value.
dss front-end deployment script linux
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.bupt.edu.cn
* epel: mirror.earthlink.iq
* extras: mirrors.cn99.com
* updates: mirrors.cn99.com
Package policycoreutils-python-2.5-34.el7.x86_64 already installed and latest version
Nothing to do
ValueError: Port tcp/8088 already defined
Succeed to + install DSS & Linkis Web
Eureka configuration path of Linkis: linkis/conf/application-linkis.yml
Eureka configuration path of DSS : dss/conf/application-dss.yml
Congratulations! You have installed DSS & Linkis successfully, please use bin/start-all.sh to start it!

6. Startup script (start-all.sh)
[hadoop@dss dssLinksFamilyMeals]$ [hadoop@dss dssLinksFamilyMeals]$ bin/start-all.sh ######################################################################## ###################### Begin to start Linkis ########################### ######################################################################## We will start all linkis applications, it will take some time, please wait <--------------------------------> Begin to start mg-eureka Is local execution:sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-daemon.sh restart mg-eureka server mg-eureka is not running Start to check whether the mg-eureka is running Start server, startup script: /opt/modules/dssLinksFamilyMeals/linkis/sbin/ext/linkis-mg-eureka nohup: redirecting stderr to stdout server linkis-mg-eureka start succeeded! Succeed to + End to start mg-eureka <--------------------------------> <--------------------------------> Begin to start mg-gateway Is local execution:sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-daemon.sh restart mg-gateway server mg-gateway is not running Start to check whether the mg-gateway is running Start server, startup script: /opt/modules/dssLinksFamilyMeals/linkis/sbin/ext/linkis-mg-gateway nohup: redirecting stderr to stdout server linkis-mg-gateway start succeeded! Succeed to + End to start mg-gateway <--------------------------------> <--------------------------------> Begin to start ps-publicservice Is local execution:sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-daemon.sh restart ps-publicservice server ps-publicservice is not running Start to check whether the ps-publicservice is running Start server, startup script: /opt/modules/dssLinksFamilyMeals/linkis/sbin/ext/linkis-ps-publicservice nohup: redirecting stderr to stdout server linkis-ps-publicservice start succeeded! 
Succeed to + End to start ps-publicservice
<-------------------------------->
<-------------------------------->
Begin to start cg-linkismanager
Is local execution:sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-daemon.sh restart cg-linkismanager
server cg-linkismanager is not running
Start to check whether the cg-linkismanager is running
Start server, startup script: /opt/modules/dssLinksFamilyMeals/linkis/sbin/ext/linkis-cg-linkismanager
nohup: redirecting stderr to stdout
server linkis-cg-linkismanager start succeeded!
Succeed to + End to start cg-linkismanager
<-------------------------------->
<-------------------------------->
Begin to start ps-cs
Is local execution:sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-daemon.sh restart ps-cs
server ps-cs is not running
Start to check whether the ps-cs is running
Start server, startup script: /opt/modules/dssLinksFamilyMeals/linkis/sbin/ext/linkis-ps-cs
nohup: redirecting stderr to stdout
server linkis-ps-cs start succeeded!
Succeed to + End to start ps-cs
<-------------------------------->
<-------------------------------->
Begin to start cg-entrance
Is local execution:sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-daemon.sh restart cg-entrance
server cg-entrance is not running
Start to check whether the cg-entrance is running
Start server, startup script: /opt/modules/dssLinksFamilyMeals/linkis/sbin/ext/linkis-cg-entrance
nohup: redirecting stderr to stdout
server linkis-cg-entrance start succeeded!
Succeed to + End to start cg-entrance
<-------------------------------->
<-------------------------------->
Begin to start cg-engineconnmanager
Is local execution:sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-daemon.sh restart cg-engineconnmanager
server cg-engineconnmanager is not running
Start to check whether the cg-engineconnmanager is running
Start server, startup script: /opt/modules/dssLinksFamilyMeals/linkis/sbin/ext/linkis-cg-engineconnmanager
nohup: redirecting stderr to stdout
server linkis-cg-engineconnmanager start succeeded!
Succeed to + End to start cg-engineconnmanager
<-------------------------------->
<-------------------------------->
Begin to start cg-engineplugin
Is local execution:sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-daemon.sh restart cg-engineplugin
server cg-engineplugin is not running
Start to check whether the cg-engineplugin is running
Start server, startup script: /opt/modules/dssLinksFamilyMeals/linkis/sbin/ext/linkis-cg-engineplugin
nohup: redirecting stderr to stdout
server linkis-cg-engineplugin start succeeded!
Succeed to + End to start cg-engineplugin
<-------------------------------->
start-all shell script executed completely
Start to check all linkis microservice
<-------------------------------->
Begin to check mg-eureka
Is local execution:sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-daemon.sh status mg-eureka
12210 server mg-eureka is running.
<-------------------------------->
<-------------------------------->
Begin to check mg-gateway
Is local execution:sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-daemon.sh status mg-gateway
12248 server mg-gateway is running.
<-------------------------------->
<-------------------------------->
Begin to check ps-publicservice
Is local execution:sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-daemon.sh status ps-publicservice
12317 server ps-publicservice is running.
<-------------------------------->
<-------------------------------->
Begin to check ps-cs
Is local execution:sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-daemon.sh status ps-cs
12465 server ps-cs is running.
<-------------------------------->
<-------------------------------->
Begin to check cg-linkismanager
Is local execution:sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-daemon.sh status cg-linkismanager
12363 server cg-linkismanager is running.
<-------------------------------->
<-------------------------------->
Begin to check cg-entrance
Is local execution:sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-daemon.sh status cg-entrance
12519 server cg-entrance is running.
<-------------------------------->
<-------------------------------->
Begin to check cg-engineconnmanager
Is local execution:sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-daemon.sh status cg-engineconnmanager
12575 server cg-engineconnmanager is running.
<-------------------------------->
<-------------------------------->
Begin to check cg-engineplugin
Is local execution:sh /opt/modules/dssLinksFamilyMeals/linkis/sbin/linkis-daemon.sh status cg-engineplugin
12631 server cg-engineplugin is running.
<-------------------------------->
Linkis started successfully
Succeed to + start Linkis
########################################################################
###################### Begin to start DSS Service ######################
########################################################################
We will start all dss applications, it will take some time, please wait
<-------------------------------->
Begin to start dss-framework-project-server
Is local execution:sh /opt/modules/dssLinksFamilyMeals/dss/sbin/dss-daemon.sh restart dss-framework-project-server
server dss-framework-project-server is not running
Start to check whether the dss-framework-project-server is running
Start to start server, startup script: /opt/modules/dssLinksFamilyMeals/dss/sbin/ext/dss-framework-project-server
nohup: redirecting stderr to stdout
server dss-framework-project-server start succeeded!
End to start dss-framework-project-server
<-------------------------------->
<-------------------------------->
Begin to start dss-framework-orchestrator-server
Is local execution:sh /opt/modules/dssLinksFamilyMeals/dss/sbin/dss-daemon.sh restart dss-framework-orchestrator-server
server dss-framework-orchestrator-server is not running
Start to check whether the dss-framework-orchestrator-server is running
Start to start server, startup script: /opt/modules/dssLinksFamilyMeals/dss/sbin/ext/dss-framework-orchestrator-server
nohup: redirecting stderr to stdout
server dss-framework-orchestrator-server start succeeded!
End to start dss-framework-orchestrator-server
<-------------------------------->
<-------------------------------->
Begin to start dss-apiservice-server
Is local execution:sh /opt/modules/dssLinksFamilyMeals/dss/sbin/dss-daemon.sh restart dss-apiservice-server
server dss-apiservice-server is not running
Start to check whether the dss-apiservice-server is running
Start to start server, startup script: /opt/modules/dssLinksFamilyMeals/dss/sbin/ext/dss-apiservice-server
nohup: redirecting stderr to stdout
server dss-apiservice-server start succeeded!
End to start dss-apiservice-server
<-------------------------------->
<-------------------------------->
Begin to start dss-datapipe-server
Is local execution:sh /opt/modules/dssLinksFamilyMeals/dss/sbin/dss-daemon.sh restart dss-datapipe-server
server dss-datapipe-server is not running
Start to check whether the dss-datapipe-server is running
Start to start server, startup script: /opt/modules/dssLinksFamilyMeals/dss/sbin/ext/dss-datapipe-server
nohup: redirecting stderr to stdout
server dss-datapipe-server start succeeded!
End to start dss-datapipe-server
<-------------------------------->
<-------------------------------->
Begin to start dss-workflow-server
Is local execution:sh /opt/modules/dssLinksFamilyMeals/dss/sbin/dss-daemon.sh restart dss-workflow-server
server dss-workflow-server is not running
Start to check whether the dss-workflow-server is running
Start to start server, startup script: /opt/modules/dssLinksFamilyMeals/dss/sbin/ext/dss-workflow-server
nohup: redirecting stderr to stdout
server dss-workflow-server start succeeded!
End to start dss-workflow-server
<-------------------------------->
<-------------------------------->
Begin to start dss-flow-execution-server
Is local execution:sh /opt/modules/dssLinksFamilyMeals/dss/sbin/dss-daemon.sh restart dss-flow-execution-server
server dss-flow-execution-server is not running
Start to check whether the dss-flow-execution-server is running
Start to start server, startup script: /opt/modules/dssLinksFamilyMeals/dss/sbin/ext/dss-flow-execution-server
nohup: redirecting stderr to stdout
server dss-flow-execution-server start succeeded!
End to start dss-flow-execution-server
<-------------------------------->
<-------------------------------->
Begin to check dss-framework-project-server
Is local execution:sh /opt/modules/dssLinksFamilyMeals/dss/sbin/dss-daemon.sh status dss-framework-project-server
server dss-framework-project-server is running.
<-------------------------------->
<-------------------------------->
Begin to check dss-framework-orchestrator-server
Is local execution:sh /opt/modules/dssLinksFamilyMeals/dss/sbin/dss-daemon.sh status dss-framework-orchestrator-server
server dss-framework-orchestrator-server is running.
<-------------------------------->
<-------------------------------->
Begin to check dss-apiservice-server
Is local execution:sh /opt/modules/dssLinksFamilyMeals/dss/sbin/dss-daemon.sh status dss-apiservice-server
server dss-apiservice-server is running.
<-------------------------------->
<-------------------------------->
Begin to check dss-datapipe-server
Is local execution:sh /opt/modules/dssLinksFamilyMeals/dss/sbin/dss-daemon.sh status dss-datapipe-server
server dss-datapipe-server is running.
<-------------------------------->
<-------------------------------->
Begin to check dss-workflow-server
Is local execution:sh /opt/modules/dssLinksFamilyMeals/dss/sbin/dss-daemon.sh status dss-workflow-server
server dss-workflow-server is running.
<-------------------------------->
<-------------------------------->
Begin to check dss-flow-execution-server
Is local execution:sh /opt/modules/dssLinksFamilyMeals/dss/sbin/dss-daemon.sh status dss-flow-execution-server
server dss-flow-execution-server is running.
<-------------------------------->
Succeed to + start DSS Service
########################################################################
###################### Begin to start DSS & Linkis web #################
########################################################################
Succeed to + start DSS & Linkis Web
===============================================
There are eight micro services in Linkis:
linkis-cg-engineconnmanager
linkis-cg-engineplugin
linkis-cg-entrance
linkis-cg-linkismanager
linkis-mg-eureka
linkis-mg-gateway
linkis-ps-cs
linkis-ps-publicservice
-----------------------------------------------
There are six micro services in DSS:
dss-framework-project-server
dss-framework-orchestrator-server-dev
dss-workflow-server-dev
dss-flow-entrance
dss-datapipe-server
dss-apiservice-server
===============================================
Log path of Linkis: linkis/logs
Log path of DSS : dss/logs
You can check DSS & Linkis by acessing eureka URL: http://192.168.122.67:20303
You can acess DSS & Linkis Web by http://192.168.122.67:8088
[hadoop@dss dssLinksFamilyMeals]$
7. Log description
Under the dss directory (/opt/modules/dssLinksFamilyMeals/dss/logs):
dss-apiservice-server.out
dss-datapipe-server.out
dss-flow-execution-server.out
dss-framework-orchestrator-server.out
dss-framework-project-server.out
dss-workflow-server.out
Under the linkis directory (/opt/modules/dssLinksFamilyMeals/linkis/logs):
linkis-cg-engineconnmanager.out
linkis-cg-engineplugin.out
linkis-cg-entrance.out
linkis-cg-linkismanager.out
linkis-mg-eureka.out
linkis-mg-gateway.out
linkis-ps-cs.out
linkis-ps-publicservice.out
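When a service fails to come up, these .out files are the first place to look. A minimal sketch of a log triage helper (the `scan_logs` name is ours, not part of DSS; the paths assume the install layout used throughout this guide):

```shell
# Hypothetical helper: count ERROR lines in every *.out log under a directory.
scan_logs() {
  for log in "$1"/*.out; do
    [ -f "$log" ] || continue                       # skip when the glob matches nothing
    printf '%s %s\n' "$(basename "$log")" "$(grep -c 'ERROR' "$log")"
  done
}

# Run it against the two log directories listed above:
scan_logs /opt/modules/dssLinksFamilyMeals/dss/logs
scan_logs /opt/modules/dssLinksFamilyMeals/linkis/logs
```

Any log with a non-zero count is worth opening with `tail -n 100` before restarting the corresponding service.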
III. Related access URLs
nginx: http://192.168.122.67/
hadoop: http://192.168.122.67:50070/dfshealth.html
spark: http://192.168.122.67:8080/
spark: http://192.168.122.67:8081/
eureka: http://192.168.122.67:20303
DSS & Linkis Web: http://192.168.122.67:8088
The login password can be found in the logs; see the following line for reference:
Your default account password is hadoop/5f8a94fae
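That password line can be recovered from the logs with a simple grep. The `find_password` helper name is ours, and the path assumes the install layout used throughout this guide:

```shell
# Hypothetical helper: pull the most recent "password is ..." line out of the logs.
find_password() {
  grep -rhoE 'password is [^ ]+' "$1" 2>/dev/null | tail -n 1
}

find_password /opt/modules/dssLinksFamilyMeals/dss/logs
```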
Part 13: Help
I. Creating, deleting, and modifying symbolic links
1. Creating a symlink
ln -s <target directory> <link path>
<target directory> is the directory the symlink will point to; <link path> is the name of the "shortcut" file, which is created by the command. In the example, the public entry does not exist under the data directory until the command is run.
Creating a symlink requires that no file with the same name already exists in that directory, just as a shortcut on a Windows desktop cannot share a name with an existing file.
If a file with the same name already exists there, the command reports an error.
2. Deleting a symlink
rm -rf <link path>
In the command above, the link path must not end with a "/". With a trailing slash, what gets deleted is the content of the directory the link points to, not the link itself.
3. Modifying a symlink
ln -snf <new target directory> <link path>
"Modifying" here means changing the directory the symlink points to.
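The three operations can be seen end to end in a scratch directory (the data1/data2/public names here are made up for illustration):

```shell
# Create, repoint, and remove a symlink in a throwaway directory.
tmp=$(mktemp -d)
cd "$tmp"
mkdir data1 data2

ln -s "$tmp/data1" public     # create: public -> data1
readlink public               # shows the current target

ln -snf "$tmp/data2" public   # modify: repoint in place, no prior delete needed
readlink public               # now shows data2

rm public                     # delete: no trailing "/", so only the link goes
ls -d data1 data2             # both real directories are untouched
```

The -n flag in `ln -snf` is what makes the repoint safe: without it, ln dereferences the existing link and creates the new symlink inside the old target directory.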
————————————————
Copyright notice: this section is adapted from an original article by CSDN blogger 「主主主主公」, under the CC 4.0 BY-SA license; reproduction must include the original link and this notice.
Original link: https://blog.csdn.net/xhx94/article/details/98865598
II. The sudo command and the /etc/sudoers file
Common sudo options:
-l            list the current user's sudo privileges
-l username   list username's sudo privileges
-u username   run the command with username's privileges
-k            invalidate cached credentials, forcing a password prompt on the next sudo (regardless of whether the timeout has expired)
-b            run the command in the background
-p            customize the password prompt (supports escapes such as %u and %h)
-H            set the HOME environment variable to the target user's home directory
-s            run the specified shell
-v            extend the cached password's validity by 5 minutes
## Sudoers allows particular users to run various commands as root, without needing the root password.
## Examples are provided at the bottom of this file for collections of related commands, which can then be delegated to particular users or groups.
## This file must be edited with the visudo command.
Format:
root ALL=(ALL) ALL
User Aliases  Host Aliases = (Runas Aliases)  Command Aliases
That is: who, on which hosts, running as which identity, may execute which commands.
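As a concrete (hypothetical) reading of that format, the fragment below would let user hadoop run the nginx service commands as root on any host without a password. The alias name and command list are our own illustration, not part of any stock sudoers file; always edit via visudo so syntax errors are caught before saving.

```
## Hypothetical example entry: delegate nginx control to user hadoop
Cmnd_Alias NGINX_CTL = /sbin/service nginx *
hadoop ALL=(root) NOPASSWD: NGINX_CTL
```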
User Aliases and Runas Aliases may be:
username
#uid
%groupname
%#gid
User_Alias/Runas_Alias
Host Aliases may be:
hostname
ip
172.16.8.6/16
netgroup
Host_Alias
Command Aliases may be:
commandname
directory
sudoedit
Cmnd_Alias
## Runas Aliases
# The identity under which the subsequent commands are run
Runas_Alias USER1 = root
## Host Aliases
## Groups of machines. You may prefer to use hostnames (wildcards can match an entire domain) or IP addresses instead.
# Host_Alias FILESERVERS = fs1, fs2
# Host_Alias MAILSERVERS = smtp, smtp2
## User Aliases
## These aren't often necessary, since you can use regular groups (from files, LDAP, NIS, etc.) in this file - just use %groupname rather than a USERALIAS
# User_Alias ADMINS = jsmith, mikem
## Command Aliases
## Collections of related commands
## Networking commands
# Cmnd_Alias NETWORKING = /sbin/route, /sbin/ifconfig, /bin/ping, /sbin/dhclient, /usr/bin/net, /sbin/iptables, /usr/bin/rfcomm, /usr/bin/wvdial, /sbin/iwconfig, /sbin/mii-tool
## Software installation and management commands
# Cmnd_Alias SOFTWARE = /bin/rpm, /usr/bin/up2date, /usr/bin/yum
## Service-related commands
# Cmnd_Alias SERVICES = /sbin/service, /sbin/chkconfig
## Updating the locate database
# Cmnd_Alias LOCATE = /usr/bin/updatedb
## Storage-related commands
# Cmnd_Alias STORAGE = /sbin/fdisk, /sbin/sfdisk, /sbin/parted, /sbin/partprobe, /bin/mount, /bin/umount
## Commands for delegating permissions
# Cmnd_Alias DELEGATING = /usr/sbin/visudo, /bin/chown, /bin/chmod, /bin/chgrp
## Process-related commands
# Cmnd_Alias PROCESSES = /bin/nice, /bin/kill, /usr/bin/kill, /usr/bin/killall
## Driver (kernel module) commands
# Cmnd_Alias DRIVERS = /sbin/modprobe
## Common development commands
Cmnd_Alias DEVELOP = /usr/bin/cd, /usr/bin/pwd, /usr/bin/mkdir, /usr/bin/rmdir, /usr/bin/basename, /usr/bin/dirname, /usr/bin/vi, /usr/bin/diff, /usr/bin/find, /usr/bin/cat, /usr/bin/tac, \
    /usr/bin/rev, /usr/bin/head, /usr/bin/tail, /usr/bin/tailf, /usr/bin/echo, /usr/bin/wc, /usr/bin/chown, /usr/bin/chmod, /usr/bin/chgrp, /usr/bin/gzip, /usr/bin/zcat, /usr/bin/gunzip, /usr/bin/tar, \
    /usr/sbin/ifconfig, /usr/bin/ping, /usr/bin/telnet, /usr/bin/netstat, /usr/bin/wget, /usr/bin/top, /usr/bin/cal, /usr/bin/date, /usr/bin/who, /usr/bin/ps, /usr/bin/clear, \
    /usr/bin/df, /usr/bin/du, /usr/bin/free, /usr/bin/crontab, /usr/bin/yum, /usr/bin/make, /usr/bin/rm, /usr/sbin/ldconfig
# Defaults specification
#
# Refuse to run if unable to disable echo on the tty, i.e. never show the password as it is typed
Defaults !visiblepw
#
# Preserving HOME has security implications, since many programs use it when searching for configuration files. Note that HOME is already set when the env_reset option is enabled, so this option only matters for configs that keep HOME in env_keep, disable env_reset, or otherwise leave HOME present.
Defaults always_set_home
#
Defaults env_reset,passwd_timeout=2.5,timestamp_timeout=4
Defaults env_keep = "COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS"
Defaults env_keep += "MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE"
Defaults env_keep += "LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES"
Defaults env_keep += "LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE"
Defaults env_keep += "LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY"
#
# Adding HOME to env_keep may enable a user to run unrestricted commands via sudo.
# Defaults env_keep += "HOME"
Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin
## Next comes the main part: which users can run what software on which machines (the sudoers file can be shared between many systems).
## Syntax:
## user MACHINE=COMMANDS
##
## The COMMANDS section may have other options added to it.
##
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
## Allow all users in the 'sys' group to run the commands in the NETWORKING, SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, and DRIVERS command groups
# %sys ALL = NETWORKING, SOFTWARE, SERVICES, STORAGE, DELEGATING, PROCESSES, LOCATE, DRIVERS
## Allow all members of group wheel to run all commands on the system
# %wheel ALL=(ALL) ALL
## Same thing, but without requiring a password
# %wheel ALL=(ALL) NOPASSWD: ALL
## Allow all members of the users group to mount and unmount the cdrom as root
# %users ALL=/sbin/mount /mnt/cdrom, /sbin/umount /mnt/cdrom
## Allow all members of the users group to shut down the system
# %users localhost=/sbin/shutdown -h now
## Read all configuration files from the /etc/sudoers.d directory (the # here does not mark a comment; this is the required syntax)
#includedir /etc/sudoers.d
## Add audit logging for sudo: for every user who runs a command via sudo, the details of the command are recorded in the specified log file
Defaults logfile=/var/log/sudo.log
Reposted from: https://www.cnblogs.com/wyzhou/p/10527535.html