Hadoop 2.6.5 + Hive cluster setup
Hadoop setup: https://blog.csdn.net/sinat_28371057/article/details/109135056
Hive setup
1. System environment
centos 7.3
Hadoop 2.7.3
jdk 1.8

MySQL is installed on the master node, and the Hive server also runs on master.
Hive release: https://mirrors.cnnic.cn/apache/hive/hive-2.3.4/apache-hive-2.3.4-bin.tar.gz (the walkthrough below uses the 2.3.0 tarball; the procedure is the same)
2. MySQL installation
This guide uses MySQL as the remote metastore database, deployed on the master node. (Note: the apt-get commands below are for Debian/Ubuntu; on CentOS 7 use yum, where the default repositories provide MariaDB as the MySQL-compatible server.)
2.1 Install MySQL
Install the MySQL server:
sudo apt-get install mysql-server
Install the MySQL client:
sudo apt-get install mysql-client
During installation a dialog will prompt you to set a root password. Be sure to remember it; you will need it to log in to MySQL and for all later configuration.
2.2 Check that the MySQL service is running
sudo netstat -tap | grep mysql
2.3 Enable remote access to MySQL
a) Edit the MySQL configuration file and comment out the line bind-address = 127.0.0.1:
sudo vim /etc/mysql/mysql.conf.d/mysqld.cnf
b) Log in to the MySQL shell as root and run the two commands below; the root password is the one set during MySQL installation:
mysql -u root -p
You will be prompted for the password entered during installation.
c) Grant the root account permission to connect remotely.
The following adds a remote-access user named root with password root:
grant all on *.* to root@'%' identified by 'root' with grant option;
d) Immediately afterwards, reload the privilege tables:
FLUSH PRIVILEGES;
e) Run quit to exit MySQL.
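Before quitting, you can sanity-check that the grant took effect with a couple of queries in the same MySQL session (illustrative; exact output depends on your installation):

```sql
-- List accounts and their allowed hosts; a root row with host '%' should appear
SELECT user, host FROM mysql.user;
-- Show the privileges granted to the remote root account
SHOW GRANTS FOR 'root'@'%';
```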
2.4 Restart MySQL
/etc/init.d/mysql restart
Once the restart succeeds, you can log in from other machines.
To uninstall MySQL:
1. sudo apt-get autoremove --purge mysql-server-5.0
2. sudo apt-get remove mysql-server
3. sudo apt-get autoremove mysql-server
4. sudo apt-get remove mysql-common   # this step is important
5. dpkg -l | grep ^rc | awk '{print $2}' | sudo xargs dpkg -P   # purge leftover configuration data
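The last pipeline purges packages left in the "rc" state (removed, but with residual config files on disk). Its filtering step can be illustrated against sample `dpkg -l` output (the listed packages here are made up for demonstration):

```shell
# Lines whose dpkg state starts with "rc" are removed packages with leftover
# config files; awk prints only their names (column 2) for xargs to purge.
printf 'rc  mysql-common  5.7.21  all\nii  bash  4.4  amd64\n' | awk '/^rc/{print $2}'
```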
3. Hive installation and configuration
3.1 Download the Hive tarball
wget https://mirrors.cnnic.cn/apache/hive/hive-2.3.0/apache-hive-2.3.0-bin.tar.gz
3.2 Extract it
tar -zxvf apache-hive-2.3.0-bin.tar.gz
3.3 Move the extracted directory to your chosen install location
mv apache-hive-2.3.0-bin /home/hadoop/software/
3.4 Configure the environment variables
sudo vim /etc/profile
export HIVE_HOME=/home/hadoop/software/apache-hive-2.3.0-bin
export PATH=$HIVE_HOME/bin:$PATH
3.5 Apply the environment variables
source /etc/profile
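A quick way to confirm the variables are in effect in the current shell (paths as configured above; the hive command itself will only resolve once the install is in place):

```shell
# Re-create the profile exports and verify HIVE_HOME resolves as expected
export HIVE_HOME=/home/hadoop/software/apache-hive-2.3.0-bin
export PATH=$HIVE_HOME/bin:$PATH
echo "$HIVE_HOME"                    # should print the Hive install directory
hive --version 2>/dev/null || true   # prints the Hive version once installed
```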
3.6 Create config files from the templates in conf/ and rename them
a) Copy hive-env.sh.template to hive-env.sh:
cp hive-env.sh.template hive-env.sh
Make hive-env.sh executable:
chmod 755 hive-env.sh
Edit conf/hive-env.sh and set:
HADOOP_HOME=/home/hadoop/software/hadoop-2.7.4
b) Copy hive-default.xml.template to hive-site.xml:
cp hive-default.xml.template hive-site.xml
Edit hive-site.xml and set the following properties:
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>username to use against metastore database</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
    <description>password to use against metastore database</description>
</property>
<!-- scratch directory configuration -->
<property>
    <name>hive.exec.local.scratchdir</name>
    <value>/home/hadoop/software/apache-hive-2.3.0-bin/iotmp</value>
    <description>Local scratch space for Hive jobs</description>
</property>
<property>
    <name>hive.downloaded.resources.dir</name>
    <value>/home/hadoop/software/apache-hive-2.3.0-bin/iotmp</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
</property>
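Optionally, you can also pin the HDFS warehouse location explicitly in hive-site.xml (a sketch showing the property with its default value; adjust the path to your cluster layout):

```xml
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
</property>
```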
Create the scratch directory referenced in hive-site.xml:
cd /home/hadoop/software/apache-hive-2.3.0-bin/
mkdir iotmp
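Both scratch properties above point at the same directory, so a single mkdir suffices; with -p the command is also idempotent and creates missing parents. A minimal sketch (using a /tmp stand-in path here for illustration; on the cluster, substitute /home/hadoop/software/apache-hive-2.3.0-bin):

```shell
# -p: create parent dirs as needed and do not fail if the dir already exists
HIVE_DIR=/tmp/apache-hive-2.3.0-bin    # stand-in for the real install path
mkdir -p "$HIVE_DIR/iotmp"
ls -d "$HIVE_DIR/iotmp"
```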
3.7 Edit the bin/hive-config.sh file and add:
export JAVA_HOME=/usr/local/jdk/jdk1.8.0_121
export HIVE_HOME=/home/hadoop/software/apache-hive-2.3.0-bin
export HADOOP_HOME=/home/hadoop/software/hadoop-2.7.4
3.8 Download mysql-connector-java-5.1.44-bin.jar and place it in /home/hadoop/software/apache-hive-2.3.0-bin/lib
wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.44.tar.gz
After extracting mysql-connector-java-5.1.44.tar.gz, copy mysql-connector-java-5.1.44-bin.jar into the lib directory, for example:
tar -zxvf mysql-connector-java-5.1.44.tar.gz
cp mysql-connector-java-5.1.44/mysql-connector-java-5.1.44-bin.jar /home/hadoop/software/apache-hive-2.3.0-bin/lib/
4. Distribute apache-hive-2.3.0-bin to the slave nodes
scp -r apache-hive-2.3.0-bin hadoop@slave1:/home/hadoop/software/
scp -r apache-hive-2.3.0-bin hadoop@slave2:/home/hadoop/software/
On the slave side, edit conf/hive-site.xml and add:
<property>
    <name>hive.metastore.uris</name>
    <value>thrift://master:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
</property>
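Since the slaves talk to the metastore over Thrift rather than to MySQL directly, the metastore URI is the essential slave-side setting; a minimal slave hive-site.xml can therefore look like this (a sketch; keep any additional properties your setup needs):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://master:9083</value>
    </property>
</configuration>
```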
5. Configuring the MySQL database for Hive
5.1 Log in to MySQL as root
mysql -u root -p
5.2 Create the hive user
mysql> CREATE USER 'hive' IDENTIFIED BY 'hive';
5.3 Grant privileges to the hive user
mysql> GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;
5.4 Reload the privilege tables
mysql> flush privileges;
mysql> quit
5.5 Log in as the hive user
hadoop@master:~$ mysql -u hive -p
5.6 Create the hive database
mysql> create database hive;
(Since the JDBC URL above sets createDatabaseIfNotExist=true, this step is optional, but creating the database explicitly does no harm.)
6. Starting Hive
6.1 Start Hadoop.
6.2 From the bin directory, initialize the metastore schema:
hadoop@master:~/software/apache-hive-2.3.0-bin/bin$ ./schematool -dbType mysql -initSchema
You can verify the result afterwards with ./schematool -dbType mysql -info.
6.3 Start the metastore service:
hive --service metastore &
Running jps on the master node should now show a RunJar process.
6.4 Access from the server side:
hadoop@master:~$ hive
6.5 Access from a client (slave):
hadoop@slave2:~$ hive
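As a final smoke test from either node, a short session in the hive CLI confirms that the metastore connection works end to end (an illustrative transcript; database names are arbitrary):

```sql
-- In the hive CLI: the built-in default database should be listed
show databases;
-- Creating a database exercises a metastore write; it should then appear on all nodes
create database if not exists smoke_test;
show databases;
```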