Two scripts for quickly deploying Hadoop and Spark (especially on VMs created from VM templates)
These probably only suit my own deployment habits, and they only target CentOS 6 and earlier; when I have time I may still improve them.
1. For a virtual machine created from a VM template, how do you quickly sort out the network configuration?
#!/bin/bash
#usage: ./init_cdh_sys.sh hostname hostip
#generate the host from an ESXi template: must change the NIC MAC address and the hostname
net_rule_file="/etc/udev/rules.d/70-persistent-net.rules"
net_conf_file="/etc/sysconfig/network-scripts/ifcfg-eth0"
net_hostname_file="/etc/sysconfig/network"
netmask_conf="255.255.255.0"
gateway_conf="192.168.xx.1"
dns1_conf="a.b.c.d"
old_mac="00:50:56:BD:92:DA"   #replace this with the template's MAC address

#============================================
#resetup the 70-persistent-net.rules file
if (cat $net_rule_file | grep -i $old_mac); then
    new_mac_str=$(sed -n -e '/eth1/ p' $net_rule_file)
    #new_mac_1=${new_mac_str:64:17}
    new_mac=$(echo $new_mac_str | awk -F ',' '{print $4}' | awk -F '==' '{print $2}' | sed 's/\"//g')
    sed -i "/$old_mac/Id" $net_rule_file
    sed -i "s/eth1/eth0/g" $net_rule_file
else
    new_mac_str=$(sed -n -e '/eth0/ p' $net_rule_file)
    #new_mac_1=${new_mac_str:64:17}
    new_mac=$(echo $new_mac_str | awk -F ',' '{print $4}' | awk -F '==' '{print $2}' | sed 's/\"//g')
    echo "done 70-persistent-net.rules file!"
fi

#====================================
#change hostname
if [ ! -n "$1" ]; then
    echo "you have not input a hostname!"
    echo "usage: ./init_sys_nic.sh cm222.wdzjcdh.com 192.168.14.222"
else
    sed -i "s/localhost.localdomain/$1/g" $net_hostname_file
fi

#===================================
#resetup the NIC config file
if (cat $net_conf_file | grep $netmask_conf); then
    echo "done /etc/sysconfig/network-scripts/ifcfg-eth0"
elif [ ! -n "$2" ]; then
    echo "you have not input an ip address!"
else
    sed -i "/$old_mac/Id" $net_conf_file
    sed -i "s/dhcp/static/g" $net_conf_file
    echo "HWADDR=$new_mac" >> $net_conf_file
    echo "IPADDR=$2" >> $net_conf_file
    echo "NETMASK=$netmask_conf" >> $net_conf_file
    echo "GATEWAY=$gateway_conf" >> $net_conf_file
    echo "DNS1=$dns1_conf" >> $net_conf_file
    service network restart
    reboot
fi

2. For ssh-keygen -t rsa I have not yet thought of a good way to automate it (deploy it with Ansible?). I have been tempted lately: SaltStack is fast for application deployment, but for ops' own day-to-day tasks Ansible is also a fine option, since it is pure SSH.
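A minimal sketch of one way to handle item 2 without Ansible: generate the key pair once on the first node and push it to the others with ssh-copy-id. The hosts.txt file and its one-host-per-line format are assumptions for illustration, not part of the original scripts.

#!/bin/bash
#hypothetical helper: push the local root SSH key to every node listed in hosts.txt
#(one hostname or IP per line); assumes password auth is still enabled on the targets

#generate a passphrase-less key pair once, if one does not exist yet
[ -f /root/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa

#ssh-copy-id appends the public key to root's authorized_keys on each node,
#prompting once for each target's root password
while read -r node; do
    ssh-copy-id root@"$node"
done < hosts.txt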
3. Once the first Hadoop node is set up, how do you happily copy it to the other nodes? This script is not that convenient; the relevant directories probably need to be customized... it would be nicer if everything were unified under a single directory :). Also, scp -r $var_folder root@$1:/usr/local/ is written hideously; at the time I only cared about speed. A sample host file and invocation are shown after the script.
#!/bin/bash
echo "Usage: ./init_hadoop_spark -f demo-data"

cp_file=("/etc/hosts" "/etc/profile.d/env.sh")
cp_folder=("/root/.ssh/" "/usr/local/scala-2.11.4" "/usr/local/hadoop-2.6.0" "/usr/local/spark-1.2.2-bin-hadoop2.4" "/usr/local/jdk1.7.0_71")

function cp_file_folder() {
    for var_file in ${cp_file[@]}; do
        scp $var_file root@$1:$var_file
    done
    for var_folder in ${cp_folder[@]}; do
        scp -r $var_folder root@$1:/usr/local/
    done
}

while getopts :f:h file_name
do
    case $file_name in
        f)
            cat $OPTARG | while read line
            do
                arr_var=(${line})
                cp_file_folder ${arr_var[0]}
                #run_docker ${arr_var[0]} ${arr_var[1]} ${arr_var[2]}
            done
            sleep 2
            ;;
        h)
            echo "Usage: ./init_hadoop_spark -f demo-data"
            exit 1
            ;;
        \?)
            echo "Usage: ./init_hadoop_spark -f demo-data"
            exit 1
            ;;
        :)
            echo "Usage: ./init_hadoop_spark -f demo-data"
            exit 1
            ;;
    esac
done
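For reference, a hypothetical demo-data file and invocation; the example hosts are assumptions, and only the first field of each line is actually used (the extra fields were only consumed by the commented-out run_docker call):

#demo-data: one target node per line, first field is the host the files are copied to
192.168.14.223
192.168.14.224

#push /etc/hosts, env.sh, the JDK, Scala, Hadoop and Spark to every listed node
./init_hadoop_spark -f demo-data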
Reposted from: https://www.cnblogs.com/aguncn/p/4461997.html