

A Hadoop Practice Diary from Scratch (Changing the Hadoop Data Storage Location)


I am using an Aliyun (Alibaba Cloud) host and discovered that the system disk is only 20 GB, though it came bundled with a 130 GB data disk. (I would rather have had a single 150 GB system disk; Aliyun's rationale is that keeping data and system separate stops them interfering with each other.) I had originally planned to upgrade the disk, but instead I brought the 130 GB disk online and mounted it at a directory (/ad). This requires changing the Hadoop configuration; the Hive configuration does not need to change.
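Bringing the data disk online is not shown in the original post; here is a minimal sketch, assuming the disk appears as /dev/xvdb (an assumption — check fdisk -l for the real device name) and that an ext4 filesystem is acceptable:

# Hypothetical mount sequence for the 130 GB data disk.
# WARNING: mkfs destroys any existing data on the device.
sudo mkfs.ext4 /dev/xvdb
sudo mkdir -p /ad                 # the mount point used by the configs below
sudo mount /dev/xvdb /ad
# Persist the mount across reboots.
echo '/dev/xvdb /ad ext4 defaults 0 0' | sudo tee -a /etc/fstab

With the disk mounted at /ad, the Hadoop storage directories can be moved onto it. Below are the defaults that CDH4 configured for us.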

(1) /etc/hadoop/conf/hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.safemode.extension</name>
    <value>0</value>
  </property>
  <property>
    <name>dfs.safemode.min.datanodes</name>
    <value>1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/lib/hadoop-hdfs/cache/${user.name}</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///var/lib/hadoop-hdfs/cache/${user.name}/dfs/name</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:///var/lib/hadoop-hdfs/cache/${user.name}/dfs/namesecondary</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///var/lib/hadoop-hdfs/cache/${user.name}/dfs/data</value>
  </property>
</configuration>

I changed the four directory properties (shown in red in the original post: hadoop.tmp.dir, dfs.namenode.name.dir, dfs.namenode.checkpoint.dir, and dfs.datanode.data.dir), repointing them from /var/lib to /ad. After the change:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.safemode.extension</name>
    <value>0</value>
  </property>
  <property>
    <name>dfs.safemode.min.datanodes</name>
    <value>1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/ad/hadoop-hdfs/cache/${user.name}</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///ad/hadoop-hdfs/cache/${user.name}/dfs/name</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:///ad/hadoop-hdfs/cache/${user.name}/dfs/namesecondary</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///ad/hadoop-hdfs/cache/${user.name}/dfs/data</value>
  </property>
</configuration>
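One thing the original post does not show: the new directories must exist on the data disk, with ownership the HDFS daemons can use, before the next restart. A minimal sketch, assuming the hdfs:hdfs owner and the /tmp-like permissions that CDH4's packages give /var/lib/hadoop-hdfs/cache:

# Assumed preparation (not from the original post): recreate the cache
# root under /ad so the NameNode/DataNode (running as user hdfs) can write.
sudo mkdir -p /ad/hadoop-hdfs/cache
sudo chown -R hdfs:hdfs /ad/hadoop-hdfs
# hadoop.tmp.dir expands ${user.name}, so let any user create its subdir.
sudo chmod 1777 /ad/hadoop-hdfs/cache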


(2) /etc/hadoop/conf/mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>localhost:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>localhost:19888</value>
  </property>
  <property>
    <description>To set the value of tmp directory for map and reduce tasks.</description>
    <name>mapreduce.task.tmp.dir</name>
    <value>/var/lib/hadoop-mapreduce/cache/${user.name}/tasks</value>
  </property>
</configuration>

Here only mapreduce.task.tmp.dir changes (red in the original post), again from /var/lib to /ad. After the change:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:8021</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>localhost:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>localhost:19888</value>
  </property>
  <property>
    <description>To set the value of tmp directory for map and reduce tasks.</description>
    <name>mapreduce.task.tmp.dir</name>
    <value>/ad/hadoop-mapreduce/cache/${user.name}/tasks</value>
  </property>
</configuration>
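The same caveat applies to the task tmp directory. Since mapreduce.task.tmp.dir also expands ${user.name} per job-submitting user, a world-writable parent with the sticky bit (mirroring /tmp semantics) is one plausible setup — my assumption, not something the original post specifies:

# Assumed preparation (not from the original post): let each job's user
# create its own ${user.name} subdirectory under the shared cache root.
sudo mkdir -p /ad/hadoop-mapreduce/cache
sudo chmod 1777 /ad/hadoop-mapreduce/cache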


(3) /etc/hadoop/conf/yarn-site.xml (the original post repeats "mapred-site.xml" here, but the properties below are clearly YARN's)

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>127.0.0.1:8031</value>
    <description>host is the hostname of the resource manager and port is the port on which the NodeManagers contact the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.dispatcher.exit-on-error</name>
    <value>true</value>
  </property>
  <property>
    <description>List of directories to store localized files in.</description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/var/lib/hadoop-yarn/cache/${user.name}/nm-local-dir</value>
  </property>
  <property>
    <description>Where to store container logs.</description>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/var/log/hadoop-yarn/containers</value>
  </property>
  <property>
    <description>Where to aggregate logs to.</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/var/log/hadoop-yarn/apps</value>
  </property>
  <property>
    <description>Classpath for typical applications.</description>
    <name>yarn.application.classpath</name>
    <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$YARN_HOME/*,$YARN_HOME/lib/*</value>
  </property>
</configuration>

Here only yarn.nodemanager.local-dirs changes (red in the original post); the log directories stay on the system disk. After the change:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>127.0.0.1:8031</value>
    <description>host is the hostname of the resource manager and port is the port on which the NodeManagers contact the Resource Manager.</description>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.dispatcher.exit-on-error</name>
    <value>true</value>
  </property>
  <property>
    <description>List of directories to store localized files in.</description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/ad/hadoop-yarn/cache/${user.name}/nm-local-dir</value>
  </property>
  <property>
    <description>Where to store container logs.</description>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/var/log/hadoop-yarn/containers</value>
  </property>
  <property>
    <description>Where to aggregate logs to.</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/var/log/hadoop-yarn/apps</value>
  </property>
  <property>
    <description>Classpath for typical applications.</description>
    <name>yarn.application.classpath</name>
    <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$YARN_HOME/*,$YARN_HOME/lib/*</value>
  </property>
</configuration>
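The NodeManager's local-dirs root likewise needs to exist before YARN restarts. The yarn:yarn owner below mirrors how CDH4's packages set up /var/lib/hadoop-yarn; again this is my addition, not part of the original post:

# Assumed preparation (not from the original post): the NodeManager runs
# as user yarn and must be able to write its nm-local-dir tree.
sudo mkdir -p /ad/hadoop-yarn/cache
sudo chown -R yarn:yarn /ad/hadoop-yarn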


PS: here is the script I use to restart Hadoop and Hive after a change like this:

# stop hive, yarn and hdfs first
echo "@@@ stop yarn and hdfs first"
sudo service hive-metastore stop
sudo service hive-server stop
sudo service hadoop-yarn-resourcemanager stop
sudo service hadoop-yarn-nodemanager stop
sudo service hadoop-mapreduce-historyserver stop
for x in `cd /etc/init.d ; ls hadoop-hdfs-*`
do
  sudo service $x stop
done

# clear and format
echo "@@@ clear and format"
sudo rm -rf /tmp/*
sudo rm -rf /ad/hadoop-hdfs/cache/*
sudo rm -rf /ad/hadoop-yarn/cache/*
sudo rm -rf /ad/hadoop-mapreduce/cache/*
sudo -u hdfs hdfs namenode -format

# start hdfs
echo "@@@ start hdfs"
for x in `cd /etc/init.d ; ls hadoop-hdfs-*`
do
  sudo service $x start
done

# mkdir (-p added: on a freshly formatted FS the parent dirs don't exist yet)
echo "@@@ mkdir"
sudo -u hdfs hadoop fs -rm -r /tmp
sudo -u hdfs hadoop fs -mkdir /tmp
sudo -u hdfs hadoop fs -chmod -R 1777 /tmp
sudo -u hdfs hadoop fs -mkdir -p /tmp/hadoop-yarn/staging
sudo -u hdfs hadoop fs -chmod -R 1777 /tmp/hadoop-yarn/staging
sudo -u hdfs hadoop fs -mkdir -p /tmp/hadoop-yarn/staging/history/done_intermediate
sudo -u hdfs hadoop fs -chmod -R 1777 /tmp/hadoop-yarn/staging/history/done_intermediate
sudo -u hdfs hadoop fs -chown -R mapred:mapred /tmp/hadoop-yarn/staging
sudo -u hdfs hadoop fs -mkdir -p /var/log/hadoop-yarn
sudo -u hdfs hadoop fs -chown yarn:mapred /var/log/hadoop-yarn
sudo -u hdfs hadoop fs -ls -R /

# start yarn
echo "@@@ start yarn"
sudo service hadoop-yarn-resourcemanager start
sudo service hadoop-yarn-nodemanager start
sudo service hadoop-mapreduce-historyserver start
sudo -u hdfs hadoop fs -mkdir -p /user/maminghan
sudo -u hdfs hadoop fs -chown maminghan /user/maminghan

# start hive
sudo service hive-metastore start
sudo service hive-server start
sudo -u hdfs hadoop fs -mkdir -p /user/hive
sudo -u hdfs hadoop fs -chown hive /user/hive
sudo -u hdfs hadoop fs -mkdir /tmp
sudo -u hdfs hadoop fs -chmod 777 /tmp   # already exists
sudo -u hdfs hadoop fs -chmod o+t /tmp
sudo -u hdfs hadoop fs -mkdir /data
sudo -u hdfs hadoop fs -chown hdfs /data
sudo -u hdfs hadoop fs -chmod 777 /data
sudo -u hdfs hadoop fs -chmod o+t /data
sudo chown -R hive:hive /ad/hive
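Once the script finishes, a quick sanity check (my addition, not part of the original post) confirms the new disk is really the one being used:

# HDFS capacity should now reflect the ~130 GB data disk.
sudo -u hdfs hdfs dfsadmin -report | head -n 20
# Write a small file and confirm block files land under /ad.
sudo -u hdfs hadoop fs -put /etc/hosts /tmp/mount-check
ls /ad/hadoop-hdfs/cache/hdfs/dfs/data   # DataNode runs as hdfs, so ${user.name}=hdfs
df -h /ad                                # usage on the data disk should tick up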


Reposted from: https://www.cnblogs.com/aquastar/p/3607570.html

