
Hadoop and HBase: Basic Configuration

After several weeks of effort I finally decided to give up on the current stable releases and deploy the Sleuthkit-Hadoop system on older versions instead. Until yesterday I was still wrestling with an "Inconsistent configuration" error. Since a colleague has already had success with the older versions, I may as well start from a combination that is known to work; setting the current problem aside and coming back to it later will probably suggest a better solution anyway. So today is the first step of redeploying the SH system: installing Hadoop and HBase. My earlier installation notes were rather scattered, so this is a good opportunity to walk through the whole procedure again.

1. Installing Hadoop
The release used here is hadoop-1.0.3, a fairly old version that can be downloaded from the official Hadoop site. Before installing Hadoop the system environment needs to be prepared:
<1> Install Java 1.6. I had previously installed Java 1.7 without success and am not sure whether the Java version was to blame; either way, this time I took the conservative route, registered on the Oracle site, downloaded jdk-6u45-linux-i586.bin, and unpacked it to get jdk1.6.0_45. Since this is a .bin file, chmod it to 775 so it is executable and then run ./filename.bin; a short sketch follows.
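As a rough sketch, assuming the JDK is kept under /home/hadoop/platform (that location is just this setup's choice, not anything required), unpacking it and exporting JAVA_HOME might look like this:

$ chmod 775 jdk-6u45-linux-i586.bin
$ ./jdk-6u45-linux-i586.bin                      # unpacks into ./jdk1.6.0_45
$ mv jdk1.6.0_45 /home/hadoop/platform/          # assumed install location, adjust as needed
$ echo 'export JAVA_HOME=/home/hadoop/platform/jdk1.6.0_45' >> ~/.bashrc
$ echo 'export PATH=$JAVA_HOME/bin:$PATH' >> ~/.bashrc
$ source ~/.bashrc
$ java -version                                  # should report 1.6.0_45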
<2> Install ssh and set up passwordless ssh login for the hadoop account.
<2.1> Run sudo apt-get install ssh rsync, and install an OpenJDK 6 package (openjdk-6-jdk on Ubuntu) so that the jps command is available.
<2.2> Configuring passwordless login to the local machine takes two steps:
Run ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
   -t chooses the key type; dsa and rsa are both accepted;
   -P sets the passphrase; the two single quotes mean an empty passphrase;
   -f sets the file the key is written to.
Run cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
This appends the public key to the local authorized_keys file; once both steps are done, ssh localhost should log in without asking for a password (a consolidated sketch follows).
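A minimal end-to-end sketch of the passwordless-login setup (the chmod on authorized_keys is an extra precaution that some sshd configurations insist on, not part of the original notes):

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys     # sshd ignores keys kept in group/world-writable files
$ ssh localhost                        # should log in without a password prompt
$ exit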
Next, move into the unpacked hadoop-1.0.3 directory to edit its configuration; pseudo-distributed mode mainly needs the following configuration files.
<3> conf/hadoop-env.sh: configures the environment Hadoop runs in. The only change needed here is to point JAVA_HOME at the jdk1.6.0_45 directory (the uncommented export line below).


# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
export JAVA_HOME=/home/hadoop/platform/jdk1.6.0_45

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000

# Extra Java runtime options.  Empty by default.
# export HADOOP_OPTS=-server

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS

# Extra ssh options.  Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"

# Where log files are stored.  $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

# File naming remote slave hosts.  $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves

# host:path where hadoop code should be rsync'd from.  Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1

# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/var/hadoop/pids

# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HADOOP_NICENESS=10
<4> conf/core-site.xml
The properties that matter here are fs.default.name (the NameNode URI) and hadoop.tmp.dir (the base directory HDFS uses for temporary data). hadoop.tmp.dir may be left unset, in which case everything lands under the default /tmp and is lost on every reboot.


<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hdfs/tmp</value>
    </property>
</configuration>
<5> conf/hdfs-site.xml
dfs.replication sets how many replicas of each block are kept; the default is 3, but since this is a single-machine pseudo-distributed setup it is set to 1. dfs.name.dir and dfs.data.dir are very important: they are the local directories that hold the NameNode metadata and the DataNode blocks. Getting them wrong causes a string of errors later. They can also be left at the defaults under /tmp, but again a reboot then loses the data. (A short directory-preparation sketch follows the XML below.)


<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/home/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/home/hadoop/hdfs/data</value>
    </property>
</configuration>
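Before the first format it is worth creating these local directories by hand, owned by the account that runs the daemons; a rough sketch (the user name hadoop and the /home/hadoop/hdfs prefix are simply this setup's choices):

$ mkdir -p /home/hadoop/hdfs/tmp /home/hadoop/hdfs/name /home/hadoop/hdfs/data
$ sudo chown -R hadoop:hadoop /home/hadoop/hdfs    # the Hadoop daemons must be able to write here
$ chmod -R 755 /home/hadoop/hdfs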
<6> conf/mapred-site.xml
The only property needed here is mapred.job.tracker, which tells MapReduce where the JobTracker listens.


<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
</configuration>
Then it is time for a test run. Add hadoop-1.0.3/bin to the PATH in /etc/profile so Hadoop commands can be run from anywhere. After start-all.sh there was a problem: jps showed that the NameNode had not started, and hadoop namenode -format did not succeed either. The logs showed:

The message says that the HDFS storage directory either does not exist or is not accessible:
FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/hadoop/hdfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
Checking showed that the hdfs directory had in fact been created, so the problem had to be permissions. After changing the permissions on the hadoop directory under /home from 755 to 775 and re-running Hadoop, it came up successfully:
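A sketch of the sequence that finally worked, with the PATH line standing in for the /etc/profile change mentioned above (paths match this setup):

$ export PATH=$PATH:/home/hadoop/hadoop-1.0.3/bin    # or append this to /etc/profile
$ chmod 775 /home/hadoop
$ hadoop namenode -format                            # initialises /home/hadoop/hdfs/name
$ start-all.sh
$ jps    # should list NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker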

Before moving on to HBase, test the pseudo-distributed Hadoop the way the official quick start does: copy everything under conf into an input directory on HDFS, run the examples jar writing its results to an output directory on HDFS, and finally read the results back from output:



$ bin/hadoop fs -put conf input

$ bin/hadoop jar hadoop-examples-*.jar grep input output 'dfs[a-z.]+'

Copy the output files from the distributed filesystem to the local filesystem and examine them:
$ bin/hadoop fs -get output output
$ cat output/*

or

View the output files on the distributed filesystem:
$ bin/hadoop fs -cat output/*

When you're done, stop the daemons with:
$ bin/stop-all.sh
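Another quick way to confirm the daemons are up, not in the original notes but standard for Hadoop 1.x, is the built-in web interfaces: the NameNode serves one on port 50070 and the JobTracker on port 50030.

$ curl -sI http://localhost:50070/    # expect an HTTP response if the NameNode web UI is running
$ curl -sI http://localhost:50030/    # expect an HTTP response if the JobTracker web UI is running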

2. Installing HBase
The version chosen here is hbase-0.90.0; a web search for that release finds it, and unpacking the download gives hbase-0.90.0. As with Hadoop, the work is mainly editing configuration files:
<1> Edit /etc/hosts and change the 127.0.1.1 hadoop entry to 127.0.0.1.
<2> Raise the ulimits:
Edit /etc/security/limits.conf and add:
hadoop - nofile 32768
hadoop soft/hard nproc 32000
Edit /etc/pam.d/common-session and add:
session required pam_limits.so
(A quick check of the new limits is sketched below.)
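These limits only apply to new login sessions, so after logging out and back in a quick check might be:

$ ulimit -n    # expect 32768 (max open files)
$ ulimit -u    # expect 32000 (max user processes)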
<3> Edit conf/hbase-env.sh
The main changes are setting JAVA_HOME, setting HBASE_LOG_DIR, and enabling the ZooKeeper instance bundled with HBase (HBASE_MANAGES_ZK=true).


#
#/**
# * Copyright 2007 The Apache Software Foundation
# *
# * Licensed to the Apache Software Foundation (ASF) under one
# * or more contributor license agreements.  See the NOTICE file
# * distributed with this work for additional information
# * regarding copyright ownership.  The ASF licenses this file
# * to you under the Apache License, Version 2.0 (the
# * "License"); you may not use this file except in compliance
# * with the License.  You may obtain a copy of the License at
# *
# *     http://www.apache.org/licenses/LICENSE-2.0
# *
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */

# Set environment variables here.

# The java implementation to use.  Java 1.6 required.
export JAVA_HOME=/home/hadoop/platform/jdk1.6.0_45

# Extra Java CLASSPATH elements.  Optional.
# export HBASE_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
# export HBASE_HEAPSIZE=1000

# Extra Java runtime options.
# Below are what we set by default.  May only work with SUN JVM.
# For more on why as well as other possible settings,
# see http://wiki.apache.org/hadoop/PerformanceTuning
export HBASE_OPTS="$HBASE_OPTS -ea -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"

# Uncomment below to enable java garbage collection logging.
# export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"

# Uncomment and adjust to enable JMX exporting
# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
#
# export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"

# File naming hosts on which HRegionServers will run.  $HBASE_HOME/conf/regionservers by default.
# export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers

# Extra ssh options.  Empty by default.
# export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"

# Where log files are stored.  $HBASE_HOME/logs by default.
export HBASE_LOG_DIR=${HBASE_HOME}/logs

# A string representing this instance of hbase. $USER by default.
# export HBASE_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HBASE_NICENESS=10

# The directory where pid files are stored. /tmp by default.
# export HBASE_PID_DIR=/var/hadoop/pids

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HBASE_SLAVE_SLEEP=0.1

# Tell HBase whether it should manage it's own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=true
<4> Edit conf/hbase-site.xml
Here HBase is switched into pseudo-distributed mode: besides hbase.rootdir (the HBase root directory, whose host and port must match fs.default.name in core-site.xml), hbase.cluster.distributed and hbase.zookeeper.quorum also have to be set; zookeeper.znode.parent gives the znode under which HBase keeps its data in ZooKeeper.


<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
 * Copyright 2010 The Apache Software Foundation
 *
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost</value>
    </property>
    <property>
        <name>zookeeper.znode.parent</name>
        <value>/hbase</value>
    </property>
</configuration>

<5> Test run
Add the hbase bin directory to PATH in the same way, then run start-hbase.sh. The HMaster did not come up; the log showed:
FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown. java.io.IOException:
Call to localhost/127.0.0.1:9000 failed on local exception: java.io.EOFException

The cause is a version mismatch between HBase's bundled Hadoop client and the running Hadoop: the hadoop-core-0.20-append-r1056947.jar under hbase-0.90.0/lib has to be replaced with the hadoop-core jar from the Hadoop installation. Running again then produced a new error: java.lang.NoClassDefFoundError

Evidently some classes cannot be found, so all the jar files under hadoop-1.0.3/lib need to be copied into the hbase lib directory as well. After that HBase starts cleanly (both fixes are sketched below):
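A sketch of both fixes, assuming HADOOP_HOME and HBASE_HOME point at the two unpacked trees (those variables are only used here for brevity):

$ rm $HBASE_HOME/lib/hadoop-core-0.20-append-r1056947.jar
$ cp $HADOOP_HOME/hadoop-core-*.jar $HBASE_HOME/lib/    # hadoop-core-1.0.3.jar for this Hadoop release
$ cp $HADOOP_HOME/lib/*.jar $HBASE_HOME/lib/            # supplies the classes behind the NoClassDefFoundError
$ start-hbase.sh
$ jps    # should now also show HQuorumPeer, HMaster and HRegionServer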

<6> Running the HBase shell
HBase provides a shell interface for quick testing: create a new table with the create command, add rows with put, and inspect the table with scan, for example:
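A tiny example session (the table and column-family names are made up for illustration):

$ hbase shell
hbase> create 'test', 'cf'
hbase> put 'test', 'row1', 'cf:a', 'value1'
hbase> scan 'test'
hbase> exit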

That wraps up the basic installation of Hadoop and HBase; tomorrow the Sleuthkit configuration begins.
