

Handling the Warning Seen After Installing the 64-bit Version of Hadoop


While using Hadoop you may run into the following warning:

WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable


A common explanation online is that the warning is caused by a mismatch between the architecture of the operating system and the architecture of the Hadoop build you downloaded. So the first thing to check is which architecture your Hadoop native library was built for:

1. Go to the Hadoop installation directory.

2. Change into the native library directory, lib/native.

3. Run the file command on libhadoop.so in that directory, as sketched below.

The output of file tells you definitively whether your Hadoop native library is 32-bit or 64-bit.
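
A minimal sketch of that check; the installation path below is the one that shows up in the debug log later in this post, so substitute your own:

cd /home/hadoop/app/hadoop-2.5.2/lib/native
file libhadoop.so
# A 64-bit build prints something like: ELF 64-bit LSB shared object, x86-64 ...
# A 32-bit build prints something like: ELF 32-bit LSB shared object, Intel 80386 ...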

If the architectures really do differ, the only fix is to download the Hadoop source and compile it yourself (a rough sketch of that build is shown below). That was not my situation, however: my operating system is 64-bit and so is my Hadoop build. So where does the warning come from?
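
For reference, rebuilding the native libraries from source on Hadoop 2.x looks roughly like the sketch below. It follows Hadoop's BUILDING.txt and assumes the usual prerequisites are installed (JDK, Maven, protobuf 2.5.0, cmake, zlib and openssl development headers); it is not a step this walkthrough actually performs.

# Sketch only: build a 64-bit distribution with native libraries from source.
tar -xzf hadoop-2.5.2-src.tar.gz
cd hadoop-2.5.2-src
mvn package -Pdist,native -DskipTests -Dtar
# The rebuilt distribution (including lib/native) lands under hadoop-dist/target/.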


To find out, edit /etc/profile so that Hadoop prints its debug log to the console:

1. Add the following line to /etc/profile:

export HADOOP_ROOT_LOGGER=DEBUG,console

Then source /etc/profile so the change takes effect:

source /etc/profile

2. Run any Hadoop command and look at the output it now prints. The relevant parts are shown below:

[hadoop@hadoop2 ~]$ hdfs dfs -ls /
17/01/13 14:04:39 DEBUG util.Shell: setsid exited with exit code 0
17/01/13 14:04:39 DEBUG conf.Configuration: parsing URL jar:file:/home/hadoop/app/hadoop-2.5.2/share/hadoop/common/hadoop-common-2.5.2.jar!/core-default.xml
17/01/13 14:04:39 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@6e90891
17/01/13 14:04:39 DEBUG conf.Configuration: parsing URL file:/home/hadoop/app/hadoop-2.5.2/etc/hadoop/core-site.xml
17/01/13 14:04:39 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@3021eb3f
17/01/13 14:04:40 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[Rate of successful kerberos logins and latency (milliseconds)], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
17/01/13 14:04:40 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[Rate of failed kerberos logins and latency (milliseconds)], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
17/01/13 14:04:40 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[GetGroups], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
17/01/13 14:04:40 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
17/01/13 14:04:41 DEBUG security.Groups: Creating new Groups object
17/01/13 14:04:41 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
17/01/13 14:04:41 DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: /home/hadoop/app/hadoop-2.5.2/lib/native/libhadoop.so: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /home/hadoop/app/hadoop-2.5.2/lib/native/libhadoop.so)
17/01/13 14:04:41 DEBUG util.NativeCodeLoader: java.library.path=/home/hadoop/app/hadoop-2.5.2/lib/native
17/01/13 14:04:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/01/13 14:04:41 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Falling back to shell based
17/01/13 14:04:41 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
17/01/13 14:04:41 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
17/01/13 14:04:41 DEBUG security.UserGroupInformation: hadoop login
17/01/13 14:04:41 DEBUG security.UserGroupInformation: hadoop login commit
17/01/13 14:04:41 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: hadoop
17/01/13 14:04:41 DEBUG security.UserGroupInformation: UGI loginUser:hadoop (auth:SIMPLE)
17/01/13 14:04:42 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
17/01/13 14:04:42 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
17/01/13 14:04:42 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
17/01/13 14:04:42 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
17/01/13 14:04:42 DEBUG hdfs.HAUtil: No HA service delegation token found for logical URI hdfs://ns1/
17/01/13 14:04:42 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
17/01/13 14:04:42 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
17/01/13 14:04:42 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
17/01/13 14:04:42 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
17/01/13 14:04:42 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
17/01/13 14:04:42 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@440b2a8c
17/01/13 14:04:42 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@2301799f
17/01/13 14:04:43 DEBUG shortcircuit.DomainSocketFactory: Both short-circuit local reads and UNIX domain socket are disabled.
17/01/13 14:04:43 DEBUG ipc.Client: The ping interval is 60000 ms.
17/01/13 14:04:43 DEBUG ipc.Client: Connecting to hadoop1/192.168.1.232:9000
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop1/192.168.1.232:9000 from hadoop: starting, having connections 1
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop1/192.168.1.232:9000 from hadoop sending #0
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop1/192.168.1.232:9000 from hadoop got value #0
17/01/13 14:04:43 INFO retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over hadoop1/192.168.1.232:9000. Trying to fail over immediately.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1688)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1258)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3684)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:803)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:779)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1411)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:707)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1785)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1068)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:265)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1623)
    at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
    at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:224)
    at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:207)
    at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
17/01/13 14:04:43 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
17/01/13 14:04:43 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@2301799f
17/01/13 14:04:43 DEBUG ipc.Client: The ping interval is 60000 ms.
17/01/13 14:04:43 DEBUG ipc.Client: Connecting to hadoop2/192.168.1.233:9000
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop2/192.168.1.233:9000 from hadoop sending #0
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop2/192.168.1.233:9000 from hadoop: starting, having connections 2
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop2/192.168.1.233:9000 from hadoop got value #0
17/01/13 14:04:43 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 8ms
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop2/192.168.1.233:9000 from hadoop sending #1
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop2/192.168.1.233:9000 from hadoop got value #1
17/01/13 14:04:43 DEBUG ipc.ProtobufRpcEngine: Call: getListing took 6ms
Found 3 items
-rw-r--r-- 3 hadoop supergroup 179161400 2017-01-09 13:35 /apache-storm-1.0.2.tar.gz
-rw-r--r-- 3 hadoop supergroup 147197492 2017-01-07 12:28 /hadoop-2.5.2.tar.gz
drwxr-xr-x - hadoop supergroup 0 2017-01-12 08:54 /hbase
17/01/13 14:04:43 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@2301799f
17/01/13 14:04:43 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@2301799f
17/01/13 14:04:43 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@2301799f
17/01/13 14:04:43 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@2301799f
17/01/13 14:04:43 DEBUG ipc.Client: Stopping client
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop2/192.168.1.233:9000 from hadoop: closed
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop2/192.168.1.233:9000 from hadoop: stopped, remaining connections 1
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop1/192.168.1.232:9000 from hadoop: closed
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop1/192.168.1.232:9000 from hadoop: stopped, remaining connections 0

The StandbyException and the retry around it can be ignored; they only appear because the first NameNode the client contacted happens to be the standby node of my HA cluster. The part that matters is the NativeCodeLoader output, and it boils down to a single problem:

GLIBC_2.14 was not found on the system.
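
If you want to double-check which glibc versions the native library actually requires, one way (an extra verification step, not part of the original walkthrough) is to inspect its versioned symbol references:

# Lists the GLIBC symbol versions that libhadoop.so depends on.
objdump -T /home/hadoop/app/hadoop-2.5.2/lib/native/libhadoop.so | grep GLIBC_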

So let's check the glibc version currently installed on the system by running the following command:
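
The original screenshot with the exact command is not reproduced here; a common way to check on a glibc-based system is:

# Prints the installed glibc version; on this system it reported 2.12.
ldd --version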


The result makes it clear: the system's glibc is version 2.12. That is too old, so it needs to be upgraded. Let's do that now.



The upgrade procedure is as follows:

It is best to perform the upgrade as root; otherwise permission problems can make the build or the installation fail.


1. If wget is not installed yet, install it first.

2. Download the following two tarballs:

wget http://ftp.gnu.org/gnu/glibc/glibc-2.15.tar.gz

wget http://ftp.gnu.org/gnu/glibc/glibc-ports-2.15.tar.gz

3. Extract both archives:

tar -xvf glibc-2.15.tar.gz

tar -xvf glibc-ports-2.15.tar.gz

4. Move glibc-ports-2.15 into the glibc-2.15 tree as its ports add-on directory:

mv glibc-ports-2.15 glibc-2.15/ports

5. Create a separate directory for an out-of-tree build:

mkdir glibc-build-2.15

6. Enter the build directory:

cd glibc-build-2.15

7. Configure the build:

../glibc-2.15/configure --prefix=/usr --disable-profile --enable-add-ons --with-headers=/usr/include --with-binutils=/usr/bin

8. Compile (this step is slow, so be patient):

make

9. Install:

make install


At this point the installation is complete. Check the glibc version again:
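
The original screenshot is omitted; repeating the same check as before should now report the new version:

# After make install the reported version should be 2.15.
ldd --version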



The upgrade is done, so run a Hadoop command again to verify:
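
For example, repeating the listing command from earlier should now start cleanly:

hdfs dfs -ls /
# With HADOOP_ROOT_LOGGER still set to DEBUG,console, the NativeCodeLoader debug output
# should now report that the native-hadoop library was loaded instead of the UnsatisfiedLinkError.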



The warning from before is gone. Finally, edit /etc/profile and remove the line we added earlier. Done!
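
One way to clean up the debug setting (a sketch; you can also just delete the line in an editor):

# Remove the export line added earlier and clear it from the current shell.
sed -i '/HADOOP_ROOT_LOGGER=DEBUG,console/d' /etc/profile
unset HADOOP_ROOT_LOGGER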
