

Bad connect ack with firstBadLink 192.168.*.*:50010


Today, while setting up Hadoop 2.0, running `hadoop fs -put` failed with the error below. The solution I found online (reposted here) fixed the problem.
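For context, the same block-write path is exercised when writing through the HDFS Java API, not just the fs shell, so the error can be reproduced programmatically. Below is a minimal sketch; the NameNode URI (hdfs://namenode:9000) and the target path are hypothetical placeholders, not values from the original post:

// Minimal HDFS write that exercises the same block-write pipeline as `hadoop fs -put`.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsPutExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:9000"); // hypothetical NameNode address
        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path("/tmp/test.txt"))) {
            // This write triggers DFSClient's createBlockOutputStream under the hood.
            out.writeBytes("hello hdfs\n");
        }
    }
}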


Reposted from: http://lykke.iteye.com/blog/1320558

Exception in thread "main" java.io.IOException: Bad connect ack with firstBadLink 192.168.1.14:50010
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2903)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2826)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

This error is thrown when putting a file into HDFS. The check that raises it is in DFSClient (excerpted below; the original post truncates the method at the catch block):



// connects to the first datanode in the pipeline
// Returns true if success, otherwise return failure.
//
private boolean createBlockOutputStream(DatanodeInfo[] nodes, String client,
                boolean recoveryFlag) {
  String firstBadLink = "";
  if (LOG.isDebugEnabled()) {
    for (int i = 0; i < nodes.length; i++) {
      LOG.debug("pipeline = " + nodes[i].getName());
    }
  }

  // persist blocks on namenode on next flush
  persistBlocks = true;

  try {
    LOG.debug("Connecting to " + nodes[0].getName());
    InetSocketAddress target = NetUtils.createSocketAddr(nodes[0].getName());
    s = socketFactory.createSocket();
    int timeoutValue = 3000 * nodes.length + socketTimeout;
    NetUtils.connect(s, target, timeoutValue);
    s.setSoTimeout(timeoutValue);
    s.setSendBufferSize(DEFAULT_DATA_SOCKET_SIZE);
    LOG.debug("Send buf size " + s.getSendBufferSize());
    long writeTimeout = HdfsConstants.WRITE_TIMEOUT_EXTENSION * nodes.length +
                        datanodeWriteTimeout;

    //
    // Xmit header info to datanode
    //
    DataOutputStream out = new DataOutputStream(
        new BufferedOutputStream(NetUtils.getOutputStream(s, writeTimeout),
                                 DataNode.SMALL_BUFFER_SIZE));
    blockReplyStream = new DataInputStream(NetUtils.getInputStream(s));

    out.writeShort( DataTransferProtocol.DATA_TRANSFER_VERSION );
    out.write( DataTransferProtocol.OP_WRITE_BLOCK );
    out.writeLong( block.getBlockId() );
    out.writeLong( block.getGenerationStamp() );
    out.writeInt( nodes.length );
    out.writeBoolean( recoveryFlag );       // recovery flag
    Text.writeString( out, client );
    out.writeBoolean(false); // Not sending src node information
    out.writeInt( nodes.length - 1 );
    for (int i = 1; i < nodes.length; i++) {
      nodes[i].write(out);
    }
    checksum.writeHeader( out );
    out.flush();

    // receive ack for connect
    firstBadLink = Text.readString(blockReplyStream);
    if (firstBadLink.length() != 0) {
      throw new IOException("Bad connect ack with firstBadLink " + firstBadLink);
    }

    blockStream = out;
    return true;     // success

  } catch (IOException ie) {



The message means the client never received a clean ack from the datanode pipeline: after sending the write-block header, the client reads firstBadLink from the reply stream, and a non-empty value (the address of the first datanode that could not be connected to) raises the exception. I resolved it with two changes:


1) Stop the firewall: /etc/init.d/iptables stop
2) Disable SELinux: set SELINUX=disabled in /etc/selinux/config (this takes effect after a reboot; setenforce 0 disables it immediately for the current session).

In general, ack errors like this in Hadoop usually come down to a firewall that was never shut down: the client can talk to the NameNode, but port 50010, the datanode data-transfer port shown in the error, is blocked.
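Before changing anything, you can confirm the diagnosis by probing the datanode port directly. Below is a minimal sketch in plain Java with no Hadoop dependency; the default host is the datanode from the stack trace above, and the 3-second timeout is an arbitrary choice:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class DataNodePortProbe {
    public static void main(String[] args) {
        // Host from the stack trace above; override via the first command-line argument.
        String host = args.length > 0 ? args[0] : "192.168.1.14";
        int port = 50010; // default datanode data-transfer port
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), 3000); // 3s timeout
            System.out.println(host + ":" + port + " is reachable");
        } catch (IOException e) {
            // A timeout or "no route to host" here usually means a firewall is in the way.
            System.out.println("Cannot reach " + host + ":" + port + " -> " + e.getMessage());
        }
    }
}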

Reposted from: https://blog.51cto.com/gcjava/1426492
