

crawler4j source code study (2): a crawler that collects rental listings from Ziroom

This post walks through a crawler4j-based spider that crawls the Ziroom rental listing pages under sh.ziroom.com/z/nl/, parses each listing with Jsoup, and writes the discovered links and the extracted listing data (image, price, address, description) to CSV files.
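The ZiroomCrawler class shown below depends on a ZiroomCrawlStat helper that the original post does not include. The following is a minimal sketch inferred from the methods the crawler calls (incProcessedPages, incTotalLinks, incTotalTextSize and the matching getters); the field names and types are assumptions, not the author's original class.

// Minimal stand-in for the crawl-statistics holder referenced by ZiroomCrawler.
// Reconstructed from usage in the crawler below; not part of the original post.
public class ZiroomCrawlStat {

    private int totalProcessedPages;
    private int totalLinks;
    private long totalTextSize;

    public void incProcessedPages() {
        totalProcessedPages++;
    }

    public void incTotalLinks(int count) {
        totalLinks += count;
    }

    public void incTotalTextSize(long size) {
        totalTextSize += size;
    }

    public int getTotalProcessedPages() {
        return totalProcessedPages;
    }

    public int getTotalLinks() {
        return totalLinks;
    }

    public long getTotalTextSize() {
        return totalTextSize;
    }
}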

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import java.io.UnsupportedEncodingException;
import java.util.Set;
import java.util.regex.Pattern;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

import com.csvreader.CsvWriter;

import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import edu.uci.ics.crawler4j.url.WebURL;

/**
 * @date 2016-08-20 18:13:24
 * @since JDK 1.8
 */
public class ZiroomCrawler extends WebCrawler {

    /** URL filter: skip style sheets, scripts and media files. */
    private final static Pattern FILTERS = Pattern.compile(".*(\\.(css|js|bmp|gif|jpe?g|ico"
            + "|png|tiff?|mid|mp2|mp3|mp4" + "|wav|avi|mov|mpeg|ram|m4v|pdf"
            + "|rm|smil|wmv|swf|wma|zip|rar|gz))$");

    /** Path of the CSV file that stores the scraped listing data. */
    private final static String DATA_PATH = "data/crawl/ziroom.csv";

    /** Path of the CSV file that stores the outgoing links. */
    private final static String LINK_PATH = "data/crawl/link.csv";

    // private static final Logger logger = LoggerFactory.getLogger(ZiroomCrawler.class);

    /** Only pages under this prefix are crawled. */
    private final static String URL_PREFIX = "http://sh.ziroom.com/z/nl/";

    private final File fLinks;
    private final File fDatas;
    private CsvWriter csvLinks;
    private CsvWriter csvDatas;

    /** Per-crawler statistics. */
    ZiroomCrawlStat myCrawlStat;

    public ZiroomCrawler() throws IOException {
        myCrawlStat = new ZiroomCrawlStat();
        fLinks = new File(DATA_PATH);
        fDatas = new File(LINK_PATH);
        if (fLinks.isFile()) {
            fLinks.delete();
        }
        if (fDatas.isFile()) {
            fDatas.delete();
        }
        // Write the header row of the link file.
        csvDatas = new CsvWriter(new FileWriter(fDatas, true), ',');
        csvDatas.write("Request URL");
        csvDatas.endRecord();
        csvDatas.close();
        // Write the header row of the listing-data file.
        csvLinks = new CsvWriter(new FileWriter(fLinks, true), ',');
        csvLinks.write("Image");
        csvLinks.write("Price");
        csvLinks.write("Address");
        csvLinks.write("Description");
        csvLinks.endRecord();
        csvLinks.close();
    }

    public void dumpMyData() {
        final int id = getMyId();
        // You can configure the logger to also write to a file.
        logger.info("Crawler {} > Processed Pages: {}", id, myCrawlStat.getTotalProcessedPages());
        logger.info("Crawler {} > Total Links Found: {}", id, myCrawlStat.getTotalLinks());
        logger.info("Crawler {} > Total Text Size: {}", id, myCrawlStat.getTotalTextSize());
    }

    @Override
    public Object getMyLocalData() {
        return myCrawlStat;
    }

    @Override
    public void onBeforeExit() {
        dumpMyData();
    }

    /*
     * Decides which URLs (and hence which content) get crawled. In this example only
     * pages under "http://sh.ziroom.com/z/nl/" are allowed; .css, .js and media files
     * are excluded.
     *
     * @see edu.uci.ics.crawler4j.crawler.WebCrawler#shouldVisit(edu.uci.ics.crawler4j.crawler.Page,
     *      edu.uci.ics.crawler4j.url.WebURL)
     */
    @Override
    public boolean shouldVisit(Page referringPage, WebURL url) {
        final String href = url.getURL().toLowerCase();
        if (FILTERS.matcher(href).matches() || !href.startsWith(URL_PREFIX)) {
            return false;
        }
        return true;
    }

    /*
     * Called once a URL has been downloaded. The page's URL, text, links, HTML and
     * unique id are all available here.
     *
     * @see edu.uci.ics.crawler4j.crawler.WebCrawler#visit(edu.uci.ics.crawler4j.crawler.Page)
     */
    @Override
    public void visit(Page page) {
        final String url = page.getWebURL().getURL();
        logger.info("Visiting: {}", url);
        myCrawlStat.incProcessedPages();
        if (page.getParseData() instanceof HtmlParseData) {
            final HtmlParseData htmlParseData = (HtmlParseData) page.getParseData();
            final Set<WebURL> links = htmlParseData.getOutgoingUrls();
            try {
                linkToCsv(links);
            } catch (final IOException e2) {
                e2.printStackTrace();
            }
            myCrawlStat.incTotalLinks(links.size());
            try {
                myCrawlStat.incTotalTextSize(htmlParseData.getText().getBytes("UTF-8").length);
            } catch (final UnsupportedEncodingException e1) {
                e1.printStackTrace();
            }
            final String html = htmlParseData.getHtml();
            final Document doc = Jsoup.parse(html);
            final Elements contents = doc.select("li[class=clearfix]");
            for (final Element c : contents) {
                // Image
                final String img = c.select(".img img").first().attr("src");
                logger.debug("Image: {}", img);
                // Address
                final Element txt = c.select("div[class=txt]").first();
                final String arr1 = txt.select("h3 a").first().text();
                final String arr2 = txt.select("h4 a").first().text();
                final String arr3 = txt.select("div[class=detail]").first().text();
                final String arr = arr1 + "," + arr2 + "," + arr3;
                logger.debug("Address: {}", arr);
                // Description
                final String rank = txt.select("p").first().text();
                logger.debug("Description: {}", rank);
                // Price
                final String price = c.select("p[class=price]").first().text();
                // Append one record per listing to the data file.
                try {
                    csvLinks = new CsvWriter(new FileWriter(fLinks, true), ',');
                    csvLinks.write(img);
                    csvLinks.write(price);
                    csvLinks.write(arr);
                    csvLinks.write(rank);
                    csvLinks.endRecord();
                    csvLinks.flush();
                    csvLinks.close();
                } catch (final IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    /** Appends every outgoing link of the current page to the link file. */
    private void linkToCsv(Set<WebURL> links) throws IOException {
        csvDatas = new CsvWriter(new FileWriter(fDatas, true), ',');
        for (final WebURL webURL : links) {
            csvDatas.write(webURL.getURL());
        }
        csvDatas.endRecord();
        csvDatas.flush();
        csvDatas.close();
    }

}
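The post only shows the WebCrawler subclass; to actually run it, a crawler4j controller still has to be set up. Below is a minimal launcher sketch using the standard crawler4j setup (CrawlConfig, PageFetcher, RobotstxtServer, CrawlController). The storage folder, politeness delay, seed URL and thread count are assumptions for illustration, not values from the original post.

import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

public class ZiroomController {

    public static void main(String[] args) throws Exception {
        // Folder for crawler4j's intermediate data; the path is an assumption.
        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder("data/crawl/root");
        config.setPolitenessDelay(1000); // be polite to the target site

        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
        RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

        // Seed with the listing index page that shouldVisit() allows.
        controller.addSeed("http://sh.ziroom.com/z/nl/");

        // Blocking start with an assumed number of crawler threads.
        controller.start(ZiroomCrawler.class, 2);
    }
}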

Summary

The crawler above restricts itself to pages under http://sh.ziroom.com/z/nl/, skips static and media resources, records every outgoing link to link.csv, and uses Jsoup to extract each listing's image, price, address and description into ziroom.csv.