

Python Web Scanner: Web Directory Scanning as a Python Security Tool

Published: 2025/3/15 · 豆豆

Reposted for learning.

For convenience, the collected User-Agent strings are saved in a separate file so that other scripts can import them directly.

user_agent_list.py:

#!/usr/bin/python
# coding=utf-8
import random

def get_user_agent():
    user_agent_list = [
        {'User-Agent': 'Mozilla/4.0 (Mozilla/4.0; MSIE 7.0; Windows NT 5.1; FDM; SV1; .NET CLR 3.0.04506.30)'},
        {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; en) Opera 11.00'},
        {'User-Agent': 'Mozilla/5.0 (X11; U; Linux i686; de; rv:1.9.0.2) Gecko/2008092313 Ubuntu/8.04 (hardy) Firefox/3.0.2'},
        {'User-Agent': 'Mozilla/5.0 (X11; U; Linux i686; en-GB; rv:1.9.1.15) Gecko/20101027 Fedora/3.5.15-1.fc12 Firefox/3.5.15'},
        {'User-Agent': 'Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/8.0.551.0 Safari/534.10'},
        {'User-Agent': 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.2) Gecko/2008092809 Gentoo Firefox/3.0.2'},
        {'User-Agent': 'Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.10 (KHTML, like Gecko) Chrome/7.0.544.0'},
        {'User-Agent': 'Opera/9.10 (Windows NT 5.2; U; en)'},
        {'User-Agent': 'Mozilla/5.0 (iPhone; U; CPU OS 3_2 like Mac OS X; en-us) AppleWebKit/531.21.10 (KHTML, like Gecko)'},
        {'User-Agent': 'Opera/9.80 (X11; U; Linux i686; en-US; rv:1.9.2.3) Presto/2.2.15 Version/10.10'},
        {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; ru-RU) AppleWebKit/533.18.1 (KHTML, like Gecko) Version/5.0.2 Safari/533.18.5'},
        {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; ru; rv:1.9b3) Gecko/2008020514 Firefox/3.0b3'},
        {'User-Agent': 'Mozilla/5.0 (Macintosh; U; PPC Mac OS X 10_4_11; fr) AppleWebKit/533.16 (KHTML, like Gecko) Version/5.0 Safari/533.16'},
        {'User-Agent': 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_6; en-US) AppleWebKit/534.20 (KHTML, like Gecko) Chrome/11.0.672.2 Safari/534.20'},
        {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; InfoPath.2)'},
        {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 6.0; X11; Linux x86_64; en) Opera 9.60'},
        {'User-Agent': 'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_2; en-US) AppleWebKit/533.4 (KHTML, like Gecko) Chrome/5.0.366.0 Safari/533.4'},
        {'User-Agent': 'Mozilla/5.0 (Windows NT 6.0; U; en; rv:1.8.1) Gecko/20061208 Firefox/2.0.0 Opera 9.51'},
    ]
    return random.choice(user_agent_list)

Then place this script in a directory named agent_proxy so it can be imported as a package.
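As a quick sanity check, the shape of what get_user_agent() returns can be exercised in isolation. This is a minimal sketch with the list trimmed to two entries for brevity; the real agent_proxy/user_agent_list.py contains the full set above:

```python
import random

# Trimmed two-entry stand-in for agent_proxy/user_agent_list.py.
user_agent_list = [
    {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; en) Opera 11.00'},
    {'User-Agent': 'Opera/9.10 (Windows NT 5.2; U; en)'},
]

def get_user_agent():
    # Each call picks one header dict at random, ready to pass
    # straight to requests.get(..., headers=...).
    return random.choice(user_agent_list)

headers = get_user_agent()
print('User-Agent' in headers)  # → True
```

Returning a ready-made dict (rather than a bare string) is what lets the main script below drop the result directly into the headers= argument of requests.get.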

Next is the main program. It uses multiple threads and a wordlist file to brute-force web directories, and writes each discovered path into an HTML file as a hyperlink:

#!/usr/bin/python3
# coding=utf-8
import sys
import threading
from optparse import OptionParser
from queue import Queue

import requests

from agent_proxy import user_agent_list


class DirScanMain:
    """Multithreaded web directory brute-force scanner."""

    def __init__(self, options):
        self.url = options.url
        self.filename = options.filename
        self.count = options.count

    class DirScan(threading.Thread):
        """Worker thread: pull URLs off the queue and probe them."""

        def __init__(self, queue, total):
            threading.Thread.__init__(self)
            self._queue = queue
            self._total = total

        def run(self):
            while not self._queue.empty():
                url = self._queue.get()
                # Print progress from a separate thread so it does not block the request.
                threading.Thread(target=self.msg).start()
                try:
                    r = requests.get(url=url, headers=user_agent_list.get_user_agent(), timeout=8)
                    if r.status_code == 200:
                        sys.stdout.write('\r' + '[+]%s\t\t\n' % url)
                        result = open('result.html', 'a+')
                        result.write('<a href="%s" target="_blank">%s</a><br>' % (url, url))
                        result.write('\r\n')
                        result.close()
                except Exception:
                    # Timeouts and connection errors are simply skipped.
                    pass

        def msg(self):
            per = 100 - float(self._queue.qsize()) / float(self._total) * 100
            percentage = "%s Finished| %s All| Scan in %.1f %s" % (
                self._total - self._queue.qsize(), self._total, per, '%')
            sys.stdout.write('\r' + '[*]' + percentage)

    def start(self):
        # Truncate any results left over from a previous run.
        result = open('result.html', 'w')
        result.close()

        queue = Queue()
        with open('./dics/%s' % self.filename, 'r') as f:
            for line in f:
                queue.put(self.url + line.rstrip('\n'))

        total = queue.qsize()
        thread_count = int(self.count)
        threads = [self.DirScan(queue, total) for _ in range(thread_count)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()


if __name__ == '__main__':
    print(r'''
  ____  _      ____
 |  _ \(_)_ __/ ___|  ___ __ _ _ __
 | | | | | '__\___ \ / __/ _` | '_ \
 | |_| | | |  ___) | (_| (_| | | | |
 |____/|_|_|  |____/ \___\__,_|_| |_|
''')
    parser = OptionParser('./web_dir_scan.py -u <target url> -f <dictionary file> [-t <thread_count>]')
    parser.add_option('-u', '--url', dest='url', type='string', help='target url for scan')
    parser.add_option('-f', '--file', dest='filename', type='string', help='dictionary filename')
    parser.add_option('-t', '--thread', dest='count', type='int', default=10, help='scan thread_count')
    (options, args) = parser.parse_args()

    if options.url and options.filename:
        dirscan = DirScanMain(options)
        dirscan.start()
        sys.exit(0)
    else:
        parser.print_help()
        sys.exit(1)
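The core queue-plus-worker-threads pattern of the script can be exercised offline. The sketch below substitutes a fake probe function for requests.get (the EXISTING set stands in for paths that would return HTTP 200), so no network or target server is needed:

```python
import threading
from queue import Empty, Queue

# Paths our pretend server "has"; stands in for HTTP 200 responses.
EXISTING = {'/admin/', '/login.php'}
found = []
found_lock = threading.Lock()

def probe(path):
    # Stand-in for requests.get(...).status_code == 200.
    return path in EXISTING

def worker(q):
    while True:
        try:
            # get_nowait avoids the race between a separate empty() check
            # and a blocking get() when several workers drain the queue.
            path = q.get_nowait()
        except Empty:
            return
        if probe(path):
            with found_lock:
                found.append(path)

q = Queue()
for path in ['/admin/', '/backup/', '/login.php', '/test/']:
    q.put(path)

threads = [threading.Thread(target=worker, args=(q,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(found))  # → ['/admin/', '/login.php']
```

In the real tool the probe is an HTTP GET with a random User-Agent, and a typical invocation would look something like `python3 web_dir_scan.py -u http://127.0.0.1/ -f php.txt -t 20` (the wordlist name here is hypothetical; it just has to exist under ./dics/).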

Sample run:
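The run screenshot did not survive, but the result-file format is easy to verify offline. This minimal sketch reproduces the anchor-per-line output the scanner appends for each hit (the URLs are made up for illustration):

```python
# Hypothetical discovered URLs, standing in for real scan hits.
urls = ['http://example.com/admin/', 'http://example.com/login.php']

with open('result.html', 'w') as result:
    for url in urls:
        # One clickable link per discovered path, as in the scanner.
        result.write('<a href="%s" target="_blank">%s</a><br>\r\n' % (url, url))

with open('result.html') as f:
    content = f.read()

print(content.count('<a href='))  # → 2
```

Opening result.html in a browser then gives one clickable link per discovered directory.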

Summary

That is the whole of "pythonweb扫描器_Python安全工具之web目录扫描"; hopefully it helps you solve the problem you ran into.
