Python - aiohttp requests continuously time out
I have a Python program that uses aiohttp and ElementTree to fetch data from a website. The code below is a segment of a Discord chat bot hosted on a Raspberry Pi. The function works well most of the time, but after the bot has been on for a few days it begins to bog down and always times out. Restarting the program doesn't fix the issue; only rebooting the Pi seems to solve the problem for a while. I know it's not a lot to go on, but is there an obvious issue with this segment of code that could cause this, or does the problem lie somewhere else?
import lxml.etree as ET
import aiohttp, async_timeout
...

async with aiohttp.ClientSession() as session:
    try:
        with async_timeout.timeout(5):
            async with session.get('https://example.com', params=params, headers=headers) as resp:
                if resp.status == 200:
                    root = ET.fromstring(await resp.text(), ET.HTMLParser())
                    # Do stuff with root
                else:
                    print("Error: {}".format(resp.response))
    except Exception as e:
        print("Timeout error {}".format(e))
Source: https://stackoverflow.com/questions/47215072
Updated: 2020-02-04 11:44
Accepted answer
Perhaps a memory leak somewhere is slowly using up system memory; once it is full, everything becomes very slow because swap is used for allocations, and timeouts occur.
However, as Andrew says, this can't be a problem in the Python script itself, or it would be fixed by restarting it.
Monitor system memory and go from there.
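As a minimal sketch of that monitoring step, assuming the third-party psutil package is installed (it is not part of the original answer), something like this could run alongside the bot and show whether the slowdown coincides with swap pressure:

import asyncio
import psutil  # $ pip install psutil

async def log_memory(interval=60):
    # Log memory and swap usage periodically; a slow leak shows up as a steady climb.
    while True:
        mem = psutil.virtual_memory()
        swap = psutil.swap_memory()
        print("mem used: {:.1f}%  swap used: {:.1f}%".format(mem.percent, swap.percent))
        await asyncio.sleep(interval)

Started with loop.create_task(log_memory()) next to the bot, this would tie the timeouts to memory pressure (or rule it out).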
2017-11-12
Related Q&A
response.headers is a regular property; there is no need to put await before the call. asyncio.wait, on the other hand, accepts a list of futures and returns a (done, pending) pair. It looks like you should replace the await wait() call with await asyncio.gather(*tasks) (see the gather docs) — a short sketch follows the snippet.
...
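A minimal sketch of that replacement, assuming tasks is a list of coroutines fetching URLs (the fetch helper and the URLs are placeholders, not from the original answer):

import asyncio
import aiohttp

async def fetch(session, url):
    async with session.get(url) as resp:
        return resp.headers  # plain property, no await needed

async def main():
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(session, u) for u in ('https://example.com', 'https://example.org')]
        # gather returns results in submission order, instead of (done, pending) sets
        print(await asyncio.gather(*tasks))

asyncio.run(main())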
The proxy was wrong. Different proxies cause different errors, so it was hard to find one working proxy. The code above is absolutely valid (but please change the proxy!).
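For reference, aiohttp takes the proxy per request; a minimal sketch with a placeholder proxy address (substitute one you control):

import asyncio
import aiohttp

async def main():
    async with aiohttp.ClientSession() as session:
        # 'http://1.2.3.4:8080' is a placeholder, not a working proxy
        async with session.get('http://example.com', proxy='http://1.2.3.4:8080') as resp:
            print(resp.status)

asyncio.run(main())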
The variable d holds a reference (a "pointer") to a dictionary. The text.append(d) statement just appends another reference to that same dictionary to the list, so after N iterations you have N identical references to d in the list (a fixed version is sketched after this snippet). If you change your loop into something like this:

for ip in ip_list:
    d["ip"] = ip
    text.append(d)
    print(text)

you should see on the console:

[{'ip': '192.168.1.1'}]
[{'ip': '18.9.8.1'}, {'ip': '18.9.8.1'}]
[{'ip'
...
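The usual fix, as a short sketch with placeholder data: build a fresh dictionary on every iteration instead of mutating the same one:

ip_list = ['192.168.1.1', '18.9.8.1']  # placeholder data
text = []
for ip in ip_list:
    text.append({"ip": ip})  # a new dict each time, so entries stay distinct
print(text)  # [{'ip': '192.168.1.1'}, {'ip': '18.9.8.1'}]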
Something is wrong with your testing technique. Running the wrk tool against your server gives different results. Command to run:

wrk http://127.0.0.1:15000/

Server output:

======== Running on http://0.0.0.0:15000 ========
(Press CTRL+C to quit)
2016-10-23 14:58:56,447 - webserver - INFO - Request id: hkkrp received - will sleep for 10
201
...
You are awaiting the individual do_request() calls. Instead of awaiting them directly (which blocks until each coroutine completes), use the asyncio.gather() function to let the event loop run them concurrently (a completed sketch follows the snippet):

async def main():
    create_database_and_tables()
    records = prep_sample_data()[:100]
    requests = []
    for record in records:
        r = Record(record)
...
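A self-contained sketch of that pattern; since the original snippet is cut off, do_request and the record handling here are placeholders:

import asyncio

async def do_request(record):
    await asyncio.sleep(0.1)  # stand-in for the real request
    return record

async def main():
    records = range(100)  # placeholder for prep_sample_data()[:100]
    # collect the coroutines instead of awaiting each one inside the loop
    requests = [do_request(record) for record in records]
    # the event loop now runs them concurrently; results come back in order
    results = await asyncio.gather(*requests)
    print(len(results))

asyncio.run(main())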
I had a similar issue when using AsyncResolver as the resolver for the connection. It used to be the default resolver, so it may be your case. The problem was related to domains with ipv6, for which AsyncResolver has problems, so the solution was simply to specify the family with ipv4 addresses (a usage sketch follows the snippet):

conn = aiohttp.TCPConnector(
    family=socket.AF_INET,
    verify_ssl=False,
)
...
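A hedged usage sketch showing how such a connector is handed to the session (the imports and URL are assumptions; verify_ssl mirrors the snippet, newer aiohttp spells it ssl=False):

import asyncio
import socket
import aiohttp

async def main():
    # force ipv4 resolution to sidestep the AsyncResolver/ipv6 problem
    conn = aiohttp.TCPConnector(family=socket.AF_INET, verify_ssl=False)
    async with aiohttp.ClientSession(connector=conn) as session:
        async with session.get('https://example.com') as resp:
            print(resp.status)

asyncio.run(main())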
In general you should try to avoid using threads whenever you run an event loop. Unfortunately rethinkdb does not support asyncio out of the box, but it does support the Tornado and Twisted frameworks. So you could bridge Tornado and asyncio and make it work without threads.

EDIT: As Andrew pointed out, rethinkdb does support asyncio. Since 2.1.0 you can do (a minimal sketch follows the snippet):

rethinkdb.set_loop_type("asyncio")

Then in your web handler:

res = await rethinkd
...
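A minimal sketch of that asyncio mode, assuming the classic pre-2.4 rethinkdb driver and a local server; the host, port, and table name are assumptions:

import rethinkdb as r

r.set_loop_type("asyncio")

async def handler():
    # connect() and run() become awaitable once the loop type is set
    conn = await r.connect(host='localhost', port=28015)
    res = await r.table('test').run(conn)
    return res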
Pass the keys to fetch() so it can return them with the corresponding responses (a completed sketch follows the snippet):

#!/usr/bin/env python
import asyncio
import aiohttp  # $ pip install aiohttp

async def fetch(session, key, item, base_url='http://example.com/posts/'):
    async with session.get(base_url + item) as response:
        retu
...
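A completed sketch of that idea; the return shape and the driver code are assumptions, since the snippet is cut off:

#!/usr/bin/env python
import asyncio
import aiohttp  # $ pip install aiohttp

async def fetch(session, key, item, base_url='http://example.com/posts/'):
    async with session.get(base_url + item) as response:
        # return the key alongside the body so results can be matched up afterwards
        return key, await response.text()

async def main():
    items = {'a': '1', 'b': '2'}  # placeholder key -> item mapping
    async with aiohttp.ClientSession() as session:
        pairs = await asyncio.gather(*(fetch(session, k, v) for k, v in items.items()))
        print(dict(pairs))

asyncio.run(main())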
I am also learning it. I found this issue: https://github.com/hangoutsbot/hangoutsbot/pull/655 . So the code goes something like this (a modern minimal server is sketched after the snippet):

@asyncio.coroutine
def _create_server(self):
    app = web.Application(loop=self.loop)
    return app

def add_handler(self, url, handler):
    self.app.router.add_route('G
...
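For context, a minimal self-contained aiohttp server in the modern style; the route, handler, and port are placeholders (the snippet above uses the older @asyncio.coroutine/loop= idiom):

from aiohttp import web

async def handle(request):
    return web.Response(text="hello")

app = web.Application()
app.router.add_route('GET', '/', handle)

# run_app starts the event loop and serves until interrupted
web.run_app(app, port=8080)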