Description
This error occurs frequently, even after switching models. The base-model API itself has no timeout configured, and the LLM basic parameter "request timeout (ms)" is set to 150000, yet the error still reports 50s — an inconsistency (the 50s appears to come from a previous configuration, but it keeps showing up even after updating to 150000). Sometimes the request also fails after running past 200s. When I test the same base model in Chatbox, responses come back very quickly.
Setup: Docker deployment; the source code being analyzed is the java-sec-code project; the local base model is qwen. Analysis stopped after completing 30 files.
📈 ZIP task 4c92fed2-abb2-4480-8139-9e56a09e398e: progress 30/64
INFO:openai._base_client:Retrying request to /chat/completions in 0.447964 seconds
INFO:openai._base_client:Retrying request to /chat/completions in 0.836777 seconds
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm._turn_on_debug()'.
ERROR:app.services.llm.service:Custom prompt analysis failed: LiteLLM (qwen) API call failed: request timed out (50s). Suggestions:
1. Check that the network connection is working
2. Try increasing the timeout
3. Verify that the API endpoint is correct
Traceback (most recent call last):
❌ ZIP task failed to analyze file (java-sec-code-master/src/main/java/org/joychou/mapper/UserMapper.java): LiteLLM (qwen) API call failed: request timed out (50s). Suggestions:
1. Check that the network connection is working
2. Try increasing the timeout
3. Verify that the API endpoint is correct
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/aiohttp_transport.py", line 60, in map_aiohttp_exceptions
yield
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/aiohttp_transport.py", line 274, in handle_async_request
response = await self._make_aiohttp_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/aiohttp_transport.py", line 240, in _make_aiohttp_request
response = await client_session.request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/aiohttp/client.py", line 1510, in __aenter__
self._resp: _RetType = await self._coro
^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/aiohttp/client.py", line 779, in _request
resp = await handler(req)
^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/aiohttp/client.py", line 757, in _connect_and_send_request
await resp.start(conn)
File "/app/.venv/lib/python3.12/site-packages/aiohttp/client_reqrep.py", line 539, in start
message, payload = await protocol.read() # type: ignore[union-attr]
^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/aiohttp/streams.py", line 680, in read
await self._waiter
aiohttp.client_exceptions.SocketTimeoutError: Timeout on reading data from socket
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1529, in request
response = await self._client.send(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1629, in send
response = await self._send_handling_auth(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1657, in _send_handling_auth
response = await self._send_handling_redirects(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1694, in _send_handling_redirects
response = await self._send_single_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1730, in _send_single_request
response = await transport.handle_async_request(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/aiohttp_transport.py", line 273, in handle_async_request
with map_aiohttp_exceptions():
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/contextlib.py", line 158, in __exit__
self.gen.throw(value)
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/aiohttp_transport.py", line 74, in map_aiohttp_exceptions
raise mapped_exc(message) from exc
httpx.ReadTimeout: Timeout on reading data from socket
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 836, in acompletion
headers, response = await self.make_openai_chat_completion_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 190, in async_wrapper
result = await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 458, in make_openai_chat_completion_request
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 437, in make_openai_chat_completion_request
await openai_aclient.chat.completions.with_raw_response.create(
File "/app/.venv/lib/python3.12/site-packages/openai/_legacy_response.py", line 381, in wrapped
return cast(LegacyAPIResponse[R], await func(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 2678, in create
return await self._post(
^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1794, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1547, in request
raise APITimeoutError(request=request) from err
openai.APITimeoutError: Request timed out.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 607, in acompletion
response = await init_response
^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 883, in acompletion
raise OpenAIError(
litellm.llms.openai.common_utils.OpenAIError: Request timed out. - timeout value=50.0, time taken=151.3 seconds
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/app/services/llm/adapters/litellm_adapter.py", line 171, in complete
return await self.retry(lambda: self._send_request(request))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/app/services/llm/base_adapter.py", line 123, in retry
raise error
File "/app/app/services/llm/base_adapter.py", line 116, in retry
return await fn()
^^^^^^^^^^
File "/app/app/services/llm/adapters/litellm_adapter.py", line 240, in _send_request
response = await litellm.acompletion(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1646, in wrapper_async
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1492, in wrapper_async
result = await original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 626, in acompletion
raise exception_type(
^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2340, in exception_type
raise e
File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 317, in exception_type
raise Timeout(
litellm.exceptions.Timeout: litellm.Timeout: APITimeoutError - Request timed out. Error_str: Request timed out. - timeout value=50.0, time taken=151.3 seconds
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/app/services/llm/service.py", line 964, in analyze_code_with_custom_prompt
response = await adapter.complete(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/app/services/llm/adapters/litellm_adapter.py", line 173, in complete
self.handle_error(error, f"LiteLLM ({self.config.provider.value}) API call failed")
File "/app/app/services/llm/base_adapter.py", line 102, in handle_error
raise LLMError(
app.services.llm.types.LLMError: LiteLLM (qwen) API call failed: request timed out (50s). Suggestions:
1. Check that the network connection is working
2. Try increasing the timeout
3. Verify that the API endpoint is correct
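Note the contradiction inside the error itself: `timeout value=50.0, time taken=151.3 seconds`. The UI setting is in milliseconds (150000 ms = 150 s), but `litellm.acompletion` takes its `timeout` in seconds, so the updated value apparently never reaches litellm and a stale 50 s default is still in effect. A minimal sketch of the conversion the adapter would need (the helper name `timeout_seconds` and the default of 50.0 are my assumptions, not code from this project):

```python
# Hypothetical sketch: convert the UI's millisecond timeout setting into the
# seconds-based `timeout` that litellm.acompletion expects. If the stored
# value is not re-read (or not converted) on each request, a stale 50 s
# default would produce exactly the mismatch seen in the traceback above.

def timeout_seconds(timeout_ms, default_s: float = 50.0) -> float:
    """Convert a millisecond setting to seconds; fall back to a default."""
    if timeout_ms and timeout_ms > 0:
        return timeout_ms / 1000.0
    return default_s

# With the issue's setting of 150000 ms, litellm should receive 150.0 s:
kwargs = {
    "model": "openai/qwen",  # hypothetical model name for illustration
    "messages": [{"role": "user", "content": "ping"}],
    "timeout": timeout_seconds(150000),  # 150.0 s, not the stale 50.0 s
}
# response = await litellm.acompletion(**kwargs)  # network call, not run here
```

If the effective value in `kwargs["timeout"]` logged at request time still reads 50.0 after saving the new setting, the problem is stale configuration rather than the network.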