Describe the bug

Open Interpreter pointed at a local OpenAI-compatible endpoint fails on the very first message with `httpx.RemoteProtocolError: multiple Transfer-Encoding headers`, which litellm surfaces as `litellm.InternalServerError: ... Connection error.` A plain (non-streaming) curl request against the same endpoint works fine (see below).
```shell
interpreter --api_base "http://192.168.1.166:8084/v1" --api_key "MY_KEY" --model gpt-oss-20b-f16
```

```
/home/doraemon/Documents/.venv/lib/python3.12/site-packages/interpreter/core/utils/system_debug_info.py:4: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
  import pkg_resources

▌ Model set to openai/gpt-oss-20b-f16

Open Interpreter will require approval before running code.

Use interpreter -y to bypass this.

Press CTRL-C to exit.

> hi

We were unable to determine the context window of this model. Defaulting to 8000.
If your model can handle more, run interpreter --context_window {token limit} --max_tokens {max tokens per response}.
Continuing...

Traceback (most recent call last):
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 101, in map_httpcore_exceptions
    yield
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 250, in handle_request
    resp = self._pool.handle_request(req)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 256, in handle_request
    raise exc from None
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/httpcore/_sync/connection_pool.py", line 236, in handle_request
    response = connection.handle_request(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/httpcore/_sync/connection.py", line 103, in handle_request
    return self._connection.handle_request(request)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 136, in handle_request
    raise exc
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 106, in handle_request
    ) = self._receive_response_headers(**kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 177, in _receive_response_headers
    event = self._receive_event(timeout=timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/httpcore/_sync/http11.py", line 213, in _receive_event
    with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}):
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.RemoteProtocolError: multiple Transfer-Encoding headers

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 982, in request
    response = self._client.send(
               ^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/httpx/_client.py", line 914, in send
    response = self._send_handling_auth(
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/httpx/_client.py", line 942, in _send_handling_auth
    response = self._send_handling_redirects(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
    response = self._send_single_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/httpx/_client.py", line 1014, in _send_single_request
    response = transport.handle_request(request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 249, in handle_request
    with map_httpcore_exceptions():
         ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/.local/share/uv/python/cpython-3.12.11-linux-x86_64-gnu/lib/python3.12/contextlib.py", line 158, in __exit__
    self.gen.throw(value)
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/httpx/_transports/default.py", line 118, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.RemoteProtocolError: multiple Transfer-Encoding headers

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 745, in completion
    raise e
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 628, in completion
    return self.streaming(
           ^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 918, in streaming
    headers, response = self.make_sync_openai_chat_completion_request(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/logging_utils.py", line 237, in sync_wrapper
    result = func(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 489, in make_sync_openai_chat_completion_request
    raise e
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 471, in make_sync_openai_chat_completion_request
    raw_response = openai_client.chat.completions.with_raw_response.create(
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/openai/_legacy_response.py", line 364, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/openai/_utils/_utils.py", line 286, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 1147, in create
    return self._post(
           ^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1259, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1014, in request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/litellm/main.py", line 2122, in completion
    raise e
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/litellm/main.py", line 2094, in completion
    response = openai_chat_completions.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/litellm/llms/openai/openai.py", line 756, in completion
    raise OpenAIError(
litellm.llms.openai.common_utils.OpenAIError: Connection error.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/doraemon/Documents/.venv/bin/interpreter", line 10, in <module>
    sys.exit(main())
             ^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 612, in main
    start_terminal_interface(interpreter)
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/interpreter/terminal_interface/start_terminal_interface.py", line 578, in start_terminal_interface
    interpreter.chat()
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/interpreter/core/core.py", line 191, in chat
    for _ in self._streaming_chat(message=message, display=display):
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/interpreter/core/core.py", line 223, in _streaming_chat
    yield from terminal_interface(self, message)
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/interpreter/terminal_interface/terminal_interface.py", line 162, in terminal_interface
    for chunk in interpreter.chat(message, display=False, stream=True):
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/interpreter/core/core.py", line 259, in _streaming_chat
    yield from self._respond_and_store()
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/interpreter/core/core.py", line 318, in _respond_and_store
    for chunk in respond(self):
                 ^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/interpreter/core/respond.py", line 87, in respond
    for chunk in interpreter.llm.run(messages_for_llm):
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 324, in run
    yield from run_text_llm(self, params)
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/interpreter/core/llm/run_text_llm.py", line 20, in run_text_llm
    for chunk in llm.completions(**params):
                 ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 466, in fixed_litellm_completions
    raise first_error  # If all attempts fail, raise the first error
    ^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/interpreter/core/llm/llm.py", line 443, in fixed_litellm_completions
    yield from litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1352, in wrapper
    raise e
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1227, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/litellm/main.py", line 3701, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2273, in exception_type
    raise e
  File "/home/doraemon/Documents/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 502, in exception_type
    raise InternalServerError(
litellm.exceptions.InternalServerError: litellm.InternalServerError: InternalServerError: OpenAIException - Connection error.
(Documents) [doraemon@doraemon-arch Documents]$
```
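The root cause in the chain above is `httpcore.RemoteProtocolError: multiple Transfer-Encoding headers`: h11, the strict HTTP/1.1 parser underneath httpx, refuses a response whose `Transfer-Encoding` header field appears more than once, while lenient clients such as curl accept it. A minimal stdlib sketch of what a lenient parser sees (the header bytes are a made-up stand-in for what the server may be sending, not a capture from the real endpoint):

```python
import io
from http.client import parse_headers

# Hypothetical response headers with Transfer-Encoding repeated twice --
# the condition h11 reports as "multiple Transfer-Encoding headers".
raw = (
    b"Transfer-Encoding: chunked\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"Content-Type: text/event-stream\r\n"
    b"\r\n"
)

# The stdlib parser is lenient: it keeps both copies instead of erroring,
# which is why curl-style clients never notice the duplication.
headers = parse_headers(io.BytesIO(raw))
print(headers.get_all("Transfer-Encoding"))  # ['chunked', 'chunked']
```

This suggests the server (or something in front of it) is emitting the header twice on streaming responses, and only strict clients reject it.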
My curl works

A direct (non-streaming) request against the same endpoint succeeds:

```shell
curl -s -H "Authorization: Bearer fake_key" -H "Content-Type: application/json" -d '{
  "model": "gpt-oss-20b-f16",
  "messages": [{"role":"user","content":"hi"}],
  "stream": false
}' http://192.168.1.166:8084/v1/chat/completions | jq .
```

```json
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "role": "assistant",
        "reasoning_content": "We need to respond in a friendly manner. The user says \"hi\". Probably we should greet back.",
        "content": "Hello! 👋 How can I help you today?"
      }
    }
  ],
  "created": 1759111089,
  "model": "gpt-oss-20b-f16",
  "system_fingerprint": "b6360-3de00820",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 42,
    "prompt_tokens": 68,
    "total_tokens": 110
  },
  "id": "chatcmpl-244grkXV01PZ7SY6SyyEzfRVeP7oghT8",
  "timings": {
    "prompt_n": 1,
    "prompt_ms": 38.897,
    "prompt_per_token_ms": 38.897,
    "prompt_per_second": 25.708923567370235,
    "predicted_n": 42,
    "predicted_ms": 1426.036,
    "predicted_per_token_ms": 33.9532380952381,
    "predicted_per_second": 29.452271892154194
  }
}
```
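Note that the curl above uses `"stream": false`, while the interpreter session fails on a streaming request, so the duplicate header likely appears only on the streaming path. A raw-socket probe shows exactly which header lines arrive on the wire. Since the endpoint at 192.168.1.166 is only reachable on the reporter's network, the sketch below exercises the probe against a local stand-in server that replays the suspected duplicated header:

```python
import socket
import threading

def replay_duplicate_headers(srv):
    # Stand-in for the real endpoint: answers one request with a streaming
    # response whose Transfer-Encoding header appears twice (the suspected bug).
    conn, _ = srv.accept()
    conn.recv(4096)  # read and discard the request
    conn.sendall(
        b"HTTP/1.1 200 OK\r\n"
        b"Content-Type: text/event-stream\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"Transfer-Encoding: chunked\r\n"
        b"\r\n"
        b"0\r\n\r\n"  # empty chunked body
    )
    conn.close()
    srv.close()

def raw_header_lines(host, port, path="/v1/chat/completions"):
    # Minimal raw HTTP client: returns the response header lines exactly as
    # sent, so duplicates that stricter parsers reject stay visible.
    with socket.create_connection((host, port)) as s:
        s.sendall(
            b"POST " + path.encode() + b" HTTP/1.1\r\n" +
            b"Host: " + host.encode() + b"\r\n" +
            b"Content-Length: 0\r\n\r\n"
        )
        data = b""
        while b"\r\n\r\n" not in data:
            data += s.recv(4096)
    return data.split(b"\r\n\r\n")[0].decode().split("\r\n")

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=replay_duplicate_headers, args=(srv,)).start()

lines = raw_header_lines("127.0.0.1", port)
dupes = [l for l in lines if l.lower().startswith("transfer-encoding")]
print(dupes)  # ['Transfer-Encoding: chunked', 'Transfer-Encoding: chunked']
```

Pointing `raw_header_lines("192.168.1.166", 8084)` at the real server would confirm or refute the hypothesis: if two `Transfer-Encoding` lines show up there too, the fix belongs on the server (or a proxy in front of it) rather than in Open Interpreter.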
Reproduce

1. Serve `gpt-oss-20b-f16` behind an OpenAI-compatible API at `http://192.168.1.166:8084/v1`.
2. Run `interpreter --api_base "http://192.168.1.166:8084/v1" --api_key "MY_KEY" --model gpt-oss-20b-f16`.
3. Send any message, e.g. `hi`.
Expected behavior

The model's reply is streamed back to the terminal, as the non-streaming curl request shows the endpoint is capable of producing.
Screenshots
No response
Open Interpreter version
0.4.3
Python version
3.12
Operating System name and version
Arch Linux
Additional context
No response