Output from reasoning models not parsed correctly #8036

@vaaale

Description

LocalAI version:
3.9.0

Environment, CPU architecture, OS, and Version:
Not relevant

Describe the bug
When using a reasoning model such as Qwen-3*, the response is not structured the way the OpenAI API specifies. Instead of placing the content of the `<think>...</think>` block in `reasoning_content`, it is returned in the normal `content` field of the response. This breaks any consumer of the API that expects the OpenAI format.
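For illustration, a minimal sketch of the kind of post-processing the issue asks for (this is not LocalAI's actual code; the function name and the assumption that reasoning is delimited by literal `<think>...</think>` tags are mine):

```python
import re

# Matches a <think>...</think> block, including across newlines.
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(raw: str) -> dict:
    """Split a reasoning model's raw output into OpenAI-style fields:
    the <think> block text goes to reasoning_content, the rest to content."""
    reasoning = "\n".join(m.strip() for m in THINK_RE.findall(raw))
    content = THINK_RE.sub("", raw).strip()
    return {"content": content, "reasoning_content": reasoning or None}
```

With this kind of split, `split_reasoning("<think>plan steps</think>final answer")` would yield `content` of `"final answer"` and `reasoning_content` of `"plan steps"`, instead of the whole string landing in `content`.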

To Reproduce
Run a qwen3-*b model and inspect the chat completion response.

Expected behavior
LocalAI should return the response according to the OpenAI specification.
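Concretely, a response in the expected shape would look something like the following (field values are illustrative; `reasoning_content` is the field used by OpenAI-compatible reasoning APIs, alongside the standard `content`):

```json
{
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "The final answer.",
        "reasoning_content": "The model's chain-of-thought text from the think block."
      }
    }
  ]
}
```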

Logs
N/A

Additional context
N/A