I’m working on a production setup using EVI Speech-to-Speech via the Hume SDK. My configuration provides both:

1. A system prompt with clear contextual information about the use case, and
2. Tool calling (function calling) that retrieves verified user data from our backend before the AI responds.

However, during live sessions the EVI speech agent frequently responds with context that appears in neither the system prompt nor the tool results. The responses often contain fabricated or hallucinated information (for example, incorrect company names or made-up achievements), even though the tool call returns the correct data. This suggests the model is generating answers without consistently relying on the tool outputs or the provided context.

Summary of the issue:
• Using: EVI Speech-to-Speech with a custom system prompt + tool calling
• Behavior: model responses include hallucinated details found in neither the prompt nor the tool data
• Expected: responses grounded strictly in the system prompt and tool outputs

Questions:
1. Is there a recommended configuration or flag to ensure EVI prioritizes tool outputs before generating a response?
2. Are there known limitations or best practices for combining system context and function calling within EVI Speech-to-Speech?
3. Could the SDK or the underlying model version (e.g., gpt-5) affect how reliably tools are called before a response is generated?

Any guidance or configuration recommendations for ensuring the model consistently grounds its responses in the provided tool data would be greatly appreciated.
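For context, this is a stripped-down sketch of the post-hoc grounding check we've been experimenting with as a workaround. All names here (`lookup_user`, `is_grounded`) are hypothetical stand-ins, not Hume SDK calls; the real handler runs over the EVI session's tool-call and assistant messages:

```python
def lookup_user(user_id: str) -> dict:
    """Stand-in for our backend tool; returns verified user data."""
    return {"company": "Acme Corp", "role": "Engineer"}


def is_grounded(response_text: str, tool_result: dict) -> bool:
    """Naive post-hoc check: every verified field must appear verbatim
    in the spoken response before we accept it."""
    text = response_text.lower()
    return all(str(value).lower() in text for value in tool_result.values())


tool_result = lookup_user("user-123")

# A grounded response quotes the verified data...
assert is_grounded("You work at Acme Corp as an Engineer.", tool_result)
# ...while a hallucinated one (wrong company, invented achievements) does not.
assert not is_grounded("You work at Globex and won three awards.", tool_result)
```

This obviously only catches missing fields, not extra fabricated ones, which is why I'm hoping for a configuration-level fix rather than filtering responses after the fact.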