fix(ai): strip openai itemID from tool call metadata #889
base: main
Conversation
…all errors Signed-off-by: voyager14 <21mh124@queensu.ca>
🦋 Changeset detected

Latest commit: 6f8c6eb

The changes in this PR will be included in the next version bump. This PR includes changesets to release 2 packages.

Not sure what this means? Click here to learn what changesets are. Click here if you're a maintainer who wants to add another changeset to this PR.
@michael-han-dev is attempting to deploy a commit to the Vercel Labs Team on Vercel. A member of the Team first needs to authorize it.
bhuvaneshprasad
left a comment
Thanks for adding tests
```
	return meta;
}
```
@michael-han-dev — My initial approach followed this pattern, where I did not provide (itemId + reasoningItem) or previousResponseId. However, I later updated the implementation to include previousResponseId and raised PR #886 to address this.
I wanted to confirm my understanding: without (itemId + reasoningItem) or previousResponseId the code will still function, but I'm unclear on whether the reasoning models can access and leverage the prior reasoning text to improve performance. If the reasoning text is not actually available to the models at inference time, this approach may be less effective than intended.
Could you help clarify whether this assumption is correct, or whether I might be overlooking something in how the reasoning context is preserved?
Hi @bhuvaneshprasad, thanks for bringing this up! Adding previousResponseId would give the model access to prior reasoning during tool calls. I didn't add it here since it would create a dependency on OpenAI's server-side state, which breaks the self-contained design of DurableAgent (among other things). I wanted to keep this PR minimal and match the existing behaviour.
cc: @pranaygp
Description
OpenAI's Responses API includes an itemId in tool call metadata. When we pass that back in the next request, OpenAI expects the matching reasoning item to be present too. DurableAgent doesn't keep reasoning items around, so tool calls were failing with errors about missing reasoning items (see #880).
This change strips itemId from OpenAI metadata before we add tool calls to conversation history. We still keep other provider metadata, such as Gemini's thoughtSignature. We don't need previousResponseId tracking since DurableAgent already sends the full conversation history.
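To illustrate the approach, here is a minimal sketch of the stripping logic described above. The function name and the exact metadata shape are assumptions for illustration; the real helper in this PR and the AI SDK's metadata types may differ.

```typescript
// Provider metadata is assumed here to be a map of provider name
// (e.g. "openai", "google") to a bag of provider-specific fields.
type ProviderMetadata = Record<string, Record<string, unknown>>;

// Hypothetical helper: remove only OpenAI's itemId, keep everything else.
function stripOpenAiItemId(
  metadata: ProviderMetadata | undefined,
): ProviderMetadata | undefined {
  if (!metadata?.openai) return metadata;

  // Pull itemId out; "rest" holds any other OpenAI fields untouched.
  const { itemId, ...rest } = metadata.openai;
  const meta: ProviderMetadata = { ...metadata };

  if (Object.keys(rest).length > 0) {
    meta.openai = rest;
  } else {
    // Drop the openai key entirely if itemId was its only field.
    delete meta.openai;
  }
  return meta;
}
```

Other providers' entries (like a Gemini thoughtSignature) pass through unchanged, which is why only the OpenAI bag is touched.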
Fixes #880
How did you test your changes?
Added 3 unit tests: one for stripping just itemId, one for keeping other OpenAI fields, and one for mixed provider metadata. Also tested manually with the flight-booking-app example using openai('gpt-5-mini') and tool calls work now.
PR Checklist - Required to merge
- [ ] `pnpm changeset` was run to create a changelog for this PR
- [ ] Use `pnpm changeset --empty` if you are changing documentation or workbench apps
- [ ] Commits are signed off (`git commit --signoff` on your commits)