feat(issue-views): Add backend endpoint for AI-generated view titles #105970
Conversation
Adds a new endpoint that uses Seer's LLM proxy to generate descriptive
titles for issue views based on search queries. Uses Gemini Flash for
fast, low-cost generation.
- New endpoint: POST /organizations/{org}/issue-view-title/generate/
- New feature flag: organizations:issue-view-ai-title
- Calls Seer's /v1/llm/generate with a system prompt optimized for
generating 3-6 word titles
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
🚨 Warning: This pull request contains Frontend and Backend changes! It's discouraged to make changes to Sentry's Frontend and Backend in a single pull request. The Frontend and Backend are not atomically deployed. If the changes are interdependent, they must be separated into two pull requests and made forward- or backwards-compatible, such that the Backend or Frontend can be safely deployed independently. Have questions? Please ask in the
- Change owner from ML_AI to ISSUES team
- Remove unnecessary SEER_AUTOFIX_URL check (always defined)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Tests cover:
- Successful title generation
- Title whitespace stripping
- Missing/empty query parameter
- Feature flag not enabled
- AI features disabled for org
- Seer API errors
- Empty response from Seer
- Long query truncation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
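The whitespace-stripping and long-query-truncation behaviors exercised by these tests might look roughly like this (a standalone sketch; the constant and helper names are hypothetical, not the PR's actual code):

```python
MAX_QUERY_LENGTH = 1000  # hypothetical cap matching the long-query-truncation test


def truncate_query(query: str) -> str:
    # Cap very long search queries before sending them to the LLM proxy
    return query[:MAX_QUERY_LENGTH]


def normalize_title(raw_title: str) -> str:
    # Strip surrounding whitespace (and stray quotes the model may emit)
    return raw_title.strip().strip('"').strip()
```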
```python
response = requests.post(
    f"{settings.SEER_AUTOFIX_URL}/v1/llm/generate",
```
Just for my own learning, is this the general way to send generic prompts to an LLM at Sentry?
yes, as of a few minutes ago :D this PR is a demo of the new endpoint. https://github.com/getsentry/seer/blob/875097a70ebe0596900e525748adbbdcaf19e816/api_docs/llm_proxy.md#L17
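For the curious, the shape of such a proxy call can be sketched with the standard library alone. The field names, port, and token cap below are illustrative assumptions; the linked llm_proxy.md doc has the real schema:

```python
import json

SEER_AUTOFIX_URL = "http://localhost:9091"  # assumption: Seer running locally


def build_llm_generate_request(system_prompt: str, user_prompt: str) -> tuple[str, bytes]:
    # Build the URL and JSON body for a request to Seer's generic LLM proxy endpoint
    url = f"{SEER_AUTOFIX_URL}/v1/llm/generate"
    body = json.dumps(
        {
            "system_prompt": system_prompt,  # e.g. "Generate a 3-6 word title..."
            "prompt": user_prompt,           # the issue-view search query
            "max_tokens": 32,                # titles are short, so cap output cost
        }
    ).encode("utf-8")
    return url, body
```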
```python
owner = ApiOwner.ISSUES


def post(self, request: Request, organization: Organization) -> Response:
```
Do we need any kind of strict rate limiting on this to make sure a bad actor can't run up a bunch of LLM costs by spamming this?
Should we potentially cache duplicate queries within the same org?
yeah, we could add some more stringent rate limits, but our defaults are probably reasonable, and flash is super cheap + the max tokens of this request is quite small. I think since this is all UI based, caching won't really help too much as frequency will be very low
I was more thinking about someone doing something malicious, or some llm or script doing something dumb.
I'm not sure if it's possible, for example, to do an injection attack here and get free LLM queries.
it would be quite difficult to do, as they'd have to override our prompt etc, but certainly possible for a malicious actor to spam us. our model router will eventually protect us from this (we're building out usage tracking similar to what snuba has now). I'll put in a stricter rate limit on this endpoint for now
```python
if not request.user.is_authenticated:
    return Response(
        {"detail": "User is not authenticated"},
        status=status.HTTP_401_UNAUTHORIZED,
    )
```
I think that authentication is implied because otherwise you can't access an organization endpoint?
yeah, we had to add these checks in a bunch of places for our type checker and unfortunately they don't usually make logical sense, i'll see if i can remove this one.
```python
if not features.has(
    "organizations:issue-view-ai-title", organization=organization, actor=request.user
):
    return Response(
        {"detail": "Organization does not have access to this feature"},
        status=status.HTTP_403_FORBIDDEN,
    )
```
Do we actually need to gate this api behind a feature? Or is this just required for self-hosted since they don't have seer?
just for self hosted. and can probably consolidate under the general "ai feature flag" after release.
OrganizationEndpoint base class already handles authentication.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
This pull request has gone three weeks without activity. In another week, I will close it. But! If you comment or otherwise update it, I will reset the clock, and if you add the label "A weed is but an unloved flower." ― Ella Wheeler Wilcox 🥀
Summary
- New endpoint: POST /organizations/{org}/issue-view-title/generate/ that uses Seer's LLM proxy to generate descriptive titles for issue views based on search queries
- New feature flag: organizations:issue-view-ai-title

Related PR
Test plan
- Add the feature flag in ~/.sentry/sentry.conf.py
- make dev (in seer repo)
- ```
  curl -X POST http://localhost:8000/api/0/organizations/{org}/issue-view-title/generate/ \
    -H "Content-Type: application/json" \
    -d '{"query": "is:unresolved assigned:me level:error"}'
  ```

🤖 Generated with Claude Code