Replies: 1 comment
💬 Your Product Feedback Has Been Submitted 🎉

Thank you for taking the time to share your insights with us! Your feedback is invaluable as we build a better GitHub experience for all our users. Here's what you can expect moving forward ⏩

- Where to look to see what's shipping 👀
- What you can do in the meantime 💻

As a member of the GitHub community, your participation is essential. While we can't promise that every suggestion will be implemented, we want to emphasize that your feedback is instrumental in guiding our decisions and priorities. Thank you once again for your contribution to making GitHub even better! We're grateful for your ongoing support and collaboration in shaping the future of our platform. ⭐
Select Topic Area
Product Feedback
Body
I couldn't find a dedicated place to give GitHub product suggestions, so I thought this would be the best spot to do it.
Problem
With the rise of AI slop and low-effort contributions, stars are no longer a reliable metric for judging the technical ability and capability of a GitHub profile.
My suggestion is a star-to-repo ratio. While this isn't perfect, since some bad actors would still slip through, at the very least it would filter out the obvious cases and show that GitHub cares about quality over volume.
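The post doesn't define the ratio precisely, but the simplest reading is total stars divided by repo count. A minimal sketch (the function name and the example star counts are illustrative, not from the post) shows why padding a profile with empty repos can't game a ratio, only drag it down:

```python
def star_repo_ratio(stars_per_repo: list[int]) -> float:
    """Naive star-to-repo ratio: total stars divided by repo count."""
    if not stars_per_repo:
        return 0.0
    return sum(stars_per_repo) / len(stars_per_repo)

# Two solid repos with 100 stars each:
print(star_repo_ratio([100, 100]))            # 100.0
# Padding the same profile with 50 empty repos drags the ratio down:
print(star_repo_ratio([100, 100] + [0] * 50)) # ~3.85
```

This is the opposite failure mode of raw star totals, which empty repos leave untouched.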
The metric
Parameters:
Recency Weights:
Scenario 1: one viral hit
Dev A
1 repo, 10,000 stars, still active
Scenario 2: consistent contributor
Dev B
50 repos, 40 active (avg 500 stars), 10 stale (avg 200 stars)
Dev B beats Dev A
Scenario 3: repo spammer
Dev C
200 repos, 190 have 0 to 2 stars (avg 1), 10 have 100 stars, all stale
Low score; creating empty repos doesn't game it.
Scenario 4: someone who abandoned their repos but made high-quality contributions
Dev D
5 repos, all 3+ years stale, but 8,000 stars each
Still scores well (past impact matters), but less than if they were still active.
Scenario 5: active contrib, modest stars
Dev E
30 repos, all active, avg 50 stars each
Modest but respectable; reflects real-world utility.
Final Ranking: D > B > A > E > C
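The post leaves the actual formula and recency weights unspecified, so the following is only one possible interpretation: a recency-weighted star sum, where stale repos still count but at a discount. The weights (1.0 active, 0.6 stale) are assumptions chosen so that the five scenarios above reproduce the stated ranking; they are not from the post.

```python
from dataclasses import dataclass

# Hypothetical recency weights -- the post does not specify these.
ACTIVE_WEIGHT = 1.0  # repo updated recently
STALE_WEIGHT = 0.6   # repo untouched for years; past impact still counts

@dataclass
class Repo:
    stars: int
    active: bool

def profile_score(repos: list[Repo]) -> float:
    """Recency-weighted star total: stale repos contribute, just less."""
    return sum(
        (ACTIVE_WEIGHT if r.active else STALE_WEIGHT) * r.stars
        for r in repos
    )

# The five scenarios from the post:
profiles = {
    "A": [Repo(10_000, True)],                                  # one viral hit
    "B": [Repo(500, True)] * 40 + [Repo(200, False)] * 10,      # consistent
    "C": [Repo(1, False)] * 190 + [Repo(100, False)] * 10,      # repo spammer
    "D": [Repo(8_000, False)] * 5,                              # stale but HQ
    "E": [Repo(50, True)] * 30,                                 # active, modest
}
scores = {name: profile_score(repos) for name, repos in profiles.items()}
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # ['D', 'B', 'A', 'E', 'C'] under these assumed weights
```

With these weights, D's 40,000 discounted stars still edge out B's mix, and C's 200 low-star repos land last, matching the post's ranking. A plain per-repo ratio would not reproduce it (it would put A first), which suggests the intended metric rewards breadth as well as the ratio itself.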
Where does it fail?
No metric is perfect. Each comes with its own biases, but at least this gets us somewhere?