HumanSignal

Software Development

San Francisco, California · 5,222 followers

HumanSignal enables data science teams to build AI with their company DNA.

About us

HumanSignal enables data science teams to build AI models with their company DNA. With the emergence of generative AI, it’s more important than ever to build highly differentiated models by guiding foundation models with proprietary data and human feedback. As the creators of Label Studio, the most popular open source data labeling platform, HumanSignal enables data scientists to develop high-quality datasets and workflows for model training, fine-tuning, and continuous validation. Today, the Label Studio open source community has more than 250,000 users who have collectively annotated more than 100 million pieces of data. Label Studio Enterprise is available as a cloud service with enhanced security, automation, quality review workflows, and performance reporting, and is used by leading data science teams including Bombora, Geberit, Outreach, Wyze, and Zendesk.

Website
humansignal.com
Industry
Software Development
Company size
51-200 employees
Headquarters
San Francisco, California
Type
Privately Held
Founded
2019
Specialties
Machine Learning, Deep Learning, AI, Data Labeling, Data Science, and Generative AI

Updates

  • Most evaluation methods break down with generative models. Outputs vary, “correct” isn’t fixed, and disagreement shows up everywhere. Instead of filtering that out, it’s worth understanding what it actually tells you. This blog walks through how consensus helps turn subjective judgment into something you can measure and improve: https://lnkd.in/gBq6j-uv

  • When you’re annotating clinical notes, extracting entities is only part of the work. You also need to preserve the relationships between them. That’s what made OM1's use case especially interesting. Using Label Studio, the team built a process to manage complex findings, scale review, and keep documentation organized across thousands of tasks. Scroll through to see how they approached it and what changed. 👉 Read the full case study here: https://lnkd.in/gUeiyJyG

  • AI observability tools track latency, uptime, and cost. They don’t tell you enough about output quality or what to improve next. We just released human-in-the-loop evaluation interfaces for agentic AI in Label Studio Enterprise, with support for traces from Braintrust, LangSmith, and Langfuse. With the new tutorials and templates, you can:
    - Review agent traces with structured human feedback
    - Speed up time-to-insight
    - Track quality over time
    Read the blog: https://lnkd.in/gB-J3py8 or watch the walkthrough: https://lnkd.in/g75b8Zpr
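    As a rough illustration of the first item in that list, here is a minimal sketch of turning one agent trace into a review task. The trace structure, field names, and output file are hypothetical and not taken from the HumanSignal tutorials; it relies only on the fact that Label Studio imports tasks as JSON objects with a "data" payload.

```python
import json

# Hypothetical trace exported from an observability tool (LangSmith, Langfuse, etc.).
trace = {
    "trace_id": "run-42",
    "input": "Cancel my subscription and refund the last invoice.",
    "steps": [
        {"type": "tool_call", "name": "lookup_account", "output": "account #881, active"},
        {"type": "tool_call", "name": "issue_refund", "output": "refund queued"},
    ],
    "final_output": "Your subscription is cancelled and a refund is on the way.",
}

# Build a review task: keep the raw trace for context, plus flattened fields the
# labeling interface can display and ask structured questions about.
task = {
    "data": {
        "trace_id": trace["trace_id"],
        "user_input": trace["input"],
        "agent_steps": "\n".join(f'{s["name"]}: {s["output"]}' for s in trace["steps"]),
        "final_output": trace["final_output"],
        "raw_trace": json.dumps(trace),
    }
}

# Write a JSON file that can be imported into a Label Studio project, where
# reviewers answer questions such as "Did the agent take the correct actions?".
with open("agent_trace_tasks.json", "w") as f:
    json.dump([task], f, indent=2)
```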

  • You can have dashboards, KPIs, and evaluation results and still struggle to answer the questions that matter:
    - Are we learning faster?
    - Is this ready to ship?
    - Will we catch failures early enough?
    - Is any of this moving the business?
    That’s where many AI programs start to lose clarity. There is no shortage of data. What’s missing is a clear way to decide what to fix, what to ship, and how to manage systems in production. Drawing on what we’ve learned from our customers, we put together a short piece on a more practical approach: a small set of metrics that helps teams learn faster, make better release decisions, and manage risk once systems are live. Read it here: https://lnkd.in/gdcVrPwp

  • Last week, our team was on the ground at PyTorch Conference Europe 2026, connecting with engineers, researchers, and AI platform teams building systems that are already in production, not just prototypes. What stood out most: the conversation has shifted. Less about models. More about reliability, evaluation, and how to make AI systems actually hold up in the real world. We also had the privilege of co-hosting the Open Source AI Soirée along the Seine with our partners at Docling. An evening full of builders, operators, and open source leaders sharing practical lessons from the field. Exactly the kind of community energy that keeps this ecosystem moving forward. Huge thanks to everyone who made the week special and to the teams pushing the boundaries of open source AI every day: Peter W. J. Staar, Alain AIROM (Ayrom), Mark Collier, and our team, Nikolai Liubimov, Lauren Sell, and Micaela Kaplan. The open source AI community is just getting started 🚀

  • HumanSignal reposted this

    I’m hosting a webinar! If you’ve ever looked at a single “agreement score” and thought “cool… but what do I actually do with this?” — this is for you. Join me Tuesday, April 14th at 11:30 AM EDT. We’re going deeper into:
    🔍 Where annotators actually disagree (not just that they disagree)
    ⚙️ How to choose the right agreement methodology for real-world setups
    📊 Why one number isn’t enough—and what to look at instead
    🤖 Comparing humans, models, and ground truth (yes, including LLM-as-judge)
    Basically: how to use agreement as a debugging tool, not just a metric. If your data quality matters (and it does), you’ll want to join. 👉 Save your spot: https://lnkd.in/eAfBBH3K

  • Join us for a live webinar: Beyond Inter-annotator Agreement: Managing Quality with Consensus. You’ll walk away knowing which metrics to use when, along with the tools to derive immediate insights that improve your data quality at scale.
    📅 April 14
    ⏰ 11:30 AM EDT
    Register here: https://lnkd.in/g3q85-bY
    You’ll also learn:
    - The difference between consensus and pairwise agreement and when to use each
    - How more granular agreement metrics help you save time and take action
    - Why agreement calculations should be continuously integrated into quality workflows
    - How Label Studio Enterprise enables these insights
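    For intuition on the first point above, here is a toy sketch in plain Python contrasting the two methodologies (the annotators and labels are made up, and this is not Label Studio Enterprise’s implementation): pairwise agreement averages over annotator pairs, while consensus asks how strongly each item’s majority label is supported.

```python
from collections import Counter
from itertools import combinations

# Hypothetical labels from three annotators on the same 6 items.
annotations = {
    "ann_1": ["cat", "dog", "dog", "cat", "bird", "dog"],
    "ann_2": ["cat", "dog", "cat", "cat", "bird", "dog"],
    "ann_3": ["cat", "cat", "dog", "cat", "dog",  "dog"],
}

def pairwise_agreement(annotations):
    """Average raw agreement over every pair of annotators."""
    scores = []
    for (_, a), (_, b) in combinations(annotations.items(), 2):
        scores.append(sum(x == y for x, y in zip(a, b)) / len(a))
    return sum(scores) / len(scores)

def consensus_agreement(annotations):
    """Per item, the share of annotators who chose the majority label, averaged."""
    columns = zip(*annotations.values())
    shares = [Counter(col).most_common(1)[0][1] / len(col) for col in columns]
    return sum(shares) / len(shares)

# The two views differ: one dissenting annotator drags down every pair they are in,
# while the consensus view still credits items where a clear majority exists.
print(f"pairwise agreement:  {pairwise_agreement(annotations):.2f}")
print(f"consensus agreement: {consensus_agreement(annotations):.2f}")
```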

  • Kicking off #PyTorchCon Europe 2026 tomorrow in Paris! We’re hosting the Open Source AI Soirée with Docling tomorrow evening. Looking forward to a good evening of conversation with the community! Register here: https://luma.com/ya2wihmc

    Quoted post from PyTorch:

    Wrap up Day 1 of #PyTorchCon Europe 2026 in Paris at our official Flare Party, followed by the Open Source AI Soirée hosted by HumanSignal (Label Studio) and Docling.

    Tuesday, April 7

    Flare Party: 17:05 – 18:30
    Join us at the Open Platform for the official PyTorch Foundation evening event. Engage with presenters during technical poster sessions and live Q&A to discuss implementation details and project roadmaps directly with core contributors. Details: https://lnkd.in/gADcEnPp

    Open Source AI Soirée: 18:30 – 21:00
    Join conference sponsor Label Studio and Docling for an evening of conversation and community connection immediately following the Flare Party. APPROVAL REQUIRED: https://luma.com/ya2wihmc

    Register for PyTorch Conference Europe 2026: https://lnkd.in/eVBjaUtk

    #PyTorch #PyTorchCon #OpenSource #AI #MachineLearning

  • HumanSignal reposted this

    In grad school I took an entire class on annotation, aka professionally overthinking whether people can agree on anything. At first, I didn’t understand why we spent so much time focusing on things like Cohen’s κ, Fleiss’ κ, and Krippendorff’s α. We even learned to calculate the pairwise agreements needed for these metrics and other insights by hand (or, well, by code). By the end of the semester, though, one thing had become clear: these metrics were important. Not just for school or quality, but for a deep and clear understanding of how your annotators are working and the kind of data quality you can expect from a given group.

    Throughout my career, I’ve continued to use these metrics to get real signals from my data and annotators to make sure projects stay on track. Even if my skip-level managers didn’t quite understand the math, having real numbers was a huge win for me and the teams I was on. All of which is to say: I care a lot about agreement.

    And with that said, this newest release is extra fun for me. We just shipped super-granular agreement metrics in Label Studio Enterprise.

    One thing that’s always bugged me, both academically and in production, is how often agreement gets reduced to a single number. A single score doesn’t tell you:
    • where annotators disagree
    • why they disagree
    • what to actually fix
    Which, honestly, is kind of the whole point in the real world.

    So, here’s what we focused on:
    • Question-level agreement (not just task-level): so you can pinpoint exactly which part of a task is causing divergence, instead of chasing hunches.
    • Configurable methodologies (consensus vs. pairwise): because most real-world setups don’t look like the clean, 2-annotator scenarios from textbooks.
    • Flexible similarity + thresholds (exact match, IoU, Jaccard, etc.): agreement should adapt to the data modality, not the other way around.
    • Compare across humans, models, and ground truth: including things like LLM-as-judge, model vs. model, or annotator vs. ground truth.

    If you take one thing away from this post, let it be this: agreement is more than a metric. It’s a debugging tool for your data, your teams, and your system.

    And they say you never use anything you learned in school in the real world. https://lnkd.in/eF8N-qYP
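    As a rough illustration of why the single-number view falls short, here is a minimal sketch in plain Python with scikit-learn. The annotators, questions, and labels are hypothetical, and this is not how Label Studio Enterprise computes its metrics; it only shows that a pooled score can look acceptable while one question drives nearly all of the disagreement.

```python
# Requires scikit-learn: pip install scikit-learn
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels from two annotators on the same 8 tasks.
# Each task asks two questions: "sentiment" and "contains_pii".
annotator_a = {
    "sentiment":    ["pos", "neg", "pos", "neu", "neg", "pos", "neu", "pos"],
    "contains_pii": ["yes", "no",  "no",  "no",  "yes", "no",  "no",  "yes"],
}
annotator_b = {
    "sentiment":    ["pos", "neg", "pos", "neu", "neg", "pos", "neu", "neg"],
    "contains_pii": ["no",  "no",  "yes", "no",  "no",  "no",  "yes", "no"],
}

# Single-number view: raw percent agreement pooled across every answer.
pairs = [
    (a, b)
    for question in annotator_a
    for a, b in zip(annotator_a[question], annotator_b[question])
]
percent_agreement = sum(a == b for a, b in pairs) / len(pairs)
print(f"pooled percent agreement: {percent_agreement:.2f}")  # ~0.62, looks moderate

# Question-level view: chance-corrected agreement per question shows that
# "sentiment" is healthy while "contains_pii" is actually worse than chance.
for question in annotator_a:
    kappa = cohen_kappa_score(annotator_a[question], annotator_b[question])
    print(f"{question:>12} Cohen's kappa: {kappa:.2f}")
```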

Funding

HumanSignal: 2 total rounds

Last round: Series A, US$ 25.0M

See more info on Crunchbase