Code intelligence + workflow automation

For product and engineering teams

Built for multi-million line codebases and complex architectures. AI that actually understands your code — powering search, reviews, and automation across your entire stack.

PMs and support get answers without interrupting engineers. Reviews run automatically. Processes that used to take days happen in minutes.

The bottleneck isn't writing code anymore — it's understanding it.

Most AI tools don't understand your codebase — they skim it

In small repos, "search a few strings and read a snippet" often works. In enterprise codebases it creates a predictable failure: tools claim understanding too early—and the plan, review, or fix is built on partial context.

What most tools do (including Claude Code)
  • Run a few keyword searches over code "as text"
  • Grab ~10–30 lines around matches
  • Assume the result is enough context
  • Proceed confidently ("I understand") even when it's not true
This is a fine strategy for small projects. It's a liability in large ones.
Why it breaks at scale
  • Code isn't plain text — it's symbols, scopes, call paths, and boundaries
  • Understanding often requires the big picture (ownership boundaries, cross-repo dependencies)
  • Docs aren't plain text either — Markdown has structure (headers, sections, contracts)
  • The worst outcome isn't "no answer" — it's a wrong plan with high confidence
If retrieval is shallow, everything downstream becomes guesswork: specs, reviews, triage, and automation.

Probe treats your code like a database

Instead of searching text, Probe queries your codebase the way you'd query a database — with structure, precision, and speed.

ElasticSearch-style queries

Full query syntax with boolean operators, phrase matching, and field-specific search. Works out of the box; optional indexing for larger repos.
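To make "ElasticSearch-style" concrete, here is a toy Python sketch of what boolean matching means for a single document. This is not Probe's parser (the real engine adds field-specific search, phrase handling, and ranking); the `matches` helper and its simplified grammar are purely illustrative.

```python
# Toy illustration of boolean query semantics (AND / OR, quoted phrases).
# NOT Probe's actual parser -- just a sketch of the matching semantics.

def matches(query: str, text: str) -> bool:
    """Evaluate a flat query of OR-groups joined by AND, e.g.
    'timeout AND ("retry policy" OR backoff)'. In this simplified
    grammar, parentheses are assumed to wrap only OR-groups."""
    text_lower = text.lower()
    for and_part in query.split(" AND "):
        alternatives = and_part.strip(" ()").split(" OR ")
        # every AND-part must have at least one matching alternative
        if not any(alt.strip('" ').lower() in text_lower for alt in alternatives):
            return False
    return True

doc = "HTTP client: retry policy with exponential backoff on timeout"
print(matches('timeout AND ("retry policy" OR backoff)', doc))  # True
print(matches('timeout AND panic', doc))                        # False
```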

AST-aware parsing

Probe understands code structure — functions, classes, scopes, call paths. It returns the smallest useful slice with clear boundaries, not arbitrary line ranges.
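The difference between line-window retrieval and AST-aware extraction can be sketched with Python's stdlib `ast` module: given a line number, return the smallest enclosing function with its exact boundaries, instead of an arbitrary ±15-line window. Probe does this across many languages; this Python-only sketch just demonstrates the idea.

```python
import ast

SOURCE = '''\
def load_config(path):
    with open(path) as f:
        return f.read()

def retry(fn, attempts=3):
    for i in range(attempts):
        try:
            return fn()
        except OSError:
            pass
'''

def enclosing_function(source: str, line: int):
    """Return (name, start_line, end_line) of the innermost function
    containing `line`, or None if the line is outside any function."""
    best = None
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and node.lineno <= line <= node.end_lineno:
            # prefer the innermost (latest-starting) enclosing function
            if best is None or node.lineno >= best.lineno:
                best = node
    return (best.name, best.lineno, best.end_lineno) if best else None

print(enclosing_function(SOURCE, 8))  # ('retry', 5, 10)
```

The returned span is a complete, syntactically bounded unit, which is what makes the retrieved context safe to hand to a model.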

LSP + dependency awareness

Works with language servers for go-to-definition and references. Reads go.mod, package.json, Cargo.toml — understands what your project actually depends on.
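As a minimal illustration of manifest-based dependency awareness, the sketch below reads a `package.json` with the stdlib and lists direct dependencies; the manifest contents are invented, and the same idea extends to `go.mod` and `Cargo.toml`.

```python
import json

# Minimal sketch: read a package.json and list declared dependencies.
# The manifest below is a made-up example.
manifest = json.loads('''{
  "name": "example-app",
  "dependencies": {"express": "^4.18.0", "zod": "^3.22.0"},
  "devDependencies": {"typescript": "^5.3.0"}
}''')

deps = sorted({**manifest.get("dependencies", {}),
               **manifest.get("devDependencies", {})})
print(deps)  # ['express', 'typescript', 'zod']
```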

Fast, local, private

Built in Rust with SIMD optimizations — fast enough to search million-line repos locally. Runs entirely on your machine as a single binary. Retrieval runs locally; you control what context is sent to the model.

Customer feedback

What teams say after shipping with Probe

From product and support to engineering leadership, Probe turns codebases into shared, reliable knowledge.

As a Product Manager, Probe helps me to understand the true behaviour of the software so that I can go beyond the documentation and validate edge case scenarios and answer "what if?" questions as I develop the product roadmap and define new capabilities. This saves a lot of time and disruption to the development teams, as I can provide more fully formed and reasonable proposals to them without having to ask them to check my understanding of the code every five minutes.

Andy Ost
Senior Product Manager at Tyk.io

I'm using Probe Labs tools daily as a technical lead, and they've been adopted across marketing, sales, documentation, product, delivery, and engineering. Probe is useful in every part of the SDLC, and it helps us understand complex features across multi-project dependency chains quickly and efficiently. The YAML-based automation makes it easy to wire in tools like JIRA, Zendesk, and GitHub for agentic flows that actually work from day one.

Laurentiu Ghiur
Technical Lead at Tyk.io

Technical Knowledge for Everyone
Humans and Agents.

In 30 minutes, we'll show you exactly how Probe empowers your entire organization to get answers without waiting for engineering.

Here's what to expect:

  1. Deep Code Understanding: See how Probe searches and understands your entire codebase — millions of lines, multiple repos, complex architectures.
  2. Live Code Review Demo: Watch intelligent, context-aware reviews that actually understand your code patterns and catch real issues.
  3. Workflow Automation: Automated issue triage, better specs, faster releases — see processes that used to take days happen in minutes.
  4. Integration Options: MCP for AI editors, GitHub Actions, Slack bots, Web UI — we'll show you what fits your stack.
  5. Your Personalized Setup Path: Get a tailored plan to have Probe running in your org within 10 minutes of the call ending.

Tell Us About Your Setup

This will be an engineer-to-engineer meeting. No sales fluff.

One Platform, Every Team

Engineers, leaders, and product teams all get value — in different ways. See what Probe delivers for your role.

CTO / Founder

Three things that work right now: chat with your entire codebase, intelligent code reviews that scale, and workflow automation that runs itself.

You get

  • Chat with code across your entire architecture: Everyone - engineers, PMs, support, sales - can query millions of lines across repos, docs, and history. No more "ask engineering" loops. No more tribal knowledge bottlenecks.
  • Intelligent code reviews that scale with your org: Powered by Probe's deep code understanding, with configurable rules per team. Consistent quality gates across all repos, all teams - without slowing anyone down.
  • Workflow automation that runs without you: Issue triage, spec generation, release gates - processes that used to require manual intervention now run automatically. Better specs, smoother releases, fewer interrupts.

Replaces

  • PMs and support constantly interrupting engineers for context
  • Inconsistent code review quality that depends on who's available
  • Manual issue triage that burns engineering time
  • Specs that get rewritten because they missed technical constraints
  • Release processes that require heroics every time

Get started in 10 minutes

Real value, not demos. Pick any of these and have something running before your next meeting.

Add Probe to Your AI Coding Tool

Get enterprise-grade code understanding in your existing workflow. Probe auto-detects Claude Code and Codex auth, or works with any LLM API. One command - your AI actually understands your architecture.

You get: A specialized AI agent for code search and analysis, powered by Probe's code search engine - finds the right context and reduces wrong answers with bounded, structured retrieval.
AI code editor setup →

Add to AI Coding Tools

claude mcp add probe -- npx -y @probelabs/probe@latest agent --mcp

Run Agent Directly

Run in your project folder

# Opens a web browser with the Probe agent UI
npx -y @probelabs/probe-chat@latest --web

Visor: the workflow engine for predictable agent automation

CI is great for building, testing, and deploying code. Visor is built for something different: running agent workflows across tools and teams—where you need explicit control flow, validated outputs, and observable runs.

On-prem · Any LLM · OpenTelemetry · Open source · MCP tools

What makes Visor workflows different

Workflow as code

Steps, prompts, schemas, routing, outputs—all defined in YAML. No hidden glue, no magic.
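A hypothetical sketch of what such a workflow definition might look like. The field names (`steps`, `provider`, `output_schema`, `depends_on`) are illustrative, not Visor's exact schema; consult the Visor docs for the real configuration reference.

```yaml
# Illustrative only -- field names are not Visor's exact schema.
steps:
  triage:
    provider: ai
    prompt: "Classify this issue: bug, feature, or question."
    output_schema:
      type: object
      required: [category, confidence]
  label:
    provider: github
    action: add-label
    depends_on: [triage]
    if: "triage.confidence > 0.8"
```

Everything that governs behavior (prompts, schemas, routing conditions) lives in the file, so it can be reviewed and versioned like any other code.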

Multi-provider runs

AI + GitHub + HTTP + shell + MCP tools + memory in one workflow.

Schema validation

Enforce structured outputs. Stable results for PR comments, reports, and downstream systems.
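A minimal sketch of what enforcing a step's output contract looks like, using a hand-rolled stdlib-only check; the schema fields here are invented for illustration, and Visor validates against declared schemas rather than this exact mechanism.

```python
# Hand-rolled output validation: illustrative, stdlib-only.
SCHEMA = {"severity": str, "file": str, "line": int}

def validate(output: dict, schema: dict) -> list:
    """Return a list of problems; empty means the output is usable downstream."""
    problems = [f"missing field: {k}" for k in schema if k not in output]
    problems += [
        f"wrong type for {k}: expected {t.__name__}"
        for k, t in schema.items()
        if k in output and not isinstance(output[k], t)
    ]
    return problems

good = {"severity": "warning", "file": "api/auth.go", "line": 42}
bad = {"severity": "warning", "line": "42"}

print(validate(good, SCHEMA))  # []
print(validate(bad, SCHEMA))   # ['missing field: file', 'wrong type for line: expected int']
```

A step whose output fails validation never reaches a PR comment or a downstream system, which is what keeps results stable.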

Observable by default

OpenTelemetry traces + log correlation built in. Debug agent workflows like any production system.

Safe control flow

Bounded retries, deterministic routing, approval gates. Predictable behavior, not improvisation.
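The idea of bounded retries with deterministic routing can be sketched in a few lines of Python; the function and field names are illustrative, not Visor's API.

```python
# Bounded retry with an explicit failure route -- names are illustrative.
def run_with_retries(step, max_attempts=3, on_failure=None):
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return {"status": "ok", "result": step(), "attempts": attempt}
        except ValueError as e:  # retry only on expected, declared errors
            last_error = e
    # deterministic routing: once the budget is spent, take the failure path
    if on_failure:
        on_failure(last_error)
    return {"status": "failed", "attempts": max_attempts, "reason": str(last_error)}

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ValueError("transient parse error")
    return "validated output"

print(run_with_retries(flaky))  # succeeds on the third attempt
```

The retry budget, the errors worth retrying, and the failure route are all declared up front, so a run can only end in one of a known set of states.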

Testable workflows

Fixtures and mocks validate behavior before production. Run the same workflow locally, in CI, or via API.
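A sketch of what testing a workflow step against a fixture looks like: the step's contract is exercised with a canned model response instead of a live LLM, so it can run in CI without network access. All names here are hypothetical.

```python
# Hypothetical workflow step tested against a mocked model.
def review_step(diff: str, model) -> dict:
    """Ask the model for a verdict, then enforce the step's contract."""
    raw = model(f"Review this diff:\n{diff}")
    if raw.get("verdict") not in {"approve", "request_changes"}:
        raise ValueError(f"invalid verdict: {raw.get('verdict')}")
    return raw

# Fixture: a canned response standing in for the real provider.
def mock_model(prompt: str) -> dict:
    return {"verdict": "request_changes", "notes": ["missing error handling"]}

result = review_step("- old\n+ new", mock_model)
print(result["verdict"])  # request_changes
```

Because the step is just a function of its inputs, the same code runs unchanged against a fixture locally and against a real model in production.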

On-prem and governable by design

Run it in your environment, bring your model, and audit every workflow like infrastructure.

Control

  • On-prem by default — your code and data stay in your environment
  • Any LLM — choose per workflow; swap models without rewriting everything
  • Explicit tool permissions — per workflow and per step
  • Scoped context retrieval — pull only what's needed; avoid broad exposure
  • Schema validation + policy gates — for critical steps
  • Human approvals — where required (release gates, risky actions)

Governance

  • OpenTelemetry traces + audit trails — what happened, with what context, why
  • Versioned workflows — reviewed like code (changes are intentional and trackable)
  • Org templates + team ownership — teams customize safely; leadership keeps standards
  • Multi-repo + multi-team — aligned to ownership boundaries
  • Deterministic retries + failure reasons — no silent "AI weirdness"

No vendor lock-in: open-source core + open standards + deploy anywhere.

Common questions

What do you mean by "pipelines, not prompts"?

Prompts are instructions. Pipelines are defined systems: explicit steps, constrained tools, validated outputs, retries, and feedback loops—so behavior is repeatable and governable.

Isn't AI inherently nondeterministic?

Model outputs can vary. The workflow doesn't have to. Visor makes execution deterministic through constraints, schemas, validation, retries, and auditability—so outcomes are governable.

Can we run this on-prem and use our own model?

Yes. Deploy in your environment and choose the LLM per workflow. You can switch models without rewriting your entire workflow strategy.

How do you avoid vendor lock-in?

Open-source core, deploy anywhere, model-agnostic execution, and open standards like OpenTelemetry for observability.

Why not one powerful agent with access to everything?

Because it's hard to trust, debug, and govern. Many specialized agents inside explicit pipelines are predictable and scalable.

Is this just "chat with your repo"?

No. Probe retrieves the right context deeply across enterprise codebases and docs, and Visor turns that context into validated, observable execution.

Build workflows you can trust—and prove.

If you're moving toward agent-first development, the question isn't whether you'll automate. It's whether your automation will be repeatable, inspectable, and safe—with clear boundaries, validations, and audit trails.

Prefer bottom-up adoption? Start with one workflow, run it on-prem, then expand into a library.

What you'll see in the demo:
  • A real workflow definition (steps + permissions)
  • Validated outputs (schema/gates)
  • A trace you can audit (OpenTelemetry)