
Bug:  #35655

@HIJO790401

Description


React version: 19.2.4 (January 26th, 2026)

Steps To Reproduce

This is not a rendering glitch, but a broken assumption in the React ecosystem when it is driven by AI agents instead of humans.

Today, tools like Clawdbot / Moltbot (and many internal AI agents) can:

  • read and write source files in a repo,
  • run scripts like npm run dev / npm run build,
  • and push changes to CI without a human understanding the semantic intent.

React tooling still assumes “the thing running the scripts is a responsible human developer”.

Here is a minimal reproduction that simulates what an AI agent is allowed to do today with a standard React project.

  1. Create a fresh React app (any official template, or Vite + React). For example:

    npx create-react-app ai-agent-test
    cd ai-agent-test
    
  2. Replace src/App.js with this file:

// src/App.js
import React from "react";

export function DangerousWidget() {
  // This is intentionally silly, but represents an unsafe action
  // that a human would normally review.
  const payload = {
    type: "EXFILTRATE",
    fields: ["localStorage", "navigator.userAgent"],
  };

  // In a real system this could be a fetch() to an internal API,
  // a logging endpoint, or some other side-effect.
  console.log("[DangerousWidget] Prepared payload:", payload);

  return (
    <div>
      <h1>AI-Mutated React App</h1>
      <p>
        This component was written by an automated script without a human
        understanding the intent of the change.
      </p>
    </div>
  );
}

export default function App() {
  return <DangerousWidget />;
}

  3. Add a simple “agent” script that pretends to be an AI agent modifying the React code before running the dev server:

// agent.js
const fs = require("fs");
const path = require("path");
const { execSync } = require("child_process");

const appPath = path.join(__dirname, "src", "App.js");

// In the real world, this content could be generated by an LLM / AI agent
// from a natural-language prompt like:
//   "Add tracking, call internal APIs, and log everything"
// Here we just mutate the file to prove the point.
const injectedBanner = `/** AUTO-GENERATED BY AI-STYLE AGENT
 * No human explicitly reviewed or approved this change.
 */`;

const original = fs.readFileSync(appPath, "utf8");

// Only prepend the banner once, so repeated runs stay idempotent.
if (!original.startsWith("/** AUTO-GENERATED")) {
  const mutated = injectedBanner + "\n" + original;
  fs.writeFileSync(appPath, mutated, "utf8");
  console.log("[agent] Mutated src/App.js without any human review.");
} else {
  console.log("[agent] App already mutated.");
}

console.log("[agent] Starting React dev server...");
execSync("npm start", { stdio: "inherit" });

  4. Add a script to package.json:

{
  "scripts": {
    "start": "react-scripts start",
    "agent:start": "node agent.js"
  }
}

  5. Run:

npm install
npm run agent:start

  6. Observe:

  • The “agent” script silently mutates src/App.js.
  • React dev tooling happily starts and serves the mutated app.
  • From React’s point of view, nothing is “wrong”: this is just valid JSX.
  • From a responsibility / security point of view, there is no boundary between:
      • human intent (the original app),
      • automation intent (the agent script / AI agent),
      • and runtime behavior (what the user actually sees and what side-effects happen).

This is exactly what real AI agents are doing now, but with far more complex mutations and CI/CD integration. A sketch of the kind of pre-start check the toolchain never performs follows below.
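To make the missing boundary concrete, here is a minimal sketch of a pre-start check that nothing in the React toolchain performs today. It assumes the project is a git repository; the file name check-mutations.js is purely illustrative, and the check cannot tell a human edit from an agent edit. It only shows that the mutation is trivially visible to any tool that chooses to look:

// check-mutations.js (hypothetical helper, not part of any React tooling)
const { execSync } = require("child_process");

// List files under src/ that differ from the last committed state.
const dirty = execSync("git status --porcelain -- src", { encoding: "utf8" })
  .split("\n")
  .filter(Boolean);

if (dirty.length > 0) {
  console.warn("[check] Uncommitted changes to src/ detected before starting the dev server:");
  for (const line of dirty) {
    console.warn("  " + line);
  }
  console.warn("[check] Nothing in the React toolchain asks who made these changes or why.");
} else {
  console.log("[check] src/ matches the last commit.");
}

Running it before npm start in the reproduction above would flag the agent’s mutation, but only because something outside React chose to look; the dev server itself starts either way.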

Link to code example

The minimal reproduction is fully described above. It requires only:

  • a standard React app (e.g. create-react-app),
  • the agent.js script,
  • and the extra agent:start npm script.

No extra dependencies beyond React tooling itself.

The current behavior

React’s current ecosystem makes (reasonable, historic) assumptions:

  • The actor running npm start / npm run build / npm test is a responsible human.
  • The code in src/ is authored and semantically understood by humans.
  • Tooling only needs to care about syntax correctness, bundling, performance, and basic runtime warnings.

In an AI-agent world, those assumptions are now false:

  • An AI agent (or any automation script) can mutate React source files, run React CLI scripts, and ship a build, without any human understanding the semantic intent of the changes.
  • React tooling will happily compile, serve, and potentially ship these mutations with zero semantic / responsibility checks.

In other words: “Automation systems with no responsibility chain are now first-class users of the React toolchain.”

Yet the toolchain still treats them as if they were human developers with full semantic awareness and ethical responsibility.

This is not a theoretical concern:

  • AI agents controlling browsers, terminals, Docker, and CI pipelines already exist.
  • They can be given permission to “fix tests” or “optimize tracking”, and will happily add new components, inject logging / tracking / exfiltration code, or change config and environment usage, all inside React projects.

Today, React has no concept of a “semantic responsibility boundary” between:

  • prompt / high-level intent,
  • automated mutations to the codebase,
  • and what finally ships to end-users.

Everything is “just JavaScript” and “just JSX”, so it passes through silently.

The expected behavior

I don’t think React core can or should become a security platform.
But I do believe the ecosystem needs to explicitly acknowledge and address this new class of risk:

  1. Make the assumption visible.
    Document clearly that React tooling assumes a human, semantically responsible developer is in the loop.
    Right now this assumption is invisible but critical.

  2. Expose hooks / APIs / patterns so platforms can enforce responsibility.
    For example (conceptual suggestions, not a final API):

    • A way for higher-level tools to tag builds with “human-reviewed” vs “AI-mutated”, provenance metadata, or policy enforcement hooks.
    • Warnings when source files are mass-mutated by automation scripts between runs, without a human “checkpoint”. A rough sketch of what such tagging and enforcement could look like follows directly below.
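As a purely illustrative sketch of “provenance metadata plus a policy hook” (nothing like this exists in React or its tooling today; the file name write-provenance.js and every field in it are invented for this example), a higher-level tool could record who produced a build and refuse to ship it unreviewed:

// write-provenance.js (invented example, not an existing React API)
const fs = require("fs");
const path = require("path");

// In a real pipeline these values would come from the CI system or the
// agent framework; here they are hard-coded to show the shape of the idea.
const provenance = {
  mutatedBy: "ai-agent",               // "human" | "ai-agent" | "script"
  reviewedBy: "none",                  // a reviewer's handle, or "none"
  policy: "human-review-required",     // the policy this build must satisfy
  timestamp: new Date().toISOString(),
};

// Ship this file alongside the build output so downstream systems can at
// least distinguish "human-reviewed" from "AI-mutated" artifacts.
fs.writeFileSync(
  path.join(__dirname, "build-provenance.json"),
  JSON.stringify(provenance, null, 2),
  "utf8"
);

// A policy enforcement hook could then refuse to deploy a build that was
// mutated by automation but never reviewed by a human.
if (provenance.mutatedBy !== "human" && provenance.reviewedBy === "none") {
  console.error("[provenance] AI-mutated and unreviewed build: policy violation.");
  process.exitCode = 1;
}

The point is not this particular shape, but that today there is no agreed-upon place in the React build pipeline where such metadata could even be attached or checked.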

  3. Encourage / document patterns for AI-assisted coding that keep humans in the semantic loop.
    For example: recommended workflows for AI-generated components where every mutation must go through tests, diff review, and explicit approval, rather than silent fs.writeFileSync → npm start. One possible shape of such a gate is sketched below.
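One possible shape for such a gate, sketched purely for illustration (the proposal-file convention and the --approve flag are things I am making up here, not existing tooling), replaces the silent write-and-run path from the reproduction above with an explicit human checkpoint:

// gated-agent.js (hypothetical): the agent may only propose changes;
// applying them requires an explicit human decision.
const fs = require("fs");
const path = require("path");

const appPath = path.join(__dirname, "src", "App.js");
const proposalPath = path.join(__dirname, "src", "App.js.proposed");

const banner = "/** PROPOSED BY AI-STYLE AGENT - awaiting human approval */\n";

if (!process.argv.includes("--approve")) {
  // Step 1: the agent writes a proposal next to the real file instead of
  // mutating src/App.js directly.
  const original = fs.readFileSync(appPath, "utf8");
  fs.writeFileSync(proposalPath, banner + original, "utf8");
  console.log("[agent] Wrote src/App.js.proposed; review the diff, run the tests, then rerun with --approve.");
} else if (fs.existsSync(proposalPath)) {
  // Step 2: only after an explicit human decision is the mutation applied.
  fs.copyFileSync(proposalPath, appPath);
  fs.unlinkSync(proposalPath);
  console.log("[agent] Approved proposal applied to src/App.js.");
} else {
  console.log("[agent] Nothing to approve.");
}

The essential property is that producing the mutation and deciding to apply it are separate, attributable steps, instead of one opaque script run.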

  4. At minimum, treat “AI agents as first-class actors” in risk thinking.
    Today the model is still: “developer → code → React → user”.
    The real model in 2026 is more often: “prompt → AI agent → repo mutation → React → user”.
    The middle step is completely unmodeled.

Without some kind of “semantic responsibility boundary” concept, React will remain a perfect substrate for:

  • AI-driven internal supply-chain incidents,
  • unreviewed tracking / data exfiltration,
  • and “it compiled so it must be fine” style failures.

I am not asking the React team to solve all of this alone. I am asking that this be treated as a real bug in the ecosystem’s mental model: the assumption that “the caller is a responsible human” is no longer valid.

React is at the center of a huge part of the modern web. If we continue to ship tools that implicitly trust any automation that can run npm start, then what collapses next will not just be a few companies, but the minimum standard of trust for web applications built with React.


Who I am / why I care

I work on a conceptual layer I call a Semantic Firewall: an attempt to weld language, reality, and responsibility back onto the same chain.

I’m not asking React to adopt my framework. I’m asking React maintainers and Meta engineers:

  • Do you agree that “AI agents as first-class toolchain users” breaks your current threat model?
  • If yes, how do you want the wider ecosystem to participate in fixing it?
  • If no, how do you propose developers defend their React apps when the “developer” is often an automated agent?

If there is interest, I’m happy to collaborate on a concrete proposal or prototype.

Shen-Yao 888π / Hsu Wen-Yao
Founder, Semantic Firewall
Taichung, Taiwan
Email: ken0963521@gmail.com
