October 7th, 2025

Developer and AI Code Reviewer: Reviewing AI-Generated Code in .NET

Wendy Breiding (SHE/HER)
Senior Manager, Product Management

Taking on the responsibility of reviewing AI-generated code is a transformative step for developers. You become a critical gatekeeper for the quality, reliability, and maintainability of code produced by advanced AI tools like GitHub Copilot. While the volume of code reviews may increase, so does the opportunity to raise the bar for your team’s output. This post explores how reviewing AI-generated code can make you more productive and effective, and it provides practical tips for navigating common review challenges.

How Reviewing AI-Generated Code Boosts Productivity

Reports from development teams suggest that integrating AI code generation can increase feature delivery speed by 20–40%. However, this gain is only sustainable if code reviewers ensure the generated code meets the highest standards. By adopting consistent review practices, developers spend less time debugging and refactoring later, resulting in a net productivity gain even with the extra reviews required. Reviewers also report a deeper understanding of the codebase and its technologies as they regularly encounter new patterns and solutions proposed by the AI.

Key Areas for Reviewing AI-Generated Code

When faced with code from AI assistants, code reviewers should pay special attention to the following areas:

1. API Design & Interface Architecture

Interface Abstraction: AI often introduces unnecessary abstraction layers; scrutinize interfaces for simplicity and directness.

@copilot TokenCredential is already abstract, we don't need an interface for it.
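To make that comment concrete, here is a minimal, self-contained sketch of the pattern. The `Credential`, `ApiClient`, and `FakeCredential` names are illustrative stand-ins (the real `TokenCredential` lives in Azure.Core); the point is that wrapping an already-abstract type in an interface adds a layer without adding value:

```csharp
using System;

// Stand-in for an already-abstract type like Azure.Core's TokenCredential.
public abstract class Credential
{
    public abstract string GetToken();
}

// Hypothetical AI-generated layer: an interface that adds nothing,
// since the abstract class can already be subclassed or mocked.
public interface ICredential
{
    string GetToken();
}

// Preferred: depend on the abstract class directly.
public sealed class ApiClient
{
    private readonly Credential _credential;

    public ApiClient(Credential credential) => _credential = credential;

    public string Authorize() => $"Bearer {_credential.GetToken()}";
}

// A test double simply subclasses the abstract type.
public sealed class FakeCredential : Credential
{
    public override string GetToken() => "test-token";
}

public static class Program
{
    public static void Main()
        => Console.WriteLine(new ApiClient(new FakeCredential()).Authorize());
}
```

Asking Copilot to remove the redundant interface usually shrinks the diff and the public API surface at the same time.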

Method Naming: Naming conventions can be inconsistent (e.g., WithHostPort vs WithBrowserPort); ensure adherence to project standards.

Public vs Internal APIs: AI may expose more methods as public than needed—be deliberate about API surface.

Extension Method Patterns: Confirm builder extensions follow established conventions.
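The naming, API-surface, and extension-method points can be sketched together. This is a simplified builder for illustration, not the real Aspire `IResourceBuilder` API:

```csharp
using System;
using System.Collections.Generic;

// Simplified builder used to illustrate fluent extension-method conventions.
public sealed class ResourceBuilder
{
    public Dictionary<string, object> Settings { get; } = new();
}

public static class ResourceBuilderExtensions
{
    // Public surface: one consistently named WithXxx method that
    // returns the builder so calls chain.
    public static ResourceBuilder WithHostPort(this ResourceBuilder builder, int port)
    {
        builder.Settings["HostPort"] = ValidatePort(port);
        return builder;
    }

    // Helpers stay internal rather than widening the public API.
    internal static int ValidatePort(int port)
        => port is >= 1 and <= 65535
            ? port
            : throw new ArgumentOutOfRangeException(nameof(port));
}

public static class Program
{
    public static void Main()
    {
        var builder = new ResourceBuilder().WithHostPort(8080);
        Console.WriteLine(builder.Settings["HostPort"]);
    }
}
```

In review, check that a new `WithXxx` method matches the casing and terminology of its siblings, and that supporting helpers are not needlessly public.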

2. Testing & Testability

Unit Test Coverage: AI-generated methods may lack comprehensive tests for new public methods—insist on full coverage.

@copilot add unit tests for GetOrCreateResourceAsync
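For example, a method like the `GetOrCreateResourceAsync` mentioned above (the real signature isn't shown, so this cache is a hypothetical stand-in) should be exercised on both its creation path and its reuse path:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Hypothetical cache mirroring a GetOrCreateResourceAsync-style API.
public sealed class ResourceCache
{
    private readonly ConcurrentDictionary<string, string> _resources = new();

    public Task<string> GetOrCreateResourceAsync(string name)
        => Task.FromResult(_resources.GetOrAdd(name, n => $"resource:{n}"));
}

public static class Program
{
    public static async Task Main()
    {
        var cache = new ResourceCache();

        // Cover both paths: creation on the first call...
        var first = await cache.GetOrCreateResourceAsync("db");
        // ...and reuse of the cached instance on subsequent calls.
        var second = await cache.GetOrCreateResourceAsync("db");

        Console.WriteLine(first);
        Console.WriteLine(ReferenceEquals(first, second));
    }
}
```

A review comment like the one above is most effective when it names the method and the paths (create, reuse, failure) the tests must cover.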

Test Organization: Prefer snapshot testing (e.g., with the Verify library) over the generic assertions that are common in AI-generated tests.

Concrete Assertions: Review for tests that assert specific values, not just general outcomes.

Preserve Existing Tests: Guard against unnecessary changes to existing tests when integrating new code.

3. File Organization & Architecture

Auto-generated Files: AI may inadvertently modify auto-generated API surface files (/api/.cs)—review for accidental changes.

Layer Separation: Confirm code is placed within the correct architectural context (Infrastructure vs Publishing).

Namespace Organization: Check that new classes and interfaces are organized in the appropriate assemblies.

@copilot Move the tests for BicepUtilities to a BicepUtilitiesTest class

4. Error Handling & Edge Cases

Null Checking: Validate that null-checking patterns are applied consistently.

@copilot This should never be null.
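A minimal sketch of a consistent guard-clause pattern; `ReportGenerator` is illustrative, and `ArgumentNullException.ThrowIfNull` is available in .NET 6 and later:

```csharp
using System;

public sealed class ReportGenerator
{
    private readonly string _outputPath;

    public ReportGenerator(string outputPath)
    {
        // One consistent guard clause at every public entry point.
        ArgumentNullException.ThrowIfNull(outputPath);
        _outputPath = outputPath;
    }

    public string OutputPath => _outputPath;
}

public static class Program
{
    public static void Main()
    {
        try
        {
            _ = new ReportGenerator(null!);
        }
        catch (ArgumentNullException ex)
        {
            // ThrowIfNull captures the argument expression as the parameter name.
            Console.WriteLine(ex.ParamName);
        }
    }
}
```

The reviewer's job is to check that the same pattern appears everywhere, not a mix of `ThrowIfNull`, manual `if` checks, and unchecked parameters.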

Exception Handling: Ensure the use of proper exception types and handling strategies; AI might use generic exceptions.

Edge Case Coverage: Be thorough in considering error scenarios and defensive programming, especially as AI may overlook rare cases.
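Both points can be sketched together with an illustrative parser that rejects its edge cases using specific exception types rather than the base `Exception`:

```csharp
using System;

public static class PortParser
{
    // AI drafts often throw the base Exception type; prefer specific
    // exceptions that callers can catch deliberately.
    public static int ParsePort(string value)
    {
        if (!int.TryParse(value, out var port))
            throw new FormatException($"'{value}' is not a valid port number.");

        // Defensive edge-case check an AI draft might omit.
        if (port is < 1 or > 65535)
            throw new ArgumentOutOfRangeException(nameof(value), port,
                "Port must be between 1 and 65535.");

        return port;
    }
}

public static class Program
{
    public static void Main()
    {
        Console.WriteLine(PortParser.ParsePort("8080"));

        try { PortParser.ParsePort("99999"); }
        catch (ArgumentOutOfRangeException) { Console.WriteLine("rejected"); }
    }
}
```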

5. Configuration & Resource Management

Resource Lifecycle: Inspect resource creation, configuration, and cleanup, as AI code may neglect disposal patterns.

@copilot We should see if the DockerComposeEnvironmentResource already has a dashboard resource and this should noop if it does.
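The no-op request above amounts to making configuration idempotent. A minimal sketch with an illustrative type (not the real `DockerComposeEnvironmentResource`):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical environment resource showing an idempotent "ensure"
// operation: repeated configuration calls must not duplicate resources.
public sealed class ComposeEnvironment
{
    public List<string> Resources { get; } = new();

    public void EnsureDashboard()
    {
        // No-op if a dashboard resource was already added.
        if (Resources.Contains("dashboard"))
            return;

        Resources.Add("dashboard");
    }
}

public static class Program
{
    public static void Main()
    {
        var env = new ComposeEnvironment();
        env.EnsureDashboard();
        env.EnsureDashboard(); // second call must not duplicate
        Console.WriteLine(env.Resources.Count);
    }
}
```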

Configuration Patterns: Confirm adherence to established callbacks and resource configuration approaches.

Environment-Specific Logic: Ensure correct behavior in different contexts (e.g., publish vs run modes).
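As a sketch of mode-dependent behavior: .NET Aspire distinguishes publish from run through its execution context, but the enum and values below are purely illustrative:

```csharp
using System;

public enum ExecutionMode { Run, Publish }

public static class ConnectionStrings
{
    public static string Resolve(ExecutionMode mode) => mode switch
    {
        // At publish time, emit a placeholder the target environment binds later.
        ExecutionMode.Publish => "{postgres.connectionString}",

        // At run time, use the local development value.
        _ => "Host=localhost;Port=5432",
    };
}

public static class Program
{
    public static void Main()
    {
        Console.WriteLine(ConnectionStrings.Resolve(ExecutionMode.Run));
        Console.WriteLine(ConnectionStrings.Resolve(ExecutionMode.Publish));
    }
}
```

AI-generated code often handles only the mode the prompt described; a review should confirm both branches exist and are tested.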

6. Code Quality & Standards

Documentation: AI-generated code often lacks comprehensive XML documentation for public APIs.
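For a public API, reviewers should expect complete XML documentation. A small illustrative example (the helper itself is hypothetical):

```csharp
using System;

public static class SlugHelper
{
    /// <summary>
    /// Converts <paramref name="title"/> to a URL-safe slug.
    /// </summary>
    /// <param name="title">The human-readable title.</param>
    /// <returns>A lowercase, hyphen-separated slug.</returns>
    public static string ToSlug(string title)
        => string.Join("-", title.ToLowerInvariant()
            .Split(' ', StringSplitOptions.RemoveEmptyEntries));
}

public static class Program
{
    public static void Main()
        => Console.WriteLine(SlugHelper.ToSlug("Reviewing AI Code"));
}
```

Copilot can usually fill in the XML doc comments itself when asked, but the reviewer still has to check that the descriptions match the actual behavior.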

Code Style: Watch for formatting and style inconsistencies that AI can introduce.

Performance Considerations: Critically assess the performance implications of AI-generated designs.

Key Insights for Reviewing AI-Generated Pull Requests

  • Iterative Refinement: Expect Copilot PRs to go through more rounds of feedback and incremental edits than human-authored code.
  • Architectural Guidance: Provide strong architectural support to ensure new features mesh with existing patterns and conventions.
  • Standards Enforcement: Maintain rigorous standards, as AI often defaults to generic practices unless explicitly guided.
  • Quality Focus: Devote attention to maintainability and test coverage; AI may solve the immediate task but miss long-term concerns.
  • Incremental Changes: Encourage smaller, focused pull requests to simplify review and integration.

Conclusion: Elevate Your Impact as an AI Code Reviewer

Embracing the role of reviewing AI-generated code allows you to steer your team’s adoption of new technologies toward success. By applying deliberate review strategies, enforcing standards, and guiding iterative refinement, you ensure that the promise of AI productivity is realized without compromising quality. Step up as a reviewer, help make every AI-generated contribution robust and maintainable, and lead the way for excellence in .NET development.


11 comments

  • Odonyde 3 days ago · Edited

    Once upon a time, tools were created to assist developers (paleontologists call it the “Developer! Developer! Developer!” era🦕). But these times are gone. Nowadays, as a modern example of the “Inversion of Control” principle, developers are supposed to assist the slopping tool. 🤖😉

  • Alexander Luzgarev 3 days ago

    Congratulations, you “automated” the actually fun and creative part of programming, and left the chores to human developers. I can see only two reasons for doing that: either you want to sell us AI slop generators (which, I guess, you do), or you fundamentally misunderstand and despise programming.

    • Tyler 3 days ago

      AI/Automation is doing the exact opposite of what we were told to expect all of our lives. It’s destroying Arts and creative professions instead of taking over the menial tasks.

      Need a writer? Nah, just gen AI some bland, soulless article. Need a graphic designer? Nope, gonna prompt some slop for my product image. Need a software architect? No, I can “vibe” that up.

      Need your toilets scrubbed? Ok, we’ll get a janitor on it right away.

  • Jon Worek 4 days ago

    I felt that this post was pretty spot on with my experiences over the last few months of co-writing and reviewing AI generated code. Good job. I use copilot and VS code, and formerly used Cline. Copilot has come a long way in terms of giving me output that requires less oversight. But model choice is extremely important, with the latest Claude Sonnet models giving the best results.

  • Thomas Glaser 5 days ago · Edited

    Was this article AI generated too? Because frankly it feels like it.

    But I’m wondering, if AI PRs require more rounds of feedback, are we truly saving time? A junior will learn over time and seniors will have to provide less feedback. That retention of information/learning across PRs is currently missing. Something that can be fixed I am sure, but at least currently I don’t see the same productivity gains as mentioned.

    • Wendy Breiding (SHE/HER)Microsoft employee Author 5 days ago

      Honestly, yes, AI was used in a lot of ways to help write this blog post. First, I used GitHub Copilot with the GitHub MCP server to review all the issues that were assigned to Copilot in several repos to understand the types of comments that were made and the requested changes that code reviewers were finding and asking for them to fix. Once I had an understanding of the most common comments across those repos, I asked M365 Copilot to create a base blog post for me, setting up the structure for the post. For the rest of...

      • Tyler 4 days ago

        This is just embarrassing.

      • Thomas 5 days ago

        Thanks, I appreciate the honest response! AI has a certain writing style that becomes obvious over time 🙂

        Yeah, maybe I’ll start using it more on some of the less complex issues, those should also be quick to review. I’m all for reducing mental load of programming and making it quicker, just hoping it won’t lead to a maintenance nightmare if we get too “relaxed” on letting AI write the code without serious reviews

  • Thomas Levesque · Edited

    > This post explores how reviewing AI-generated code can make you more productive and effective

    I call bullsh*t. Reviewing code is much, much harder than writing it. Unless you're doing a very cursory review, it can also take more time than writing it. More importantly, it's much more boring than writing code, making it difficult to stay focused while doing a long code review. And when it gets hard to stay focused, you typically start glossing over the details, and you can miss important mistakes in the code. Sure, that can also happen when the code was written by a human....

    • Mark Adamson · Edited

      > But now, you spend much more time doing reviews and much less time coding

      This generally describes what happens as you get more senior too, which tends to mean focussing more on the high level steer that you give people / agents. So copilot coding agent can help to remove yourself from the drudgery of coding in a helpful way once you are able to guide it with the right principles and standards.

      This article seems to have a more pessimistic view of the errors it can make than I have seen in practice. And it's worth highlighting all the...

      • Rolf Kristensen 3 days ago · Edited

        I find the time I spend coding the most fun part, together with design discussions. The code-review part is a necessary job, but when you know the contributor is an AI-bot, then you also know who is going to handle any support issues/questions afterwards.

        But I like the idea of using AI bot to help write unit tests, and help perform code reviews.