Building shared coding guidelines for AI (and people too)
Coding guidelines and standards for agents need to be a little different—more explicit, demonstrative of patterns, and obvious.

Ryan is joined by Kayvon Beykpour, CEO and founder of Macroscope, to dive into the potential of AI-powered code review for managing large codebases, why humans need to stay in the loop on PR reviews so AI tools can debug efficiently and effectively, and how AI can increase visibility through summarization at the abstract syntax tree level and high signal-to-noise code reviews.
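To make the idea of "summarization at the abstract syntax tree level" concrete, here is a minimal sketch using Python's standard `ast` module. It is not Macroscope's implementation, just an illustration of the general technique: instead of summarizing raw text, you parse code into a tree and report its structural elements (here, top-level functions and classes).

```python
import ast

def summarize(source: str) -> list[str]:
    """Return a one-line summary entry for each top-level def/class."""
    tree = ast.parse(source)
    summary = []
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            summary.append(f"function {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            summary.append(f"class {node.name}")
    return summary

sample = '''
def add(a, b):
    return a + b

class Greeter:
    def hello(self):
        return "hi"
'''

print(summarize(sample))  # ['function add(a, b)', 'class Greeter']
```

Working at the AST level means the summary survives formatting changes (whitespace, comment edits) that would confuse a purely textual diff, which is one reason structural summaries tend to have a higher signal-to-noise ratio.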

Ryan is joined by Greg Foster, CTO of Graphite, to explore how much we should trust AI-generated code to be secure, the importance of tooling in ensuring code security whether it's AI-assisted or not, and the need for context and readability for the humans working with AI-generated code.

Would updating a tool few think about make a diff(erence)?

For this episode, we spoke with Carol Lee, PhD, principal research scientist in the Developer Success Lab at Pluralsight, about her research into code review anxiety, how developers are coping, and how a workbook can help.
