Thursday
Room 1
15:00 - 16:00
(UTC+11)
Talk (60 min)
Are LLMs good software engineers?
In the first paper on LLMs trained on code, published in 2021, the authors at OpenAI warned: "[the LLM] may suggest solutions that superficially appear correct but do not actually perform the task the user intended. This could particularly affect novice programmers, and could have significant safety implications depending on the context".
Five years later, we're starting to grapple with the implications of that warning. LLM-generated code is being smuggled into large codebases, some of it successfully, some not. Some open-source maintainers are creating policies forbidding LLM-generated contributions, while others are delegating their tech debt and backlogs to LLMs.
Are LLMs good software engineers?
In this talk I'll share data and insights from an analysis of millions of lines of LLM-generated code. I'll revisit core principles of software engineering, maintainability and quality, and examine how they apply to an AI-generated toolset. I'll share techniques for improving software quality when using LLMs, and my opinions on the long-term risks of letting the LLM write the code.
