How AI Assistance Impacts the Formation of Coding Skills
Overview
Anthropic researchers conducted a randomized controlled trial examining whether AI coding assistance helps or hinders skill development among software developers. The study reveals a clear trade-off: AI assistance yielded only a small, statistically insignificant speed-up in task completion while substantially reducing learning outcomes.
Key Findings
Main Result: Participants using AI assistance scored 17 percentage points lower on the comprehension quiz than those coding manually, a gap equivalent to nearly two letter grades. The AI group averaged 50% on assessments versus 67% for hand-coders (Cohen's d = 0.738, p = 0.01).
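For context on the effect size: Cohen's d is the difference in group means divided by the pooled standard deviation. The group SDs are not reported in this summary, but the quoted figures (67%, 50%, d = 0.738) jointly imply one, as this sketch shows; no raw study data is used.

```python
# Cohen's d = (mean_a - mean_b) / pooled standard deviation.
# Only the means (67% vs 50%) and d = 0.738 are quoted above, so we can
# back out the pooled SD they imply; numbers are illustrative arithmetic,
# not raw study data.
def cohens_d(mean_a: float, mean_b: float, pooled_sd: float) -> float:
    return (mean_a - mean_b) / pooled_sd

implied_sd = (67 - 50) / 0.738                 # ≈ 23 percentage points
print(round(implied_sd, 1))                    # → 23.0
print(round(cohens_d(67, 50, implied_sd), 3))  # → 0.738
```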
Speed Trade-off: AI provided only minimal time savings (roughly two minutes), a difference that did not reach statistical significance.
Critical Skill Gap: The largest performance gap appeared in debugging questions, suggesting that "the ability to understand when code is incorrect and why it fails may be a particular area of concern."
Study Design
Participants: 52 junior software engineers who used Python regularly but had no prior experience with Trio, the library featured in the tasks.
Task Structure:
- Warm-up phase
- Two coding tasks using Trio (asynchronous programming concepts)
- Comprehension quiz
Assessment Types:
- Debugging (identifying code errors)
- Code reading (comprehension)
- Code writing (implementation approach)
- Conceptual understanding (core principles)
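The debugging questions produced the largest gap between groups. The quiz items themselves are not reproduced here; below is an invented example of the kind of subtle async bug such a question might probe: calling a coroutine function without awaiting it, so the work silently never runs.

```python
# Hypothetical example, not taken from the study: a classic async bug in
# which a coroutine object is created but never awaited, so its body never
# executes (Python only emits a RuntimeWarning, not an error).
import asyncio

async def save(record: dict) -> None:
    await asyncio.sleep(0)   # stand-in for an async write
    record["saved"] = True

async def buggy(record: dict) -> None:
    save(record)             # BUG: coroutine created but never awaited

async def fixed(record: dict) -> None:
    await save(record)       # correct: the write actually runs

r_buggy, r_fixed = {}, {}
asyncio.run(buggy(r_buggy))
asyncio.run(fixed(r_fixed))
print(r_buggy, r_fixed)  # → {} {'saved': True}
```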
Interaction Patterns
Low-Scoring Approaches (averaging <40%):
- AI delegation: Complete reliance on code generation
- Progressive AI reliance: Initial questions, then full delegation
- Iterative AI debugging: Repeatedly asking the AI to fix errors rather than to explain them
High-Scoring Approaches (65%+ average):
- Generation-then-comprehension: Code generation followed by explanatory follow-ups
- Hybrid code-explanation: Requesting explanations alongside generated code
- Conceptual inquiry: Asking only conceptual questions, independently resolving errors
Implications
The research suggests that intentional AI use matters profoundly. Developers who engaged actively with AI tools (asking clarifying questions, requesting explanations, and keeping problem-solving in their own hands) retained substantially more knowledge than those who treated AI as an automated solution provider.
Workplace Considerations: Managers should put systems in place that ensure engineers keep learning on the job, particularly the oversight skills needed to supervise increasingly automated codebases.
Limitations
The study measured immediate comprehension rather than long-term retention, involved a relatively small sample, and focused specifically on coding tasks. Questions remain about longitudinal effects, applicability beyond software development, and differences between AI and human assistance.
Conclusion
"Productivity gains may come at the cost of skills necessary to validate AI-written code if junior engineers' skill development has been stunted by using AI in the first place." Strategic deployment requires balancing efficiency with sustained expertise development.