Pragmatic AI Adoption
Avoid the Slop and Amplify Innovation
Software engineering organizations have made AI adoption a priority. After over a year working with these tools, I’ve settled on a pragmatic approach focused on what we know about software engineering and developer experience.
The promised productivity gains haven’t materialized. Recent research found experienced developers took 19% longer to complete tasks when using AI tools, despite believing they were 20% faster. Trust is declining as well, even as the models improve. According to a recent Stack Overflow Developer Survey, only 33% of developers trust AI accuracy, while 46% actively distrust it.
Let’s be clear about what we’re working with. LLMs are stochastic parrots that generate tokens based on probability distributions. They don’t reason, learn from mistakes, or remember what failed minutes ago. As scope and complexity increase, output correctness decreases. This isn’t a bug. It’s a fundamental limitation. This is why it is important to prioritize the human-in-the-loop when adopting AI.
What AI Does Well:
Pattern matching across vast training data
Generating boilerplate and common code structures
Quick context synthesis from large text volumes
Identifying potential issues based on common patterns
Providing near-instant feedback on common mistakes
What Humans Do Well:
Understanding business context and user needs
Making judgment calls on trade-offs
Learning from mistakes and adapting approaches
Reasoning about novel problems from limited data
Collaborating across disciplines to clarify requirements
Where AI Adds Value
These differences matter. An organization’s capacity to innovate drives its long-term value, and you can’t automate away the work that creates competitive advantage. The real value lies in quality-of-life improvements when AI supports rather than replaces developer roles. Here are five ways to adopt AI that augment rather than replace developers. Successful adoption requires proper configuration and realistic expectations.
Quality Gates
One of the easiest ways to introduce AI into the software development lifecycle is to have it review code changes in Pull Requests/Merge Requests. AI acts as a fuzzy logic linter. It won’t eliminate the need for human review, but it can catch common mistakes and suggest improvements that traditional linters cannot.
This reduces the burden on senior developers who perform code reviews and provides faster feedback to code authors, allowing them to address issues more quickly. However, AI tools can produce more noise than signal if not correctly configured.
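The "more noise than signal" problem is usually solved with filtering between the model and the PR. As a minimal sketch, assuming a hypothetical `Finding` shape for AI review output (not any real tool's API), the idea is to post only findings above a severity threshold and cap the comment count:

```python
# Sketch of noise filtering for AI code review. The Finding type, severity
# levels, and thresholds are illustrative assumptions, not a real tool's API.
from dataclasses import dataclass

@dataclass
class Finding:
    file: str
    line: int
    severity: str   # "info", "warning", or "error"
    message: str

SEVERITY_RANK = {"info": 0, "warning": 1, "error": 2}

def worth_posting(findings, min_severity="warning", max_comments=10):
    """Keep only findings at or above the configured severity,
    most severe first, capped so the PR isn't flooded."""
    kept = [f for f in findings
            if SEVERITY_RANK[f.severity] >= SEVERITY_RANK[min_severity]]
    kept.sort(key=lambda f: SEVERITY_RANK[f.severity], reverse=True)
    return kept[:max_comments]
```

Tuning `min_severity` per repository is typically where the signal-to-noise balance is won or lost.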
Code Assistant
When integrated into an IDE, AI enhances refactoring, autocomplete, and quick-fix tools by leveraging patterns from training data. It can also be used to explain complex code when documentation is missing.
Copying and pasting code from a chat interface, or working from a terminal CLI, only adds friction. While articulating a problem can be a valuable thinking tool, a chat interface is the wrong UI for most programming tasks. AI coding CLIs are impressive, but they make it hard for developers to see the full context of the software, relinquishing control to the AI. The AI code assistant should be nearly invisible, keeping the developer in the flow, so invest in an IDE or text editor with native AI integration.
Analysis
The source of many software bugs is a misunderstanding of requirements. AI can help by asking clarifying questions, rephrasing business requirements in engineering terms, and translating needs into technical designs.
This leads to clearer shared understanding between engineering and product, more thoroughly considered designs, and faster identification of requirement gaps before implementation. At the same time, AI can’t make decisions about technical trade-offs or replace human-to-human collaboration.
Debugging
AI assists in analyzing crash logs or error logs to find root causes. The most valuable implementations don’t just read logs in isolation. They correlate signals across the system, linking runtime behavior, performance counters, recent code changes, and deployment history to identify patterns humans might miss.
This accelerates troubleshooting and helps generate root-cause analysis documentation, preventing similar issues in the future.
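The correlation step described above can be done deterministically before anything reaches a model. As a minimal sketch, with hypothetical data shapes (the actual LLM call is out of scope here), this joins a crash timestamp against recent deployments to surface prime suspects for a root-cause prompt:

```python
# Sketch: correlate a crash with deploys that landed shortly before it.
# The deploy dict shape and the two-hour window are illustrative assumptions.
from datetime import datetime, timedelta

def deploys_near_crash(crash_time, deploys, window_hours=2):
    """Return deploys that shipped within `window_hours` before the crash,
    i.e. the candidates worth including in an AI root-cause analysis."""
    window = timedelta(hours=window_hours)
    return [d for d in deploys
            if timedelta(0) <= crash_time - d["time"] <= window]
```

The same join works for recent code changes or performance-counter anomalies; the point is to hand the model a correlated bundle of signals rather than a raw log file.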
Research
Whether designing a feature, developing a strategy proposal, or fixing a bug, web search tasks can be outsourced to AI, which can quickly search multiple sources and summarize the findings. This is useful for exploring both divergent and convergent ideas, weighing trade-offs, and identifying gaps.
With a good prompt or MCP server, AI can be constrained to search only trusted sources and official documentation, reducing the likelihood of hallucinations. But it is still important to think critically about any finding to avoid losing time going down the wrong path.
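The prompt side of that constraint can be as simple as an allowlist of domains baked into the instructions. A minimal sketch, with an illustrative domain list and wording (any MCP-server enforcement would sit alongside this, not in it):

```python
# Sketch: build a research prompt that restricts the AI to trusted sources.
# The domain list and instruction wording are illustrative assumptions.
TRUSTED_DOMAINS = ["docs.python.org", "developer.mozilla.org"]

def research_prompt(question, domains=TRUSTED_DOMAINS):
    """Return a prompt that allowlists sources, demands citations,
    and tells the model to admit gaps rather than guess."""
    allow = ", ".join(domains)
    return (
        f"Answer using only these sources: {allow}. "
        "Cite the exact page for every claim. "
        "If the sources do not cover the question, say so instead of guessing.\n\n"
        f"Question: {question}"
    )
```

Prompt-level constraints are advisory, which is why pairing them with tool-level restrictions (such as an MCP server that only exposes approved sources) is the more robust setup.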
Setting Realistic Expectations
The current LLM architecture can’t replace human ingenuity or drastically improve developer velocity when working on complex software. But it can make daily work less tedious. That’s valuable, even if it’s not the transformation vendors promised.
Be realistic about AI’s impact. Research shows automation often shifts toil rather than eliminating it. Time saved in writing code often reappears later in increased review time, rework, or bug fixing. So instead of positioning AI as a productivity improvement, position it as a developer experience improvement to amplify what makes great engineers valuable: judgment, creativity, and reasoning through complex problems.
The risk of poor AI implementation isn’t just immediate productivity loss. It’s long-term skill erosion. Over-automation creates engineers who can’t diagnose problems because they never learned the fundamentals. Junior engineers need to make mistakes, debug their own code, and understand system behavior from first principles. AI should guide this learning, not replace it.


