
The Evolution of the Software Engineer in the AI and Agentic Era

David Sanchez · 19 min read

Writing Code Was Never the Job: Delivering Outcomes Was

For decades, the role of the Software Engineer has evolved alongside tooling, platforms, and abstractions. From low-level systems programming to high-level frameworks, from waterfall to agile, from on-prem to cloud, each shift changed how software is built, but not who ultimately builds it.

The rise of AI-assisted development, and more recently agentic software engineering, represents a fundamentally different kind of shift. Software engineers are no longer the sole producers of code. They are increasingly becoming designers of systems that produce code, operators of autonomous collaborators, and stewards of quality, security, and intent.

Evolution of the Software Engineer

In my previous post on DevOps foundation practices for agentic software engineering, I focused on the systems, pipelines, and guardrails required to safely introduce agents into real-world environments. In this post, I want to zoom out and focus on the human side of the equation:

How is the role of the software engineer evolving in the AI and agentic era?

This is not about replacing engineers. It is about redefining leverage.


From Code Author to System Designer

Traditionally, the software engineer's primary output was code. Even when working in teams, ownership was explicit: a feature, a service, a module. You were judged by the quality, elegance, and correctness of your code.

With AI copilots and agents like GitHub Copilot, this paradigm is shifting rapidly.

Increasingly, engineers are responsible for:

  • 🎯 Defining intent instead of writing every implementation detail
  • 📐 Designing constraints and contracts that guide autonomous behavior
  • 🔍 Reviewing, correcting, and refining outputs produced by non-human actors
  • 🏗️ Architecting repositories and pipelines that agents can operate safely within

The engineer shifts from author to architect of behavior.

This mirrors previous transitions in our industry:

| Era | What Got Abstracted |
| --- | --- |
| Compilers | Assembly language |
| Frameworks | Infrastructure plumbing |
| Cloud | Hardware management |
| Agentic AI | Execution of engineering work itself |

Each layer of abstraction didn't eliminate the need for engineers; it elevated the problems they could solve. Agentic AI is doing the same thing, at a far greater scale.


The Engineer as Orchestrator

Perhaps the most powerful metaphor for the modern software engineer is that of a conductor: not playing every instrument, but ensuring the entire orchestra produces a coherent, beautiful result.

In the agentic era, engineers are becoming orchestrators of multi-agent workflows. This goes far beyond delegating a single task to a copilot. It means designing, coordinating, and supervising systems where multiple agents, each with different capabilities, work together toward a shared goal.

What Orchestration Looks Like in Practice

Imagine a typical feature delivery workflow powered by agents:

  1. A planning agent receives a GitHub Issue and breaks it down into sub-tasks with acceptance criteria
  2. A coding agent (like GitHub Copilot coding agent) picks up a sub-task, reads the codebase, and opens a pull request with an implementation
  3. A testing agent generates and runs test suites against the proposed changes
  4. A security agent scans for vulnerabilities, secrets, and compliance violations
  5. A documentation agent updates API docs, changelogs, and README files based on the changes
  6. A deployment agent stages the change in an ephemeral environment for validation

The software engineer orchestrates this entire flow, defining the sequence, handling exceptions, resolving conflicts between agents, and making the final judgment calls that require human context.
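To make this concrete, here is a minimal orchestration sketch. It is a toy, not a real GitHub integration: the agent names, the `AgentResult` shape, and the confidence-based escalation rule are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical result type: what each agent hands back to the orchestrator.
@dataclass
class AgentResult:
    ok: bool
    summary: str
    confidence: float  # 0.0 - 1.0, used to decide when a human must step in

# A "step" pairs an agent name with the callable that runs it.
@dataclass
class Step:
    name: str
    run: Callable[[dict], AgentResult]

def orchestrate(issue: dict, steps: list[Step], min_confidence: float = 0.8) -> None:
    """Run agents in sequence; stop and escalate when one fails or is unsure."""
    context = {"issue": issue, "artifacts": {}}
    for step in steps:
        result = step.run(context)
        print(f"[{step.name}] ok={result.ok} confidence={result.confidence:.2f} {result.summary}")
        if not result.ok or result.confidence < min_confidence:
            # Exception handling: fall back to a human reviewer instead of continuing blindly.
            print(f"Escalating to a human: '{step.name}' needs review before the flow continues.")
            return
        context["artifacts"][step.name] = result.summary
    print("All agents finished; a human still signs off on the final merge.")

# Toy agents standing in for the planning, coding, testing, and security agents above.
steps = [
    Step("planning", lambda ctx: AgentResult(True, "3 sub-tasks with acceptance criteria", 0.95)),
    Step("coding", lambda ctx: AgentResult(True, "implementation drafted as a pull request", 0.90)),
    Step("testing", lambda ctx: AgentResult(True, "42 tests generated, all passing", 0.88)),
    Step("security", lambda ctx: AgentResult(True, "no secrets or known CVEs found", 0.70)),  # low confidence -> escalate
]

orchestrate({"title": "Add export-to-CSV feature"}, steps)
```

The point is not the code itself but the shape of the job: the engineer defines the sequence, the thresholds, and the escalation path, then judges the results.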

The Orchestration Skill Set

| Capability | What the Engineer Does |
| --- | --- |
| Workflow design | Defines which agents participate, in what order, and with what permissions |
| Context management | Ensures each agent has the right context: repo structure, coding standards, business rules |
| Conflict resolution | Mediates when agents produce contradictory outputs (e.g., a performance optimization that breaks a security rule) |
| Exception handling | Designs fallback paths for when agents fail, hallucinate, or produce low-confidence results |
| Quality orchestration | Sets the bar for what "good enough" looks like at each stage, and escalates when it isn't met |
| Feedback loops | Feeds agent outcomes back into prompts, configurations, and guardrails to improve future runs |

From Solo Player to Conductor

This shift has profound implications for how engineering teams are structured:

  • Individual contributors become more impactful: one engineer supervising five agents can deliver what previously required a team of ten
  • Team leads focus on designing orchestration patterns rather than assigning individual tasks
  • Architects define the "agent topology": which agents exist, what they can access, and how they interact
  • Platform engineers build the infrastructure that makes multi-agent orchestration reliable and observable

The best analogy isn't a manager delegating tasks; it's a film director coordinating actors, crew, and technology to bring a vision to life. The director doesn't operate every camera or say every line, but they are responsible for the coherence and quality of the final product.

The engineer of the future doesn't just write code or review code; they orchestrate systems of agents that write, test, secure, and ship code.


The New Engineering Loop

In classic software development, the feedback loop looked like this:

Design → Code → Test → Deploy → Operate

In an agentic model, the loop evolves into something more strategic:

Define Intent → Configure Agents → Review Outcomes → Reinforce Constraints → Iterate

The engineer's value moves upstream:

  • Clear problem framing: agents can't infer business context
  • High-quality prompts, specifications, and examples: the quality of the input determines the quality of the output
  • Well-designed repositories, pipelines, and environments: these become the "operating system" for agents
  • Effective review and feedback: catching what agents miss and teaching them through constraints

Code is still fundamental, but it is no longer the bottleneck. Clarity of intent is.

A Practical Example

Consider a scenario where you need to build a new REST API endpoint:

Before AI agents: You'd spend hours writing boilerplate, wiring up middleware, writing validation logic, creating tests, and documenting the API.

With AI agents: You define the contract (OpenAPI spec, input/output types, validation rules, security requirements), configure GitHub Copilot coding agent to generate the implementation, review the output against your architectural standards, and iterate on edge cases.

The engineer who can frame the problem precisely and define clear constraints will get dramatically better results than one who just says "write me an endpoint."
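As a hedged sketch of what "defining the contract" could look like before handing the task to an agent, the snippet below uses plain dataclasses; the endpoint, field names, and rules are invented for illustration and are not tied to any real codebase.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical contract for a POST /orders endpoint. The engineer writes this;
# an agent is asked to produce the implementation that satisfies it.

@dataclass
class CreateOrderRequest:
    customer_id: str   # must be a non-empty UUID string
    items: list[dict]  # at least one item, each with a sku and quantity >= 1
    currency: str      # ISO 4217 code, e.g. "USD"

@dataclass
class CreateOrderResponse:
    order_id: str
    created_at: datetime
    total_cents: int   # integer cents, never floats for money

# Constraints the agent must honor, expressed as reviewable acceptance criteria.
ACCEPTANCE_CRITERIA = [
    "Reject requests with an empty items list (HTTP 422).",
    "Authenticate via the existing bearer-token middleware; no new auth schemes.",
    "Persist through the existing repository interface only; no raw SQL.",
    "Return 201 with CreateOrderResponse on success.",
]
```

An agent given this contract, the repository's conventions, and the acceptance criteria has far less room to guess than one given "write me an endpoint."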


Software Engineers as Curators of Trust

One of the most underestimated shifts in this evolution is the centrality of trust.

When Humans Write Code

Trust is interpersonal and process-driven:

  • Code reviews build shared understanding
  • Ownership models create accountability
  • Team norms establish quality baselines

When Agents Write Code

Trust becomes systemic and architectural.

Software engineers now participate in a new kind of trust engineering:

| Trust Domain | Engineer's Responsibility |
| --- | --- |
| Permissions | Defining what agents can and cannot do |
| Blast Radius | Establishing boundaries for autonomous changes |
| Identity | Ensuring agent actions are traceable and auditable |
| Secrets | Managing credential lifecycles agents depend on |
| Policy | Encoding organizational standards as automated checks |
| Compliance | Maintaining regulatory alignment with AI-assisted workflows |

This makes security, compliance, and governance core engineering concerns, not afterthoughts delegated to separate teams.
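To show what "encoding organizational standards as automated checks" might look like, here is a rough, stdlib-only sketch of a blast-radius gate for agent-authored pull requests. The protected paths and the file-count limit are placeholder assumptions a real team would replace with its own policy.

```python
# Hypothetical blast-radius policy for agent-authored pull requests.
# A CI job would call check_agent_pr() with the list of changed files.

PROTECTED_PATHS = ("infra/", ".github/workflows/", "secrets/", "payments/")  # assumption
MAX_CHANGED_FILES = 25  # assumption: oversized agent PRs always get a human review

def check_agent_pr(changed_files: list[str], author_is_agent: bool) -> list[str]:
    """Return a list of policy violations; an empty list means the PR may proceed."""
    violations = []
    if not author_is_agent:
        return violations  # human PRs follow the normal review process
    if len(changed_files) > MAX_CHANGED_FILES:
        violations.append(f"PR touches {len(changed_files)} files (limit {MAX_CHANGED_FILES}).")
    for path in changed_files:
        if path.startswith(PROTECTED_PATHS):
            violations.append(f"Agent changes to '{path}' require a human-owned PR.")
    return violations

if __name__ == "__main__":
    print(check_agent_pr(["src/api/orders.py", "infra/network.tf"], author_is_agent=True))
```

A gate like this would typically run before any human review is even requested, so reviewers only see changes that are already inside the agreed boundaries.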

The modern software engineer must understand:

Trust is engineered, not assumed.


Human-in-the-Loop Is a Feature, Not a Failure

A common misconception is that autonomy equals full automation: if an agent needs human approval, it must be "broken" or "not smart enough."

In practice, the most effective agentic systems are human-in-the-loop by design. This isn't a limitation; it's an architectural decision that reflects real-world complexity and risk.

Software engineers increasingly:

  • βš–οΈ Decide where autonomy is allowed: auto-merge dependency updates, but require review for business logic
  • 🚦 Define when human approval is mandatory: production deployments, security-sensitive changes, breaking API changes
  • 🆘 Act as escalation points: for ambiguity, edge cases, and high-risk decisions

Pull requests, environments, and releases become control surfaces, not bureaucracy.

This reframes familiar tools in a new light:

| Tool | Traditional Role | Agentic Role |
| --- | --- | --- |
| GitHub Issues | Task tracking | Agent work assignments |
| Pull Requests | Code review workflow | Human-agent collaboration interface |
| CI/CD Pipelines | Build automation | Agent supervision and validation |
| Environments | Deployment targets | Agent testing sandboxes |
| Branch Protection | Process enforcement | Autonomy boundaries |
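A minimal sketch of how those autonomy boundaries could be expressed in code, assuming a team labels its pull requests: the label names and routing rules below are hypothetical conventions, not a built-in GitHub capability.

```python
# Hypothetical routing rule: which agent-authored changes may auto-merge and
# which ones stop at the pull request as a human control surface.

AUTO_MERGE_LABELS = {"dependency-update", "docs-only", "test-only"}          # assumption
ALWAYS_HUMAN_LABELS = {"breaking-change", "security", "production-deploy"}   # assumption

def requires_human_approval(labels: set[str], touches_business_logic: bool) -> bool:
    if labels & ALWAYS_HUMAN_LABELS:
        return True                      # high-risk categories always need sign-off
    if touches_business_logic:
        return True                      # autonomy is not extended to business rules
    return not labels.issubset(AUTO_MERGE_LABELS)  # anything unclassified defaults to review

# Example: a routine dependency bump can flow through; a pricing change cannot.
print(requires_human_approval({"dependency-update"}, touches_business_logic=False))  # False
print(requires_human_approval({"feature"}, touches_business_logic=True))             # True
```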

AI Is a Superpower, With a Warning Label

Let's be honest about something: the speed at which AI can help you build is absolutely incredible.

What used to take days (scaffolding a project, writing CRUD endpoints, generating test suites, building CI/CD pipelines, drafting documentation) can now happen in minutes. An engineer working effectively with GitHub Copilot can prototype an entire feature in a single afternoon that might have taken a sprint. That velocity is real, and it's transformative.

But here's the part that doesn't get enough attention: AI is confident, fast, and sometimes completely wrong.

The Failure Modes You Must Watch For

AI coding assistants and agents can and will:

  • 📦 Reference packages and dependencies that don't exist: AI models can hallucinate library names, inventing plausible-sounding packages that have never been published. If you install them blindly, you'll get build failures, or worse, you could fall victim to dependency confusion attacks where malicious actors register those hallucinated package names (see the sketch after this list)
  • 🔄 Use deprecated APIs and outdated patterns: models are trained on historical data, so they may suggest approaches that were best practice two years ago but are now obsolete or insecure
  • 🔐 Generate code with subtle security vulnerabilities: SQL injection, improper input validation, hardcoded secrets, insecure defaults; AI doesn't inherently understand your threat model
  • 🧩 Produce code that compiles but doesn't do what you intended: syntactically correct but semantically wrong, especially for complex business logic and edge cases
  • 📊 Introduce performance issues invisibly: inefficient algorithms, unnecessary database calls, memory leaks that only manifest under load
  • 🔗 Make plausible but incorrect architectural decisions: coupling services that should be decoupled, choosing the wrong data structure, or violating patterns established elsewhere in your codebase
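The first failure mode above is cheap to guard against in CI. Here is a rough sketch that asks the public PyPI index whether each declared dependency exists before anything is installed. It assumes a plain requirements.txt and ignores version pins; existence alone does not rule out dependency confusion, so treat it as a first filter rather than a full defense.

```python
import sys
import urllib.error
import urllib.request

def package_exists_on_pypi(name: str) -> bool:
    """Return True if the package name resolves on the public PyPI index."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 -> likely a hallucinated or misspelled package

def check_requirements(path: str = "requirements.txt") -> int:
    missing = []
    with open(path) as fh:
        for line in fh:
            # Naive parsing: keep only the package name, skip comments and blanks.
            name = line.split("==")[0].split(">=")[0].strip()
            if name and not name.startswith("#") and not package_exists_on_pypi(name):
                missing.append(name)
    for name in missing:
        print(f"WARNING: '{name}' not found on PyPI - possible hallucinated dependency")
    return 1 if missing else 0

if __name__ == "__main__":
    sys.exit(check_requirements())
```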

The Danger of "It Works, Ship It"

The most insidious risk isn't obviously broken code; it's subtly wrong code that passes CI and looks reasonable in review. When AI generates something that compiles, passes tests, and reads well, the temptation to merge without deep scrutiny is enormous. This is how technical debt accumulates at AI speed.

AI doesn't understand your code. It predicts the next likely token. That's an important distinction.

This doesn't mean AI is unreliable; it means AI requires an engineer who understands what they're looking at. The AI is the accelerator. The engineer is the steering wheel, the brakes, and the GPS.

When It Works, It's Extraordinary

With proper foundations in place, the results speak for themselves:

  • ⚑ Tasks that took days now take hours: boilerplate, scaffolding, migrations, test generation
  • 🌍 Language and framework barriers shrink: an engineer proficient in C# can confidently contribute to a Python project with AI assistance
  • 🧪 Test coverage improves dramatically: AI can generate comprehensive test cases including edge cases you might not have considered
  • 📝 Documentation gets written: because AI makes it nearly effortless, documentation that would have been skipped actually gets created
  • 🔄 Refactoring becomes less scary: large-scale code transformations that would have been too risky to attempt become feasible
  • 🚀 Prototyping accelerates innovation: ideas can be validated with working code in hours instead of weeks

The key is understanding that AI is a multiplier, not a replacement, for engineering judgment. Multiply good judgment and you get extraordinary results. Multiply poor judgment and you get extraordinary problems.


Skills That Matter More and Skills That Matter Less

The skillset of a software engineer is being reweighted, not replaced. Here's what's shifting:

🔺 Skills That Matter More

| Skill | Why It Matters Now |
| --- | --- |
| Systems thinking | Understanding end-to-end workflows, dependencies, and failure modes across distributed systems |
| Specification and communication | Clarity over cleverness: the better you express intent, the better agents perform |
| DevOps and platform literacy | Pipelines, environments, infrastructure-as-code: the operating system for agents |
| Security fundamentals | Identity, permissions, threat modeling: non-negotiable in an agent-assisted world |
| Judgment and critical thinking | Knowing when not to automate, recognizing subtle bugs in AI-generated code, evaluating tradeoffs |
| Architecture and design | Defining boundaries, contracts, and patterns that scale with autonomous contributors |
| Prompt engineering | Crafting effective instructions, examples, and constraints for AI systems |
| Data literacy | Understanding what agents need, how they learn, and what signals to trust |

🔻 Skills That Matter Less (But Don't Disappear)

| Skill | What's Changing |
| --- | --- |
| Memorizing syntax | IDEs and agents handle this instantly |
| Boilerplate generation | Agents produce scaffolding faster and more consistently |
| Manual scaffolding | Project templates and generators are increasingly AI-driven |
| Rote refactoring | Pattern-based transformations are agent-friendly tasks |

These skills aren't obsolete; they're simply no longer differentiators. The engineer who can write a perfect for loop but can't design a secure, observable, maintainable system will struggle to stay relevant.


Embracing AI at Every Career Level

One of the most common questions engineers ask is: "Where do I even start with AI?" The answer depends on where you are in your career, but the opportunity exists at every level.

🌱 Junior Engineers: Build Foundations First, Then Amplify

If you're early in your career, AI can feel like a shortcut, and that's exactly the trap to avoid.

Junior engineers who rely on AI without understanding the fundamentals risk becoming prompt operators instead of software engineers. They can generate code but can't debug it. They can scaffold a project but can't explain why it's structured that way. They can pass an interview with AI assistance but struggle when they need to reason about a production incident at 2 AM.

The foundation matters more than ever:

  • Learn data structures and algorithms: not to memorize them, but to recognize when AI suggests an O(n²) solution where O(n log n) exists (see the sketch after this list)
  • Understand how systems work: networking, databases, operating systems, memory management; this is what lets you evaluate whether AI-generated code will actually perform in production
  • Practice debugging without AI: build the muscle of reading stack traces, setting breakpoints, and reasoning about state, because AI can't debug your production system for you (yet)
  • Write code by hand regularly: the act of writing builds intuition that you'll rely on when reviewing AI output
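As a tiny illustration of the first point above: both functions below return the values two lists share, but the nested-membership version an assistant might happily suggest is quadratic, while the set-based version is roughly linear.

```python
def common_items_quadratic(a: list[int], b: list[int]) -> list[int]:
    # O(n * m): for every element of a, scan all of b.
    return [x for x in a if x in b]

def common_items_linear(a: list[int], b: list[int]) -> list[int]:
    # O(n + m): one pass to build a set, one pass to check membership.
    seen = set(b)
    return [x for x in a if x in seen]

# Same result, very different behavior once the lists hold millions of entries.
print(common_items_quadratic([1, 2, 3, 4], [3, 4, 5]))  # [3, 4]
print(common_items_linear([1, 2, 3, 4], [3, 4, 5]))     # [3, 4]
```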

The good news? AI is an incredible learning accelerator. Use GitHub Copilot to explore unfamiliar codebases, ask it to explain patterns you don't recognize, and use it as a teaching assistant, not a substitute for learning.

The junior engineer who uses AI to learn faster will outpace the one who uses AI to avoid learning.

πŸ”οΈ Mid-Level Engineers: The Sweet Spot of Amplification​

Mid-level engineers are in the best position to benefit from AI right now. You have enough experience to evaluate AI output critically, and enough daily implementation work where AI can save you significant time.

This is where the magic happens:

  • You know enough to spot when AI suggests a bad pattern, but AI helps you implement good patterns faster
  • You understand your codebase well enough to give AI meaningful context, and the better the context, the better the output
  • You can focus your freed-up time on higher-value work (system design, mentoring, architectural decisions) that accelerates your growth toward senior roles
  • You become the bridge between AI capabilities and team adoption, helping juniors use AI effectively and showing seniors what's possible

🎯 Senior Engineers and Architects: Redefine Your Leverage

Senior engineers and architects might be tempted to dismiss AI tools as toys that produce mediocre code. That's a mistake, and an opportunity cost.

Your deep expertise is precisely what makes AI most powerful in your hands:

  • Your architectural judgment means you can direct AI to produce code that fits within well-designed systems, rather than letting it make structural decisions
  • Your pattern recognition lets you review AI output at speed; you'll catch subtle bugs, security issues, and anti-patterns that less experienced engineers would miss
  • Your domain knowledge means you can provide AI with context that produces dramatically better results: the difference between a generic implementation and one that handles real-world edge cases
  • Your organizational influence means you can shape how your entire team or organization adopts AI, defining guardrails, establishing best practices, and creating a culture of responsible AI use

The senior engineer who embraces AI doesn't write more code; they design better systems, review faster, mentor more effectively, and multiply the output of their entire team.

The Universal Truth: Critical Thinking Is the Multiplier

Regardless of your level, the single most important skill in the AI era is critical thinking.

Adopting AI is not about:

  • ❌ Copying AI-generated code and hoping it works
  • ❌ Accepting the first suggestion without understanding it
  • ❌ Treating AI output as authoritative
  • ❌ Abandoning your engineering judgment because "the AI said so"

Adopting AI is about:

  • ✅ Using AI to generate options, then applying your judgment to choose the best one
  • ✅ Understanding why AI suggests what it suggests, not just what it suggests
  • ✅ Verifying AI output against your knowledge of the system, the requirements, and the constraints
  • ✅ Building workflows where AI handles the repetitive work while you focus on the work that requires human reasoning

The engineer who thinks critically and uses AI will always outperform the engineer who does either one alone.


The Rise of the "Agent-Ready" Engineer

We are starting to see a new archetype emerge in the industry:

The Agent-Ready Software Engineer.

This engineer:

  • πŸ—οΈ Designs repositories that agents can safely operate in, clear structure, consistent conventions, comprehensive documentation
  • πŸ”„ Builds pipelines that assume non-human contributors, automated testing, required checks, progressive rollouts
  • 🧊 Treats infrastructure as ephemeral and disposable, spin up, test, tear down, repeat
  • πŸ” Optimizes for review, rollback, and recovery because agents will make mistakes, and the system must handle it gracefully
  • πŸ“ Establishes guardrails not to slow things down, but to enable safe speed

They don't ask:

"Can an agent do this?"

They ask:

"Under what constraints should an agent do this?"

The Agent-Ready Checklist

Here's a practical self-assessment for engineering teams:

  1. Well-documented repositories: Are your repos structured with clear conventions, README files, and contribution guides that both humans and agents can follow?
  2. Non-bypassable quality gates: Do your pipelines enforce checks that no one, human or agent, can skip?
  3. Disposable environments: Can you spin up ephemeral environments to safely test agent-generated changes in isolation?
  4. Automated security posture: Are scanning, secrets detection, and policy enforcement baked into every PR automatically?
  5. Clear ownership models: Do you have CODEOWNERS, required reviewers, and domain-specific approval rules in place?
  6. Fast rollback capability: Can you revert any deployment quickly and safely when something goes wrong?
  7. Comprehensive test coverage: Are your tests robust enough to catch regressions introduced by AI-generated code?
  8. Equal rigor for all PRs: Do you review agent-generated pull requests with the same scrutiny as human-generated ones?

What Doesn't Change

Despite all of this evolution, some things remain constant, and arguably become more important, not less:

  • ✅ Software engineering is still about solving human problems; technology is the means, not the end
  • ✅ Quality still matters: faster doesn't mean sloppier
  • ✅ Reliability still matters: users don't care whether a bug was written by a human or an agent
  • ✅ Ethics still matter: bias in AI-generated code is still bias, and engineers are still accountable
  • ✅ Empathy still matters: understanding user needs, team dynamics, and business context

Agents do not remove responsibility; they concentrate it. The engineer remains accountable for outcomes, even when they are not the one typing every line.

This is perhaps the most important mindset shift: ownership expands, not contracts.


Looking Forward: Five Predictions

The evolution of the software engineering role is not a cliff; it's a slope. Here are five predictions for where we're heading:

  1. "Agent-readiness" becomes a team metric: just like deployment frequency and lead time, organizations will measure how effectively they collaborate with AI agents.

  2. Code review skills become premium: the ability to quickly assess AI-generated code for correctness, security, and architectural alignment becomes a top-tier engineering skill.

  3. Specification languages evolve: we'll see new tools and formats that bridge the gap between natural language intent and machine-executable specifications.

  4. Engineering roles diversify further: new specializations emerge, such as agent supervisors, trust engineers, prompt architects, and AI quality assurance specialists.

  5. The best engineers become force multipliers: a single engineer with strong architectural skills and agent fluency will have the impact traditionally associated with a small team.

Engineers who embrace agentic systems early will gain disproportionate leverage. Those who resist entirely will find themselves optimizing the wrong part of the workflow.

The future software engineer is:

  • Less focused on keystrokes, more focused on systems
  • Less individualistic, more orchestral
  • Less defined by what they can type, more defined by what they can envision

Not replaced by AI, but amplified by it.


Closing Thoughts

Agentic software engineering forces us to confront a hard truth:

Writing code was never the job. Delivering outcomes was.

The tools are changing. The responsibility is not.

The role of the software engineer is not disappearing; it is evolving into something broader, more strategic, and ultimately more impactful than ever before.

If you're a software engineer reading this, the best thing you can do right now is:

  1. Get hands-on with AI agents: GitHub Copilot is a great starting point
  2. Invest in your DevOps and platform skills: these are the foundation agents need
  3. Practice reviewing AI-generated code: build your judgment muscle
  4. Think in systems, not just code: the higher-order skill that compounds over time
  5. Stay curious: the pace of change is accelerating, and curiosity is your best compass

The engineers who thrive in this era won't be the ones who write the most code. They'll be the ones who design the best systems, ask the sharpest questions, and deliver the greatest outcomes.
