AI and Developers: What does the future hold?
Tim O’Reilly has a post out about the huge shift AI is making in our relationship to software engineering.
Scary title.
But the content is less so. Why?
How many of you started with technologies you still use exactly the same way today? Even if you’re a master Lisp/Clojure developer, you’ve still evolved, right? Someday I’ve got to get my head fully around pure FP.
The post starts with the familiar progression of tech, so I’ll spare you my C64 -> [scene missing] -> AI code assistance fable. We all got here from different starting lines.
What encourages me from Tim’s post is that he sees not a legion of talented programmers out of work because of AI, but a legion of talented programmers enabled to focus more deeply on solving business problems. And AI is going to drive a wave of new tools to be written, and demand for people who can make sense of what’s being generated.
Of course, it’s very helpful if you’re a React expert and you ask Cursor to create a nice Next.js application using your favorite design engine.
Tim sees a future where someone more deeply involved with the business could also participate, building starter projects, proofs of concept, even actual running software, and then partner with a more experienced engineer (someone with both technical depth and stronger AI experience and tooling) who could shape it further.
I know the big fear here is loss of control. The fear is real.
I don’t think about my video drivers anymore (mainly because I no longer have a Surface Book 2 running Linux, a wonderfully bad disaster story in its own right), and in the same way, it may be possible in a few years to treat generated code a little more like that.
Even right now, you can get things moving: “We need to split this application’s features into two sections and secure each one to a different role.” [Clickity clackity, a feedback loop over a few hours or days as we iterate on the ideas with the AI tool/chat/agent, showing progress to our stakeholders along the way.] Done.
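To make that concrete, here’s a minimal sketch of where a conversation like that might land, assuming a Next.js app (to match the earlier example) where the user’s role is readable from a session cookie. The section paths, the role names, and the `getRoleFromSession` helper are all illustrative, not from any real project:

```ts
// middleware.ts — a rough sketch of "split into two sections, secure each
// to a different role," assuming the role lives in a session cookie.
import { NextRequest, NextResponse } from "next/server";

// Hypothetical: map each feature section to the role allowed to see it.
const SECTION_ROLES: Record<string, string> = {
  "/admin": "admin",
  "/reports": "analyst",
};

function getRoleFromSession(req: NextRequest): string | null {
  // Stand-in for real session logic (a signed JWT, an auth library, etc.).
  return req.cookies.get("role")?.value ?? null;
}

export function middleware(req: NextRequest) {
  const role = getRoleFromSession(req);
  for (const [prefix, requiredRole] of Object.entries(SECTION_ROLES)) {
    if (req.nextUrl.pathname.startsWith(prefix) && role !== requiredRole) {
      return NextResponse.redirect(new URL("/unauthorized", req.url));
    }
  }
  return NextResponse.next();
}

export const config = { matcher: ["/admin/:path*", "/reports/:path*"] };
```

The point isn’t this exact code; it’s that the AI gets you to a first draft like this in minutes, and the feedback loop with your stakeholders shapes it from there.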
You have to be OK with small experiments: committing incrementally, rolling back when they don’t pan out, testing ideas out quickly (feature flags, anyone? OpenTelemetry instrumentation to prove or challenge assumptions?). Honeycomb, for example, fits quite nicely in here.
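A rough sketch of what that loop looks like in code, assuming a TypeScript service with the OpenTelemetry JS API on hand. The flag client, flag name, and render functions are hypothetical; the OpenTelemetry calls are the real API:

```ts
// Gate the new code path behind a feature flag and record the flag state
// on a trace span, so a tool like Honeycomb can compare old vs. new
// behavior in production before you commit to the change.
import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("nav-experiment");

// Hypothetical flag client backed by an env var; a real one might be
// LaunchDarkly, Unleash, OpenFeature, etc.
const flags = {
  isEnabled(name: string): boolean {
    return process.env[`FLAG_${name.toUpperCase().replace(/-/g, "_")}`] === "1";
  },
};

// Stand-ins for the two implementations under test.
async function renderLegacyNav(userId: string) { return `legacy nav for ${userId}`; }
async function renderSplitNav(userId: string) { return `split nav for ${userId}`; }

export async function renderNavigation(userId: string): Promise<string> {
  return tracer.startActiveSpan("render-navigation", async (span) => {
    const useSplitNav = flags.isEnabled("split-nav");
    // Attach the flag state so traces can be grouped and compared downstream.
    span.setAttribute("app.feature_flag.split_nav", useSplitNav);
    try {
      return useSplitNav ? await renderSplitNav(userId) : await renderLegacyNav(userId);
    } finally {
      span.end();
    }
  });
}
```

Flip the flag for a slice of traffic, watch the traces, and you’ve either proven the idea or rolled it back cheaply.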
But if we think that kind of work will happen automatically, via a completely closed loop of “I tell the AI to do it, and I get a finished product,” we’re crazy. It’s all about feedback loops, and the human must be in there. I assume the massive mistake of “let’s lay off 90% of our programmers” is about to run into “oops, we don’t have enough people to get things done anymore!”
What do you think?