New Tools, Same Job
- Cameron Perry
3 min read
I wonder how evergreen any post about AI might be. After all, things are changing faster than most of us can keep up. Yet amidst the rapid change, the core concepts underlying our work in software remain steady, even as the tools and languages evolve.
Whether it's punching patterns of holes into cards, writing assembly to run on "bare metal", or using a high-level language of your choice, we do it for the same purpose: giving instructions to a machine to complete some sort of task.
Telling a large language model what we need is one more abstraction on top of a long history of transforming instructions into outcomes. The LLM will write code in your programming language of choice, which likely compiles into some kind of bytecode, which in turn executes specific instructions on the processor it's running on. Even if you were writing Python, JavaScript, or another language yourself, I'm fairly certain you didn't touch the interpreter or compiler that translated your instructions into instructions for the CPU or GPU.
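If you want to see one of those layers for yourself, CPython ships a standard-library `dis` module that disassembles a function into the bytecode the interpreter actually executes. A minimal sketch (the trivial `add` function here is just for illustration):

```python
import dis

def add(a, b):
    # Plain Python: one abstraction above bytecode,
    # and many above the machine instructions that eventually run.
    return a + b

# Print the bytecode the CPython interpreter executes for add().
dis.dis(add)
```

The exact opcodes you'll see vary by Python version (a newer interpreter emits `BINARY_OP` where an older one emitted `BINARY_ADD`), which rather proves the point: even "just writing Python" already rests on layers of translation you never touch.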
What's different now with powerful coding assistants is that (in theory, and often in practice) you can skip all the abstractions, describe the outcomes you desire, and let the model translate them into code. Sure, "one-shotting" these prompts - getting it right on the first try - sounds great, but more realistically, it's a process. We're giving instructions, refining, re-refining, and so on until we reach the desired outcomes. I have more thoughts on that, but for another time. My point is that we run this iterative loop, just as if we were typing functions and algorithms into our editor, then hitting run.
Iterating. Refining. Yes. What are we really doing here?
We're defining the scope and functionality, one step at a time. We're figuring out what needs to happen next, and how to achieve it. Instead of our programming language, we're using plain language to describe what we need. As we refine our understanding, we add to the logic and complexity.
Doesn't this sound like the process of determining and writing requirements?
That's it! Let the process be as simple, complex, straightforward, or convoluted as you need it to be (a topic for another day). Either way, we're defining the desired behavior of the application, then writing some kind of code that tells a machine what to do. Just as I use English to tell an LLM what I want built and how, the LLM uses Python or TypeScript (the abstractions) to tell the machine what I want.
The job's always been about defining requirements and turning that into an outcome. Whether our "code" is a programming language or plain-language prompting of LLMs, the requirements aren't changing. The tools are. Be comfortable with change - embrace it.
As technology continues to evolve rapidly, I encourage you to think about the product requirement definition process. Learn from our teammates in product and design. Learn about the business impact of the work you're doing. By deepening your understanding of the what/why/how behind your work, you take on a more holistic product view that benefits everyone.
At the end of the day, the tools are ever-changing, but the nature of our work hasn't changed. We're still taking in requirements, shaping them into a cohesive story, and turning ideas into outcomes. We will always have new tools. The job, however, remains the same.