You still own the correctness and the quality
You, the human software developer, are accountable for the correctness and quality of the systems you build, no matter what tools you use to generate the code. By quality, I mean all reasonably foreseeable aspects of quality: both the explicit expectations and the implicit assumptions.
We know that AI coding agents can very quickly generate systems that are beyond our comprehension. We simply do not have the time, or the mental capacity, to inspect them well enough. In principle we could, and we probably should. But in practice, we won't.
The demands of business, and the potential (🤑 🤑 🤑) of AI, will dictate that we do these things faster. That we don't let humans become the annoying bottleneck, holding things up in the name of quaint concepts like correctness, quality, and safety. And we don't yet truly trust AI agents to police their own quality (though they're getting better).
So, there's a good conundrum for you. How are you going to maintain your control and accountability - the care and ownership that differentiate you from just another machine - without becoming an antiquated bottleneck?