using namespace std is still an effective way to shoot yourself in the foot, and if anything is a bigger problem than it was in the past now that std has decades worth of extra stuff in it that could have a name collision with something in your code.
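For anyone who hasn't been bitten by this, here's a minimal sketch of the kind of collision I mean. The global `size` variable is a made-up example; the point is that it deliberately fails to compile once the using-directive drags std::size into scope:

```cpp
#include <iostream>
#include <iterator>     // C++17 put std::size, std::data, std::empty here
using namespace std;    // pulls every std name into the global scope

int size = 3;           // a perfectly ordinary name... until the using-directive

int main() {
    // error: reference to 'size' is ambiguous (::size vs. std::size)
    cout << size << '\n';
}
```

Write `std::cout` and skip the using-directive and the ambiguity never exists in the first place.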
We used to call that premature optimization. Now the complaint is that tasks don’t have enough AI-driven de-optimization. We must all redesign things we have done in traditional, boring, not-AI ways and create new ways to do them that are slower, millions or billions of times more computationally intensive, more random, and less reliable! The market demands it!
I call this shit zero-sum optimization. In order to “optimize” for the desires of management, you always have to deoptimize something else.
Before AI became the tech craze du jour, I had a VP get obsessed with microservices (because that’s what Netflix uses, so it must be good). We had to tear apart a mature and very efficient app and turn it into hundreds of separate microservices… all of which took ~100 milliseconds to interoperate across the network. Pages that used to take 2 seconds to serve now took 5 or 10 because of all the new latency required to do things the app used to be able to do basically for free. And it’s not like this was a surprise. We knew this was going to happen; the back-of-envelope math below shows why.
But hey, at least our app became more “modern” or whatever…
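To put rough numbers on it (the hop counts here are my guesses, not measurements; the ~100 ms per call and the ~2 s baseline are from the story above): if serving a page chains a few dozen formerly in-process calls, the latency adds up fast.

```cpp
// Back-of-envelope only: hypothetical sequential call counts, ~100 ms per
// microservice hop, against a monolith that served the page in ~2 s.
#include <iostream>

int main() {
    const double hop_ms  = 100.0;   // latency added per cross-service call
    const double base_ms = 2000.0;  // what the old monolith took per page
    for (int hops : {30, 50, 80}) { // plausible sequential call counts
        double total_s = (base_ms + hops * hop_ms) / 1000.0;
        std::cout << hops << " hops -> ~" << total_s << " s\n"; // ~5, ~7, ~10 s
    }
}
```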
programming nitpicks (for lack of a better word) that I used to hear:
then this same person implements time-checking work via an LLM over the network that costs $0.75 per check lol