It's similar for me as a software developer. I sometimes use LLMs to get alternative concepts or implementations for the task at hand, or simply to refresh my memory about something with an example. This works well because I'm already well aware of what I'm trying to accomplish, so my prompts are precise enough to get a decent result. However, generating directly usable code that meets my expectations with prompts alone is really hard to do. There is so much fine-tuning necessary that I'm faster just doing it myself.
I don't see the technology itself as evil; there are some good uses if you know its capabilities and limitations. What's evil is big corporations selling this technology to people who are not prepared to understand or handle those limitations.
But personally I’m using it less and less these days. I don’t want to take part in the massive environmental damage AI is causing and the thought of contributing to this mess makes me feel icky every time I think of using it.