I’m on the fence, but I will say that if, for whatever reason, it was never actually connected to the data or the connection had some flaw, I could totally believe it would just fabricate a report that looks consistent with what was asked for. Maybe it never conveyed that an error occurred. Or maybe it did convey the lack of data, and the user thought he could just tell the AI to fix the problem without trying to understand it himself, which triggered it to generate a narrative consistent with fixing it without actually being able to.
Sure, if you do a sanity check it should fall apart, but that assumes they bother. Some people have crazy confidence in LLMs and don’t even check.
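To be concrete about what I mean by a sanity check: pull one or two headline numbers out of the report and compare them against the source before trusting anything else. Rough sketch only, with made-up names (sales.db, an orders table, the claimed figure), nothing specific to this case:

    import re
    import sqlite3

    # Hypothetical report text and schema, purely for illustration.
    report_text = "Processed 1,204 orders in Q3."
    claimed = int(re.search(r"([\d,]+) orders", report_text)
                  .group(1).replace(",", ""))

    conn = sqlite3.connect("sales.db")
    actual = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

    if actual == 0:
        print("Source table is empty -- a detailed report was fabricated.")
    elif claimed != actual:
        print(f"Mismatch: report says {claimed}, database has {actual}.")
    else:
        print("Row count matches; spot-check other figures too.")

Even a check that crude would have exposed a report generated with no data behind it.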