

My guess would be that the desktop computer used to type the queries and read the results consumes more power than the LLM itself, at least for models that answer quickly.
The expensive part is training a model, but usage is most likely not sold at a loss, so it can't consume an unreasonable amount of energy per query.
Instead of this ridiculous energy argument, we should focus on the fact that AI (and other products money gets thrown at) isn't actually that useful, but companies control the narrative. AI is particularly successful here: every CEO wants in on it, and people are afraid it's so good it will end the world.

It depends on what you are practicing. If it involves things outside your control, poker for instance, you definitely shouldn't adjust after every result. In poker, that leads to abandoning a sound strategy just because you lost one time.
Even in less random pursuits, you have to be sure you've actually found the problem before adjusting.