I think it comes from “AI fatigue”, with so many people currently experimenting with LLMs. Half are excited, while the other half are deeply bored :-)
Also, I chose to reuse an example from the Replit AI page for comparison without giving it much thought. The experiment was more about the process, not so much about this specific example and prompt.
You could make your own wrappers around `async-shell-command` and `shell-command` that pre-fill the prompt with your last command while keeping your whole history reachable. Try the code below and see whether it does what you’d like.
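Here is a minimal sketch of such a wrapper. The function name `my/async-shell-command` is just a placeholder; the idea is to pre-fill the minibuffer with the most recent entry from `shell-command-history`, so the full history stays reachable with `M-p` / `M-n` as usual. A `shell-command` wrapper would look the same with the final call swapped.

```elisp
;; Sketch: like `async-shell-command', but pre-filled with the
;; last command. `my/async-shell-command' is a hypothetical name.
(defun my/async-shell-command ()
  "Prompt with the last shell command pre-filled, then run it asynchronously."
  (interactive)
  (let ((command (read-shell-command
                  "Async shell command: "
                  ;; Pre-fill with the most recent history entry, if any.
                  (car shell-command-history))))
    (async-shell-command command)))
```

You could then bind it over the default, e.g. `(global-set-key (kbd "M-&") #'my/async-shell-command)`, and edit or re-run the previous command with a single keystroke.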