I don’t think this particular example is very good. It does showcase that LLMs are basically a hardcoded web search: given some tokens (words in human language), they can generate data in some form, which is a good thing in itself.
But it also means you still have to know what you are looking for. In other words, one has to know the CSS syntax, still has to type some of it, and in addition has to learn how to get all of that out of the LLM. It is a bit like typing a skeleton or a yasnippet template in a buffer and then immediately calling the expansion from the minibuffer to generate some code for you. Perhaps it is a good automation, IDK yet, I haven’t used it myself, but I don’t think the particular example you have used illustrates it well.
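For readers unfamiliar with the analogy: a yasnippet template lets you type a short key in a buffer, call the expansion, and get a larger skeleton with placeholder fields to fill in. A minimal sketch of a hypothetical CSS snippet file (the `flexc` key and the field defaults are made up for illustration):

```
# -*- mode: snippet -*-
# name: flex centering
# key: flexc
# --
display: flex;
justify-content: ${1:center};
align-items: ${2:center};
$0
```

Typing `flexc` and triggering the expansion inserts the whole block, and point jumps between the `${n:...}` fields. The LLM case is similar in spirit, except the “key” is a natural-language prompt rather than a memorized abbreviation.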
What is not so good here is that this particular example has so little utility. I guess the usefulness comes from the “payload”: in this example, the amount of generated code compared to the amount of stuff you typed. I don’t use LLMs myself, so I am not sure what a good example would be, but perhaps if you construct some more useful illustrations, it will be more apparent why LLMs are potentially useful.
Not necessarily; lots of people will test some stuff and then perhaps not use it, or after some time move on to something else, and so on. I wouldn’t rely on download stats.
It would certainly be possible to monitor which files are required in your Emacs session, and you could set up a public server somewhere on the Internet where such stats are uploaded, put together, and published for viewing. But why would it matter to you what I or some other Joe am using? Learn a thing and build on it instead of endlessly switching and trying. As long as it solves your problems, who cares what others are using?
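The monitoring part, at least, is easy in principle: Emacs already records every loaded file in the `load-history` variable. A minimal sketch of collecting that list (the function name is mine; the uploading and aggregation are left out):

```elisp
;; Sketch: list the files loaded in the current Emacs session.
;; Each entry of `load-history' describes one loaded file; its car
;; is the file name (it can be nil for dynamically created entries,
;; so those are filtered out).
(defun my/loaded-files ()
  "Return the list of file names loaded in this session."
  (delq nil (mapcar (lambda (entry)
                      (and (stringp (car entry)) (car entry)))
                    load-history)))
```

Publishing such data somewhere public would be the real work; the collection itself is a one-liner over `load-history`.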