• 0 Posts
  • 4 Comments
Joined 1 year ago
Cake day: October 16th, 2023

  • I suppose that melpa downloads can be used as a measure of usage.

    Not necessarily; lots of people would test some stuff and then perhaps not use it, or after some time move on to something else, and so on. I wouldn’t rely on download stats.

    which extensions are used often in users setup

    It would certainly be possible to monitor which files are required in your Emacs session, and you could set up a public server somewhere on the Internet where such stats are uploaded, put together, and published for viewing. But why would it matter to you what I or some other Joe is using? Learn a thing and build on it instead of switching and trying. As long as it solves your problems, who cares what others are using?
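
    A minimal sketch of that monitoring idea, in Emacs Lisp: every loaded library registers itself in the built-in `features` variable, so a snapshot of it is already a crude usage report. The output file name is arbitrary, and the uploading/aggregation part is left out entirely.

    ```elisp
    ;; Collect the names of all features loaded in this Emacs session.
    (defun my/loaded-feature-names ()
      "Return a sorted list of feature names loaded in this session."
      (sort (mapcar #'symbol-name features) #'string<))

    ;; Dump them to a file under `user-emacs-directory'; uploading the
    ;; result to a hypothetical public stats server would build on this.
    (with-temp-file (expand-file-name "feature-stats.txt" user-emacs-directory)
      (insert (mapconcat #'identity (my/loaded-feature-names) "\n")))
    ```

    `load-history` would give finer-grained data (which file provided which definitions), but `features` is enough for a simple per-package count.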


  • I don’t think this particular example is very good. It does showcase that LLMs are basically a hardcoded web search: given some tokens (words in human language), they can generate some data in some form. Which is a good thing in itself.

    But it also means you still have to know what you are looking for. In other words, one has to know the CSS syntax, still has to type some of it, and in addition has to learn how to generate all that from the LLM. It is a bit like typing a skeleton or a yasnippet template in a buffer and then immediately calling the expansion from the minibuffer to generate some code for you. Perhaps it is a good automation, IDK yet, haven’t used it myself, but I don’t think the particular example you have used illustrates it well.

    What is not so good here is that this particular example is so small in utility. I guess the usefulness comes from the “payload”. In this example, it would be the amount of generated code compared to the amount of typed stuff. I don’t use LLMs myself, so I am not sure what a good example would be, but perhaps if you construct some more useful illustrations, it might be more apparent why LLMs are potentially useful.
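
    For comparison, the built-in skeleton facility the analogy refers to looks like this; the CSS flexbox skeleton itself is made up for illustration, not something from the original post.

    ```elisp
    ;; A tiny skeleton: M-x my/css-flex-skeleton inserts a flexbox rule,
    ;; prompting in the minibuffer for the two property values.
    (define-skeleton my/css-flex-skeleton
      "Insert a CSS flexbox container rule."
      nil
      "display: flex;\n"
      "justify-content: " (skeleton-read "justify-content: " "center") ";\n"
      "align-items: " (skeleton-read "align-items: " "center") ";")
    ```

    As with the LLM case, you still have to know the property names and what you want; the template only saves the typing.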


  • I think it is a really bad example. What it showcases is that LLMs are basically a hardcoded web search: given some tokens (words in human language), they can generate some data in some form. Which is a good thing in itself. But what is not so good here is that you still have to know the CSS syntax, still have to know what you are looking for, still have to type most of it, if not more, and have to learn how to generate that from the LLM. It is a bit like typing a skeleton or a yasnippet template in a buffer and then immediately calling the expansion from the minibuffer to generate some code for you. Perhaps it is a good automation, IDK yet, haven’t used it myself, but I don’t think the particular example you have used illustrates it well.