• arthurno1@fediverser.communick.dev · 11 months ago

I don’t think this particular example is very good. It does showcase that LLMs are basically a hardcoded web search: given some tokens (words in human language), they can generate some data in some form, which is a good thing in itself.

But it also means you still have to know what you are looking for. In other words, one has to know the CSS syntax, still has to type some of it, and in addition has to learn how to generate all that from the LLM. It is a bit like typing a skeleton or a yasnippet template in a buffer and then immediately calling the expansion from the minibuffer to generate some code for you. Perhaps it is a good automation, IDK yet, haven’t used it myself, but I don’t think the particular example you have used illustrates it well.
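For comparison, here is a minimal sketch of what such a yasnippet template could look like (the snippet name, key, and CSS body are hypothetical, just to illustrate the workflow):

```
# -*- mode: snippet -*-
# name: flex-center (hypothetical example)
# key: fcenter
# --
display: flex;
justify-content: ${1:center};
align-items: ${2:center};
```

Typing the key in a css-mode buffer and expanding it inserts the body and lets you tab through the `${n:…}` fields, which is roughly the same "type a little, generate the rest" workflow, just deterministic instead of model-driven.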

What is not so good here is that this particular example is of such small utility. I guess the usefulness comes from the “payload”: in this example, the amount of generated code compared to the amount of typed stuff. I don’t use LLMs myself, so I am not sure what a good example would be, but perhaps if you construct some more substantial illustrations, it might be more apparent why LLMs are potentially useful.