• samuelroy_@fediverser.communick.devOPB
      1 year ago

      I think it comes from “AI fatigue” with many people currently experimenting with LLMs. Half are excited, while others are deeply bored :-)

      Also, I chose to reuse an example from the Replit AI page for comparison, without giving it much thought. The experiment was more about the process than about this specific example and prompt.

      • arthurno1@fediverser.communick.devB
        1 year ago

        I think it is a really bad example. What it showcases is that LLMs are basically a hardcoded web search: given some tokens (words in human language), they can generate some data in some form. Which is a good thing in itself. But what is not so good here is that you still have to know the CSS syntax, still have to know what you are looking for, still have to type most of it, if not more, and you have to learn how to get that out of the LLM. It is a bit like typing a skeleton or a yasnippet template in a buffer and then immediately calling the expansion from the minibuffer to generate some code for you. Perhaps it is a good automation, IDK yet, haven’t used it myself, but I don’t think the particular example you used illustrates it well.
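
        For readers unfamiliar with the yasnippet comparison, here is a sketch of what such a template looks like (the snippet name, key, and field defaults are purely illustrative, not from the original discussion). Typing the key in a css-mode buffer and expanding it produces the boilerplate, with tab stops for the parts you still have to fill in yourself:

        ```
        # -*- mode: snippet -*-
        # name: flex-center (hypothetical example)
        # key: flexcenter
        # --
        .${1:container} {
          display: flex;
          justify-content: ${2:center};
          align-items: ${3:center};
        }
        ```

        As with the LLM workflow described above, the expansion only saves keystrokes: you still need to know what CSS properties you want and what the generated rule means.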