• sigmonsays@fediverser.communick.dev · 1 year ago

    This is pretty silly: you wrote more text than the CSS and basically translated it one to one. Also, you obviously knew all the directives ahead of time.

    • entangledamplitude@fediverser.communick.dev · 1 year ago

      I’m trying to understand the same. Here’s my guess: if you don’t have the syntax at your fingertips, you might actually be searching google/stackoverflow/etc. and then deciding what to type in your editor. This “integration” short-circuits that process by letting you do the whole thing in your editor: the query as a code comment (it turns out LLM queries look & feel different from search engine queries 🤷‍♂️), followed by what you might previously have copy-pasted and then edited/adapted for your purpose.

    • samuelroy_@fediverser.communick.dev (OP) · 1 year ago

      The example comes from a demo of the Replit AI assistant that does something similar, but through autocomplete (https://replit.com/public/images/ghostwriter/demos/creativity/css_complete.mp4). I tried to see whether we could have something similar, along with other helpers.

      Also, it’s not only about LLMs: you can use the same process to sort, dedup, or run other operations on your text selection with a shell command, which is a nice tool to have.
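
      For instance, a minimal sketch of that non-LLM use with the built-in `shell-command-on-region` (the `my/dedup-region` name is just for illustration):

      ```elisp
      ;; Interactively, C-u M-| sort | uniq RET does the same thing:
      ;; the prefix argument makes `shell-command-on-region' replace
      ;; the selected region with the command's output.
      (defun my/dedup-region (start end)
        "Sort and deduplicate the lines in the region via a shell pipeline."
        (interactive "r")
        (shell-command-on-region start end "sort | uniq" nil t))
      ```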

  • arthurno1@fediverser.communick.dev · 1 year ago

    I don’t think this particular example is very good. It does showcase that LLMs are basically a hardcoded web search: given some tokens (words in human language), they can generate some data in some form, which is a good thing in itself.

    But it also means you still have to know what you are looking for. In other words, one has to know the CSS syntax, still has to type some of it, and in addition has to learn how to generate all that from the LLM. It is a bit like typing a skeleton or a yasnippet template in a buffer and then immediately calling the expansion from the minibuffer to generate some code for you. Perhaps it is a good automation, IDK yet, haven’t used it myself, but I don’t think the particular example you have used illustrates it well.
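
    To make the skeleton analogy concrete, here is a trivial, hypothetical `define-skeleton` along those lines (not from the post, just for illustration):

    ```elisp
    ;; Prompt for a selector in the minibuffer and expand it into a
    ;; flexbox-centering CSS rule: boilerplate generation with no LLM involved.
    (define-skeleton my/css-center-skeleton
      "Insert a CSS rule that centers its children with flexbox."
      "Selector: "
      str " {\n"
      "  display: flex;\n"
      "  justify-content: center;\n"
      "  align-items: center;\n"
      "}\n")
    ```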

    What is not so good here is that this particular example offers so little utility. I guess the usefulness comes from the “payload”: in this example, that would be the amount of generated code compared to the amount of typed stuff. I don’t use LLMs myself, so I am not sure what a good example would be, but perhaps if you construct some more useful illustrations, it might be more apparent why LLMs are potentially useful.

    • samuelroy_@fediverser.communick.dev (OP) · 1 year ago

      I think it comes from “AI fatigue” with many people currently experimenting with LLMs. Half are excited, while others are deeply bored :-)

      Also, I chose to reuse an example from the Replit AI page for comparison, without giving it much thought. The experiment was more about the process, not so much about this specific example and prompt.

      • arthurno1@fediverser.communick.dev · 1 year ago

        I think it is a really bad example. What it showcases is that LLMs are basically a hardcoded web search: given some tokens (words in human language), they can generate some data in some form, which is a good thing in itself. But what is not so good here is that you still have to know the CSS syntax, still have to know what you are looking for, still have to type most of it, if not more, and have to learn how to generate that from the LLM. It is a bit like typing a skeleton or a yasnippet template in a buffer and then immediately calling the expansion from the minibuffer to generate some code for you. Perhaps it is a good automation, IDK yet, haven’t used it myself, but I don’t think the particular example you have used illustrates it well.

    • TheSnowIsCold-46@fediverser.communick.dev · 1 year ago

      OOOOh, this is cool, I’ll take a look. I wanted to write something to interact with a local HuggingFace model running on my system; this may be the ticket. Thank you, very interesting.

    • samuelroy_@fediverser.communick.dev (OP) · 1 year ago

      I don’t think so. If you’re not using evil-mode, you can use the function `shell-command-on-region` to achieve almost the same effect. The point was to experiment with LLMs through existing workflows in our editors.
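
      A rough sketch of what I mean, assuming only built-ins (the `my/` function name and whatever command you pipe through are placeholders):

      ```elisp
      ;; Prompt for any shell command, pipe the active region through it,
      ;; and replace the region with the output -- roughly what evil's
      ;; `!' filter operator gives you.
      (defun my/filter-region-through-command (start end command)
        "Replace the text between START and END with the output of COMMAND."
        (interactive
         (list (region-beginning) (region-end)
               (read-shell-command "Filter region through: ")))
        (shell-command-on-region start end command nil t))
      ```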

    • Clayh5@fediverser.communick.dev · 1 year ago

      Who’s relying on anything here? And really, you can’t think of any reason anyone might want to use it in Emacs? Maybe we can start with the same reasons they’d use it in any other editor?

      And a waste of time? Quite the contrary! Copilot and ChatGPT integration in my Emacs saves me tons of time writing boilerplate and catching stupid errors (admittedly, much more Copilot than ChatGPT). It’s not perfect, it’s sometimes annoying/misleading, and I’m certainly very skeptical of the ethics of the whole thing and where it’s headed, but at the moment I really can’t argue with the results I’m getting.