This is pretty silly: you wrote more text than the CSS and basically translated it 1:1. Also, you obviously knew all the directives ahead of time.
It’s useful if you have to comment your code anyway.
Good comments communicate “why” over “what”.
Yes, it would have been quicker to just write the CSS without comments.
Why do people want to code like this? Sigh.
I’m trying to understand the same. Here’s my guess: if you don’t have the syntax at your fingertips, you might actually be searching google/stackoverflow/etc and then deciding what to type in your editor. This “integration” short-circuits that process by letting you do the whole thing in your editor — the query as a code comment (it turns out LLM queries look & feel different from search engine queries 🤷‍♂️) followed by what you might previously have copy-pasted before editing/adapting it for your purpose.
Maybe that’s why I don’t get it. I don’t have a copy-paste workflow. I have a search-read-type workflow. (Search can be search engine, chatgpt…whatever)
The example comes from a demo of the Replit AI assistant that does something similar but through autocomplete (https://replit.com/public/images/ghostwriter/demos/creativity/css_complete.mp4). I tried to see whether we could have something similar, plus other helpers.
Also, it’s not only about LLMs: you can use the same process to sort, dedup, or perform other operations on your text selection with a shell command, which is a nice tool to have.
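For instance, deduplicating a selection is just a stdin-to-stdout pipeline — a minimal sketch (in Vim the selection is piped with `:'<,'>!sort -u`; in Emacs you'd run `M-|` on the region; the sample lines here are made up):

```shell
# The selected lines arrive on the command's stdin;
# the filtered output replaces the selection.
printf 'beta\nalpha\nbeta\ngamma\n' | sort -u
# → alpha
#   beta
#   gamma
```

Any stdin/stdout filter works the same way, which is why this mechanism generalizes so easily beyond LLMs.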
I don’t think this particular example is very good. It does showcase that LLMs are basically a hardcoded web search: given some tokens (words in human language), they can generate some data in some form. Which is a good thing in itself.
But it also means you still have to know what you are looking for. In other words, one has to know the CSS syntax, still has to type some of it, and in addition has to learn how to generate all that from the LLM. It is a bit like typing a skeleton or a yasnippet template in a buffer and then immediately calling the expansion from the minibuffer to generate some code for you. Perhaps it is a good automation, IDK yet, haven’t used it myself, but I don’t think the particular example you have used illustrates it well.
What is not so good here is that this particular example offers so little utility. I guess the usefulness comes from the “payload” — in this example, the amount of generated code compared to the amount of typed stuff. I don’t use LLMs myself, so I am not sure what a good example would be, but perhaps if you construct some more substantial illustrations, it might be more apparent why LLMs are potentially useful.
Some bizarre negativity in this thread. It’s great work and loads of people will find this useful.
I think it comes from “AI fatigue” with many people currently experimenting with LLMs. Half are excited, while others are deeply bored :-)
Also I choose to reuse an example from the Replit AI page for comparison without giving it much thought. The experiment was more about the process, not so much about this specific example and prompt.
Hey everyone, I wrote a blog post about this if you are interested in experimenting with it:
https://modernchaos.heytwist.com/p/follow-up-vim-llm-small-things-that-awww
Related GitHub project: https://github.com/wearedevx/llm-bash
OOOOh this is cool, will take a look. I wanted to write something to interact with a local HuggingFace model running on my system, and this may be the ticket. Thank you, very interesting.
So…totally unrelated to emacs?
I don’t think so; if you’re not using evil-mode, you can use the function `shell-command-on-region` to achieve almost the same effect. The point was to experiment with LLMs through existing workflows in our editors.
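As a sketch of what that filter step does (any command that reads stdin and writes stdout will do; the line-numbering pipeline here is just an arbitrary example, not from the post):

```shell
# shell-command-on-region sends the region to the command's stdin;
# with a prefix argument, the command's stdout replaces the region.
printf 'first\nsecond\nthird\n' | awk '{print NR ": " $0}'
# → 1: first
#   2: second
#   3: third
```

Swap the `awk` pipeline for whatever tool you like — that is the whole trick.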
Not sure why anyone would want to use – and rely on – LLM in their emacs. Waste of time.
who’s relying on anything here? and really, you can’t think of any reason anyone might want to use it in emacs? maybe we can start with the same reasons they’d use it in any other editor?
And a waste of time? Quite the contrary! Copilot and ChatGPT integration in my Emacs save me tons of time writing boilerplate and catching stupid errors (admittedly, much more Copilot than ChatGPT). It’s not perfect, it’s sometimes annoying/misleading, and I’m certainly very skeptical of the ethics of the whole thing and where it’s headed, but at the moment I really can’t argue with the results I’m getting.
If it works for you, go for it.
That’s pretty interesting. I’m glad to see that emacs is moving forward. :D
Eglot, tree-sitter, LLMs, oh my!
Totally love it