I’ve found that AI has done literally nothing to improve my life in any way and has really just caused endless frustrations. From the enshittification of journalism to ruining pretty much all tech support and customer service, what is the point of this shit?
I work on the Salesforce platform and now I have their dumbass account managers harassing my team to buy into their stupid AI customer service agents. Really, the only AI highlight that I have seen is the guy that made the tool to spam job applications to combat worthless AI job recruiters and HR tools.
It’s really useful for churning out some basic code. For searching the web, it’s providing better results than Google these days.
I hate that it monetized general knowledge that used to be easily searchable, then repackaged it as some sort of black-box randomizer.
Text generation is Frozen Yogurt now.
Noticeably worse, but you can have so much more.
An LLM (large language model, a.k.a. an AI whose output is natural language text based on a natural language text prompt) is useful for tasks where you’re okay with 90% accuracy generated at 10% of the cost and 1,000% faster, and where the output will solely be used in-house by yourself and not served to other people. For example, if your goal is to generate an abstract for a paper you’ve written, AI might be the way to go, since it turns a writing problem into a proofreading problem.
The Google Search LLM which summarises search results is good enough for most purposes. I wouldn’t rely on it for in-depth research but like I said, it’s 90% accurate and 1,000% faster. You just have to be mindful of this limitation.
I don’t personally like interacting with customer service LLMs because they can only serve up help articles from the company’s help pages, though they are remarkably good at that task. I don’t need help pages, because the reason I’m contacting customer service in the first place is that I couldn’t find the solution in the help pages. It doesn’t help me, but it will no doubt help plenty of other people whose first instinct is not to read the f***ing manual. Of course, I’m not going to pretend customer service LLMs are perfect. In fact, the most common problem with them seems to be that they go “off script” and hallucinate solutions that obviously don’t work, or pretend that they’ve scheduled a callback with a human when you request it, when they actually haven’t. This is a really common problem with any sort of LLM.
At the same time, if you try to serve content generated by an LLM and then present it as anything of higher quality than it actually is, customers immediately detest it. Most LLM writing is of pretty low quality anyway and sounds formulaic, because to an extent, it was generated by a formula.
Consumers don’t like being tricked, and especially when it comes to creative content, I think most people appreciate the human effort that goes into creating it. In that sense, serving AI content is synonymous with a lack of effort and laziness on the part of whoever decided to put that AI there.
But yeah, for a specific subset of limited use cases, LLMs can indeed be a good tool. They aren’t good enough to replace humans, but they can certainly help humans and reduce the amount of human workload needed.
A friend’s wife “makes” and sells AI slop prints. He had to make a twitter account so he could help her deal with the “harassment”. Not sure exactly what she’s dealing with, but my friend and I have slightly different ideas of what harassment is and I’m not interested in hearing more about the situation. The prints I’ve seen look like generic fantasy novel art that you’d see at the checkout line of a grocery store.
It looks impressive on the surface but if you approach it with any genuine scrutiny it falls apart and you can see that it doesn’t know how to draw for shit.
I find it helpful to chat about a topic sometimes, as long as it’s not based on pure facts. You can talk about your feelings with it.
There are a few uses where it genuinely speeds up editing/insertion into contracts and warns you of red flags/riders that might open you up to unintended liability. BUT the software is $$$$ and you generally need a law degree before you even need a tool like that. For those who are constantly up to their chins in legal shit, it can be helpful. I’m not, thankfully.
I use LLMs for multiple things, and they’re useful for things that are easy to validate. E.g. when you’re trying to find or learn about something, but don’t know the right terminology or keywords to put into a search engine. I also use them for some coding tasks. They work OK for getting customized usage examples for libraries, languages, and frameworks you may not be familiar with (but will sometimes use old APIs or just hallucinate APIs that don’t exist). They work OK for “translation” tasks, such as converting a MySQL query to a Postgres query. I tried out GitHub Copilot for a while, but found that it would sometimes introduce subtle bugs that I would initially overlook, so I don’t use it anymore. I’ve had to create some graphics, and am not at all an artist, but was able to use Automatic1111, ControlNet, Stable Diffusion, and Gimp to get usable results (an artist would obviously be much better though). RemBG works pretty well for isolating the subject of an image and removing the background too. Image upsampling, DLSS, DTS Neural:X, plant identification apps, the blind-spot warnings in my car, image stabilization, and stuff like that are pretty useful too.
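To give a sense of what that MySQL-to-Postgres “translation” actually involves, here’s a toy sketch (not the commenter’s actual workflow; the query and helper function are made up for illustration). It handles one mechanical difference, identifier quoting, while the comments note a dialect difference a naive string replace can’t handle, which is exactly where an LLM earns its keep:

```python
# Illustrative only: two dialect quirks involved in "translating"
# MySQL SQL to Postgres. A hypothetical MySQL query:
mysql_query = 'SELECT `name` FROM `users` LIMIT 10, 5;'

def backticks_to_double_quotes(sql: str) -> str:
    """Naively convert MySQL backtick identifiers to Postgres double quotes.

    Real translation needs a SQL parser (or an LLM) -- e.g. MySQL's
    "LIMIT offset, count" must become "LIMIT count OFFSET offset" in
    Postgres, which a blind character replacement can't do.
    """
    return sql.replace('`', '"')

print(backticks_to_double_quotes(mysql_query))
# SELECT "name" FROM "users" LIMIT 10, 5;
```

The quoting fix is trivially scriptable; it’s the structural rewrites (LIMIT syntax, function names, date arithmetic) where asking an LLM is faster than reading two dialect manuals side by side.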
To copy my own comment from another similar thread:
I’m an idiot with no marketable skills. I put boxes on shelves for a living. I want to be an artist, a musician, a programmer, an author. I am so bad at all of these, and between having a full-time job, a significant other, and several neglected hobbies, I don’t have time to learn to get better at something I suck at. So I cheat. If I want art done, I could commission a real artist, or for the cost of one image I could pay for DALL-E and have as many images as I want (sure, none of them will be quite what I want, but they’ll all be at least good). I could hire a programmer, or I could have ChatGPT whip up a script for me, since I’m already paying for it anyway because I want access to DALL-E for my art stuff. Since I have ChatGPT anyway, I might as well use it to help flesh out the lore for the book I’ll never write. I haven’t found a good solution for music.
I have in my brain a vision for a thing that is so fucking cool (to me), and nobody else can see it. I need to get it out of my brain, and the only way to do that is to actualize it into reality. I don’t have the skills necessary to do it myself, and I don’t have the money to convince anyone else to help me do it. Generative AI is the only way I’m going to be able to make this work. Sure, I wish that the creators of the content that was stolen to train the AIs were fairly compensated. I’d be OK with my ChatGPT subscription cost going up a few dollars if that meant real living artists got paid; I’m poor but I’m not broke.
These are the opinions of an idiot with no marketable skills.
I made an AI song for my mom’s birthday on Suno and she loved it so much she cried. So that was nice.
I don’t like how people are using it to just replace artists. It would be fine if it were just to automate some things, like, “AI can tell you when ___ needs to be replaced,” but it feels more like it’s being used as a stick against workers. Like, “Keep acting up and I’ll replace you with dun dun dun AI!”
Porn has been ruined by AI too. Jokes aside it’s really a boner killer.
Idk who faps to that whack shit but it’s trying so hard to make everything look baby silk smooth with unrealistic bodies most likely stolen from hentai.
I use it all the time: to translate, explain, give guides, write code, do repetitive menial tasks, fix code, and understand others’ code.
I get the hatred for it, but I use it almost every day.
I agree, but in the spirit of the question, do you work in a corporate environment?
Unfortunately. But I also use my corpo AI accounts for personal stuff too, because they’re exempt from being used for training.
Don’t corpo accounts leave logs for auditing though? I wouldn’t like HR going over my personal notes I (accidentally) shared there.
Yeah, sure, but with like 2k employees, they will only look if there are issues with me. Having worked in the IT industry for a long time, I’ve only once or twice had to dig into shit like that for HR and it was only when the person did something bad.
To me it’s glorified autocomplete. I see LLMs as a potential way of drastically lowering the barrier of entry to coding, but I’m at a skill level where coercing a chatbot into writing code is a hindrance. What I need is good documentation and good IDE static analysis.
I’m still waiting on a good, IDE-integrated, local model that would be capable of more than autocompleting a line of code. I want it to generate the boilerplate parts of the code and get out of my way of solving problems.
What I don’t want, is a fucking chatbot.
Have you seen Neuro-sama?
There are plenty of uses for it. There are also plenty of bad implementations that don’t use it in a way that helps anyone.
We’re going through an overhyped period currently but we’ll see actual uses in a few years once the dust settles. About 10 years ago, a similar thing happened with AI vision and now everyone has filters they can use on cameras and face detection. We’ll reach another plateau until the next tech hype comes about.