• 84 Posts
  • 22 Comments
Joined 1 year ago
Cake day: June 10th, 2023

  • Blaed@lemmy.world (OP) to Technology@lemmy.ml · Vicuna v1.5 Has Been Released!

    I used to feel the same way until I found some very interesting performance results from 3B and 7B parameter models.

    Granted, it wasn’t anything I’d deploy to production - but using the smaller models to prototype quick ideas is great before having to rent a GPU and spend time working with the bigger models.

    Give a few models a try! You might be pleasantly surprised. There’s plenty to choose from too. You will get wildly different results depending on your use case and prompting approach.

    Let us know if you end up finding one you like! I think it is only a matter of time before we’re running 40B+ parameters at home (casually).
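    A quick way to try a few models is to run the same prompt through each and compare the outputs side by side. A minimal sketch - the model names and `generate` callables below are placeholders, so swap in whatever bindings you actually use (llama-cpp-python, transformers, an API client, etc.):

```python
def compare_models(prompt, models):
    """Run one prompt through several models and collect the outputs.

    `models` maps a model name to a callable that takes a prompt
    string and returns generated text.
    """
    return {name: generate(prompt) for name, generate in models.items()}

# Stand-in callables so the sketch is self-contained; replace these
# with real model bindings before drawing any conclusions.
models = {
    "model-a-7b": lambda p: "[model-a] " + p,
    "model-b-3b": lambda p: "[model-b] " + p,
}
outputs = compare_models("Summarize this changelog.", models)
```

    Keeping the prompt fixed while only the model varies makes it much easier to see which differences come from the model and which come from your prompting approach.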


  • I am actively testing this out. It’s hard to say at the moment. There’s a lot to figure out deploying a model into a live environment, but I think there’s real value in using them for technical tasks - especially as models mature and improve over time.

    At the moment, though, performance is closer to GPT-3.5 than GPT-4, but I wouldn’t be surprised if this is no longer the case within the next year or so.


  • After finally having a chance to test some of the new Llama-2 models, I think you’re right. There’s still some work to be done to get them tuned up… I’m going to dust off some of my notes and get a new index of those other popular gen-1 models out there later this week.

    I’m very curious to try out some of these Docker images, too. Thanks for sharing those! I’ll check them out when I can. I could also make a post about them if you feel like featuring some of your work. Just let me know!


  • OpenAI has launched a new initiative, Superalignment, aimed at guiding and controlling ultra-intelligent AI systems. Recognizing the imminent arrival of AI that surpasses human intellect, the project will dedicate significant resources to ensure these advanced systems act in accordance with human intent. It’s a crucial step in managing the transformative and potentially dangerous impact of superintelligent AI.

    I like to think this starts to explore interesting philosophical questions like human intent, consciousness, and the projection of will into systems that far exceed our own raw processing power and input/output. What comes of this intended alignment is yet to be seen, but I think we can all agree the last thing we want is for these emerging intelligent machines to do things we don’t want them to do.

    ‘Superalignment’ is OpenAI’s answer to the question of how to put up these safeguards. Whether or not it is the best method remains to be determined.


  • All of these are great thoughts and ponderings! Totally correct in the right circumstances, too.

    Massive context lengths that can retain coherent memory and attention over long periods of time would enable all sorts of breakthroughs in LLM technology. At that point, you would be held back by performance, compute, and datasets rather than by context windows and short-term memory, and the focus would shift toward optimizing attention and improving speed and accuracy.

    Let’s say you had hundreds of pages of a digital journal and felt like feeding them to a local LLM (where your data stays private). If the model ran at sufficiently high quality, you could have an AI assistant, coach, partner, or tutor caught up to speed with your project’s goals, your personal aspirations, and your daily life within a matter of hours (or weeks, depending on hardware capabilities).
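    A rough sketch of the first step in that workflow - splitting those journal pages into overlapping pieces that fit a model’s context window. The chunk sizes here are illustrative, not tuned for any particular model:

```python
def chunk_text(text, max_chars=2000, overlap=200):
    """Split a long document into overlapping character chunks.

    The overlap keeps passages from being cut cleanly in half at a
    chunk boundary, so each chunk carries some surrounding context.
    """
    step = max_chars - overlap
    chunks = []
    for start in range(0, max(len(text), 1), step):
        chunks.append(text[start:start + max_chars])
    return chunks

# A 5000-character journal becomes three overlapping chunks,
# each small enough to feed to the model in one pass.
chunks = chunk_text("a" * 5000)
```

    Each chunk can then be summarized or embedded separately, which is how most local-LLM setups work around limited context windows today.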

    Missing areas of expertise you want your AI to have? Upload and feed it more datasets, Matrix-style; any text-based information that humanity has shared online is available to the model.

    From here, you could fine-tune further and give your LLM a persona, ending up with an assistant and personal operating system that breaks down your life with you. Or you could simply ‘chat’ with your life - those pages you fed it - and reflect upon your thoughts and memories with a super intelligence beyond your own.

    Poses some fascinating questions, doesn’t it? About consciousness? Thought? You? This is the sort of stuff that keeps me up at night… If you trained a private LLM on your own notes, thoughts, reflections and introspection, wouldn’t you be imposing a level of consciousness into a system far beyond your own mental capacities? I have already started to use LLMs on the daily. In the right conditions, I would absolutely utilize a tool like this. We’re not at super intelligence yet, but an unlimited context window for a model of that caliber would be groundbreaking.

    Information of any kind could be digitized and formatted into datasets (at massive lengths), enabling this assistant or personal database to grow over time with a project or with your life, learning and discovering things alongside you. At that point, we’re starting to get into augmented human capabilities.

    What this means over the course of many years of breakthroughs in models and training methods would be a fascinating thought experiment to consider for a society where everyone is using massive-context-length LLMs regularly.

    Sci-fi is quickly becoming reality - how exciting! I’m here for it, that’s for sure. Let’s hope the technology stays free, open, and accessible for all of us to participate in its marvels.



  • Great question. I ponder this too, which is why I started /c/FOSAI. We have to do everything we can to make sure our future stays open for all; our faith cannot be put in the hands of a select few, but rather in the many.

    Time will tell who truly supports this. I’m hopeful OpenAI is the good guy we want them to be, but other businesses keep me from jumping to that conclusion. I like what they are doing alongside Microsoft, but we need more players in the game. Fresh minds to shake things up a little.

    If you’re reading this, support FOSS, support FOSAI, and support the Fediverse. It’s the only way we can take back the internet, one server at a time.


  • FWIW, it’s a new term I am trying to coin in FOSS (free, open-source software) communities. It’s a spin-off of ‘FOSS’, but for AI.

    There’s nothing wrong with FOSS as an acronym; I just wanted one more focused on AI tech, to set the right expectations for everything shared in /c/FOSAI.

    I felt it was a term worth coining given the varied requirements and dependencies AI/LLMs tend to have compared to typical FOSS stacks. Making this differentiation is important for some of the semantics these conversations carry.