• 1 Post
  • 27 Comments
Joined 1 year ago
Cake day: August 8th, 2023




  • Only half joking: there was this one fanfic you see…

    Mainly I don’t think there was any one inciting incident beyond its creation: Yud was a one-man cult way before LW, and the Sequences actively pushed all the cultish elements required to lose touch with reality. (Fortunately, my dyslexic ass only got as far as the earlier bits he mostly stole from other people rather than the really crazy stuff.)

    There was definitely a step-change around the time CFAR was created, which was basically a recruitment mechanism for the cult and part of the reason I got anywhere physically near those rubes myself. An organisation made to help people be more rational seemed like a great idea, except it literally became EY/MIRI’s personal sockpuppet. They would get people together in these fancy-ass mansions for their workshops and then tell them nothing other than AI research mattered. I think it was 2014/15 when they decided internally that CFAR’s mission was to create more people like Yudkowsky. I don’t think it’s a coincidence that most of the really crazy cult stuff I’ve heard about happened after then.

    Not that bad stuff didn’t happen before either.


  • Good point with the line! Some of the best liars are good at pretending to themselves they believe something.

    I don’t think it’s widely known, but it is known (there are old SneerClub posts about it somewhere), that he used to feed the people he was dating LSD and try to convince them they “depended” on him.

    First time I met him, in a professional setting, he had his (at the time) wife kneeling at his feet wearing a collar.

    Do I have hard proof he’s a criminal? Probably not, at least not without digging. Do I think he is? Almost certainly.






  • As you were being pedantic, allow me to be pedantic in return.

    Admittedly, you might know something I don’t, but I would describe Andrew Ng as an academic. These kinds of industry partnerships, like the one in that article you referred to, are really, really common in academia. In fact, it’s how a lot of our research gets done. We can’t do research if we don’t have funding, and so a big part of being an academic is persuading companies to work with you.

    Sometimes companies really, really want to work with you, and sometimes you’ve got to provide them with a decent value proposition. This isn’t just AI research either, but very common in statistics, as well as biological sciences, physics, chemistry, well, you get the idea. Not quite the same situation in humanities, but eh, I’m in STEM.

    Now, in terms of universities having the hardware, certainly these days there is no way a university will have even close to the same compute power that a large company like Google has access to. Though “even back in” 2012 (and well before), universities had supercomputers. It was pretty common to have a resident supercomputer that you’d use. For me, and my background’s originally in physics, back then we had a supercomputer in our department, the only one at the university, and people from other departments would occasionally ask to run stuff on it. A simpler time.

    It’s less that universities don’t have access to that compute power. It’s more that they just don’t run server farms. So we pay for it from Google or Amazon and so on, like everyone in the corporate world, except of course the companies that run those servers themselves (and even they still have to cover the costs and the lost revenue). Sometimes that’s subsidized by working with a big tech company, but it isn’t always.

    I’m not even going to get into the history of AI/ML algorithms and the role of academic contributions there, and I don’t claim that the industry played no role; but the narrative that all these advancements are corporate just ain’t true, compute power or no. We just don’t shout so loud or build as many “products.”

    Yeah, you’re absolutely right that MIRI didn’t try any meaningful computational experiments that I’ve seen. As far as I can tell, their research record is… well, staring at ceilings and thinking up vacuous problems. I actually once (when I flirted with the cult) went to a seminar that the big Yud himself delivered, and he spent the whole time talking about qualia, and then when someone asked him if he could describe a research project he was actively working on, he refused to, on the basis that it was “too important to share.”

    “Too important to share”! I’ve honestly never met an academic who doesn’t want to talk about their work. Big Yud is a big let down.


  • The closest thing LLMs have to a sense of truth is the corpus of text they’re trained on. If a syntactic pattern occurs there, the model may end up treating it as truth, provided the pattern occurs frequently enough.

    In some ways this is made much worse by ChatGPT’s frankly insane training method, where people can rate responses as correct or incorrect. What that effectively does is create a machine that’s very good at providing you with responses that you’re happy with (there’s a toy sketch of that dynamic at the end of this comment). And most of the time those responses are going to be ones that “sound right” and are not easy to identify as obviously wrong.

    Which is why it gets worse and worse when you ask about things that you have no way of validating the truth of. Because it’ll give you a response that sounds incredibly convincing. I often joke when I’m presenting on the uses of this kind of software to my colleagues that the thing ChatGPT has automated away isn’t the writing industry as people have so claimed. It’s politicians.

    In the major way it’s used, ChatGPT is a machine for lying. I think that’s kind of fascinating to be honest. Worrying too.

    (Also more writing like a dweeb please, the less taking things too seriously on the Internet the better 😊)
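    Since I’m making a claim about how the rating feedback shapes the model, here’s a deliberately silly toy sketch in Python. To be loud about the assumptions: this has nothing to do with how ChatGPT is actually trained, every number and label is made up, and the “policy” is just a weight table. The point is only that if raters can judge plausibility but not truth, then optimising for approval favours confident-sounding answers whether or not they’re right.

    ```python
    import random

    # Toy illustration only (NOT ChatGPT's real training): each candidate answer
    # has a hidden truth value and a separate "sounds plausible" score. Simulated
    # raters only perceive plausibility, so thumbs-up feedback rewards plausibility.
    random.seed(0)

    candidates = [
        # (label, is_true, plausibility to a hurried rater)
        ("hedged, correct, awkwardly worded", True, 0.55),
        ("confident, fluent, and wrong",      False, 0.90),
        ("correct and fluent",                True, 0.80),
        ("obvious nonsense",                  False, 0.10),
    ]

    weights = [1.0] * len(candidates)  # the "policy": how likely each answer is picked

    def rater_approves(plausibility: float) -> bool:
        # Raters can't check facts they don't already know, so approval tracks
        # how plausible the answer sounds, not whether it's true.
        return random.random() < plausibility

    for _ in range(5000):
        i = random.choices(range(len(candidates)), weights=weights)[0]
        plausibility = candidates[i][2]
        if rater_approves(plausibility):
            weights[i] *= 1.001   # thumbs up: this answer becomes more likely
        else:
            weights[i] *= 0.999   # thumbs down: this answer becomes less likely

    total = sum(weights)
    for (label, is_true, _), w in sorted(zip(candidates, weights), key=lambda p: -p[1]):
        print(f"{w / total:.2%}  true={is_true}  {label}")
    ```

    Run it and the fluent-but-wrong answer typically ends up with the biggest share, slightly ahead of the correct-and-fluent one; truth only gets rewarded insofar as it happens to sound plausible.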



  • Yeah, if anything, cleaning up speech-to-text (and probably character recognition too) is the natural use of (these kinds of) LLMs, as they pretty much just guess what words should be there based on the others (there’s a tiny toy illustration of that at the end of this comment). They still struggle with recognising words when the surrounding words don’t give enough context clues, but we can’t have everything!

    (Well until the machine gods get here /s 🙄)

    They’re also (anecdotally) pretty good at returning the wording of “common” famous quotes if you can describe the content of the quote in other words, and I can’t think of other tools that do that quite so well. I just wish people would stop using them to write content for them: recently I was recruiting for a new staff member for my team and someone used ChatGPT to write their application. In what world they thought statisticians wouldn’t see right through that I don’t know 😆
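    On the “guessing from the surrounding words” point, here’s a deliberately tiny toy in Python. To be clear about the assumptions: a real LLM uses learned representations over huge corpora, not literal counting over four sentences, but the flavour of the failure mode when the context gives no clues is the same.

    ```python
    # Tiny toy, nothing like a real LLM: fill in a masked word by counting which
    # word most often appears in that slot in a small made-up "corpus".
    from collections import Counter

    corpus = [
        "the cat sat on the mat",
        "the dog sat on the rug",
        "the cat slept on the mat",
        "a cat sat on a mat",
    ]

    def fill_blank(left: str, right: str) -> str:
        """Pick the word that most often occurs between `left` and `right`."""
        counts = Counter()
        for sentence in corpus:
            words = sentence.split()
            for i in range(1, len(words) - 1):
                if words[i - 1] == left and words[i + 1] == right:
                    counts[words[i]] += 1
        if not counts:
            return "<no idea>"  # no context clues: exactly where these tools flail
        return counts.most_common(1)[0][0]

    print(fill_blank("cat", "on"))    # -> "sat": plenty of surrounding context
    print(fill_blank("zebra", "on"))  # -> "<no idea>": no context to lean on
    ```

    Swap the counting for a neural network trained on most of the internet and the trade is the same: great when the surrounding context is ordinary, shaky when it isn’t.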



  • Oh, it is supposed to be an article about the experiment, or rather an experiment itself; the kind of writing I output for my job is very different. It seems like my intentions were pretty roundly misinterpreted here in general; still, it took ten minutes to write from inception of the idea for the article, so I’m not too upset by that.

    Agreed re paragraph titles; for me this is pretty much all about making dictation a more streamlined process. This is the first time I’ve found it accurate enough to be useful and had a way (via ChatGPT splitting things into paragraphs) to make it accessible to edit.

    Wildly, I wouldn’t actually say I’m overworked writing-wise as an academic, but I am certainly the exception there.




  • “serious question: did you expect otherwise, and if so, why? I’ve seen a number of people attempt this tooling for this reason and it seems absurd to me (but I’m already aware of the background of how these things work)”

    In answer to your first question, no, I didn’t expect it to be good for finding references.

    For some context on myself, I’m a statistician, essentially. I have some background in AI research, and while I’ve not worked with large language models directly, I have some experience with neural networks and natural language processing.

    However, my colleagues, particularly in the teaching realm, are less familiar with what ChatGPT can be used for, and do try to use it for all the things I’ve mentioned.

    “this is actively worsening from both sides - on goog’s side with doing all the weird card/summation/etc crap, on the other side where people are (likely already with LLMs) generating filler content for clickthrough sites. an awful state of affairs”

    You are right that the quality of Google search results is worse, but I’ll admit to using the term Google somewhat pejoratively to mean the usual process I would use to seek out information, which would involve Google, but also Google Scholar, my university’s library services, and searching the relevant journals for my field. Apologies for the imprecision there.

    “nit: this is correct but possibly not in the way that you meant”

    With regards to the hallucinations, I am using the word in a colloquial sense to mean it’s generating “facts that aren’t true”.

    “that the post itself was characterised by a number of short-header-short-paragraph entries is notable (and probably somewhat obvious as to why?). what I can’t see is how that can necessarily gain you time in the case of something where you’d be working in much longer/more complex paragraphs, or more haltingly in between areas as you pause on structure and such”

    The structure being short paragraphs is partly down to the way I was speaking: I was talking off the top of my head, so my content wouldn’t form coherent long paragraphs anyway. Having used this approach in a few different contexts, it does break things into longer paragraphs when the content supports them. I couldn’t predict exactly when it would break things into longer or shorter paragraphs, but it does a good enough job for editing the text as a first draft.

    ChatGPT is certainly aggressive with generating the headers, and honestly, I don’t tend to use the header version all that much. I just thought it was an interesting demonstration.

    Also, with this example, in contrast to the ones in my work, I had the idea for this post come into my head, recorded it, and posted it here in under ten minutes. Well, that’s not strictly true. There was a bug when I tried to post it that I had to get mod support for, but otherwise, it was under ten minutes.

    At work, the content is not stuff that’s off the top of my head. I talk about my subject and I teach my subject all the time, so I’m already able to speak with precision about it; as such, dictation is helpful for capturing what I can convey verbally.

    “in the end precision is precision, and it takes a certain amount of work, time, and focus to achieve. technological advances can help on certain dimensions of this, but ime even that usually comes at a tradeoff somewhere”

    You’re right that precision does take time, and what comes out of this isn’t suitable for the final draft of a research paper. However, it gets you 80% of the way there, and often, in the early stages of writing a research paper or similar, the key thing is to communicate what you’re working on with colleagues. Being able to draft several thousand words in under an hour, so I can give someone a good idea of what I’m aiming for, is very useful.

    Anyway, thanks for your feedback. I really appreciate it.

    (Full disclosure: I also wrote this comment using ChatGPT/Whisper AI and copying your quotes in.)

    (Well, I say using ChatGPT. This isn’t really about using ChatGPT to do anything more than put paragraphs in, and headings if you so desire; there’s a rough sketch of the pipeline below. I just thought this was worth posting because the technique is useful to me and I thought others might find it handy.)
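    For anyone who wants to try the same trick, here’s a minimal sketch of the kind of pipeline I mean, assuming the OpenAI v1 Python client; the model names, file path, and prompt wording are placeholders rather than recommendations, and a locally run Whisper would handle the transcription step just as well.

    ```python
    # Rough sketch of the dictation workflow described above (assumes
    # `pip install openai` and an OPENAI_API_KEY in the environment).
    # Model names, file path, and prompt are placeholders.
    from openai import OpenAI

    client = OpenAI()

    # 1. Speech to text: transcribe a voice memo with Whisper.
    with open("voice_memo.m4a", "rb") as audio:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

    # 2. Formatting only: ask the chat model to add paragraph breaks (and optional
    #    headers) without rewording anything.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Split the user's dictated text into paragraphs. Do not change, "
                    "add, or remove any words; only insert paragraph breaks and, "
                    "where natural, short section headers."
                ),
            },
            {"role": "user", "content": transcript.text},
        ],
    )

    draft = response.choices[0].message.content
    print(draft)  # a first draft to edit by hand, not a finished piece
    ```

    The division of labour is the whole point: Whisper handles the words, the chat model only touches the layout, and the result still gets a human editing pass before it goes anywhere.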




  • “My worry in 2021 was simply that the TESCREAL bundle of ideologies itself contains all the ingredients needed to ‘justify,’ in the eyes of true believers, extreme measures to ‘protect’ and ‘preserve’ what Bostrom’s colleague, Toby Ord, describes as our ‘vast and glorious’ future among the heavens.”

    Golly gee, those sure are all the ingredients for white supremacy these folk are playing around with. What a good job there are no signs of racism… right, right!!!

    In other news, I find it wild that big Yud has gone on an arc from “I will build an AI to save everyone” to “let’s do a domestic terrorism against AI researchers.” He should be careful; someone might think this is displaced rage at his own failure to make any kind of intellectual progress while academic AI researchers have passed him by.

    (Idk if anyone remembers how salty he was when AlphaGo showed up and crapped all over his “symbolic AI is the only way” mantra, but it’s pretty funny to me that the very group of people he used to say were incompetent are a “threat” to him now that they’re successful. Schoolyard bully stuff and wotnot.)