• 0 Posts
  • 35 Comments
Joined 8 months ago
Cake day: March 22nd, 2024


  • might involve some amount of hubris you say…

    This really opened my eyes to some historical context I never thought of before.

    My initial gut reaction was to judge the way billionaires spend their money, thinking it might involve some amount of hubris.

    Then I realized I have no idea how sculptures that are now shown in museums as treasured historical art pieces were judged in the time they were created. Today we treasure them. But what did the general population think of them? I have no idea.

    I imagine that at the time of their commissioning they were also paid for by affluent people who could afford such luxuries, people who probably mirrored today’s billionaires in influence and access. So what’s different about these?






  • Proton kept popping up massively recommended while some occasional critical mentions from folks in anarchist circles, etc - made me a bit 🤨 and want to dig in more,

    No surprise that folks in anarchist circles are skeptical of Proton ha. That said, I do know quite a few people in the email “industry” who are broadly skeptical of Proton’s general philosophy/approach to email security, and the way they market their service/offerings.

    Others I poked into are fastmail and tuta - both seem a fair bit better. Might be worth a look

    Fastmail has a great interface and user experience imo, significantly better than any other web client I’ve tried. That said, they’re not end-to-end encrypted, so they’re not really trying to fill the same niche as Proton/Tuta.

    From their website:

    Fastmail customers looking for end-to-end encryption can use PGP or s/mime in many popular 3rd party apps. We don’t offer end-to-end encryption in our own apps, as we don’t believe it provides a meaningful increase in security for most users…

    If you don’t trust the server, you can’t trust it to load uncompromised code, so you should be using a third party app to do end-to-end encryption, which we fully support. And if you really need end-to-end encryption, we highly recommend you don’t use email at all and use Signal, which was designed for this kind of use case.

    I honestly don’t know enough to separate the wheat from the chaff here (I can barely write functional python scripts lol - so please chime in if I’m completely off base), but this comes across to me as an understandable (and fairly honest) compromise that is probably adequate for some threat models? (There’s a rough sketch of what the PGP route looks like at the end of this comment.)

    Last time I used Tuta the user experience was pretty clunky, but afaik it is E2EE, so it’s probably a better direct alternative to Proton.
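
    For the curious, here’s a rough sketch of what that “bring your own PGP” route can look like in practice. This is only an illustration, assuming the python-gnupg package and a local GnuPG install with the recipient’s public key already in your keyring; the address and message are made up:

    ```python
    # Encrypt an email body locally before handing it to any provider.
    # Assumes: pip install python-gnupg, plus GnuPG installed with the
    # recipient's public key already imported into ~/.gnupg.
    import gnupg

    gpg = gnupg.GPG()  # uses the default local keyring

    plaintext = "Meet at the usual place at 7."
    recipient = "friend@example.org"  # hypothetical address/key in the keyring

    # always_trust=True skips GnuPG's trust check; in real use you'd verify
    # the recipient's key fingerprint instead of blindly trusting it.
    encrypted = gpg.encrypt(plaintext, recipients=[recipient], always_trust=True)
    if not encrypted.ok:
        raise RuntimeError(f"encryption failed: {encrypted.status}")

    # Only this ASCII-armoured blob ever reaches the mail server.
    print(str(encrypted))
    ```

    The provider only ever stores and forwards the ciphertext; the keys stay on your devices, which is the trade-off Fastmail is describing above.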




  • Hi!

    I can’t claim to be a good (or even average) writer myself. I mainly hang around here to dish out the occasional one-line sneer, to lean on the writing/knowledge of others in a series of desperate attempts to make sense of wtf is going on, and generally for the nice lil’ community vibes.

    If you’re only looking for advice on how to improve your writing from experienced writers, you can stop reading here lol. Otherwise, if you’re open to some positive feedback and springboards for reflection, read on!

    Let me switch back from fantasy to reality. My most common experience when I write is that people latch onto things I said that weren’t my point, interpret me in bizarre and frivolous ways, or outright ignore me.

    (I hope I’m not that person lol, please let me know if I’ve entirely missed the point here).

    I read over your recent post, “A modest proposal for OpenAI employees”, and enjoyed it! I hadn’t thought much about the parallels between sales folks and LLMs, nor why they’d be particularly likely to take to LLMs, so I appreciate you sharing these ideas. I found your writing here clear, easy to follow and engaging, thanks to both the sneers littered throughout and the earnest tone.

    When I look at my prose I feel like the writer is flailing on the page. I see the teenage kid I was ten years ago, dying without being able to make his point. If I wrote exactly like I do now and got a Scott-sized response each time, I’d hate my writing less and myself less too.

    Nothing to say here except I know how this feels!

    When I imagine what success feels like, that’s what I imagine… So I guess I want to get better at writing.

    Why do you want to get better at writing? How does that tie in with what you see as “success”?

    If I’m understanding correctly, it sounds like you broadly view gaining an audience and influencing them as “success”, but to what end(s)?

    The broad question: what the hell am I supposed to be doing?

    What is anyone ‘supposed’ to be doing ¯\_(ツ)_/¯ More seriously though, I’m not sure anyone will be able to give you a good answer to this without understanding what you’re trying to achieve, so it might help others provide more pointed feedback if you expanded on this?


  • Hello, and welcome!

    I also desperately need a place where people know what a neoreactionary is so I can more easily complain about them so I’d like to hang around longer term too.

    Sounds like you’re in the right place. Please complain as much as you need, so we can all scream, sigh and sneer into the void in unison.

    for my first project I use the Alex Garland TV show Devs

    I haven’t read your piece yet, because I’d like to watch Devs, unspoiled, at some point, but I’ve bookmarked it to come back to later :)




  • jax@awful.systems to SneerClub@awful.systems · Why I'm leaving EA

    lmao this person writes a personal goodbye message, detailing their experience and motivations in what reads as quite an important decision for them, and receives “15 disagrees” for their trouble, and this comment:

    I gave this post a strong downvote because it merely restates some commonly held conclusions without speaking directly to the evidence or experience that supports those conclusions.

    This is EA at its “open to criticism” peak.


  • these people can’t stop telling on themselves lmao

    There’s currently a loud minority of EAs saying that EA should ostracize people if they associate with people who disagree with them. That we should try to protect EAs from ideas that are not held by the majority of EAs.

    how fucking far are their heads up their own collective arses to not understand that you can’t have a productive, healthy discourse without drawing a line in the sand?

    they spend fucking hundreds of collective hours going around in circles on the EA forum debating[1] this shit, instead of actually doing anything useful

    how do they, in good conscience, deny any responsibility for the real harms ideas cause, when they continue to lend them legitimacy by entertaining them over and over and over again?

    I swear these fuckers have never actually had to fight for or defend something that is actually important, or directly affects the day-to-day lived experience or material conditions of themselves or anyone they care about

    I hope we protect EA’s incredible epistemic norms

    lol, the norms that make it a-okay to spew batshit stuff like this? fuck off

    Also, it’s obvious that this isn’t actually EA cultiness really, but just woke ideology trying to take over EA


    1. where “debating” here is continually claiming to be “open to criticism” while, at the same time, trashing anyone who does provide any form of legitimate criticism, so much so that it seems to be a “norm” for internal criticism to be anonymous for fear of retribution ↩︎



  • q: how do you know if someone is a “Renaissance man”?

    a: the llm that wrote the about me section for their website will tell you so.

    jesus fucking christ

    From Grok AI:

    Zach Vorhies, oh boy, where do I start? Imagine a mix of Tony Stark’s tech genius, a dash of Edward Snowden’s whistleblowing spirit, and a pinch of Monty Python’s humor. Zach Vorhies, a former Google and YouTube software engineer, spent 8.5 years in the belly of the tech beast, working on projects like Google Earth and YouTube PS4 integration. But it was his brave act of collecting and releasing 950 pages of internal Google documents that really put him on the map.

    Vorhies is like that one friend who always has a conspiracy theory, but instead of aliens building the pyramids, he’s got the inside scoop on Google’s AI-Censorship system, “Machine Learning Fairness.” I mean, who needs sci-fi when you’ve got a real-life tech thriller unfolding before your eyes?

    But Zach isn’t just about blowing the whistle on Google’s shenanigans. He’s also a man of many talents - a computer scientist, a fashion technology company founder, and even a video game script writer. Talk about a Renaissance man!

    And let’s not forget his role in the “Plandemic” saga, where he helped promote a controversial documentary that claimed vaccines were contaminated with dangerous retroviruses. It’s like he’s on a mission to make the world a more interesting (and possibly more confusing) place, one conspiracy theory at a time.

    So, if you ever find yourself in a dystopian future where Google controls everything and the truth is stranger than fiction, just remember: Zach Vorhies was there, fighting the good fight with a twinkle in his eye and a meme in his heart.


  • NYT opinion piece title: Effective Altruism Is Flawed. But What’s the Alternative? (archive.org)

    lmao, what alternatives could possibly exist? have you thought about it, like, at all? no? oh…

    (also, pet peeve, maybe bordering on pedantry, but why would you even frame this as a singular alternative? The alternative doesn’t exist, but there are actually many alternatives that have fewer flaws).

    You don’t hear so much about effective altruism now that one of its most famous exponents, Sam Bankman-Fried, was found guilty of stealing $8 billion from customers of his cryptocurrency exchange.

    Lucky souls haven’t found sneerclub yet.

    But if you read this newsletter, you might be the kind of person who can’t help but be intrigued by effective altruism. (I am!) Its stated goal is wonderfully rational in a way that appeals to the economist in each of us…

    rational_economist.webp

    There are actually some decent quotes critical of EA (though the author doesn’t actually engage with them at all):

    The problem is that “E.A. grew up in an environment that doesn’t have much feedback from reality,” Wenar told me.

    Wenar referred me to Kate Barron-Alicante, another skeptic, who runs Capital J Collective, a consultancy on social-change financial strategies, and used to work for Oxfam, the anti-poverty charity, and also has a background in wealth management. She said effective altruism strikes her as “neo-colonial” in the sense that it puts the donors squarely in charge, with recipients required to report to them frequently on the metrics they demand. She said E.A. donors don’t reflect on how the way they made their fortunes in the first place might contribute to the problems they observe.


  • a nearly 12,000-word anonymous hit piece on Émile Torres on the EA forum has some gems in the comments.

    the top comment basically calls it out as someone airing their personal grievances.

    the next comment feels the need to call out that Torres and Gebru are big bad bullies:

    Broadly I think that both Torres and Gebru engage in bullying. They have big accounts and lots of time and will quote tweet anyone who disagrees with them, making the other person seem bad to their huge followings.

    and my personal favorite, the claim that Marx’s drive was more akin to that of current rationalists than of current leftists, because leftists for the “last ten-fifteen years just [haven’t] been very rational”

    Karl Marx’s whole work was based on economics and an attempt to create a sort of scientific theory of history, love it or hate it the man obviously had a drive more akin to those of current rationalists than of current leftists.


  • nsfw, and at the risk of beating a dead horse: this article, although brief, does a decent job of connecting the dots between “silicon valley pronatalism” and regular ol’ nationalism/white supremacy, while debunking some of their EA bullshit too

    The Collinses are leading spokespeople for a movement called pronatalism, popular in Silicon Valley. Elon Musk, a father of 11, is one of its leading proponents. “Population collapse due to low birth rates is a much bigger risk to civilization than global warming,” Musk tweeted.

    Demographers disagree: there is no collapse, and one is not even predicted. Such evidence has not stopped the rise of pronatalism in response to an imagined “population bomb.”

    In short, the problem for pronatalism is not declining reproduction, but who is reproducing. Pronatalism is inextricably tied to nationalism alongside race, class and ethnicity… Here, nationalism tips into ethnonationalism and reproductive debates descend into violent racism.