It’s not always easy to distinguish between existentialism and a bad mood.
- 14 Posts
- 259 Comments
Architeuthis@awful.systems to NotAwfulTech@awful.systems • Advent of Code 2024 - the home stretch - it's been an aMAZEing year
3 · 1 year ago
23-2
Leaving something to run for 20-30 minutes expecting nothing and actually getting a valid and correct result: new positive feeling unlocked.
Now to find out how I was ideally supposed to solve it.
Architeuthis@awful.systems to NotAwfulTech@awful.systems • Advent of Code Week 3 - you're lost in a maze of twisty mazes, all alike
2 · 1 year ago
If nothing else, you've definitely stopped me forever from thinking of jq as SQL for JSON. Depending on how much I hate myself by next year, I think I might give Kusto a shot for AoC '25.
Architeuthis@awful.systems to NotAwfulTech@awful.systems • Advent of Code Week 3 - you're lost in a maze of twisty mazes, all alike
2 · 1 year ago
22-2 commentary
I got a different solution than the one given on the site for the example data: the sequence starting with 2 did not yield the expected pattern at all, and the one I actually got gave more bananas anyway.
The algorithm gave the correct result for the actual puzzle data though, so I’m leaving it well alone.
Also the problem had a strong map/reduce vibe so I started out with the sequence generation and subsequent transformations parallelized already from pt1, but ultimately it wasn’t that intensive a problem.
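For reference, a minimal F# sketch of what that parallelized sequence generation can look like (not my actual code; the mix/prune step is written from memory of the puzzle text, so treat the exact constants as an assumption):

```fsharp
// One pseudorandom step: mix = XOR into the secret, prune = mod 16777216 (as I recall the rules).
let next (secret: int64) =
    let mix v s = v ^^^ s
    let prune s = s % 16777216L
    let s1 = secret * 64L |> mix secret |> prune
    let s2 = s1 / 32L     |> mix s1     |> prune
    s2 * 2048L            |> mix s2     |> prune

// Each buyer's 2000-step sequence is independent, hence the map/reduce vibe:
// parallel map over the starting secrets, then a plain sum as the reduce.
let part1 (seeds: int64 list) =
    seeds
    |> List.toArray
    |> Array.Parallel.map (fun seed ->
        (seed, [ 1 .. 2000 ]) ||> List.fold (fun s _ -> next s))
    |> Array.sum
```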
Toddler’s sick (but getting better!) so I’ve been falling behind, oh well. Doubt I’ll be doing 24 & 25 on their release days either as the off-days and festivities start kicking in.
Architeuthis@awful.systems to TechTakes@awful.systems • Anthropic and Apollo astounded to find that a chatbot will lie to you if you tell it to lie to you
19 · 1 year ago
I mean, you could have answered by naming one fabled new ability LLMs suddenly 'gained' instead of being a smarmy tadpole, but you didn't.
Architeuthis@awful.systems to TechTakes@awful.systems • Anthropic and Apollo astounded to find that a chatbot will lie to you if you tell it to lie to you
20 · 1 year ago
What new AI abilities? LLMs aren't pokemon.
Architeuthis@awful.systems to TechTakes@awful.systems • Anthropic and Apollo astounded to find that a chatbot will lie to you if you tell it to lie to you
15 · 1 year ago
Slate Scott just wrote about a billion words of extra rigorous prompt-anthropomorphizing fanfiction on the subject of the paper; he called the article When Claude Fights Back.
Can’t help but wonder if he’s just a critihype enabling useful idiot who refuses to know better or if he’s being purposefully dishonest to proselytize people into his brand of AI doomerism and EA, or if the difference is meaningful.
edit: The Claude syllogistic scratchpad also makes an appearance. It's that thing where we pretend that they have a module that gives you access to the LLM's inner monologue, complete with privacy settings, instead of just recording the result of someone prompting a variation of "So what were you thinking when you wrote so and so, remember no one can read what you reply here". Cue a bunch of people in the comments moving straight into wondering if Claude has qualia.
Architeuthis@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 23rd December 2024
26 · 1 year ago
Rationalist debatelord org Rootclaim, who in early 2024 lost a $100K bet by failing to defend the covid lab leak theory against a random ACX commenter, will now debate millionaire covid vaccine truther Steve Kirsch on whether covid vaccines killed more people than they saved, with the loser giving up $1M.
One would assume this to be a slam dunk, but then again one would assume the people who founded an entire organization about establishing ground truths via rationalist debate would actually be good at rationally debating.
Architeuthis@awful.systems to TechTakes@awful.systems • "Sam Altman is one of the dullest, most incurious and least creative people to walk this earth."
29 · 1 year ago
It's useful insofar as you can accommodate its fundamental flaw of randomly making stuff the fuck up, say by having a qualified expert constantly combing its output instead of doing original work, and don't mind putting your name on low quality derivative slop in the first place.
Architeuthis@awful.systems to NotAwfulTech@awful.systems • Advent of Code Week 3 - you're lost in a maze of twisty mazes, all alike
3 · 1 year ago
16 commentary
DFS (it's all DFS all the time now, this is my life now, thanks AoC) pruned by unless-I-ever-passed-through-here-with-a-smaller-score-before worked well enough for Pt1. In Pt2, in order to get all the paths, I only had to loosen the filter by a) not pruning for equal scores and b) only pruning if the direction also matched.
Pt2 was easier for me because, while at first it took me a bit to land on lifting stuff from Dijkstra's algo so the challenge maze would solve before the sun goes supernova, I tend to store the paths for debugging anyway, so it was trivial to group them by score and count the distinct tiles.
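The Pt2 version of that filter is basically the following (a sketch rather than my actual code; the Dictionary keyed on (tile, direction) is just one way to do the bookkeeping):

```fsharp
open System.Collections.Generic

type Dir = N | E | S | W

// Best score seen so far per (tile, direction).
let best = Dictionary<(int * int) * Dir, int>()

// Pt2 filter: prune only when we've already been on this tile *facing the same way*
// with a strictly smaller score, so every tile on every optimal path survives.
let shouldPrune (pos: int * int) (dir: Dir) (score: int) =
    match best.TryGetValue((pos, dir)) with
    | true, prev when prev < score -> true
    | _ ->
        best.[(pos, dir)] <- score
        false
```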
Architeuthis@awful.systems to SneerClub@awful.systems • Casey Newton drinks the kool-aid
6 · 1 year ago
> And all that stuff just turned out to be true
Literally what stuff, that AI would get somewhat better as technology progresses?
I seem to remember Yud specifically wasn't that impressed with machine learning and thought so-called AGI would come about through ELIZA-type AIs.
Architeuthis@awful.systems to TechTakes@awful.systems • Apple Intelligence AI mangles headlines so badly the BBC officially complains
40 · 1 year ago
In every RAG guide I've seen, the suggested system prompts always tended to include some more dignified variation of "Please for the love of god only and exclusively use the contents of the retrieved text to answer the user's question, I am literally on my knees begging you."
Also, if reddit is any indication, a lot of people actually think that’s all it takes and that the hallucination stuff is just people using LLMs wrong. I mean, it would be insane to pour so much money into something so obviously fundamentally flawed, right?
Architeuthis@awful.systems to NotAwfulTech@awful.systems • Advent of Code 2024 Week 2: this time it's all grids, all the time
3 · 1 year ago
Pt2 commentary
I randomly got it by sorting for the most robots in the bottom left quadrant while looking for robot concentrations; it was number 13. Despite being in the centre of the grid, it didn't show up when sorting for most robots in the middle 30% of columns of the screen, which is kind of wicked, in the traditional sense.
The first things I tried were looking for horizontal symmetry (find a grid where all the lines have the same number of robots on the left and on the right of the middle axis; there is none, and the tree is about a third to a quarter of the matrix from each side) and looking for grids where the number of robots increased towards the bottom of the image (didn't work, because it turns out the tree is in the middle of the screen).
I think I was on the right track with looking for concentrations of robots; wish I'd thought about ranking the matrices according to the number of robots lined up without gaps. Don't know about minimizing the safety score; sorting according to that didn't show the tree anywhere near the first tens.
Realizing that the patterns start recycling at ~10,000 iterations simplified things considerably.
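For the curious, the quadrant ranking boils down to something like this (a sketch, not my actual code; it assumes the standard 101×103 grid and that the robot positions per step are already simulated):

```fsharp
// Count robots strictly inside the bottom-left quadrant of the 101x103 grid.
let bottomLeftCount (robots: (int * int) list) =
    robots
    |> List.filter (fun (x, y) -> x < 101 / 2 && y > 103 / 2)
    |> List.length

// Rank the (step, positions) states by that count and eyeball the top few for the tree.
let rankSteps (states: (int * (int * int) list) list) =
    states
    |> List.sortByDescending (fun (_, robots) -> bottomLeftCount robots)
    |> List.truncate 20
```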
The tree on the terminal output
(This is three matrices separated by rows of underscores)

Architeuthis@awful.systems to NotAwfulTech@awful.systems • Advent of Code 2024 Week 2: this time it's all grids, all the time
3 · 1 year ago
13 commentary
Solved p1 by graph search before looking a bit closer at the examples and going, oh…
In pt2 I had some floating point weirdness when solving for keypress count. I was checking if the key presses were integers (can't press button A five and a half times, after all) by checking if A = floor(A), and sometimes A would drop to the number below when floored, i.e. it was in reality (A-1).999999999999999999999999999999999999999999999. Whatever, I rounded it away, but I did spend a stupid amount of time on it because it didn't happen in the example set.
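For what it's worth, one way to dodge the float weirdness entirely is to solve the 2×2 system with Cramer's rule in plain int64 and only accept exact divisions (a sketch with made-up names, not what I actually ran):

```fsharp
// a presses of button A (ax, ay) plus b presses of button B (bx, by) must land on the prize (px, py).
let solve (ax: int64) (ay: int64) (bx: int64) (by: int64) (px: int64) (py: int64) =
    let det = ax * by - ay * bx
    if det = 0L then None
    else
        let aNum = px * by - py * bx
        let bNum = ax * py - ay * px
        // Only keep solutions where both counts divide exactly and are non-negative: no floor, no epsilon.
        if aNum % det = 0L && bNum % det = 0L && aNum / det >= 0L && bNum / det >= 0L then
            Some (aNum / det, bNum / det)
        else None
```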
Architeuthis@awful.systems to TechTakes@awful.systems • SF tech startup Scale AI, worth $13.8B, accused of widespread wage theft
13 · 1 year ago
If you never come up with a marketable product you can remain a startup indefinitely.
Architeuthis@awful.systems to SneerClub@awful.systems • EAs at VOX malding at the latest NY time hit piece.
9 · 1 year ago
Getting Trump reelected should count as 'billionaire philanthropy'.

> thinkers like computer scientist Eliezer Yudkowsky

That's gotta sting a bit.
Architeuthis@awful.systems to SneerClub@awful.systems • In which some researchers draw a spooky picture and spook themselves
1 · 1 year ago
deleted by creator
Architeuthis@awful.systems to NotAwfulTech@awful.systems • Advent of Code 2024 Week 2: this time it's all grids, all the time
2 · 1 year ago
11 discussion with spoilers
Well, my pt1 solution would require something like at least 1.5 petabytes of RAM to hold the fully expanded array, so it was back to the drawing board for pt2 😁
Luckily I noticed the numbers produced in every iteration were incredibly repetitive, so I assigned a separate accumulator to each one: every iteration I only kept the unique numbers and updated the corresponding accumulators with how many times they had appeared, and finally I summed the accumulators.
At most there were 3777 unique numbers in a single iteration, and the 75-step execution was basically instant.
edit: other unhinged attempts included building a cache of how many pebbles resulted from a number after x steps, which I would start using after reaching the halfway point, so every time I found a cached number I would replace that branch with the final count according to the remaining steps. I couldn't think of a way to actually track how many pebbles result downstream from a specific pebble, but at least it got me thinking about tracking something along each pebble.
11 code
```fsharp
// F# as usual
// fst and snd are tuple deconstruction helpers
open System  // for Math (assumed; the original snippet relies on the project's shared opens)

[<TailCall>]
let rec blink (idx: int) (maxIdx: int) (pebbles: (int64 * int64) list) =
    if idx = maxIdx then
        pebbles |> List.sumBy snd
    else
        pebbles
        // Expand array
        |> List.collect (fun (pebbleId, pebbleCount) ->
            let fpb = float pebbleId
            let digitCount = Math.Ceiling(Math.Log(fpb + 1.0, 10))
            match pebbleId with
            | 0L -> [ 1L, pebbleCount ]
            | x when digitCount % 2.0 = 0.0 ->
                let factor = Math.Pow(10, digitCount / 2.0)
                let right = fpb % factor
                let left = (fpb - right) / factor
                [ int64 left, pebbleCount; int64 right, pebbleCount ]
            | x -> [ x * 2024L, pebbleCount ])
        // Compress array
        |> List.groupBy fst
        |> List.map (fun (pebbleId, pebbleGroup) -> pebbleId, pebbleGroup |> List.sumBy snd)
        |> blink (idx + 1) maxIdx

"./input.example"
|> Common.parse
|> List.map (fun pebble -> pebble, 1L)
|> blink 0 25
|> Global.shouldBe 55312L

"./input.actual"
|> Common.parse
|> List.map (fun pebble -> pebble, 1L)
|> blink 0 75
|> printfn "Pebble count after 75 blinks is %d"
```
Architeuthis@awful.systems to TechTakes@awful.systems • Sorry kid, we killed your robot — Embodied goes broke, AI social devices will stop working
11 · 1 year ago
> promise me you'll remember me

I'm partial to "Avenge me!" as last words myself.

Come on, the AI wrote code that published his wallet key and then he straight up tweeted it in a screenshot; it's objectively funny/harrowing.
Also, the thing with AI tooling isn't so much that it isn't used wisely as that you might get several constructive and helpful outputs followed by a very convincingly correct-looking one that is in fact utterly catastrophic.