kivikakk.ee

kat

Today marks 20 years since Kat de Koning’s suicide.

Missing you.

unedited thoughts about LLMs

Imagining that LLM assistants will help with what is essentially an exercise in theory-building seems deeply mistaken.

  • Creating stable and maintainable software requires knowing the properties of the environments with intimate clarity (envs e.g.: Elixir, the web, AWS as a whole, HTTP in particular, whatever domains you cross), and having a clear theory of the software under development: as a whole, working project; as an implementation of ideas in a certain language or framework or otherwise; as an artefact being worked on over time (vis-à-vis version control, project management, etc.); and so forth through different lenses.
  • LLMs can help with none of these, and actively hinder several aspects.
  • Today’s best approaches only really serve to distance the user from all of these concepts. The work becomes instead “convince the generative model to produce code closest to what I want”, but “closest” is a non-specific property that a generative model will naturally exploit. It will give code a certain way (per its (quite literally) illegally and unethically sourced training data! woo!), and that will then dictate a bit more of the shape of the code you write (or generate) next. There is an entire class of tiny decisions you are repudiating, and the difference in craft between the two (and I do mean by craft a proxy for intelligibility, performance, reliability, etc.) is one that will become more and more obvious as time goes on.
    • I rarely bother to “predict” anything, so this is interesting. Usual assumptions apply: I don’t particularly feel it (or anything) will come true, mostly because I’m always extremely prepared to be disappointed even more. e.g. it may well not happen; well-written/reliable software might just cease to exist in the large instead ¯\_(ツ)_/¯
  • If typing speed has ever been the bottleneck for your programming you are Doing It Wrong. LLM-centric approaches, even in an agentic scenario (or whatever! the model fundamentally hallucinates! in these approaches, they always will. stop falling for the next thing every 6 months, it’s boring!), decentre all of the theory-building aspect and the hands-on experience necessary if you ever again see yourself having to work on this by hand (vs. declaring it write-only, hoping the agent definitely gives you good things to paste in the console when there’s an incident and you can’t understand the data flow yourself!).
  • Where do you honestly see this going? Has there ever been any indication that this isn’t another bubble? Do you not already see the horror stories? I haven’t even mentioned the environmental costs; the ones that threaten to displace all other costs with their effects. Or are you going soft on “global warming” too?

know the difference

I think one of my biggest surprises in going to Estonia was discovering that European wasps are just called wasps there.

Dismantling MIFARE Classic

Since the tag nonce and uid are sent as plaintext, we also recover the LFSR state before feeding in nT ⊕ uid (step 4). Note that this LFSR state is the secret key!

Dismantling MIFARE Classic
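The rollback trick in the quote is easy to sketch: if you know a later LFSR state and the bits that were shifted in, you can clock the register backwards to any earlier state. A minimal Fibonacci-LFSR sketch follows; the tap positions here are illustrative stand-ins, not the actual Crypto-1 feedback polynomial.

```python
# 48-bit Fibonacci LFSR with external input, plus its inverse ("rollback").
# TAPS is an ASSUMED illustrative polynomial, not Crypto-1's.
MASK = (1 << 48) - 1
TAPS = (0, 5, 9, 10, 12, 14, 15, 17, 19, 24, 25, 27, 29, 35, 39, 41, 42, 43)

def lfsr_step(state: int, in_bit: int = 0) -> int:
    """Shift right by one; the new top bit is the tap parity XOR the input bit."""
    fb = in_bit
    for t in TAPS:
        fb ^= (state >> t) & 1
    return (state >> 1) | (fb << 47)

def lfsr_rollback(state: int, in_bit: int = 0) -> int:
    """Invert lfsr_step: recover the state from before in_bit was shifted in."""
    fb = (state >> 47) & 1        # the feedback bit that was shifted in last step
    prev = (state << 1) & MASK    # bits 1..47 of the prior state; bit 0 still unknown
    b0 = fb ^ in_bit
    for t in TAPS:
        if t != 0:
            b0 ^= (prev >> t) & 1  # tap 0 is the unknown bit itself, so skip it
    return prev | b0
```

Rolling back past the point where nT ⊕ uid was shifted in is exactly why recovering a later state is fatal: the state you land on is the cipher's initial state, i.e. the secret key.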

1179648

// This is a weird looking number! We really want our first request size to be 1MiB,
// which is a common IO size. But Linux's readahead will try to read an extra 128k on
// top of a 1MiB read, which we'd have to wait for a second request to service. Because
// FUSE doesn't know the difference between regular reads and readahead reads, it will
// send us a READ request for that 128k, so we'll have to block waiting for it even if
// the application doesn't want it. This is all in the noise for sequential IO, but
// waiting for the readahead hurts random IO. So we add 128k to the first request size
// to avoid the latency hit of the second request.
//
// Note the CRT does not respect this value right now; it always returns chunks of part size,
// but this is the first window size we prefer.
const INITIAL_READ_WINDOW_SIZE: usize = 1024 * 1024 + 128 * 1024;

github.com/awslabs/mountpoint-s3/mountpoint-s3-fs/src/s3/config.rs:243-254
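The title is just that constant evaluated: a 1 MiB first read plus the 128 KiB readahead window the comment describes, fetched together so the readahead never needs a second request.

```python
# The two sizes from the comment, added together.
MIB = 1024 * 1024        # 1,048,576: the common IO size they want to serve
READAHEAD = 128 * 1024   # 131,072: Linux's extra readahead on top of that read
INITIAL_READ_WINDOW_SIZE = MIB + READAHEAD
print(INITIAL_READ_WINDOW_SIZE)  # → 1179648
```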