Ok, so here's the thing.
There's no way I'm ever catching up.
I've been reading, watching, thinking about, and working on entire freaking worlds of such intricate interest that there is absolutely no way I'm ever giving you the full picture, even if I were to somehow start writing at the staggering pace of an article per week[1]. At some point, I have to start being honest with myself about what a realistic expectation looks like here. Which, unfortunately, means writing about a relatively smaller subset of my experience, but trying to do it a lot more regularly.
This is still very probably superior in some way to sitting in spin-locked silence, but do be aware that you're not getting anywhere near the complete picture. I suppose you never really were, but there was a glorious, 8-year moment where we could both squint and pretend. So, with that mindset, here's an incomplete, molten core sample from my thoughts at the moment.
There's a project I've been poking at for a few weeks at this point, meant to ease some pain in a particular collective decision-making process. We need to decide which paper to read next, you see. And the way we do it right now is, embarrassingly, by mildly abusing the github issue system. What I'm trying to do is come up with a voting and scheduling system for us that both hooks into the github auth system and doesn't suck.
The starting point is my usual prototyping toolkit of Common Lisp combined with fact-base, and the first thing I'm doing is putting together the minimal web API for interacting with collections of papers in an effort to prioritize them. I'm not going to talk more about this now, mostly because it isn't anywhere near done yet, and that's the goal state in the near future[2].
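The API surface is still very much in flux, but the vote-tallying at its core is roughly this shape. Here's a Python stand-in, purely to give a sense of it; every name below is made up, and the real thing is Common Lisp over fact-base:

```python
# Illustrative sketch only; the actual prototype is Common Lisp on
# fact-base, and none of these names exist in it.

class PaperQueue:
    """An in-memory stand-in for the paper-prioritizing API."""

    def __init__(self):
        self.papers = {}  # title -> set of voters

    def add(self, title):
        self.papers.setdefault(title, set())

    def vote(self, title, voter):
        # One vote per github user per paper; re-voting is a no-op.
        self.papers.setdefault(title, set()).add(voter)

    def ranked(self):
        # Most votes first; the front of the list is what we read next.
        return sorted(self.papers, key=lambda t: len(self.papers[t]), reverse=True)

queue = PaperQueue()
queue.add("Paper B")
queue.vote("Paper A", "alice")
queue.vote("Paper A", "bob")
queue.vote("Paper A", "alice")  # duplicate vote; no effect
queue.vote("Paper B", "carol")
print(queue.ranked())  # → ['Paper A', 'Paper B']
```

The set-of-voters representation is the point: it gets one-vote-per-user for free, which is exactly the property the github-auth hookup is supposed to guarantee.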
"Robustness", as in, a computation should be very, very hard to disrupt adversarially. And "founding principle", as in, it should take priority over correctness and performance. This is the bizarre-seeming idea behind a series of videos that I've been digesting lately. The particular approach that ends up evolving is that of a massively parallel cellular automaton, where effectively each cell's behavior is specified separately.
I've got a minimal, toy simulator implemented over on github that more or less works, and seems like it'll make exploring the idea relatively simple going forward. And there's a bunch of things that seem worth exploring, ranging from the implications of demoting correctness and performance in importance, to the specifics of how robustness works in the face of things like grey goo.
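To give a flavor of the per-cell-rule idea, here's a toy 1-D version in Python, where every cell carries its own update rule rather than sharing one global rule. This is a sketch for illustration only, not the simulator itself, and the rule names are my own inventions:

```python
# A toy automaton where each cell carries its OWN update rule.
# Sketch only; the actual simulator is structured differently.

def majority(state, neighbors):
    # Adopt the most common value among self + neighbors.
    votes = [state] + neighbors
    return max(set(votes), key=votes.count)

def stubborn(state, neighbors):
    # Ignores its neighborhood entirely.
    return state

def step(cells):
    # cells: a list of (state, rule) pairs. Every next-state is computed
    # from the OLD generation, then applied at once -- synchronously, the
    # way a massively parallel automaton would.
    new = []
    for i, (state, rule) in enumerate(cells):
        neighbors = [cells[j][0] for j in (i - 1, i + 1) if 0 <= j < len(cells)]
        new.append((rule(state, neighbors), rule))
    return new

# An adversary flips one cell; the majority cells vote it back.
row = [(1, majority), (1, majority), (0, majority), (1, majority), (1, stubborn)]
row = step(row)
print([state for state, _ in row])  # → [1, 1, 1, 1, 1]
```

Even this trivial version shows the robustness-over-correctness trade in miniature: the flipped cell gets repaired by its neighbors, but by the same mechanism, a legitimately dissenting cell would get steamrolled too.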
A recent paper[3] showed us how we might go about writing a self-interpreter for a language whose type system you would expect to prevent such shenanigans. The thrust of it so far is that this is possible by playing semantic games with the definition of "quoting" and "unquoting" in the macro sense. I'll let you know if additional insights are refined from today's reading.
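None of this reproduces the typed-language subtlety the paper is actually about, but the basic quote-then-interpret shape it plays games with looks something like the following. An entirely illustrative Python toy, where "quoted" programs are just nested tuples:

```python
# Toy quote/interpret setup: programs are "quoted" as nested tuples, and
# `interp` runs them. Illustrative only -- it shows the shape of the
# trick, not the type-system games that make a typed self-interpreter hard.

def interp(expr, env):
    # expr is a quoted program: strings are variables, tuples are either
    # ('lam', var, body) abstractions or (f, arg) applications.
    if isinstance(expr, str):
        return env[expr]
    if expr[0] == 'lam':
        _, var, body = expr
        return lambda arg: interp(body, {**env, var: arg})
    f, arg = expr
    return interp(f, env)(interp(arg, env))

# The identity function applied to a constant, the long way around:
program = (('lam', 'x', 'x'), 'k')
print(interp(program, {'k': 42}))  # → 42
```

The interesting question the paper answers is what happens when the quoted program you feed in is a quoted copy of the interpreter itself, in a language whose types were thought to forbid exactly that.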
A different recent paper showed us that machines are even more laughably insecure than we thought. There are supply-chain-level attacks that can compromise hardware in a way that is almost fiendishly hard to detect. We didn't think much of the revelation, but discussions on how we might get some security guarantees despite the presence of hardware trojans happened regardless. There doesn't seem to be a very good way of preventing eavesdropping, but guarding against maliciously corrupted computations at least seems possible. It also seems like the robustness-first principles from the previous section might help out here in some way. That part, I'll have to get back to you on after I do a bit more prototyping.
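The guarding-against-corruption side mostly comes down to redundancy. Here's a minimal sketch of the classic majority-vote approach, where "units" are just Python functions standing in for independent hardware; real defenses are far more involved, and everything named here is hypothetical:

```python
# One classical guard against maliciously corrupted computation: run it on
# several independent units and take a majority vote. This does nothing
# about eavesdropping, but a single trojaned unit can't silently flip the
# result. Sketch only; the "units" are just functions.
from collections import Counter

def vote(units, x):
    results = Counter(unit(x) for unit in units)
    answer, count = results.most_common(1)[0]
    if count <= len(units) // 2:
        raise RuntimeError("no majority -- too many corrupted units")
    return answer

honest = lambda x: x * x
trojan = lambda x: x * x + 1  # quietly corrupts the computation

print(vote([honest, honest, trojan], 7))  # → 49
```

The obvious catch is that this only holds while trojaned units stay a minority and fail independently, which a supply-chain attack is specifically positioned to violate; hence the interest in whether the robustness-first framing buys anything stronger.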
1. Which is already ridiculous, but that would just mean I'd start keeping up with the stream. In order to actually catch up in a year or two? That would take a full article every two days for the duration.↩
2. Though I reserve the right to re-write it in Clojure using garden and some monstrosity I put together to replace the centralized routing tables that Clojure web frameworks seem to have in common.↩
3. Which we're actually still reading as I write this. It's one of the papers interesting enough to warrant a second week being spent on it.↩