Home Archive Trilium Notes About

I need a new system

Posted on 2018-03-13

It’s now March 2018. I have been working at Google for a year. A few days after I started, my personal laptop broke, along with all my meticulously tuned personal infrastructure. I have procrastinated on getting a new one, mostly because I hoped I would fix it myself, or because I did not want to make a big purchase like that.

There’s a lot of problems that keep bugging me and that I need to solve, and after that’s done and my head stops screaming bloody anxiety at me, there’s aspirations. I feel that I need a strategy. It’s useful to take a step back to think about what things are important, what’s working for me and what’s not working, and such. Over the last year I have had a few times where I picked up a piece of paper and started scribbling some strategical stuff on it, but it was always in spare moments, not very connected to other such times. I would always start from scratch.

That’s bad. I am obese and need to change that, and changing a thing like that can’t be done by getting a spark of inspiration one day, running on an agenty high for 10 hours, and then it’s fixed. I weigh 124 kg, which is very near my lifelong maximum. The Internet says that you lose 1 pound of fat by burning 3500 kcal (of course, not a figure to take seriously, goodness, I’m just doing a Fermi calculation here, leave me alone), so to get to my optimal weight of let’s-say-78-kg, I’d need to burn about 353 500 kcal. The last 5 days I’ve been trying to attack the snacking habits through which I ballooned to this weight by eating just a Soylent packet a day. A Soylent packet is 2082 kcal and my basal metabolic rate is 2300 kcal. At this rate, I would get down to 78 kg in… 4.4 years :/ (And that’s not accounting for the fact that BMR decreases with weight. But on the other hand, I am not completely sessile.)
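For concreteness, here is the estimate above as a quick Python sketch. The constants are the rough figures from the text, so the result deserves the same Fermi-level skepticism:

```python
# Back-of-the-envelope weight-loss estimate, using the rough numbers above.
KCAL_PER_KG_FAT = 3500 / 0.4536  # ~3500 kcal per pound of fat, converted to kg

current_kg = 124
target_kg = 78
deficit_per_day = 2300 - 2082  # BMR minus one Soylent packet, in kcal/day

total_kcal = (current_kg - target_kg) * KCAL_PER_KG_FAT
days = total_kcal / deficit_per_day
# Comes out to roughly 4 and a half years at this deficit.
print(f"{total_kcal:.0f} kcal to burn, ~{days / 365:.1f} years at this rate")
```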

A goal like this needs some system for coming back to the drawing board every now and then, learning from which things work and which don’t, safety nets for not losing hope and determination if I inevitably fall short at some point. And of course there’s other things in which I could use more wisdom/strategy.

For example, at work. A thing which often happens is that I spend most of the day kind of switching between not-very-useful tasks, like some refactorings or minor features, and feel very frustrated when I get blocked. And I don’t often take a moment to step back to prioritize, and I recognize that if I want to optimize for promotions and recognition (which I do want, among other things), I have gotten myself into a trap.

Or, happiness - optimizing for being happy, being with people I love.

And when things I want conflict, without the opportunity to “detach” from the conflicted parts and allow for some higher-level strategy or prioritization, I just burn cycles feeling bad because of something I also do. Example: I want to move to the Bay, because I think I’ll like the rationalist and furry communities there, but I also feel scared of looking for a new team within Google or scoping for options outside. I want to work in AI safety (part of it is probably about feeling it’s a super high-prestige thing to do), but I also want to be rich and have an upwards career trajectory. And such stuff.

And often, when I feel bad about something, I find myself walking through some conflict I have walked through many times before. Maybe I have walked through that conflict before even with a piece of paper and doing some kind of exercise to reduce internal conflict (like compassion with parts or internal double crux) but somehow that piece of paper is never anywhere to be found.

So, I want a new system for strategical stuff and I want to turn it into a kind of keystone habit which I have. Let this post be a commitment that I want to flesh this out. I’m setting up a weekly 2-hour goal in Google Calendar called “Systems Maintenance”.

What I want from the system

I’ll try to put into words what I want from this system.

I want the system to align my short-term wants (e.g., “I want a cupcake”) with my long-term goals (“I want to lose weight”).

I want the system to track the really important things. I don’t want the system to track things just for the sake of tracking things, or just for the sake of “getting good-boy points for doing an agenty-looking thing”.

If, on reflection, it turns out that a thing that I am doing because of the system is a thing I don’t really want to be doing, then sometimes that will be a fault of the system for picking that thing - not of me for not doing it.

The system is not a system for losing weight or for tracking work.

Some specific things that are bugging me at this time

Here’s a few candidates for things the system might, but also might not, lead to me making some progress on. Here, I am deliberately using non-committal language. I am not going to say that I absolutely have to do something. I have found that commitment-mechanism-bombs are sometimes self-blackmail and end up causing me to do violence to myself.

But there’s a few ideas for things which often bug me. They do often bug me now, but that does not necessarily mean that dealing with them directly on their own terms is a good idea. Maybe some of them point at problems I actually on reflection want to address. Maybe some of them are distractions, to which the correct solution might be “just stop worrying about it” or “get some fancy antidepressant / do a mind hack to realize those things are not important”.

There’s also a few aspirational things in mind, which might be candidate goals, but might also be distractions which should be abandoned or shelved. Actually, now that I think about it, they’re mostly “maybe I want to work in AI safety research” and “maybe I want to do some serious EA-planning”. Right now, I’m mostly feeling like setting those aside and focusing on getting myself in order and feeling good without trying to hang everything on specific goals like that.

Sidenote: I don’t think I want to be religious in a certain sense

Relatedly, I’ve gotten somehow less certain about EA stuff - in particular, the role that I want it to play in my life. I have been a religious effective altruist (for a wide sociological take on religion). Independently of whether I want to continue or discontinue some EA-type behaviors (like identifying as an EA, going to EA/rationality meetups, etc.), it’s not healthy to be too identified with a particular belief system.

In my understanding, religions are community plus belief system plus value system. Hang out in the community, and you’re prone to soak up all the rest. And you may see your own (possibly implicit) value system and belief system come into conflict with that of the community. And if you don’t want to leave the community, maybe because you by now know few people outside of the in-group and because you (like me) are deeply pained by loneliness, the part of you which wants to do the right in-group signalling is going to fight the parts of you which want something else.

Say that you identify as an EA and a rationalist and to get social points, the right thing to say is “I want to work on AI safety in the Bay Area”. That’s called a load-bearing belief by analogy with a load-bearing wall: if it comes down, you can lose a lot.

And earlier this year, largely due to talking with a person highly critical of rationality/EA, I have become worried by the fact that I apparently have load-bearing beliefs about EA and rationality. Note that a belief being load-bearing does not imply that it’s false. (Though after reading The Elephant in the Brain, I wouldn’t be surprised if there were some argument for why group-cohesion-beliefs would tend to be outlandish, honest-costly-signalling-something.) So I have become concerned that I might be acting out some beliefs because they’re load-bearing for my need for community and acceptance. I think what’s warranted is a gentle de-identification from the community, by mixing with more people who are not in it and diversifying, and a kind of retracing of my steps in how I came to do EA things.

If I remember correctly, they came mostly from moments when I expanded my empathy over the suffering of all things, and wanted to make things okay, and I expect I will still prefer to try to make the world better on reexamination. But a sort of scary thing is that I think it would feel bad if I came to discover I don’t really care about making the world better.

On the other hand, the general form of this reasoning is: “Huh. Maybe I don’t actually want X. And the thought ‘Maybe I don’t actually want X’ makes me feel bad. That’s a reason to re-examine whether I want X.” Substituting for “X” anything I care about will have the effect of making me doubt whether I actually do, and I’ve had this particular security hole exploited by the said person-critical-of-rationality/EA.

Something for me to maybe think over when I feel like it. But after this writing-it-out, I don’t feel the need for doing anything in particular about it.

Back to the system.

Broad failure modes I want the system to avoid

I know about at least two failure modes I want to avoid.

First, I want to avoid the failure mode where some bump makes the system fall apart.

That is, I want the system to fail gracefully and recover. If part of me wants the system to fail for some reason, then I should bring things into harmony not by forcing the part to behave, but by accommodating the system to meet the needs of all relevant parts.

Second, I want to avoid the failure mode of “doing all the rationality techniques just so I can get the points for doing self-improvement”.

After attending my CFAR workshop in May 2017, I fell into the second one. I’ve had a document with TAPs that I practiced every day. When I started using Complice, I picked ~5 goals and didn’t revise them, and felt bad when I stopped working on them. I want to put the system in place so I can be awesome. If I am doing self-improvement-type things just because I would feel bad if I would skip them, I have fallen into a trap.

Specific tools that could figure in the system and their failure modes

Every TODO system that grows with time ends in bankruptcy

Something that I guess maybe?? Miranda Dixon-Luinenburg might have remarked on in a document that I have no idea how to Google now (a document discussing productivity tools by some members of the CFAR alumni community) is that everything which looks like a TODO list or inbox is doomed to fail. My Google Inbox, my Google Keep and all the other places which I have used to try to keep track of tactical concerns have over time become full of items which I don’t want to address immediately, but also don’t want to shelve indefinitely. The system’s working memory has to stay constant-size over time. If it doesn’t, obsolete tactical concerns end up dominating and the system becomes a bother to keep running. The inevitable consequence is that some day, I would declare bankruptcy and start over from scratch.
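As an illustration of what “constant-size working memory” could mean mechanically (my own sketch, not anything I’ve actually built): a bounded inbox that refuses to grow, and instead surfaces the oldest item for an explicit act/shelve/drop decision.

```python
# Sketch of a fixed-capacity inbox: growth past the cap forces triage
# instead of silently accumulating stale items.
class BoundedInbox:
    def __init__(self, capacity=10):
        self.capacity = capacity
        self.items = []

    def add(self, item):
        """Add an item; return the item that must be triaged out, if any."""
        self.items.append(item)
        if len(self.items) > self.capacity:
            # Oldest item surfaces for an explicit decision:
            # act on it, shelve it, or drop it - but decide.
            return self.items.pop(0)
        return None
```

For example, with `capacity=2`, adding a third item hands the oldest one back to you rather than letting the list grow.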

The thoughts which come to mind upon seeing this are:

Writing things down is useful

Yep. It lets reasoning be explicit. When I write down my thoughts, I’m more likely to notice thinking going askew.

Publishing things for other people to look at feels nice and works as a commitment mechanism slash reward signal. On the other hand, things I publish go through my social filter. Probably some balance to strike here.

Too much automation is bad, but I also want the system to be my own

As I mentioned, I used to have a whole scaffolding of scripts on my old laptop which would do things like plot my net worth over time, try to mount and unmount an encrypted partition (mostly because of NSFW stuff in it), synchronize Anki decks with a human-readable Git repo of information, and such. And over time, this scaffolding tended to accumulate bugs, like when a service I was relying on changed its API.

I like to program, but time spent programming the system is time not spent being object-level awesome.

I have tried to reduce the custom scaffolding I used (partially out of necessity, because I just didn’t have access without a personal laptop), by supplementing with Keep and such, but I ended up not, for example, keeping up with writing my diary. The more I use general tools, the less they will fit my personal idea of ergonomics.

Also: I’m a programmer and I love programming a nice system. That’s both a blessing and a trap.

Taking up too much time is bad

The system I had in place some time after CFAR accumulated more and more small things, which I would check off every day. The final version was something like: every morning, do a boot-up with a few free-text questions, pick up ~5 Complice daily tasks and maybe put in a few extra Complice goals. Every evening, do a self-improvement round which lasted probably more than 1 hour. It included practicing TAPs, a kind of Murphy-jitsuing around possible problems in my daily routine (a free-form text exercise) and checking in with some of my parts (also free-form text).

I had kind of hard-committed to doing all of this daily, and when it crossed some threshold, I just stopped doing all of it at once.

A way I could have avoided this would be keeping the “mandatory” part constant-time. And also having a looser schedule, in which I could easily spend, say, a whole evening reflecting on and improving the routine (as opposed to having very little time and so not feeling like I have the time for meta-stuff, so just chugging along the overly long routine). And by coming to peace with the fact that there are 24 hours in a day, and that if my self-improvement routine takes 2 hours, those are 2 hours less of sleep or fun or whatever.

That feels important. Time is a scarce resource.

It has to be okay to stop doing some things

If I have decided at some point that I want to, say, write a diary entry every day and I end up not doing that, that does not mean that I have failed. If I today decide that something is important, but tomorrow I no longer think it is, that’s okay. I have the right to revise what is important - in fact, revisions are good and welcome.

At one point, I felt that I could change a bunch of bad habits by doing TAPs. Later, the TAP practice routine became very long, but I felt bad about the prospect of just saying “this is no longer the thing to be doing and so I will stop doing that”. A more productive and less conflict-producing way to look at it would be: “I am not feeling like doing this TAP routine, and that’s okay. I deserve rest.”

Tactics are contingent on being useful.


Wall of text ends here

I have an idea of what a software implementation of part of the system could look like for me - something like a versioned directed graph where nodes would be free text, ideas, tactical priorities, or resources like links to websites, or nodes grouping events or people. It would probably be very fun to implement. However, it would also be time-consuming.
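A rough sketch of that versioned-graph idea, to make it concrete (the names and structure here are my guesses, not a spec): nodes carry a kind and free text, edges link node ids, and every mutation snapshots the previous state so old versions could be recovered.

```python
import copy

class VersionedGraph:
    """Toy versioned directed graph: every mutation records a snapshot."""

    def __init__(self):
        self.nodes = {}     # node id -> {"kind": ..., "text": ...}
        self.edges = set()  # (from_id, to_id) pairs
        self.history = []   # one (nodes, edges) snapshot per mutation
        self._next_id = 0

    def _snapshot(self):
        self.history.append((copy.deepcopy(self.nodes), set(self.edges)))

    def add_node(self, kind, text):
        self._snapshot()
        node_id = self._next_id
        self._next_id += 1
        self.nodes[node_id] = {"kind": kind, "text": text}
        return node_id

    def link(self, src, dst):
        self._snapshot()
        self.edges.add((src, dst))

    @property
    def version(self):
        return len(self.history)

g = VersionedGraph()
goal = g.add_node("priority", "lose weight")
idea = g.add_node("idea", "one Soylent packet a day")
g.link(idea, goal)  # the idea serves the priority
```

Real versioning would want structural sharing rather than full deep copies, but the shape of the thing - typed nodes, links, and recoverable history - is the point.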

I’ll let my thoughts sit in my mind, and my next action is to sit down in a few days and think some more.