Home Archive Wiki About


2019-09-02 - Model of continuous asset growth


FIRE stands for financial independence/early retirement. The point is to save and invest money, pay yourself a salary from the interest, and eventually become independent of other sources of income.

There is a relationship between:

  • How much you have invested
  • The interest your investment makes. (The widely cited “Trinity study” suggests 4% as a “safe withdrawal rate”.)
  • The salary you pay yourself
  • How long your savings last for you

I have a program named worthy (on GitHub) that tracks my net worth and models when I will be financially independent under various assumptions. Here I describe the slightly fancy math behind a more accurate model of this relationship, which I finished implementing today.

I am probably rediscovering Financial Mathematics 101 ¯\_(ツ)_/¯

The questions

  • The “how much” question: I want to pay myself 1000 USD per month. My stocks grow 4% per year. How much money do I need?
  • The “how long until” question: I have 100 000 USD and save 3000 USD per month. How long until I have 200 000 USD?

First shot

Previously the tool’s model was very basic, and answered the two questions as follows:

  • I want to pay myself 1000 USD per month. My stocks grow 4% per year. How much money do I need? Well, the 4% you get per year should cover the yearly costs, so 1000/(1.04^(1/12) − 1) ≈ 306 000 USD.
  • I have 100 000 USD and save 3000 USD per month. How long until I have the 306 000 USD that you said I need? That I modelled linearly, as (306 000 − 100 000) / 3000 ≈ 69 months.
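For concreteness, here is how that naive model might look in code. This is my own sketch using the article's example numbers, not the worthy tool's actual implementation:

```python
# Naive model: interest must cover costs forever; saving is linear.
i = 0.04                                # yearly interest rate
monthly_rate = (1 + i) ** (1 / 12) - 1  # equivalent monthly rate

need = 1000 / monthly_rate              # stash whose monthly interest is 1000 USD
months = (need - 100_000) / 3000        # linear saving, ignoring interest

print(need)    # close to the ~306 000 USD figure above
print(months)  # ≈ 68.5, i.e. the ≈69 months above
```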


Assuming infinite retirement time

If you pay yourself a monthly salary of 1000 USD and your monthly interest is 1000 USD, your money will last forever, far beyond your (likely) lifespan. If you are fine with ending your retirement at 0 USD, you can pay yourself a bit more than just the 1000 USD of interest.

Ignoring growth while saving

“Take how much money I need, subtract how much I have, divide by monthly savings” ignores that the money I have saved up so far also earns interest before I’m done saving. It’s too pessimistic.

Stand aside, I know differential equations!

Let’s model the depletion of your money as a function f, which maps the number of years since retirement to the amount of money left. You start with some initial amount f(0). If we pretend you withdraw the salary for a year and add interest once yearly, we’d get:

f(x + 1) = f(x) + i ⋅ f(x) − c

Where i is the yearly interest rate and c is the yearly costs. In the example above, i = 0.04 and c = 12000 USD.
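A quick sanity check of the yearly recurrence, using the example numbers (my sketch; this yearly-compounding version won't exactly match the continuous model derived below, but it should be in the same ballpark):

```python
# Iterate f(x + 1) = f(x) + i*f(x) - c with yearly compounding.
i = 0.04       # yearly interest rate
c = 12000      # yearly costs
f = 100000     # initial savings

for year in range(10):
    f = f + i * f - c

print(f)  # ≈ 3951 USD left after 10 years
```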


Rearranging:

f(x + 1) − f(x) = i ⋅ f(x) − c

If we instead pretend that everything is continuous and squint, this looks like a differential equation:

f′(x) = i′ ⋅ f(x) − c′

(Where i′ and c′ play sorta the same roles as i and c - except they’re not equal to them. For now, let’s pretend they’re some unknown variables. Their relationship to i and c will eventually pop out.)

Wikipedia’s Ordinary differential equations article says that if dy/dx = F(y), then the solution is $x=\int ^{y}{\frac {d\lambda }{F(\lambda )}}+C$. In our case, we have F : λ ↦ i′λ − c′, so:

$$x = \int^{f(x)}{\frac{1}{i'\lambda-c'} d\lambda}+C =_\text{Wolfram Alpha} \frac{\log(i'f(x)-c')}{i'} + C$$

Solving for f(x):

$$ \log(i'f(x)-c') = i'(x-C) \\ i'f(x)-c' = \exp(i'(x-C)) \\ f(x) = \frac{\exp(i'(x-C)) + c'}{i'} $$

So, magic happened and I pulled the general form of f(x) out of a hat. We know the values of i and c from when we assumed interest and costs happen only once yearly.

What about i′? Let’s guess it. If we had no yearly costs (so c = c′ = 0), we would want f to grow at a constant rate, gaining i in interest per year:

f(x + 1)/f(x) = 1 + i

Substituting in the above equation of f, we get:
exp (i′(x + 1 − C))/exp (i′(x − C)) = 1 + i

When we simplify the fraction, we get exp(i′) = 1 + i and therefore i′ = log(1 + i). So, we have now successfully guessed the right value for i′ :)
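A one-line numeric check of that guess:

```python
from math import exp, log

i = 0.04
i_prime = log(1 + i)

# With no costs, one year of continuous growth at rate i' should
# equal one year of discrete growth at rate i:
assert abs(exp(i_prime) - (1 + i)) < 1e-12
```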

Now what’s the right value of c′?

If we set the interest i to 0 (which makes i′ = log 1 = 0), f(x) should simplify to a nice linear function losing c per unit of x.

$$x=\int^{f(x)} -\frac{1}{c'} d\lambda + C = -f(x)/c' + C$$

$$-f(x)/c' = x-C\\ -f(x)=c'(x-C)\\ f(x)=-c'(x-C) $$

The resulting f loses c′ per unit of x, so the right value for c′ is exactly c.

So we have:
$$ f(x) = \frac{\exp(\log(1+i)(x-C)) + c}{\log(1+i)} = \frac{(1+i)^{x-C} + c}{\log(1+i)} $$

C mediates a multiplicative factor in front of (1 + i)^x. C is just some constant that makes the function satisfy the f(0) boundary condition. Instead of wiggling C, we can instead wiggle the actual multiplicative factor C2 = (1+i)^{-C}/log(1+i), and relabel C2 as C. (It’s an abuse of notation, but an OK one. *handwave*)

$$ f(x) = C \cdot (1+i)^{x} + \frac{c}{\log(1+i)} $$
The one remaining unknown variable is C, which we will get from f(0), the initial savings.

$$f(0) = C + \frac{c}{\log(1+i)}$$


Solving for C:

$$C = f_0 - \frac{c}{i'}$$

Okay this is a little bit ugly. Let’s play.

c = 12000  # yearly costs
f_0 = 100000  # initial savings
i = 0.04  # interest
from math import log, exp
i_prime = log(1+i)

C = f_0 - (c / i_prime)

def f(x):
  return C * (1+i) ** x + (c / i_prime)

for r in range(11):
  print("after", r, "years, got:", f(r))
after 0 years, got: 100000.0
after 1 years, got: 91761.56878830638
after 2 years, got: 83193.60032814502
after 3 years, got: 74282.91312957724
after 4 years, got: 65015.79844306671
after 5 years, got: 55377.9991690958
after 6 years, got: 45354.68792416598
after 7 years, got: 34930.44422943902
after 8 years, got: 24089.23078692297
after 9 years, got: 12814.368806706276
after 10 years, got: 1088.512347280921

Cool, it seems to be giving reasonable results. But our two questions were: how much money do I need to pay myself a given salary and how long until I save up the money I need.

Let’s first solve a different question: if I have 100 000 USD and spend 1000 USD per month, how long will it last me?

For that, we just need to invert the familiar function:

$$ f(x) = C \cdot (1+i)^{x} + \frac{c}{\log(1+i)} $$

We want to know the number of years x at which we will run out of money (so f(x) = 0):
$$ 0 = C \cdot (1+i)^x + \frac{c}{\log(1+i)} \\ (1+i)^x = \frac{-c}{C \log(1+i)} \\ x = \frac{\log{\frac{-c}{C \cdot i'}}}{i'} $$

And let’s test it:

x = (log(-c / (C * i_prime))) / i_prime
print(x)  # ≈ 10.09 years

Cool, this matches what the Python f(x) predicted above: after 10 years, the balance was down to about 1088 USD.

Answering the how long question

To answer the question “if I now have 100 000 USD collecting 4% interest per year and put in 1000 USD per month, how long until I have 306 000 USD”, we can use the same procedure - just plug in a target f(x) = 306 000 instead of zero and set a negative c to represent savings instead of costs. Details left as homework for the curious reader.
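For the record, here is one way that homework might work out (my own sketch; the negative c turning costs into savings is the trick from the paragraph above):

```python
from math import log

i = 0.04
i_prime = log(1 + i)
c = -36000        # negative yearly costs = saving 3000 USD per month
f0 = 100000       # current savings
target = 306000   # the amount the naive model asked for

# f(x) = C*(1+i)**x + c/i', with C fixed by f(0) = f0.
C = f0 - c / i_prime
# Solve f(x) = target for x:
x = log((target - c / i_prime) / C) / i_prime

print(x)  # ≈ 4.7 years, vs. ~5.7 years under the old linear model
```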

If you’re curious about the Go code, see this commit.

Answering the how much question

As a reminder, the “how much” question asks: if I want to pay myself a salary of 1000 USD per month, how much money do I need? Previously, I solved that by saying “the interest should cover all the costs”, which resulted in an investment that would last forever (a perpetuity). But now we have a function that models an investment while withdrawing (or saving) money, and we can use it to model a finite time horizon and get a better estimate.

Say that we are 40 years old and want our money to run out on our 100th birthday. So, after x = 60 years of paying ourselves, say, 1000 USD per month (so yearly costs c = 12000), we want to have f(x) = 0. How much initial money f(0) do we need for that stunt of precise timing?

Okay, from above, we know:

$$ f(x) = C (1+i)^{x} + \frac{c}{i'} = \left(f(0) - \frac{c}{i'}\right) \cdot (1+i)^{x} + \frac{c}{i'} $$


Expanding and rearranging:

$$ f(x) = f(0)(1+i)^x - \frac{c}{i'}(1+i)^x + \frac{c}{i'} \\ -f(0)(1+i)^x = \frac{c}{i'} -f(x) - \frac{c}{i'}(1+i)^x $$

Let’s remember that we want f(x) to be 0.

$$ -f(0)(1+i)^x = \frac{c}{i'} - \frac{c}{i'}(1+i)^x = \frac{c}{i'}(1-(1+i)^x) \\ f(0) = \frac{c}{i'}(1-(1+i)^{-x}) $$

Let’s try it out:

c = 12000  # yearly costs
x = 60  # years for the investment to survive
i = 0.04  # interest
i_prime = log(1+i)
f0 = (c/i_prime) * (1-(1+i)**(-x))
print(f0)  # ≈ 276 900 USD


Recalling the numbers from the first section: the first algorithm, which assumed an infinite horizon, prescribed 306 000 USD for that situation (“1000 USD per month at a 4% interest rate”). This more precise estimate cut about 30 000 USD from that number :)

2018-12-09 - It might be interesting to have a realistic planning system

There’s many different kinds of things to do

There’s a bunch of things I want or need to do, and they have different shapes.

  • Some can be finite sequences of actions. Things like this are, for example, “move to a new apartment”. You have actions like “get moving boxes”, “look into moving options”, “pack these things into this box”. You have dependencies like “you can only unpack once you have already moved the boxes to the new place”, or “you need to order the boxes online before packing them up”. When you finish all the tasks, you are done, and you can forget about the problem forever.
  • Some are basic needs, like the need to sleep or the need to get food. If I don’t get sleep, I am slow, and will eventually fall asleep no matter how much I try to stay awake. Unlike moving to a new apartment, I will always need to sleep. I can’t, say, sleep for 24 hours and then stay awake for a whole week.
  • Some things take a long time and are open-ended, like learning a language. There will be a time when I will be competent enough, but it will take a lot of practice, and cannot be practically modelled as a finite sequence of steps.
  • And more advanced needs, like “I want to do something fun” or “I want to hang out with friends”.

Productivity methodologies I know about are way too narrow

There’s a bunch of methodologies for productivity, but I feel they often only model a small part of planning. For example, (my interpretation of) GTD puts everything in a framing of “you have Projects and projects have Actions”. But I don’t think that’s a good framing for, say, learning a language.

The model of “cost of time” is also wrong. It is a useful heuristic, but any single value you assign to your time assumes perfect elasticity. For example, you cannot just walk up to your employer and say “I want to work 120 hours per week”. No (reasonable) employer will say “yes” to that. Also, the many things a person wants cannot be converted into a one-dimensional value. (Footnote: Yeah, I know about the von Neumann-Morgenstern theorem. But people are not rational agents, and von Neumann-Morgenstern does not say anything about how practical the resulting utility function will be to evaluate.)

What I tend to do is some sort of intuitive “higher-level planning” which is sometimes a bit reflective, but not very often. When I’m in “work mode” (i.e., in my actual job), I have a few ways I try to figure out what to do on any particular day.

But the process by which I decide, say, “enough work today, let’s go get some sleep”, or maybe “I’m a bit tired but let’s go walk on the treadmill for a while”, is very intuitive.

I don’t know about any formal methodology which would basically take an input like:

  • I need to work so I get money and feel good;
  • I also need to sleep roughly 8 hours daily;
  • I need to have some fun;
  • I also want to learn German; and
  • I need to do a bunch of multi-step things before their deadline,

and which would output a plan of what I should be doing at any given time.

And I would like the plan to be reasonable. The methodology should not, for example, assume I can do without sleep, or without a regular sleep schedule.

And that’s what I mean by a “realistic planning system”.

Maybe good old fashioned AI-style planning could be adapted?

In uni, I studied a bunch of planning and scheduling, which is mostly used for cases like “this is how you construct a submarine; you can’t screw in screw B217 until you screw in screws B210-B216; make a schedule which takes as little time as possible”.

These algorithms can be extended to work in more general environments, like:

  • Some people have working hours and when a worker is not working, you can’t plan jobs for them.
  • There’s limited resources, like “you need a drill to drill a hole and there are only 10 drills; you can’t have more than 10 workers drilling at the same time”.

I wonder if you could make a realistic and formal planning system from some extension of that setup. Say something like:

  • There is a “sleep meter”. If sleep meter goes to 0, agent must sleep for 8 hours. “Sleep meter” slowly goes down during the day.
  • There is some sort of penalty on effectiveness when underslept or when sleep is irregular.
  • Different types of actions deplete other types of meters. Light socialization depletes “introvert points” (for me). Snuggles increase “oxytocin meter” (which is a dimension of “happiness”).
  • And there could be some modelling of “if you get too tired you have to sleep”. Perhaps the algorithm would compete for control with “drive to sleep”, and the lower the sleep counter is, the higher the likelihood the person just falls asleep wherever they are.
  • And I think it would be nice if the algorithm treated all “needs” symmetrically. According to internal family systems, people have many relatively smart parts good at different things and wanting different things. If the system is cooperative and does not place any particular part at a “command” level or at a “subordinate” level, it would hopefully make it easy for parts to agree to collective decisions.

Meh, too hard. I got other stuff to do.

I guess getting any model halfway realistic would be too complicated, and I probably will keep using my bunch of ad-hoc heuristics to make decisions. I have way too many things that I want to do to spend the first couple years developing a planning algorithm rooted in psychological theory.

A thing I used to do that might be useful to start doing again is having some time in which I try to optimize what I’m doing. It could bootstrap into more conscious/mindful action.

2018-12-09 - Groundhog Life: a cute, but eventually frustrating game you can't win

Take-aways up front:

  • Yes, the game Groundhog Life can be won.
  • However, if you want to win it, you have to either design a system for playing JavaScript games that does not depend on energy from the Sun, or apply some serious astroengineering, because it sure looks like, by default, the Sun is going to be pretty cold by the time you win.
  • The ending is not worth the effort.

So. The last couple weeks a lot of my free time went into playing the game Groundhog Life (I’m not linking it to save other addictive minds). It’s a game where each playthrough gives you a small bonus on future playthroughs. I have gotten to the point where I am obviously in the end game, but have been making no further progress with extra playthroughs.

I have not found anyone online who has finished the latest version of the game. Only one person somewhere claimed it was impossible to finish.

So. The frustrating thing is: the game does not tell you that it’s nigh impossible to win. It just keeps telling you to try again and again.

Because I noticed that I’m really unhappy with my time going into something like that, I spoiled it for myself. Obviously, spoilers ahead.

How to cheat

The game has a save/load feature. A save looks like this string:


(That is an actual save after a whole lot of playing. The last one I made before I decided to cheat.)

I have dug a little bit into the game’s source code. Turns out the string is an “encoded URI component”-formatted, LZ-string-compressed JSON blob. The compression is done by a package that is available as a NodeJS module. So:

#!/usr/bin/env node
'use strict';
const lzm = require('lz-string');
const fs = require('fs');
// Decode the save string into readable JSON.
const save = fs.readFileSync('data', 'utf8');
console.log(lzm.decompressFromEncodedURIComponent(save));

Then, when you format the output JSON with jshon, it gets pretty straightforward. I went and changed area_loopTrapexperience.loopTrapMultiplier to 10000000000000000000000 (I’m not going to even bother counting the zeroes), and did the same to area_constructPowerPlantexperience.loopTrapMultiplier. You could use jq to do that. I just used Vim.

Just for comparison. My loop trap multiplier in the “study mirrored ship” skill was just 493.1. If it took me, say, 100 playthroughs to get to that point, getting to a value of 1000000000000000000000 would take 2e21 playthroughs, give or take a couple orders of magnitude. If a playthrough lasts me, say, 1 minute (I’m being really optimistic here), and one year has (being optimistic) 600000 minutes, if I’m counting correctly, it would take 3e15 years of continuous play to get to that value. According to Wikipedia, by that point Sun’s temperature will be below 5 degrees above absolute zero, and all planets will be detached from their orbits due to stars doing gravitational stuff to each others’ planetary systems. In addition to all the silly events before the 1-quintillion point.

Okay. Then you do the reverse thing and make a save-game string from the changed JSON:

#!/usr/bin/env node
'use strict';
const lzm = require('lz-string');
const fs = require('fs');
// Re-encode the edited JSON into a save string.
const json = fs.readFileSync('x.json', 'utf8');
console.log(lzm.compressToEncodedURIComponent(json));

And… when you do that and maximize the laser gun, it will have an effectiveness of only about 7-8. I got to about 2.6 without cheating. So, after playing the game to the point where the Sun’s light is a distant memory of what Humanity has become since its embryonic existence on Old Earth, you are now about 3-4x stronger against the aliens.

And it’s still not enough to beat the damn game.

It’s not enough zeroes. The aliens just keep coming stronger.

So I went and just read the source code.

Turns out that after you beat an ungodly number of waves, you just get a message in the same field where the aliens say stuff like “EXHIBIT IS RESISTING”, and the message says something like “I guess you have finished the game, look at this site over here for updates”.

I’m putting this online so that when someone Googles for “Groundhog Life ending” or “how to finish Groundhog Life”, they will find this page, and know that if they want to do that, they’d better start sending tithes to the Strategies for Engineered Negligible Senescence foundation.

P.S.: A review and advice to game designers

Groundhog Life is actually a cute game. I like how it takes a bit of strategizing to maximize some things.

But by god, please, if you make a game, either make it obvious it’s unbeatable, or make it beatable with some indicator of progress. And if you do decide to make your game beatable, please make it beatable for humans in their current meat shells.

2018-09-12 - Review heuristic: Call out bad code

Code health tradeoffs in larger codebases

When people program, they often need to make tradeoffs between what will fix the current problem quickly and what will make for a healthy codebase. An example of such a tradeoff:

I found a function which does almost what I want, but not quite. Do I wait for its maintainer to let me refactor it so I can reuse it, or do I just copy-paste? Or, if it’s just a 5-line chunk, do I extract it into a function even if that requires a bunch of boilerplate (maybe it would necessitate adding a new module)?

It’s often a question of “do I pay the bigger upfront cost now, or do I make future-me front it”. These days I often work in a really big codebase, where there ends up being a bunch of those.

My review tip for copy-pasted code

When you review someone’s code that adds this kind of technical debt, you might not want to force the author to dig through 4 files just to extract a function. It might frustrate them. And often, it’s not even the best thing to do. If you really only duplicate the same 5-line chunk in two places, and it’s not often changed and not tricky, it might genuinely be less costly to copy it than to aggressively share such code.

So the request I often give in code reviews is: “if you duplicate code, add a TODO that the code is duplicated” (preferably to both copies).

This way, the review can conclude quickly and everyone gets on with their day. But you also leave a “hey, this is a known bit of technical debt” affordance in the code.

When someone ends up copy-pasting the same code a third time, they will be much more likely to look and say “hey, time to extract a function: there is this TODO which says it’s not the first time we’re doing this”.

General heuristic: “feel the pain”

There is a more general heuristic, which I use, that I call “feel the pain”. The heuristic is that bad things should be obviously painfully bad. Other instances of “feel the pain” are:

  • Let’s say someone breaks the contract of your Frobnicate RPC and sometimes doesn’t pass a required parameter foo, and you need to fix your service to accept that. Fine. But don’t do it like this:

    void HandleFrobnicateRPC(string foo, const string& bar) {
      if (foo.empty()) {
        foo = ComputeDefaultFoo(bar);
      }
      // ...
    }

    Do it like this:

    void HandleFrobnicateRPC(string foo, const string& bar) {
      if (foo.empty()) {
        // NOTE: As of 2018-09-12, the Bazinator service calls Frobnicate with
        // an empty 'foo'. The Bazinator service cannot be easily fixed to pass
        // the 'foo' itself, because it does not currently have access to the
        // backend which can find the right foo for the bar. If the bar has
        // multiple associated foos, this will only return the last written foo.
        // Other clients SHOULD NOT rely on this.
        foo = ComputeDefaultFoo(bar);
      }
      // ...
    }
    Why? Because that way, people are less likely to build more hacks on top of this hack: there is a long paragraph full of scary words standing in the way.
  • Let’s say that you inherited 1 million LOC from someone in a hurry, and at some point, they made a misguided architectural decision that makes your code clunky and hard to understand. Create a central bug for this in your bug tracking system, and whenever you write new code that would be made better by fixing the bug, add a note like:

    // TODO(agentydragon): Once we resolve b/12345, we'll be able to replace the
    // Frobnicator with a mock for tests, so our tests won't need such
    // complicated fixtures.
    Why? Because that way once you get around to fixing b/12345, you will be able to grep all places where this hurt some code, and fix them one by one. Also, let’s say some kind soul needs to change this code 2 years after you’re done with it, and finds that TODO. (That person might be you.) When they see this TODO, they might go “hmmm, that bug number looks quite low. aha, it’s been filed 2 years back, and it’s fixed now. yay! that means that I can fix this now and the diff for my new feature will be about 20% less horrible!”
  • Let’s say your code processes two different kinds of things, which both happen to be sort-of-strings (indulge me for a second and assume your code does not use strong types for such things). Think “URLs and street addresses”. For some hacky reason, which you hope to get rid of at some point, you are computing one from the other. What I would do in that case is make that code obviously painful and horrible using devices like variable and function names and type aliases. Not like this:

    string GetResult(const string& input) {
      return UrlEncode(input + "-autogenerated");
    }

    More like this:

    using StreetAddress = string;
    using Url = string;
    // Long and verbose comment about why this is necessary and what it should be
    // replaced with and when.
    Url BuildFallbackUrlFromStreetAddress(const StreetAddress& address) {
      return UrlEncode(address + "-autogenerated");
    }

Even generaler heuristic: “Call out problems”

There is actually an even more general heuristic than this, which is “call out problems”. I also use it in other contexts.

  • Let’s say I am writing a design document, and I notice that I did not actually verify some assumption that I’m making, let’s say, “when a customer frobnicates a Foo without also having a Bar, Frobnicator service will not give them cake”.

    When I notice that and don’t have time to immediately verify it, I’ll openly say, “I believe that when …, then … happens, but I did not verify that.” This helps when your future self goes over your notes later and might otherwise assume that the doc is an authoritative source on the particularities of customers getting cake from the Frobnicator service.

    Same about your colleagues. If you say “I did not verify this”, someone who has nagging doubts might be much more likely to say “hey I’m not sure because I’m a human and I forget but I think I got some cake last month when I tried a different thing”, instead of thinking “huh. so he says Frobnicator service does not give cake in this case. I guess they changed it or I don’t remember it correctly.”.

  • Let’s say that I’m talking with someone and they raise a counterpoint to my preferred opinion that I never considered yet. Instead of immediately rushing to defend myself, I sometimes try to instead take in the new thing, and sit there for a bit with the dissonance (“but but but I want to be right so this thing must be wroooong aaaaaaaah”, or maybe “but but but I don’t want to have to redo this 3k line change that I love aaaaaarrrhh”). And instead of “Hmm, but your proposal would not address the …insert ad-lib…”, say, “Huh. I didn’t think of that yet. I’ll need to think about that.”

“Call out problems” feels sort of close to non-violent communication. I feel much less like openly communicating problems if it feels like the environment will hurt me if I do that. That happens to me with some strong personalities, or with people who (probably mostly unknowingly) trigger my sense of “argh I’m being bullied I want to curl up in a corner”. On the other hand, I usually try to call out problems more often than people around me, because I want to nudge culture towards cooperation. (Google culture is pretty blameless, so I don’t feel I’m acting against my interests.)

2018-09-11 - Trying "Things to learn" and "Documented systems"

It’s been a long silence for me on this homepage. Stuff has been happening, but somehow I didn’t find the time to write much.

I’m trying out a new thing. I have a lot of stuff I want to learn some day, but I don’t have the time to learn all of it (and often I also don’t have the leftover energy to do personal-development type stuff, but that is another problem). I’ve had a bunch of systems to remember the interesting things I want to learn - a file on my disk, then a more complex file on my disk, then my laptop crashed, then Google Keep.

So the new thing I’m trying is that I’m adding a new page here called Things to learn. When I find a new thing I want to deeply understand some day, I write it into the list. And when I (some day) feel like diving into a thing, I will pick one up from the list, spend a bunch of time learning it, and then I’ll write a thing, put it on my website and link it from there.

Why this might be good:

  • Hopefully, I won’t end up forgetting and then relearning the same stuff.
  • Writing something down makes it easier to spot confusion or weak spots in understanding. (Maybe it’s because I spend a lot of time reviewing code. Seeing something written in monospace font makes me go like “huh I’m not sure please write a test”.)
  • I will produce stuff :)

Let’s see how it goes.

Documented systems

Oh by the way, there is a more general thing I’ve been trying which I think might be useful for me.

So CFAR teaches this thing they call “systematization”. (CFAR - Putting Names on Things ™) Examples are:

  • Having your underwear in the top shelf, then the socks, then shorts, etc., and keeping it that way.
  • A checklist you follow when you need to pack up for a flight.
  • Putting all your appointments and other fixed commitments into Google Calendar.

Systems are useful because they standardize things. Preparing a flight checklist once and then following it 10 times is easier than figuring out whether you’re forgetting anything important on 10 different occasions.

I have a few systems. They have come to be mostly “on their own”, and I might not remember the reason they turned out the way they are.

I want to try making my systems explicit, by writing down documentation for what each system looks like, what problems it solves, and so on. I want that because:

  • It makes sure the system actually is there for something. This prevents a failure mode I’ve been in after my CFAR workshop, which I once called “cargo cult rationality”. It’s running around beating at random things with CFAR techniques, trying to be a great and conscientious rationalist. (I guess “cargo cult rationality” is also a thing worth writing about some day…)
  • It stores the system out of my mind. For example, I have a system for which things to have in my backpack and where. It tends to deteriorate, and when the chaos threatens to overwhelm my backpack, I can just refer to the explicit description of what goes where, and maybe update it if needed. I don’t have to solve the problem of “figure out the optimal things to put into my backpack and where to put them” every time it gets too messy.
  • It enables debugging.

The way I think of it right now is to have a “source code for my systems”, sort of. Right now I have a like 5-page Google doc with a mix of where things go in my apartment, how to pack for things, and where things go in my backpack, and the motivation and open problems for each.

2018-03-24 - When I use my touchpad to scroll and then press Ctrl, Chrome starts zooming in/out. How to fix that?

This has been plaguing me for a few days (I just got a new Lenovo X1 Yoga laptop), and I think I now figured it out. The Synaptics touchpad driver has a CoastingSpeed option (see man 4 synaptics).

The issue is that when I start two-finger scrolling on a webpage, the driver interprets that to mean “oh and when I take my fingers off the touchpad, please continue scrolling for a few more seconds in the same direction, while slowing down until you stop scrolling”.

Guess what? When you press Ctrl in Chrome and the touchpad driver keeps sending the “ooh we’re scrolling!” message, Chrome starts changing the font size, which is what “Ctrl + scroll” does.

Quoting from the manual:

Your finger needs to produce this many scrolls per second in order to start coasting. The default is 20 which should prevent you from starting coasting unintentionally. 0 disables coasting. Property: “Synaptics Coasting Speed”

I found a command at the Ubuntu community wiki article about Synaptics touchpads which disables this: synclient CoastingSpeed=0. But I guess that won’t persist between restarts.

I’ll put this in a new file, /etc/X11/xorg.conf.d/99-custom.conf:

Section "InputClass"
  # Disable annoying "zoom after two-finger scroll" in Chrome.
  Identifier "touchpad disable coasting"
  MatchDriver "synaptics"
  Option "CoastingSpeed" "0"
EndSection

And I’ll report back if that doesn’t work. (So if this text is still here, you can assume it did and I was too lazy to update this post :p)

2018-03-21 - Triage

I am tired and feel like I shouldn’t go to sleep yet. The Soylent thing has been working only very very slowly. I assume the first few days were just losing the contents of my bowels and now I just want to keep it up. Maybe in the hope that it will remove the habit of snacking when I feel stressed/bad.

The thing about feeling stressed at work has gotten a little bit better. But somehow I don’t feel that much like I’m winning.

Tomorrow a pretty useful thing would be to wake up reasonably early, then ~8 hours of work. I could really use some emotional support. Something’s wrong, actually a bunch of things feel wrong. I feel lonely. I don’t want to be fat. Work sucks and looks like my attempts to hoist myself into an AI safety, or at least AI-related position might fall flat. I might have reached beyond what I am currently good enough for.

I guess the last part feels pretty close to it. Work sucks. If I could do anything with my life, I would do X. Solve for X.

My feelings about it are, … I feel like I kind of grew up with the implicit assumption that work matters. Like, when you are the best at school or at work, people will like you and you will be happy. I want to feel, …, okay? loved? like, I want to cry about how hard and painful things are. Speaking of which, dental pain sucks.

So what’s so painful. First answer is “the fact that I feel I don’t have anyone to cry to”. (At this time. Growth mindset I guess.) I guess compared with that the other things which come up are much lesser. Actually maybe not. I also feel that I have no idea where I’m running, but I’m running there as fast as I can. And stopping where I am, or even the entire idea of stopping, feels dangerous and scary and I don’t want that.

Huh. It would be pretty funny if the function of snacking was to make this cluster of things not hurt. On the other hand, I might still be in the period where going off SSRI’s can do funny chemical things to your head, so there’s that. I guess there might be some fancy antidepressant I could start on to make me itch less right now, but would I want this to stop itching? Maybe not medicating it and not trying to shut it down might be good signal to do something differently…

With that off my chest, it’s still almost midnight and I feel like there are things I should be doing tomorrow. I have a pretty good idea of work things that need doing.

Things on my mind are:

  • Work on my master’s thesis.
  • Check that my mass transit card in ZRH still works (been here a year, it might run out).
  • Buy batteries for my bank’s card reader 2FA machine.

Feeling of having to come up with more stuff.

  • Errrr…

  • I guess, I’m still fat, what do I do about it? Well, that feels kind of forced…
  • I don’t really want to do anything exercise-typey tomorrow. I even have a semi-good excuse for why not to, my ankle is still swollen.
  • But I feel that makes me bad.

Huh. Does it? It feels like it makes me bad, though S2 would publicly say it doesn’t. I also feel tired so probably not gonna IDC on that… Bookmark I guess.

I’m probably not going to reach a point of “yeah I have a plan for tomorrow and it’s a plan which totally resonates”, because I’m pretty tired.


Let’s also put some things into Complice. Record weight, eat just Soylent. (Hmm. My calendar reminders are also getting a bit unwieldy. Maybe cancel some of them. Or reevaluate why they’re there. Or start behaving by them…?)

Get the goddamn batteries. And while I’m getting the goddamn batteries, also get some (goddamn) laundry detergent. And check out what other things I’m missing. Lowest-effort way to do that would be Coop’s delivery service to the Google office. Yeah, sure, that sounds good.

My room is again slowly becoming a mess. That felt good fixing last time, let’s add that.

Feeling of having to do more.

Okay, when I’m home, I’m going to try to write some Python code that will run in Kubernetes on Google’s cloud to download a dump of Wikipedia for my master’s thesis. Oh, also, I might be able to download it in chunks so it doesn’t have to be loaded into the backing store (probably some kind of BigTable or whatever) in a single thread.

Actually let’s change around my Complice goals, too.

I should also make plans for Easter break. Not sure about what’s the deliverable for that. There’s a bunch of ways I could spend Easter. Baseline, lounging around in Prague chatting with friends. Working intensely on my master’s thesis. (That sounds good in some way.)

Triage done? Argh doesn’t feel entirely like it. I should also make some better plans for one of my partners visiting Zurich.

Itch. But well, I guess this triage will do. The parts which are most important on it are keeping on track with not overeating or snacking, and making some progress on the thesis.

Part of me feels like this is just not enough. Just getting that one Kubernetes script done in one day? Hah, don’t make me laugh. At this rate, you’ll be stumbling around at roughly the same spot in the thesis until time runs out.

Oh, and also, I want to some day go and implement a bunch of ML papers so I can learn TensorFlow and get the confidence to say “I can write AI”. That might actually have higher priority than the master’s thesis, if I were to only care about getting a job in AI.

Sigh. 3, 2, 1, chaaaaaaaarge, and let’s go to sleep. And if we can’t, maybe we can try to probe this “nothing is enough” thing.

2018-03-15 - Against goals

First off, it’s morning so I want to plan out today. An unexpected thing that happened yesterday was misstepping on my run down the stairs when I was going to work, and spraining my ankle. My right ankle, in which I had arthroscopic surgery some years ago. So I had to go to see (another) doctor, and probably will be working from home until the swelling stops. Hopefully it’s just a simple sprain and will be alright in a couple of weeks.

Because I had been home and not at work, I didn’t have Joylent available - I just grabbed the ~1/4 of a packet I had at home for breakfast, and then when I came back from having myself drilled in the mouth and touched on the legs, I was hungry. And I ended up just very quickly cooking whatever was in the kitchen, which was a big can of beans from Denner, 5 eggs and a packet of Chinese noodles. (Since I switched to not cooking and going with Joylent instead, my kitchen is mostly free of any ingredients of my own…) I munched all of that down very quickly, and then felt bad, because my stomach was disturbed and I felt that I had slipped into bad eating habits. I think it was the beans - they were not very good, predictably. When I started cooking the stuff, I did not have much agency, and had the opposite of reflection and mindfulness. I had entered some mode like “gaah eat all the food”. It was lucky for me to not have much actual food at hand, otherwise I might have eaten much more. I actually think I still was under the 2000 kcal I’m getting daily from the Joylent.

I don’t really know what to do next time. I guess something like a TAP like “omg want food → remember to breathe and not eat everything in sight”. Doing mindful eating on the meal would probably also stop it.

Rationality Zurich went fine yesterday. Just 3 people, but we had a nice conversation about an alternative system of self-development brought up by my roommate. My summary of it is:

Goals are pleasant when you meet them, but painful if you don’t. If you tell yourself “today I will run a mile” and then you don’t, you get an “ow”. The “ow” makes you feel bad, and so the consequence of missing the goal is feeling bad about yourself for a while. If you can stay on a streak of successfully doing all the things you commit to doing, you will ride a nice wave of “yay I’m doing all the things and I’m doing well”. But miss once, and now you’re a bad bad person (ow) and if you’re like me, on a deep level, you just want to curl up and cry.

Also, my goals are sometimes about “forcing myself to do things good for the long-term, even if they are aversive in the short-term”. That can feel like the part of me which set the goal is grabbing control of everything and dragging along all parts which might be protesting, which is painful.

An alternative is looking at self-development as a process. In that process, you do not set goals which you have to achieve or it’s bad. You don’t try to create a master plan with 34 steps that will perfectly fix everything if you follow them perfectly. Instead, you do small things in the now which are available to you, things which you want to do (or might enjoy doing but aren’t sure yet) that will bring you a very short distance in the general direction of where you want to be. “Where you want to be” might also best be thought of as kind of an emotional “this is who I really am, this is what is really important to me” - not a S2-type explicit list of SMART goals like “I want to weigh <= 90 kg by 2019-01-01”.

I really like some things about this view. It’s comparatively very non-violent, and it naturally allows for “okay, so a year ago I thought I wanted to be more fit, and I started swimming because I like it, and I met this person and talked with them and changed my goals, so now I want something a bit different”, and you can change what you are “aiming for” (though “aiming for” in a very weak sense, more like “what kind of thing would feel right”) without feeling you’re betraying an earlier commitment.

On the other hand, parts of me seem to want some enforcement device - like the pain you get from missing a goal. Like for some reason I don’t want to stop feeling bad if I e.g. overeat. Maybe a way to make this part feel better about it would be building some self-trust, or what the internal family systems model calls “Self-leadership”. Having some Self which makes sure that parts get along without being violent at each other, and which makes sure that everyone’s needs are met and that parts don’t enter prolonged conflicts. Like, part of me wants to make sure I do get fit, and it’s afraid that if it lets up on punishing me for not getting fit, I will not get fit.

So, as for today. I’m staying home to nurse my ankle and having an MRI and a chat with a friend in the evening. My plan today is to go on one container of Soylent. I should probably also add another piece of luggage to my flight home.

Part of me feels like this is “a bit too little”. Like “I should also argh be finishing this master’s thesis and argh getting fitter”.

Well, actually I’m doing really good so far. I feel good about the new system/the new process I’m putting into place. Writing about how yesterday went and thinking about the things I will do today felt nice, and I am now not in traps in which I used to be (e.g., the “feel bad → overeat” trap).

I just realized there’s one trap which I haven’t mentioned here yet, and maybe it’s one which I don’t yet know how to avoid long-term. It’s the trap of the work environment pressuring me into acting not fully authentically in it.

Say that I am feeling stressed because there’s too much noise in my office and it hurts and I stayed up all night because I was playing Civilization 5. And what I really want at this point is to go get some sleep.

But it’s work, and if you are not at your post without explanation, your boss will reproach you for that.

But if you tell your boss “hey, sorry, I stayed up all night playing Civilization 5 and I feel really bad about it and I just want to sleep please”, that costs you social points. Because you’re socially-supposed to be a strong independent adult, and strong independent adults are not fragile. And also, you don’t want to show how fragile you are, because you have already had your fragility abused by others plenty of times.

So (barring the opportunity to actually get some sleep), you stay at your post, and you feel bad about yourself, and you just want everyone in your open space to shut up, but you can’t, because they’re allowed to talk at their work place, so aaaaaaaa :’(

Google’s research into effective teams identified an attribute called “psychological safety”, which seems to predict lots of good things. To me it seems to be the belief that it’s okay to make mistakes, that you will not be personally judged for what you do, that you are not on thin ice, that you are free to be yourself here. I’m not sure if it’s supposed to be mostly work-related - i.e., “it’s okay to break the build for a few days and we won’t be mad at you”. What I think would make me feel better at work and more free to be authentic (and also like it more, because I would not feel forced into putting up a performance of a solid worker drone that I actually am not) is a kind of “psychological safety” less about the content of my work and more about “it’s okay to be fragile, it’s okay to be disturbed by people talking a lot, it’s okay to cry if you’re overwhelmed”.

Feeling psychologically unsafe is also kind of self-perpetuating. If you don’t feel safe to e.g. express an overwhelming sadness when you feel you’ve done something wrong, and hence you don’t do it, you will use it later as more evidence that you are not psychologically safe here.

Maybe if my current work environment is actually supportive of everything like that and most of my fears/expectations-of-judgement are from this kind of self-driven feedback loop, some CoZE-type experiments could help. Like, maybe when I tell my boss I’m feeling bad, don’t say it while wearing the mask of “I’m an efficient worker and emotions are my slaves”, and instead let it drop and shed a few tears or let my voice break. Meh. Probably actually something weaker than that. This would already feel unsafe.

There’s a thought lingering in my head about the “think of it as a process” thesis. Maybe things which you have to force yourself into doing (by way of e.g., Complice or calendar reminders or willpower or what not) are not really worth it. Because being forced into something hurts. And maybe the thing to do instead is to start with where I am, and making progress through a sequence of comfortable expansions at the margins, all of which feel good and not forced and don’t make me feel bad if I don’t end up doing them.

2018-03-14 - Murphy-jitsu for today

An experiment in writing out and publishing my Murphy-jitsu for today. I have put this in as a reminder for the last ~5 days. The implementation so far has been just “grab a tablet and plan out the day, without real Murphy-jitsu”. Let’s see what happens if I give it some more structure.

What am I planning to do today?

  • I want to eat just 1 packet of Joylent today.
  • I have to go to the dentist.
  • I am organizing a Rationality Zurich meetup tonight.

I need to remember around 13:00 that I have the dental appointment, and to go home by ~18:00 so I am around by the time people start appearing. Let’s set an alarm at 13:00 and 18:00. Done.

The way eating just 1 packet of Joylent might go wrong is:

I feel frustrated and tired and depressed sitting at my desk, and I default to going for a snack.

What has helped me avoid such situations the last ~5 days was instead going to a sleep pod and having an hour or so of sleep.

I feel annoyed by the feeling of hunger, and also I have slight heartburn for some reason. It would be nice to solve both of those.

Yesterday, I decided to work on implementing a new system. Let this be one step. But still, there’s things which I am not working on towards today. Like maybe getting some exercise to burn off more calories, and I am also not doing anything about my master’s thesis. Part of me feels a bit annoyed, but I think it’s actually fine that I’m not doing anything about those things today. I have a full-time job, a dentist appointment and a meetup in the evening.

I feel that there’s more I could be doing as the current organizer of Rationality Zurich. Maybe there needs to be some soul-searching. I currently don’t actively care as much about rationality-as-actively-trying-to-believe-true-things. Good things I like are tribe-type feelings and what could help could be accountability and goal-setting and stuff. However, those things are pretty different from epistemic-rationality things some rationalists do, like explicitly betting on beliefs. After I arrive home, I could see if I can come up with some cool thing people could do at today’s meetup.

Actually a good thing would be building a scaffolding of “you can always do better than you are doing and that’s a thing to celebrate”.

So, plan for today:

  • Go to work, skip Milliway’s (name of cafe at Google’s ZRH-BRA-110 site where I work), sit at desk, make some Joylent, break fast.
  • I will count how much Joylent is there left in my Joylent box, and probably order more.
  • If I feel overwhelmed and need some quiet, go to a nap room and have a nap.
  • What do I plan to work on today, actually…?
    • There is a task that I could get fully handed off to me from my teammate, and when that’s done, that will give me a nice chunk of useful work to do.
  • If I notice I’m working on some refactoring, I’ll go sit on a couch and think about whether there’s something more useful I could be doing.
    • There’s at least one minor change I could make in another teammate’s project.
  • In the afternoon, I’m going to the dentist.
  • At 18:00, I am leaving work to go back home.
  • Before the Rationality Zurich meetup, I’ll unwind and think about whether there’s some cool thing we could be doing today.

Intentions are entered into Complice, and I can look over them in the evening to see how things went.

2018-03-13 - I need a new system

It’s now March 2018. I have been working at Google for a year. A few days after I started, my personal laptop broke, along with all my meticulously tuned personal infrastructure. I have procrastinated on getting a new one, mostly because I hoped I would fix it myself, or because I did not want to make a big purchase like that.

There’s a lot of problems that keep bugging me and that I need to solve, and after that’s done and my head stops screaming bloody anxiety at me, there’s aspirations. I feel that I need a strategy. It’s useful to take a step back to think about what things are important, what’s working for me and what’s not working, and such. Over the last year I have had a few times where I picked up a piece of paper and started scribbling some strategical stuff on it, but it was always spare moments and not very connected to other such times. I would always start from scratch.

That’s bad. I am obese and need to change that, and changing a thing like that can’t be done by getting a spark of inspiration one day, running on an agenty high for 10 hours, and then it’s fixed. I weigh 124 kg, which is very near my lifelong maximum. The Internet says that you lose 1 pound of fat by burning 3500 kcal (of course, not a figure to take seriously, goodness, I’m just doing a Fermi calculation here, leave me alone), so to get to my optimal weight of let’s-say-78-kg, I’d need to lose 353 500 kcal. The last 5 days I’ve been trying to attack the snacking habits through which I ballooned to this weight by eating just a Soylent packet a day. A Soylent packet is 2082 kcal and my basal metabolic rate is 2300 kcal. At this rate, I would get down to 78 kg in… 4.4 years :/ (And that’s not accounting for the fact that BMR decreases with weight. But on the other hand, I am not completely sessile.)
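The Fermi arithmetic above can be sketched in a few lines of Python (same rough numbers as in the text; the kcal-per-kg figure just converts the 3500 kcal/lb rule of thumb):

```python
# Rough Fermi estimate of time to reach a target weight on a fixed-intake diet.
KCAL_PER_KG_FAT = 3500 / 0.4536  # ~7716 kcal per kg, from 3500 kcal per pound

current_kg = 124
target_kg = 78
intake_kcal_per_day = 2082  # one Soylent packet per day
bmr_kcal_per_day = 2300     # basal metabolic rate

deficit_per_day = bmr_kcal_per_day - intake_kcal_per_day  # 218 kcal/day
total_kcal_to_lose = (current_kg - target_kg) * KCAL_PER_KG_FAT
years = total_kcal_to_lose / deficit_per_day / 365

print(round(years, 1))
```

Depending on how you round the pound-to-kg conversion, this lands somewhere around 4.4–4.5 years, matching the estimate in the text.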

A goal like this needs some system for coming back to the drawing board every now and then, learning from which things work and which don’t, safety nets for not losing hope and determination if I inevitably fall short at some point. And of course there’s other things in which I could use more wisdom/strategy.

For example, at work. A thing which often happens is that I spend most of the day kind of switching between not-very-useful tasks, like some refactorings or minor features, and feel very frustrated when I get blocked. And I don’t often take a moment to step back to prioritize, and I recognize that if I want to optimize for promotions and recognition (which I do want, among other things), I have gotten myself into a trap.

Or, happiness - optimizing for being happy, being with people I love.

And when things I want conflict, without the opportunity to “detach” from the conflicted parts and allow for some higher-level strategy or prioritization, I just burn cycles feeling bad because of something I also do. Example: I want to move to the Bay, because I think I’ll like the rationalist and furry communities there, but I also feel scared of looking for a new team within Google or scoping for options outside. I want to work in AI safety (part of it is probably about feeling it’s a super high-prestige thing to do), but I also want to be rich and have an upwards career trajectory. And such stuff.

And often, when I feel bad about something, I find myself walking through some conflict I have walked through many times before. Maybe I have walked through that conflict before even with a piece of paper and doing some kind of exercise to reduce internal conflict (like compassion with parts or internal double crux) but somehow that piece of paper is never anywhere to be found.

So, I want a new system for strategical stuff and I want to turn it into a kind of keystone habit which I have. Let this post be a commitment that I want to flesh this out. I’m setting up a weekly 2-hour goal in Google Calendar called “Systems Maintenance”.

What I want from the system

I’ll try to put into words what I want from this system.

I want the system to align my short-term wants (e.g., “I want a cupcake”) with my long-term goals (“I want to lose weight”).

I want the system to track the really important things. I don’t want the system to track things just for the sake of tracking things, or just for the sake of “getting good-boy points for doing an agenty-looking thing”.

If, on reflection, it turns out that a thing that I am doing because of the system is a thing I don’t really want to be doing, then sometimes that will be a fault of the system for picking that thing - not of me for not doing it.

The system is not a system for losing weight or for tracking work.

Some specific things that are bugging me at this time

Here’s a few candidates for things the system might, but also might not, lead to me making some progress on. Here, I am deliberately using non-committal language. I am not going to say that I absolutely have to do something. I have found that commitment-mechanism-bombs are sometimes self-blackmail and end up causing me to do violence to myself.

But there’s a few ideas for things which often bug me. They do often bug me now, but that does not necessarily mean that dealing with them directly on their own terms is a good idea. Maybe some of them point at problems I actually on reflection want to address. Maybe some of them are distractions, to which the correct solution might be “just stop worrying about it” or “get some fancy antidepressant / do a mind hack to realize those things are not important”.

  • Weight.
  • Unfinished master’s thesis.
  • I want a “feeling that I’m home” - a feeling like “I am safe, I won’t be hurt, things are fine and not fragile, I don’t have to try hard to fight off bad things”. This one feels important.
  • More generally, long-term non-depressed mood.
  • Stress at work. “I have no idea what’s going on higher-level. I am not in control. I feel like a small gear/pawn in a machine.”
  • Feelings of being forced into things by myself.
  • On the other hand, there is a specific thing which I’ve felt the last few days (though I’ve not always been feeling all that well), and which is associated with happiness and also productivity: something like “I can do this”, “things are actually okay”, “I am doing a good job”. “I have committed to a thing and look, I am actually making progress on it.”

There’s also a few aspirational things in mind, which might be candidate goals, but might also be distractions which should be abandoned or shelved. Actually, now that I think about it, they’re mostly “maybe I want to work in AI safety research” and “maybe I want to do some serious EA-planning”. Right now, I’m mostly feeling like setting those aside and focusing on getting myself in order and feeling good without trying to string on specific goals like that.

Sidenote: I don’t think I want to be religious in a certain sense

Relatedly, I’ve gotten somehow less certain about EA stuff - in particular, the role that I want it to play in my life. I have been a religious effective altruist (for a wide sociological take on religion). Independently of whether I want to continue or discontinue some EA-type behaviors (like identifying as an EA, going to EA/rationality meetups, etc.), it’s not healthy to be too identified with a particular belief system.

In my understanding, religions are community plus belief system plus value system. Hang out in the community, and you’re prone to soak up all the rest. And you may see your own (possibly implicit) value system and belief system come into conflict with that of the community. And if you don’t want to leave the community, maybe because you by now know few people outside of the in-group and because you (like me) are deeply pained by loneliness, the part of you which wants to do the right in-group signalling is going to fight the parts of you which want something else.

Say that you identify as an EA and a rationalist and to get social points, the right thing to say is “I want to work on AI safety in the Bay Area”. That’s called a load-bearing belief by analogy with a load-bearing wall: if it comes down, you can lose a lot.

And earlier this year, largely due to talking with a person highly critical of rationality/EA, I have become worried by the fact that I apparently have load-bearing beliefs about EA and rationality. Note that a belief being load-bearing does not imply that it’s false. (Though after reading The Elephant in the Brain, I wouldn’t be surprised if there were some argument for why group-cohesion-beliefs would tend to be outlandish, honest-costly-signalling-something.) So I have become concerned that I might be acting out some beliefs because they’re load-bearing for my need for community and acceptance. I think what’s warranted is a gentle de-identification from the community, by mixing with more people who are not in it and diversifying, and a kind of retracing of my steps in how I came to do EA things.

If I remember correctly, they came mostly from moments when I expanded my empathy over the suffering of all things, and wanted to make things okay, and I expect I will still prefer to try to make the world better on reexamination. But a sort of scary thing is that I think it would feel bad if I came to discover I don’t really care about making the world better.

On the other hand, the general form of this reasoning is: “Huh. Maybe I don’t actually want X. And the thought ‘Maybe I don’t actually want X’ makes me feel bad. That’s a reason to re-examine whether I want X.” Substituting for “X” anything I care about will have the effect of making me doubt whether I actually do, and I’ve had this particular security hole exploited by the said person-critical-of-rationality/EA.

Something for me to maybe think over when I feel like it. But after this writing-it-out, I don’t feel the need for doing anything in particular about it.

Back to the system.

Broad failure modes I want the system to avoid

I know about at least two failure modes I want to avoid.

First, I want to avoid the failure mode where some bump makes the system fall apart.

That is, I want the system to fail gracefully and recover. If part of me wants the system to fail for some reason, that means I should bring things into harmony not by forcing the part to behave, but by accommodating the system to meet the needs of all relevant parts.

Second, I want to avoid the failure mode of “doing all the rationality techniques just so I can get the points for doing self-improvement”.

After attending my CFAR workshop in May 2017, I fell into the second one. I had a document with TAPs that I practiced every day. When I started using Complice, I picked ~5 goals and didn’t revise them, and felt bad when I stopped working on them. I want to put the system in place so I can be awesome. If I am doing self-improvement-type things just because I would feel bad if I would skip them, I have fallen into a trap.

Specific tools that could figure in the system and their failure modes

Every TODO system that grows with time ends in bankruptcy

Something that I guess Miranda Dixon-Luinenburg might have remarked on in a document that I have no idea how to Google now (a document discussing productivity tools by some members of the CFAR alumni community) is that everything which looks like a TODO list or inbox is doomed to fail. My Google Inbox, my Google Keep and all the other places which I have used to try to keep track of tactical concerns have over time become full of items which I don’t want to address immediately, but also don’t want to shelve indefinitely. The system’s working memory has to stay constant-size over time. Not keeping the system’s working memory constant-size would lead to a failure mode in which obsolete tactical concerns end up dominating, and the system becomes a bother to keep running. The inevitable consequence is that some day, I would declare bankruptcy and start over from scratch.

The thoughts which come to mind upon seeing this are:

  • Zero is the limit of constant-size working memory. Perhaps just regularly reflecting would be better than trying to keep explicit track of all tactical concerns. Maybe my brain will automatically garbage-collect.
  • On the other hand, the principle of Getting Things Done is that the brain does not automatically garbage-collect, and that putting “okay some day I should learn crocheting” into a TODO list lets the brain be like “okay, now it’s in the TODO list and it won’t get lost, so I can stop thinking about it randomly at 2 in the morning”.
  • A thing which could be good would be explicitly keeping just a few “live tactical concerns”, and keeping everything else in non-working memory. I’m thinking of a DAG growing to the right in time, and keeping a few of the leaf nodes as “working on this”. A thing I tried before my laptop went “lol I won’t turn on now” was storing my collection of personal notes in a git repo, so I could even safely delete from them without losing the things forever.

Writing things down is useful

Yep. It lets reasoning be explicit. When I write down my thoughts, I’m more likely to notice thinking going askew.

Publishing things for other people to look at feels nice and works as a commitment mechanism slash reward signal. On the other hand, things I publish go through my social filter. Probably some balance to strike here.

Too much automation is bad, but I also want the system to be my own

As I mentioned, I used to have a whole scaffolding of scripts on my old laptop which would do things like plot my net worth over time, try to mount and unmount an encrypted partition (mostly because of NSFW stuff in it), synchronize Anki decks with a human-readable Git repo of information, and such. And over time, this scaffolding tended to accumulate bugs, like when a service I was relying on changed its API.

I like to program, but time spent programming the system is time not spent being object-level awesome.

I have tried to reduce the custom scaffolding I used (partially out of necessity, because I just didn’t have access without a personal laptop), by supplementing with Keep and such, but I ended up not, for example, keeping up with writing my diary. The more I use general tools, the less they will fit my personal idea of ergonomics.

Also: I’m a programmer and I love programming a nice system. That’s both a blessing and a trap.

Taking up too much time is bad

The system I had in place some time after CFAR accumulated more and more small things, which I would check off every day. The final version was something like: every morning, do a boot-up with a few free-text questions, pick up ~5 Complice daily tasks and maybe put in a few extra Complice goals. Every evening, do a self-improvement round which lasted probably more than 1 hour. It included practicing TAPs, a kind of Murphy-jitsuing around possible problems in my daily routine (a free-form text exercise) and checking in with some of my parts (also free-form text).

I have kind of hard-committed into doing all of this daily, and when it crossed some threshold, I just stopped doing all of it at once.

A way I could have avoided this would be keeping the “mandatory” part constant-time. And also having a looser schedule, in which I could easily spend, say, a whole evening reflecting on and improving the routine (as opposed to having very little time and so not feeling like I have the time for meta-stuff, so just chugging along the overly long routine). And by making peace with the fact that there are 24 hours in a day, and that if my self-improvement routine takes 2 hours, those are 2 hours less of sleep or fun or whatever.

That feels important. Time is a scarce resource.

It has to be okay to stop doing some things

If I have decided at some point that I want to, say, write a diary entry every day and I end up not doing that, that does not mean that I have failed. If I today decide that something is important, but tomorrow I no longer think it is, that’s okay. I have the right to revise what is important - in fact, revisions are good and welcome.

At one point, I felt that I could change a bunch of bad habits by doing TAPs. Later, the TAP practice routine grew very long, yet I felt bad about the prospect of just saying “this is no longer the thing to be doing, so I will stop doing it”. A more productive and less conflict-producing way to look at it would be: “I don’t feel like doing this TAP routine, and that’s okay. I deserve rest.”

Tactics are contingent on being useful.


  • Need a new system
    • System to do good things
    • System to keep me doing well and feeling well
    • System to enact long-term plans
  • Want system to recover from bumps in motivation/mood
  • Specific possible goals:
    • Weight
    • Unfinished master’s thesis
    • Internal conflicts
    • Feeling good
  • Known antipatterns:
    • System becomes a boring chore
    • System bloats over reasonable size
    • System stops tracking what’s the thing to be doing
    • Overly strict commitments
  • Ideas to leverage:
    • Writing
    • Automation, but not too much of it
    • Time is a scarce resource
    • Changes are good

Wall of text ends here

I have an idea of what a software implementation of part of the system could look like for me - something like a versioned directed graph where nodes would be free text, ideas, tactical priorities, or resources like links to websites, or nodes grouping events or people. It would probably be very fun to implement. However, also time-consuming.
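To make the idea a bit more concrete, here is a rough sketch of what I have in mind, in Python. All the names and structure here are purely illustrative - this is a thought experiment, not a design I have committed to. “Versioned” here means append-only: revising a node adds a new version instead of overwriting the old one.

```python
from dataclasses import dataclass
import itertools
import time


@dataclass(frozen=True)
class Node:
    """One version of a node: free text, an idea, a priority, or a resource."""
    node_id: int
    kind: str        # e.g. "text", "idea", "priority", "resource"
    content: str
    created_at: float


class VersionedGraph:
    """Append-only directed graph: edits create new versions, nothing is lost."""

    def __init__(self):
        self._ids = itertools.count()
        self.nodes = {}      # node_id -> list of Node versions, oldest first
        self.edges = set()   # (from_id, to_id) pairs

    def add(self, kind, content):
        """Create a node with a single initial version; return its id."""
        nid = next(self._ids)
        self.nodes[nid] = [Node(nid, kind, content, time.time())]
        return nid

    def revise(self, nid, content):
        """Append a new version of the node, keeping the full history."""
        old = self.nodes[nid][-1]
        self.nodes[nid].append(Node(nid, old.kind, content, time.time()))

    def link(self, src, dst):
        """Add a directed edge, e.g. from an idea to a supporting resource."""
        self.edges.add((src, dst))

    def current(self, nid):
        """Return the content of the latest version of a node."""
        return self.nodes[nid][-1].content
```

Usage would look like: create an “idea” node, link it to a “resource” node, and revise the idea later while the old version stays in `self.nodes` for browsing the history.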

I’ll let my thoughts sit in my mind, and my next action is to sit down in a few days and think some more.
