How To Be A Great Engineer

The myth of the 10x engineer is widely known in programming circles, and there are definitely people who seem to fit the description. Jeff Dean and John Carmack are some of the programmers I look up to as examples of 10x-ers, and there are also a few people I know personally who just seem to have crazy high output. How do you become like these people? Here are the qualities that I think make a great engineer.

  • Constant self-improvement
    • I once debugged a Rails app with someone on my MacBook Pro. He wasn’t familiar with OS X, but still tried to learn every keyboard shortcut to become more efficient. It doesn’t seem that big, but this was a revelation to me. I don’t think I would have ever thought of doing that. Imagine applying that mentality to everything you do. Skills and knowledge compound like interest, and it truly is worthwhile to stop and think about whether there’s a better way of doing what you’re doing instead of mindlessly doing what you’ve always done because it’s easy and you’re used to it. Wow, that was a long and complicated sentence, but I’m glad you stuck through it.
  • Unstoppable drive
    • Great programmers don’t let anything stop them. Bug in the system library? Fix it. Some person or org blocking you? Find a workaround. Need to shave a giant fucking yak to finish the job? Just do it with the smallest amount of hassle and without any complaints. This is probably the most straightforward quality but also the hardest to achieve.
  • Deep empathy
    • Engineers don’t work in a vacuum, as much as I’d like to silo up and work on my magnum opus. You need to communicate with other people, whether it be about code, architecture, or product. In these situations, you want to be able to understand the other person’s concerns to know exactly what you should be working on, and why. I’ve definitely been caught in the pitfall of building things for the sake of building things, with the end result being lots of wasted work.
  • Sense of style
    • Good code is very ‘I know it when I see it’. There’s no objective metric that captures code quality, and it’s definitely not just about following PEP8. It’s all a tug-of-war between choosing the correct abstractions and finding flexibility. To write clean, elegant code, you absolutely need a sense of style. To this day, some of the nicest code I’ve seen has been written by Jae Wie, who has a habit of relentlessly destroying unnecessary code.

If you’re interested in this kind of stuff, you should probably read Khan Academy’s Engineering Principles and Career Development Guide.

I’m probably missing some more qualities, and I’d love to hear your opinion on what makes a great engineer. Hacker News discussion here.

Scaling Jarvis from 5 to 5000 users in forty-eight hours

Last Saturday, Daniel and I were ready to launch our side project, Jarvis, to the world. We had been building and dogfooding it for about three weeks and were excited for people other than our friends—real people—to try it out. We posted the link on reddit and Product Hunt and…not much happened. Fifteen or so people briefly messaged Jarvis and then left. We went home, dejected, and slept.

Two days later, Jarvis had over 5000 users and got featured on some cool sites like these. We have now sent over 13000 reminders to over 8000 users, and migrated our parser to wit.ai. But in the crucial first forty-eight hours, here’s how things went down.

Hours 0-12

I woke up in the middle of the night and checked Product Hunt and we were right there, trending on the front page. People were actually trying Jarvis out. I checked the Heroku logs. Holy shit. Logs were scrolling like the Matrix.

All was well and good, until we stopped responding to messages. I checked the logs again. We were 500’ing all over the place. Someone had sent some weird unicode that we didn’t decode properly. I woke Daniel up and he got to debugging the issue while I messaged all our new users to tell them that we were having issues and could they please hold on for a second. 15 minutes later, he pushed a fix. Thank god.

We could rest easy…for about five minutes, when something else broke. We had run into our Google Maps Geocoding API limit of 2,500 requests per day. We quickly generated a new API key, but Google said it could take a while to activate. Meanwhile, a ton of people were messaging Jarvis and probably bouncing because he was unresponsive. My palms were sweaty but my knees and arms were alright. This was the most exciting time of my life, except for the time I played this really good guy at Smash 64 at a tournament. 10 minutes later we were seeing 200s again. Luckily, Facebook buffers any request that doesn’t 200, so we could still backfill the missed requests. We went back to sleep soon after that little incident.

Lesson learned: Be careful around Unicode.
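To make the lesson concrete: our actual stack was different, so treat this as a hedged Python sketch of the class of bug that 500’d us, where message bytes get decoded with an encoding assumption that the first emoji or accented character violates:

```python
# A message body arrives as raw bytes off the wire
raw = "remind me about café at 3".encode("utf-8")

def parse_naive(data):
    # Naive: assumes ASCII, raises UnicodeDecodeError on the first
    # multi-byte character (the "é" above)
    return data.decode("ascii")

def parse_safe(data):
    # Defensive: decode as UTF-8 and degrade gracefully on bad bytes
    # instead of taking down the request handler
    return data.decode("utf-8", errors="replace")
```

Here `parse_naive(raw)` raises `UnicodeDecodeError`, while `parse_safe(raw)` returns the full message.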

Hours 12-24

We grabbed brunch nearby and started talking about our future billions. I upgraded our dynos, went home, created a test framework, and continued writing good tests like any good software developer who always writes good tests.

Lesson learned: Nothing much, really. Things were pretty good at this point.

Hours 24-36

Monday morning. I woke up and everything was on fire. I checked the logs and we were down again. Goddammit. We were getting hit with 150 requests per minute thanks to the Lifehacker article. Luckily, the fix was super straightforward (we had missed a unicode conversion) and I pushed it like five minutes after waking up #mlgskills #realtalk. Jarvis was down for a total of about 30 minutes at this point, and we probably lost a bunch of traffic. Oh well, not much we could do about that.

Then our Redis cluster started getting really slow. Like, 1 second per query slow. It may have had something to do with us stuffing all our data into two keys. Then we lost connection to the cluster and dropped some users. This was not good. I contacted RedisToGo and they told me they were investigating. Eventually they managed to recover the cluster and upgrade the memory, but we had lost about a thousand users during that time. I started building a Postgres backend, because this Redis setup was clearly unsustainable.

Lesson learned: Use Redis for caching or message brokering and a real database for database things. Don’t stuff all your data into a Redis list and iterate through the list to find things. This is inefficient.
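To see why this matters, here’s the anti-pattern sketched with plain Python stand-ins (not our actual schema): every lookup against one big list is a linear scan, while anything keyed by the field you query on behaves like a database index.

```python
# Anti-pattern: every reminder appended to one big list
# (what we were effectively doing with a single Redis list)
reminders = [
    {"user": "alice", "text": "water the plants"},
    {"user": "bob", "text": "call mom"},
    {"user": "alice", "text": "pay rent"},
]

def find_reminders_scan(user):
    # O(n) on every single lookup
    return [r for r in reminders if r["user"] == user]

# What an index gives you: data keyed by the thing you query on
by_user = {}
for r in reminders:
    by_user.setdefault(r["user"], []).append(r)

def find_reminders_keyed(user):
    # O(1) average per lookup
    return by_user.get(user, [])
```

Both return the same results; only one of them stays fast when the list grows past a few thousand entries.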

Hours 36-48

Things were getting stable. It looked like we were going to hit our API limit with Google again, but it turns out that if you attach a credit card to your developer account, you can pay 50 cents for every extra 1,000 requests. This is a pretty good deal, so we did it. We also finished the migration from Redis to Postgres with very few problems. We were still getting a shitton of users, my ex started talking to me again, and things were looking pretty good.

Lesson learned: Make tiny, immediately useful things to validate your market.

I measured my heartrate at Coachella

I wore my Fitbit to Coachella. Here’s the data I scraped from it afterwards. Give it some time to load — at 12 samples per minute, there are about 17,280 data points per graph! Also click on the artist titles for a neat preview.

Friday

Saturday

Sunday

Hacking 4 Yeezys

This week, Slam Jam Socialism ran an online arcade for a chance to win some Yeezy Boost 350s. My friend Marc sent me the link and asked if I could hack the leaderboards and cop some shoes. Here’s how I did it.

The game was a single-player version of Pong, written in plain JavaScript using <canvas>. Each time the ball hits a paddle, you get a point and it speeds up until you inevitably lose. While playing the game, I had Chrome’s Network tab open to see how it reported actions to the server.

So: every time you lose, the game sends a POST request to the site at /callback.php along with some parameters. Most notably, pointz. So you can just send a new request and set pointz to 99999, right? As I soon discovered, it’s not that easy. When I tried it, I got an error message: "Sorry, please try again". Hmm, my request was almost identical to the legitimate one, though. But what’s sec? It looks like a random hash…

Turns out that sec stands for secret, and the score is one of the inputs to the hash. If the server’s hash doesn’t match yours, then your score gets rejected. The hash must be generated client-side, so I just need to look through the source to find out where the POST request is sent:

    var _0x4d74=["...","\x73\x70\x6C\x69\x74","...","\x66\x72\x6F\x6D\x43\x68\x61\x72\x43\x6F\x64\x65","\x72\x65\x70\x6C\x61\x63\x65","\x5C\x77\x2B","\x5C\x62","\x67"];eval(function(_0x3e69x1,_0x3e69x2,_0x3e69x3,_0x3e69x4,_0x3e69x5,_0x3e69x6){_0x3e69x5=function(_0x3e69x3){return (_0x3e69x3<_0x3e69x2?_0x4d74[4]:_0x3e69x5(parseInt(_0x3e69x3/_0x3e69x2)))+((_0x3e69x3=_0x3e69x3%_0x3e69x2)>35?String[_0x4d74[5]](_0x3e69x3+29):_0x3e69x3.toString(36))};if(!_0x4d74[4][_0x4d74[6]](/^/,String)){while(_0x3e69x3--){_0x3e69x6[_0x3e69x5(_0x3e69x3)]=_0x3e69x4[_0x3e69x3]||_0x3e69x5(_0x3e69x3)};_0x3e69x4=[function(_0x3e69x5){return _0x3e69x6[_0x3e69x5]}];_0x3e69x5=function(){return _0x4d74[7]};_0x3e69x3=1};while(_0x3e69x3--){if(_0x3e69x4[_0x3e69x3]){_0x3e69x1=_0x3e69x1[_0x4d74[6]]( new RegExp(_0x4d74[8]+_0x3e69x5(_0x3e69x3)+_0x4d74[8],_0x4d74[9]),_0x3e69x4[_0x3e69x3])}};return _0x3e69x1}(_0x4d74[0],62,436,_0x4d74[3][_0x4d74[2]](_0x4d74[1]),0,{}))

The source looks like it’s written in hex. No, wait: those are just hex-escaped strings and variables named to look like memory addresses, i.e. run-of-the-mill JavaScript obfuscation. That eval is the code that actually gets executed, so we can just console.log whatever is getting evaled:

$(document).ready(function(){window.addEventListener("load",function(){window.scrollTo(0,0)});var fbb=navigator.appVersion.indexOf("FBAN");if(fbb>-1){var fbbh=44}else{var fbbh=0}var canvas=document.getElementById("canvas"),ctx=canvas.getContext("2d"),W=$(window).width(),H=$(window).height()-fbbh,particles=[],ball={},paddles=[2],mouse={},points=0,fps=60,particlesCount=3,flag=0,particlePos={},multipler=1,startBtn={},restartBtn={},over=0,init,paddleHit;var mq=window.matchMedia("(min-width: 768px)");if(mq.matches){var offset=0}else{var offset=50}function player(){window.requestAnimFrame=(function(){return window.requestAnimationFrame||window.webkitRequestAnimationFrame||window.mozRequestAnimationFrame||window.oRequestAnimationFrame||window.msRequestAnimationFrame||function(callback){return window.setTimeout(callback,1000/60)}})();window.cancelRequestAnimFrame=(function(){return window.cancelAnimationFrame||window.webkitCancelRequestAnimationFrame||window.mozCancelRequestAnimationFrame||window.oCancelRequestAnimationFrame||window.msCancelRequestAnimationFrame||clearTimeout})();canvas.addEventListener("mousemove",trackPosition,true);canvas.addEventListener("mousedown",btnClick,true);canvas.addEventListener("touchmove",trackPositionT,true);canvas.addEventListener("touchend",btnClick,true);collision=document.getElementById("collide");collisionWall=document.getElementById("collide_wall");canv...

This is a lot better, but it’s still minified. Luckily, there’s jsnice to prettify it for us:

  // some code above...
  function start() {
  // some code here...
    over = 1;
    $.ajax({
      method : "POST",
      url : "callback-ism.php",
      data : {
        action : "setpoint",
        email : email,
        pointz : y,
        sec : s(email, y, token)
      }
    }).done(function(opt_classNames) {
    // more code below...

Well that worked way better than I expected. So sec is generated from a function s, which takes in email, y (which is pointz), and token. What’s s?

function s(email, val, v) {
  md5_iterations = parseInt(v.substring(0, 1));
  multiplier = parseInt(v.substring(4, 5));
  divider = parseInt(v.substring(17, 18));
  hash_calc = Math.ceil(val * multiplier / divider) + v + email;
  md5_iteration = 1;
  for (;md5_iteration <= md5_iterations;md5_iteration++) {
    hash_calc = m(hash_calc);
  }
  return hash_calc;
}

The hash is produced by repeatedly hashing (presumably with MD5, judging by the variable names) a string built from my score, scaled by digits plucked from the token, concatenated with the token itself and my email; the token is generated server-side on each session and also controls the iteration count. Now all that’s left to do is calculate the hash for the number of points I want, and replace the parameters in the curl request with my new score and hash.
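A Python port of s makes the forgery easy to script (a sketch: I’m assuming from the variable names that m is a single round of MD5, and the token indices are copied straight from the deobfuscated source):

```python
import hashlib
import math

def forge_sec(email, pointz, token):
    # Digits of the session token pick the parameters,
    # mirroring v.substring(...) in the client-side s()
    md5_iterations = int(token[0:1])
    multiplier = int(token[4:5])
    divider = int(token[17:18])
    # JS coerces number + string to string concatenation
    hash_calc = str(math.ceil(pointz * multiplier / divider)) + token + email
    # Iterate the hash md5_iterations times, like the for loop in s()
    for _ in range(md5_iterations):
        hash_calc = hashlib.md5(hash_calc.encode()).hexdigest()
    return hash_calc
```

Call it with your email, the score you want, and the session token the server handed you, then drop the result into the `sec` parameter of the POST.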

rekt

I still didn’t win the Yeezys though

Finding the edit distance between two strings

The edit distance between two strings refers to the minimum number of character insertions, deletions, and substitutions required to change one string into the other. For example, the edit distance between “kitten” and “sitting” is three: substitute the “k” for “s”, substitute the “e” for “i”, and append a “g”. Our particular metric, which allows insertions, deletions, and substitutions, is called the Levenshtein distance (other variants exist, such as the LCS (longest common subsequence) distance, which disallows substitution). Edit distance is useful for measuring similarity between strings. It measures superficial similarity only: the content and meaning of the words don’t matter; for semantic similarity, look at Jaccard’s similarity coefficient. Let’s implement the Wagner-Fischer algorithm, which computes the edit distance between two strings.

The key insight required to understand this algorithm is, as usual, recursive. Notice that if we have the edit distance d between the two strings with their first characters removed, then the actual edit distance will be d + 1 if the first characters of the two strings differ, and d if they match. Take the words “jewels” and “mogwai”: assuming we have the edit distance between “ewels” and “ogwai” (5), the actual distance will be 6, since “j” and “m” are different. We can simply apply this rule until we hit the end of one (or both) of the strings. Then the edit distance is just the length of the remaining string, as we can just add the requisite characters. The astute reader will notice that there is a mistake in our analysis, however: we’ve only covered the substitution rule. Evaluating “godspeed” and “speed” with this rule alone would give an edit distance of 8, which is clearly wrong, since we should be able to do it in 3 moves by deleting “god” from “godspeed”. We can support unaligned suffixes by computing our first rule, then the edit distance between each string and the other string minus its first character, plus one (modeling an insertion or deletion), and taking the minimum of all three computations. This essentially tries each path to see whether we can find a common fragment in both strings that isn’t necessarily aligned.

Here’s my implementation of our rule in Python:

def levenshtein(s1, s2):
    if len(s1) == 0 or len(s2) == 0:
        return max(len(s1), len(s2))

    return min(levenshtein(s1[1:], s2) + 1,
               levenshtein(s1, s2[1:]) + 1,
               levenshtein(s1[1:], s2[1:]) if s1[0] == s2[0]
               else levenshtein(s1[1:], s2[1:]) + 1)

However, this is really inefficient because of all the repeated subcomputations. Running this on the first two sentences of lorem ipsum doesn’t even terminate in a reasonable amount of time. We can memoize, essentially caching our subcomputations, which is a top-down dynamic programming strategy:

memo = {}
def levenshtein(s1, s2):
    if len(s1) == 0 or len(s2) == 0:
        return max(len(s1), len(s2))

    if (s1, s2) not in memo:
        memo[(s1, s2)] = min(levenshtein(s1[1:], s2) + 1,
                levenshtein(s1, s2[1:]) + 1,
                levenshtein(s1[1:], s2[1:]) if s1[0] == s2[0]
                else levenshtein(s1[1:], s2[1:]) + 1)

    return memo[(s1, s2)]
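As an aside, Python’s standard library will happily do the memoization bookkeeping for us, with a per-function cache instead of a module-level global:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def levenshtein(s1, s2):
    # Base case: one string exhausted; add the rest of the other
    if not s1 or not s2:
        return max(len(s1), len(s2))
    # Same three branches as before; (s1[0] != s2[0]) is 0 or 1,
    # so substitution is free when the first characters match
    return min(levenshtein(s1[1:], s2) + 1,
               levenshtein(s1, s2[1:]) + 1,
               levenshtein(s1[1:], s2[1:]) + (s1[0] != s2[0]))
```

Same algorithm, same cache keys (the argument tuples), just less plumbing.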

That’s easy, but boring. Plus, there’s still a lot of memory overhead from the stack frames generated from all the recursion. Let’s take a bottom-up approach to the problem instead. The actual Wagner-Fischer algorithm uses a two-dimensional array (or matrix) to hold the edit distances between prefixes of each string. The array (let’s call it A) has size m by n, where m and n are one plus the lengths of the first and second strings s1 and s2, respectively. What is stored at each index i, j is the edit distance between s1[:i] and s2[:j]. Then, from our work above, we have the following relation:

\[A[i, j] = \begin{cases} i & \text{if } j = 0\\ j & \text{if } i = 0\\ A[i - 1, j - 1] & \text{if } s1[i] = s2[j]\\ \min(A[i - 1, j], A[i, j - 1], A[i - 1, j - 1]) + 1 & \text{if } s1[i] \neq s2[j] \end{cases}\]

Notice that the recursive part of the relation depends on the values directly above, to the left, and to the upper left. This means we have to fill in the matrix in an order that computes those cells first; row by row, left to right, works. My solution works like this:

def levenshtein(s1, s2):
    x = len(s1) + 1 # the length of the x-coordinate
    y = len(s2) + 1 # the length of the y-coordinate

    A = [[-1 for i in range(x)] for j in range(y)]
    for i in range(x):
        A[0][i] = i

    for j in range(y):
        A[j][0] = j

    for i in range(1, y):
        for j in range(1, x):
            if s1[j - 1] == s2[i - 1]:
                A[i][j] = A[i - 1][j - 1]
            else:
                A[i][j] = min(
                        A[i - 1][j] + 1,
                        A[i][j - 1] + 1,
                        A[i - 1][j - 1] + 1
                        )
    return A[y - 1][x - 1] # return the edit distance between the two strings

And now you know how to compute the Levenshtein distance! As an exercise, try reconstructing the actual transformations to go from s1 to s2.