Yuvi Panda

JupyterHub | MyBinder | Kubernetes | Open Culture

2017

I haven't done a 'year in retrospective' publicly for a long time, but after reading Alice Goldfuss' 2017 year in review I decided to do one for myself too!

This is a very filtered view - there are lots of important people & events in 2017 that are not contained here, and that is ok.

Professional

  • New Job

    I finished around 6-ish years at the Wikimedia Foundation, and joined UC Berkeley’s Data Science Division early in the year. I grew immensely as a person & programmer in that time. The new job gives me a lot more responsibility and it is quite fun.

At Berkeley, I build infrastructure that lets students dive into writing code to solve problems in their own fields, without having to navigate the accidental complexities of software installation & configuration any more than necessary. This is in line with my previous work like Quarry or PAWS, except it's my main paid-for job now rather than a side project, which is great! It lets me work full time on realizing some of the ideas from my talk on democratizing programming. I'm happy with the kind of work I'm doing, the people I am doing it with, the scale I am doing it at and the impact I think it is having. I feel lucky & privileged to be able to do it!

    Wherever I go, whatever I do - good or bad - Wikimedia will always be partially responsible for that :)

  • Working closer to users

At my Wikimedia job, I was partially responsible for maintaining the Tool Labs infrastructure. Others (mostly volunteers) built the tools that end users actually used. While this was still good, it made me one step removed from the actual end users. At Berkeley, end users (both students & faculty) directly use the infrastructure I build. This increase in directness has given me a lot of joy, happiness & confidence about the impact of the work I'm doing.

  • MyBinder

I helped rewrite & redeploy mybinder.org as part of the mybinder team, which was one of the high points of the year! It has had the most public facing impact of all the projects I've worked on this year - it even got a glowing review from Julia Evans! We're now temporarily funded via a grant from the Moore Foundation, and need to find long term sustainable solutions. We have a lot of low hanging fruit to take on in the next year, so I am super excited for it!

  • Academia

I'm now sort-of accidentally 'inside' Academia as defined in the US, which is a strange and surreal experience. I'm 'staff', which seems to be a distinct and different track than the grad student -> post grad -> faculty track. From the inside, it is many moving parts rather than one behemoth - some move fast, some slow, and there's super cool stuff / tension at the intersections. I don't fully understand my place in it yet, but maybe someday I will!

  • Teams

    At Wikimedia, I was in a team of (otherwise amazing!) operations folks that was mostly white and male. Now, I’m in multiple diverse & multi-disciplinary teams, and it is amazing. I find it easier to do more impactful work, grow technically & professionally, build consensus and have fun. Hard to go back!

  • Intersections

    I spend time at the Berkeley Institute for Data Science, with the interesting variety of people who are there. They’re all very smart in different fields than I am in, and the intersection is great. I walk away from every conversation with anyone feeling both dumber & smarter for the new knowledge of things I now knew I didn’t know! Cool (and sometimes uncomfortable) things happen at intersections, and I want to make sure I keep being in those spaces.

Community

  • I am a Maintainer

With enough involvement in the Jupyter community, I have now found myself to be an actual Maintainer of open source projects in ways I was not when I was at Wikimedia. It took me a while to realize this comes with a lot of responsibility and work that's not just 'sit and write code'. I am still coming to terms with it, and it's not entirely clear to me what responsibilities I now have. Thankfully I'm not a solo maintainer but have wonderful people who have a lot of experience in this kinda stuff doing it with me!

  • Talks

    I was involved in 3 talks ( 1 2 3 ) and 1 tutorial at JupyterCon this year, which was a mistake I shall not make again. I also gave one talk at KubeCon NA 2017. I am a little out of practice in giving good talks - while these were okay, I know I can do better. I gave a number of talks to smaller internal audiences at UC Berkeley & ran a number of JupyterHub related workshops - I quite enjoyed those and will try to do more of that :)

  • Documentation

    I finally understood how little I had valued writing good documentation for my projects and spent time correcting it this year. I still have a long way to go, but the Jupyter community in general has helped me understand and get better at it.

Technical

  • Python skills

    I’ve started working on python projects again, rather than just scripts. Some of my skills here have rusted over years of not being heavily used. I got into writing better tests and found lots of value in them. This is another place where being part of the Jupyter ecosystem has made it pretty awesome for me.

  • Autonomous systems

    This year I’ve had far more operational responsibilities than I had at Wikimedia, and it has forced me to both learn more about automation / autonomous systems & implement several of them. It’s been an intense personal growth spurt. I also have the ability to work with public clouds & a lot of personal freedom on technology choices (as long as I can support them!), and it’s been liberating. It will be hard for me to go back to working at a place that’s automated a lot less.

  • Performance analysis + fixing

    I did a lot of performance analysis of JupyterHub, in a ‘profile -> fix -> repeat’ loop. We got it from failing at around 600ish active users to about 4k-5k now, which is great. I also learnt a lot about profiling in the process!

  • Container internals

    I learnt a lot about how containers work at the kernel level. Liz Rice’s talk Building a container from scratch made me realize that yes I could also understand containers internally! LWN’s series of articles on cgroups and namespaces helped a lot too. I feel better understanding the hype & figuring out what is actually useful to me :) It pairs well with the kubernetes knowledge I gained from 2016.

Personal

Lots happened here that I cannot talk about publicly, but here is some of it!

  • Election & Belonging

    The 2016 US Elections were very tough on me, causing a lot of emotional turmoil. I participated in some protests, became disillusioned with current political systems, despondent about possible new ones & generally just sad. I feel a bit more resilient, but know even less than before if the US will be a good long term place for me. I’d like it to be, and am currently operating on the assumption that the Nov 2018 elections in the US will turn better, and I can continue living here. But I am starting German classes in a week just in case :)

  • Visa situation

    My visa situation has stabilized somewhat. Due to wonderful efforts of many people at UC Berkeley, I am possibly going to start my Green Card process soon. My visa is getting renewed, and I’ll have to go back to India in a few months to get it sorted. It’s a lot more stable than it was last year this time!

  • Traveling

I did not travel out of the country much this year. I had the best Fried Chicken of my life in New Orleans, and good Chicken 65 (!!!) in Austin. I also did my first ever 'road trip', from the Bay Area to Seattle! I spent a bit of time in New York, Portland & Seattle as well - not enough though. Paying bay area rents does not help with travel :(

  • Cooking

    I cooked a lot more of the food I ate! I can make it as spicy or sweet as I want, and it is still healthy if I make it at home (right?). Other people even actually liked some of the food I made.

  • Health

    I haven’t fully recovered from a knee injury I had in 2016 :( It made me realize how much I had taken my body for granted. I am taking better care of it now, and shall continue to. I’m doing weights at home, having admitted I won’t have the discipline to actually go to a gym regularly when it is more than a 3 minute walk…

  • Hair

    It’s been mostly red this year! I might just stick to red from now on. I switched out my profile picture from random stick figure to a smiling selfie that I actually like, and it seems to have generally improved my mood.

In conclusion

  • My primary community is now the Jupyter community, rather than the Wikimedia community. This has had a lot of good cascading changes.
  • Lots of personal changes, many I can’t publicly talk about.
  • The world is a bleaker & more hopeful place than I had imagined.

‘18!

Why repo2docker? Why not s2i?

https://xkcd.com/927/

The wonderful Graham Dumpleton asked on twitter why we built an entirely new tool (repo2docker) instead of using OpenShift’s cool source2image tool.

This is a very good question, and not a decision we made lightly. This post lays out some history, and explains the reasons we decided to stop using s2i. s2i is still a great tool for most production use cases, and you should use it if you’re building anything like a PaaS!

Terminology

Before discussing, I want to clarify & define the various projects we are talking about.

  1. s2i is a nice tool from the OpenShift project that is used to build images out of git repositories. You can use heroku-like buildpacks to specify how the image should be built. It’s used in OpenShift, but can also be easily used standalone.
  2. BinderHub is the UI + scheduling component of Binder. This is what you see when you go to https://mybinder.org
  3. repo2docker is a standalone python application that takes a git repository & converts it into a docker image containing the environment that is specified in the repository. This heavily overlaps with functionality in s2i.

When repo2docker just wrapped s2i…

When we started building BinderHub, I looked around for a good heroku-like ‘repository to container image’ builder project. I first looked at Deis’ slugbuilder and dockerbuilder - they didn’t quite match our needs, and seemed a bit tied into Deis. I then found OpenShift’s source2image, and was very happy! It worked pretty well standalone, and #openshift on IRC was very responsive.

So until July 1, we actually used s2i under the hood! repo2docker was a wrapper that performed the following functions:

  1. Detect which s2i buildpack to use for a given repository
  2. Support building arbitrary Dockerfiles (s2i couldn’t do this)
  3. Support the Legacy Dockerfiles that were required under the old version of mybinder.org. The older version of mybinder.org munged these Dockerfiles, and so we needed to replicate that for compatibility.

@minrk did some wonderful work in allowing us to package the s2i binary into our python package, so users didn’t even need to download s2i separately. It worked great, and we were happy with it!

Moving off s2i

Sometime in July, we started adding support for Julia to binder/repo2docker. This brought up an interesting & vital issue - composability.

If a user had a requirements.txt in their repo and a REQUIRE file, then we’d have to provide both a Python3 and Julia environment. To support this in s2i, we’d have needed to make a python3-julia buildpack.

If it had a requirements.txt, a runtime.txt with contents python-2.7 and a REQUIRE file, we’d have to provide a Python3 environment, a Python2 environment, and a Julia environment. To support this in s2i, we’d have needed to make a python3-python2-julia buildpack.

If it had an environment.yml file and a REQUIRE file, we’d have to provide a conda environment and a Julia environment. To do this, we’d have to make a conda-julia buildpack.

As we add support for other languages (such as R), we’d need to keep expanding the set of buildpacks we had. It’d become a combinatorial explosion of buildpacks. This isn’t a requirement or a big deal for PaaS offerings - usually a container image should only contain one ‘application’, and those are usually built using only one language. If you use multiple languages, you just make them each into their own container & communicate over the network. However, Binder was building images that contained environments that people could explore and do things in, rather than specific applications. Since a lot of scientific computing uses multiple languages (looking at you, the people who do everything in R but scrape using Python), this was a core feature / requirement for Binder. So we couldn’t restrict people to single-language buildpacks.

So I decided that we could generate these combinatorial buildpacks in repo2docker. We could have a script that generates the buildpacks at build time, and then just check in the generated code. This would let us keep using s2i for doing image builds and pushes, and allow others using s2i to use our buildpacks. Win-win!

This had the following problems:

  1. I was generating bash from python. This was quite error prone, since the bash also needed to carefully support the various complex environment specifications we wanted to support.
  2. We needed to sometimes run assemble scripts as root (such as when there is an ‘apt.txt’ requiring package installs). This would require careful usage of sudo in the generated bash for security reasons.
  3. This was very ‘clever’ code, and after running into a few bugs here I was convinced this ‘generate bash with python’ idea was too clever for us to use reliably.

At this point I considered making the assemble script into Python, but then I'd be either generating Python from Python, or basically writing a full library that would be invoked from inside each buildpack. We'd still need to keep repo2docker around (for Dockerfile + Legacy Dockerfile support), and the s2i buildpacks would be quite complex. This would also affect Docker image layer caching, since all activities of assemble are cached as one layer. Since a lot of repositories have similar environments (or are just building successive versions of the same repo), this gives up a good amount of caching.

So instead I decided that the right thing to do here is to dynamically generate a Dockerfile in python code, and build / push the image ourselves. S2I was great for generating a best-practices production container that runs one thing and does it well, but for binder we wanted to generate container images that captured complex environments without regard to what can run in them. Forcing s2i to do what we wanted seemed like trying to get a square peg into a round hole.
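To make that concrete, here is a minimal sketch (not the actual repo2docker code - the snippet contents, paths and function names are made up) of what 'detect which config files a repo has, then compose a Dockerfile out of the corresponding snippets' can look like in Python:

import os

# Hypothetical mapping from config files to Dockerfile snippets; the real
# repo2docker buildpacks are considerably more involved than this.
SNIPPETS = {
    "requirements.txt": (
        "COPY requirements.txt /tmp/requirements.txt\n"
        "RUN pip install --no-cache-dir -r /tmp/requirements.txt"
    ),
    "apt.txt": (
        "COPY apt.txt /tmp/apt.txt\n"
        "RUN apt-get update && xargs -a /tmp/apt.txt apt-get install --yes"
    ),
    "REQUIRE": (
        "COPY REQUIRE /tmp/REQUIRE\n"
        "RUN julia -e 'Pkg.init()'  # made-up Julia setup step"
    ),
}

def generate_dockerfile(repo_path, base_image="ubuntu:16.04"):
    """Compose a Dockerfile from whichever config files the repo contains."""
    lines = ["FROM " + base_image]
    for filename, snippet in SNIPPETS.items():
        if os.path.exists(os.path.join(repo_path, filename)):
            lines.append(snippet)
    # Copy the repository contents in last, so the environment-building layers
    # above can be cached across builds of similar repositories.
    lines.append("COPY . /srv/repo")
    return "\n".join(lines)

print(generate_dockerfile("."))

Because each snippet here becomes its own set of Dockerfile instructions, Docker can cache the environment layers separately from the final copy of the repository contents - the kind of caching that was hard to get when everything happened inside a single s2i assemble step.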

So in this heavily squashed commit I removed s2i, and repo2docker became standalone. It was sad, since I really would have liked to not write extra code & keep leveraging s2i. But the code is cleaner, easier for people to understand and maintain, and the composing works pretty well in understandable ways after we removed it. So IMO it was the right thing to do!

I personally would be happy to go back to using s2i if we can find a clean way to support composability + caching there, but IMO that would make s2i too complex for its primary purpose of building images for a PaaS. I don’t see repo2docker and s2i as competitors, as much as tools of similar types in different domains. Lots of <3 to the s2i / openshift folks!

I hope this was a useful read!

TLDR

S2I was great for generating a best-practices production container that runs one thing and does it well, but for binder we wanted to generate container images that captured complex environments without regard to what can run in them. Forcing s2i to do what we wanted seemed like trying to get a square peg into a round hole.

Thanks to Chris Holdgraf, MinRK and Carol Willing for helping read, reason about and edit this blog post.

maintainerati 2017

I was at maintainerati today, which was super fun & quite intense! I highly appreciate GitHub & the individuals involved in making it happen!

Here’s my key takeaways from this (and several other conversations over the last few weeks leading up to this):

  1. I am now a maintainer, which is quite a different thing from a core contributor or just a contributor. The power dynamics are very different, and so are the responsibilities. I can not ostrich myself into thinking I can just keep writing code and not do anything else - that’s a disservice to not just other folks in the project, but also myself.
  2. Being a maintainer is quite hard emotionally & mentally. I’ve a lot more respect for long running OSS maintainers now than I did before. I have a lot of personal work to do before I become anything like a decent maintainer.
  3. Lots of people love Gerrit, and they also hate Gerrit :D Gerrit is very powerful, but the UX is so user hostile - I don't think these are unrelated. I hope that some of the power of Gerrit transfers to GitHub, but at the same time that GitHub does not become anything like Gerrit! Also, people have very strong opinions about what their git histories should look like - perhaps they spend a lot more time looking through them than I do?
  4. We are slowly developing better ways of dealing with Trolls in projects, but still have a long, long way to go. “Look for the helpers” here.

It was also great to go to a short, well organized (un)conference targeted at a diverse group of people who are still like me in some sense! Would go again, A+++!

designing data intensive applications

I’ve been reading Designing Data Intensive Applications book & am using this post to keep notes!

I've picked up ideas on scaling systems through the years, but never actually sat down to study them semi-formally. This seems like a great start to it!

It’s a pretty big book, and it’s gonna take me a while to go through it :) Will update these notes as I go! Trying to do a chapter a week!

Chapter 1: Defining all the things

"The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free?" - Alan Kay, in an interview with Dr Dobb's Journal (2012)

I keep forgetting what an amazing marvel the internet is and how intensely (and mostly positively, thankfully) it has affected my life. This is a good reminder! However, perhaps to people who haven’t had the privileges I’ve had the Internet doesn’t feel like a natural resource? Unsure! Should ask them!

Lots of modern applications are data intensive, rather than CPU intensive. "Raw CPU power is rarely a limiting factor for these applications—bigger problems are usually the amount of data, the complexity of data, and the speed at which it is changing."

This has been borne out in the infrastructure I've been setting up to help teach people data science - RAM is often the bottleneck, not CPU (barring machine-learning type stuff, but they want GPUs anyway).

Common building blocks for data intensive applications are:

  1. Store data so that they, or another application, can find it again later (databases)
  2. Remember the result of an expensive operation, to speed up reads (caches)
  3. Allow users to search data by keyword or filter it in various ways (search indexes)
  4. Send a message to another process, to be handled asynchronously (stream processing)
  5. Periodically crunch a large amount of accumulated data (batch processing)

These do seem to cover a large variety of bases! I feel fairly comfortable in operating, using and building on top of some of these (databases, caches) but not so much in most (never used a search index, batch processing, nor streams outside of redis). Partially I haven’t felt an intense need for these, but perhaps if I understand them more I’ll use them more? I’ve mostly strived to make everything stateless - but perhaps that’s causing me to shy away from problems that can only be solved with state? /me ponders.
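As a tiny illustration of the 'caches' building block from that list - just a sketch of in-process memoization, nowhere near a real cache service:

import functools
import time

@functools.lru_cache(maxsize=1024)
def expensive_lookup(key):
    # Stand-in for a slow computation or database query
    time.sleep(1)
    return key.upper()

start = time.time()
expensive_lookup("hello")   # slow: actually does the work
print("first call: %.2fs" % (time.time() - start))

start = time.time()
expensive_lookup("hello")   # fast: result is remembered and served from the cache
print("second call: %.2fs" % (time.time() - start))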

Boundaries around ‘data systems’ are blurring - Redis is a cache but can be a message queue, Apache Kafka is a message queue that can have durability guarantees, etc. Lots of applications also need more than can be done with just one tool (aka a ‘pure LAMP’ stack is no longer good enough). Applications often have the job of making sure different data sources are in sync. Everyone is a ‘data designer’, and everyone is kinda fucked.

The book talks about 3 things that are most important to any software system.

Reliability

Means 'continue to work correctly, even when things go wrong'. Things that go wrong are 'faults', and systems need to be 'fault-tolerant' or 'resilient'. Can't be tolerant of all faults, so gotta define what faults we're tolerant of.

Fault isn't failure - a fault is when a component of the system 'deviates from its spec', failure is when the system as a whole stops providing the user the service they want. Can't reduce chances of fault to zero, but can work on reducing failures to zero.

Engineering is building reliable systems from unreliable parts.

Chaos monkeys are good, increase faults to find ways to reduce failure.

Hardware reliability - physical components fail. Nothing you can do about it. Fix it in software.

Hardware faults are usually not correlated - one machine failing doesn't cause another machine to fail. To truly fuck shit up you need software - it can easily cause massive large scale failure! For example, a leap second bug! Or a runaway process that slowly kills every other process on the machine. One of the microservices that 50 of your microservices depend on is slow! Cascading failures! These bugs all lie dormant, until they suddenly aren't and wreak havoc. The software makes some assumption about its environment, which is true until it isn't. No quick solution to systematic software faults.

Human error is worst error. The book offers some suggestions on how to prevent these.

  1. Minimize opportunities for errors - make it easy to do the right thing. But if it’s too restrictive, people will work around it - tricky balance.
  2. Provide full featured sandboxes so people can fuck around without fucking shit up.
  3. AUTOMATICALLY TEST EVERYTHING so when a human does fuck up, they know!
  4. Set up undo functionality, so when human does fuck up, they can roll back!

Learn about telemetry from other disciplines that have been doing this shit for far longer than us. Relevant XKCD

Reliability isn’t just for nukes & aircraft & election systems (haha). Imagine someone loses a video of their kid’s first ever step because you didn’t care. Fucking up is human and we all do it - what is important is that we care.

Sometimes you gotta sacrifice reliability, but make sure that is an explicit & conscious decision. Actually throw away your prototypes! Put FIXMEs in your code. Take a shower. Make sure hacks look, feel and sound hacky!

Scalability

System’s ability to adapt to increased ‘load’ along some axes.

Load is described with various load parameters, which depend on the system (req/s? active users? etc).

Carefully define what this means for your application, and explain your reasoning. You might have to scale in some aspects but not in others.

Once you have the load parameters for your app defined, figure out what happens when you increase load parameters but keep system resources unchanged. After that, try to figure out how much resources need to be increased.

Throughput - number of things that can be done per second. Latency is time it takes to serve a request. These are common things we care about when we move load parameters up and down.

You shouldn’t think of these as single numbers, since they vary a fair bit. Think of these as probability distributions. Learn some statistics! Use percentiles, rather than ‘average’ or ‘mean’.
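For instance, a minimal sketch of summarizing (made-up) response times as percentiles rather than a single mean, using numpy:

import numpy as np

# Made-up response times (in ms) for 10,000 requests - a skewed distribution,
# like real request latencies tend to be.
response_times = np.random.lognormal(mean=4, sigma=0.5, size=10_000)

print("mean:", response_times.mean())
print("p50: ", np.percentile(response_times, 50))
print("p95: ", np.percentile(response_times, 95))
print("p99: ", np.percentile(response_times, 99))
# The mean hides the slow tail; p95 / p99 show what your unluckiest users see.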

High percentile latencies are especially important when you are a service that’s called by many other services - it can cascade down.

No magic scaling sauce - architecture that can scale is different for each application. But there are general purpose building blocks, so worry a little less!

Maintainability

Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live.

Split into three major aspects.

Operability

Make it easy for people to operate your service! Help them monitor the health of the system, observe & debug problems, do capacity planning, keep the production environment stable, prevent single human points of failure (oh, only Chad knows about this system) and many other things!

Simplicity

Don’t make your software a big ball of mud. Take into account that new engineers will have to start working on your software, and they need to understand it quickly.

Use standard tools & approaches they have a higher likelihood of knowing - look around for standard tools before inventing your own!

Watch out for accidental complexity, and keep it to a minimum as much as possible. Abstractions are good, but abstractions are also leaky.

Evolvability

If your software is simple & has good abstractions, you can change it over time without wanting to pull all your hair out.

think os

Following a trail from a wonderful Julia Evans post led me to Allen Downey’s nice textbook manifesto. Also led me to the nice Think OS book, which seems like a super nice introduction to Operating System principles.

It is short enough (~100 pages) that I wanted to read through it. I’ve spent a good chunk of time absorbing how Operating Systems work by dint of diving into things and working through them, but it would be nice to get a refresher on the basics. There are clearly basic things I do not understand, and this seemed like a good way to explore.

This post is just a running series of notes from me reading it on a nice saturday morning.

Stack vs Heap

This is something that has always bugged me. I’ve understood just enough of this by being burnt with pointers when writing C (and primitive types in the CLR, etc), but was lacking a deep understanding of wtf was going on. The fact that these are just process program segments (like text or data) was quite a revelation :D This stackoverflow answer was also quite nice.

One interesting thing for me to investigate later from the book is how this program:

#include <stdio.h>
#include <stdlib.h>

int global;

int main() {
  int local = 5;
  void *p = malloc(128);

  printf("Address of main is %p\n", main);
  printf("Address of local is %p\n", &local);
  printf("Address of global is %p\n", &global);
  printf("Address of p is %p\n", p);

}

produces the following output for the author:

Address of main   is 0x      40057c
Address of local  is 0x7fffd26139c4
Address of global is 0x      60104c
Address of p      is 0x     1c3b010

but for me,

Address of main   is 0x5598fc64c740
Address of local  is 0x7ffeacfaf75c
Address of global is 0x5598fc84d014
Address of p      is 0x5598fc85b010

The point of the program was to demonstrate that text (main), static (global) and heap (p) are near beginning of memory and stack (local) is towards the end. While on my laptop it does seem to be the case too, the ‘start’ seems to be much farther out than on the author’s computer. Need to understand why this is the case. I’ve vaguely heard of address randomization & other security measures in OS kernels - maybe related? For another day!
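If it is address randomization, one quick Linux-specific check (my guess, not something from the book) is to look at the kernel's ASLR setting and see whether the addresses move around between runs:

# Sketch: on Linux, 0 = ASLR off, 1 = partial, 2 = full randomization.
# If this prints 2, the stack / heap addresses in the program above are
# expected to change between runs.
with open("/proc/sys/kernel/randomize_va_space") as f:
    print("randomize_va_space =", f.read().strip())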

Bit twiddling

I continue to find it hard to care about bit twiddling. Most things do use it of course, but it seems to be abstracted away pretty well without leaking too much (except for things that have their own nuances, like floating point representations).

malloc

Nice link to a paper about a common malloc implementation. I also know there are other malloc implementations that programs use (such as jemalloc). Something for me to dive into when I’ve more time.

tbc

I didn’t have time to finish it all, unfortunately. But shall come back to it whenever I can!

learning selinux and apparmor

I am trying to understand SELinux and AppArmor, and collecting resources here as I learn.

SELinux for mere mortals (2014)

This was the first video I watched, and it helped me understand what SELinux does at a fundamental, basic level. It's probably useless in a container-filled world (where I doubt Fedora ships pre-configured SELinux rules for my containers), but it helped me think I understood types / labels, so that seems like a positive step?

The fact the presenter keeps saying things like ‘you being a good sysadmin, ssh into the server and edit the apache config file’ is freakin me out. If I’m constantly editing config files on servers manually that seems like a massive failure to me :D How times change!

Docker and SELinux (2014)

This one made a lot more sense to me as an answer to the following questions:

  1. Aren’t containers secure enough? (Partial answer)
  2. What does SELinux do for container security?

It’s convinced me that container -> host isolation and container <-> container isolation provided by SELinux is pretty simple and super useful, and should be turned on.

This talk also showed me this most wonderful coloring book that tries to explain SELinux. If this is all there is to SELinux, it seems pretty simple and useful (for the container use case).

Also, it looks like there are more recent versions of both these talks - I should look 'em up!

Securing Linux Applications with AppArmor (2007?)

This is me trying to understand AppArmor, which seems to have lower base of support (just Ubuntu? Maybe SUSE, but idk anyone who uses SUSE) but theoretically simpler (mostly file path based). The video seems to be shot with a potato, so the slides aren’t super clear - but the content is good enough to give me a super general overview.

The biggest argument it makes against SELinux seems to be 'SELinux is complex', and not much else. I don't know how much I buy that - but then again, I haven't actually used SELinux anywhere :D

Unlike SELinux, I can actually see AppArmor rules on my local machine (since it is running Ubuntu). Seems fairly readable!

things to build

This is a running list of things I want to build!

There’s an analogous running list of things I want to learn. Things move between them :) I also have higher standards of documentation (other people should be able to use it) before marking these as complete.

  • kubernetes-login A helper to openssh that allows users to log in to a configurable user pod running on a kubernetes cluster. Should ideally support scp / sftp too. Helps get rid of SPOF login nodes

  • just-enough-containment A purely for-learning docker-ish container project written purely in python. Written for pedagogy and personal understanding rather than production use.

python gil resources

I was in a conversation about the Python GIL with friends a few days ago, and realized that my understanding of the specifics of the GIL problem was super hand-wavy & unstructured. So I spent some time collecting resources to learn more, and now have a better understanding!

Python’s Infamous GIL (Larry Hastings)

This was a great introduction to the history of the GIL, why it was necessary & reasons why getting rid of it is complicated.

Understanding the Python GIL (David Beazley)

This has wonderful visualizations that really helped me understand exactly why multi-threaded python behaves the way it does. Multithreading decreases performance, adding more cores decreases performance & disabling cores increases performance :) All of this made vague hand-wavy sense to me before, and make much more concrete sense now.
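A minimal sketch of that effect (along the lines of the counting example in the talk) - pure-Python CPU-bound work doesn't get faster with threads on CPython:

import threading
import time

def count(n):
    # Pure-Python CPU-bound work: holds the GIL the whole time it runs
    while n > 0:
        n -= 1

N = 10_000_000

start = time.time()
count(N)
count(N)
print("sequential: %.2fs" % (time.time() - start))

start = time.time()
t1 = threading.Thread(target=count, args=(N,))
t2 = threading.Thread(target=count, args=(N,))
t1.start(); t2.start()
t1.join(); t2.join()
print("two threads: %.2fs" % (time.time() - start))
# On CPython the threaded version is typically no faster than the sequential
# one, and can be slower because the threads fight over the GIL.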

It isn’t easy to remove the GIL (Guido van Rossum)

A blog post from the BDFL of python, after yet another request to ‘just get rid of the GIL’.

It set the (pretty high) bar for inclusion of a GIL removal patch (that he makes clear he will not write) in Python:

I’d welcome a set of patches into Py3k only if the performance for a single-threaded program (and for a multi-threaded but I/O-bound program) does not decrease.

Not been met yet!

An Inside Look at the GIL Removal Patch of Lore (Dave Beazley)

There was an attempt in about 1999 to remove the GIL - the ‘freethreading’ patch. This is a wonderful analysis of that patch - what it tried to do, why it disappeared, what the performance costs of it were, etc. Something that really stood out to me and makes me feel not very hopeful about GIL removal in CPython was:

Despite removing the GIL, I was unable to produce any performance experiment that showed a noticeable improvement on multiple cores. Really, the only benefit (ignoring the horrible performance) seen in pure Python code, was having preemptible instructions.

This seems to be still true, even in the Gilectomy branch.

Gilectomy (Larry Hastings)

This is the only talk about a recent (~2016) GIL removal attempt.

It is amazing work, but doesn't give me much hope. There have been no new commits to the public git repo for about 5 months now, so I'm unsure what the state of it is.

There’s probably many more - let me know if you know any, and I’ll update this when I find out more!

Gilectomy - 2017 (Larry Hastings)

PyCon 2017 just happened, and Larry Hastings gave another talk!

It seems to have had a lot of intense work done on it, and the wall clock time graph in it warms my heart! I’ve a little more hope now than I did after the 2016 talk :D

systemd simple containment for GUI applications & shells

I earlier had a vaguely working setup for making sure browsers, shells and other applications don’t eat all RAM / CPU on my machine with systemd + sudo + shell scripts.

It was a hacky solution, and also had complications when used to launch shells. It wasn't passing in all the environment variables it should, causing interesting-to-debug issues. The sudo rules were complex, and hard to do securely.

I had also been looking for an excuse to learn more Golang, so I ended up writing systemd-simple-containment or ssc.

It’s a simple golang application that produces a binary that can be setuid to root, and thus get around all our sudo complexity, at the price of having to be very, very careful about the code. Fortunately, it’s short enough (~100 lines) and systemd-run helps it keep the following invariants:

  1. It will never spawn any executable as any user other than the ‘real’ uid / gid of the user calling the binary.
  2. It doesn’t allow arbitrary systemd properties to be set, ensuring a more limited attack surface.

However, this is the first time I'm playing with setuid and with Go, so I probably fucked something up. I feel ok enough about my understanding of real and effective uids for now to use it myself, but not to recommend it to other people. Hopefully I'll be confident enough to say that soon :)

By using a real programming language, I also easily get commandline flags for sharing a tty or not (so I can use the same program for launching GUI & interactive terminal applications), pass all environment variables through (which can't be just standard child inheritance, since systemd-run doesn't work that way) & the ability to setuid (you can't do that easily to a script).

I was sure I'd hate writing Go because of the constant if err != nil checks, but it hasn't bothered me that much. I would like to write more Go, to get a better feel for it. This code is too short to make me like a language, but I definitely hate it less :)

Anyway, now I can launch GUI applications with ssc -tty=false -isolation=strict firefox and it does the right thing. I currently have -isolation=strict and -isolation=relaxed available, the former performing stronger sandboxing (NoNewPrivileges, PrivateTmp) than the latter (just MemoryMax). I'll slowly add more protections here, but keep just the two modes (ideally).

My Gnome Terminal shell command is now ssc -isolation=relaxed /bin/bash -i and it works great :)

I am pretty happy with ssc as it exists now. Only thing I now want to do is to be able to use it from the GNOME launcher (I am using GNOME3 with gnome-shell). Apparently shortcuts are no longer cool and hence pretty hard to create in modern desktop environments :| I shall keep digging!

systemd gui applications

Update: There’s a follow-up post with a simpler solution now.

Ever since I read Jessie Frazelle’s amazing setup (1, 2, 3) for running GUI applications in docker containers, I’ve wanted to do something similar. However, I want to install things on my computer - not in docker images. So what I wanted was just isolation (no more Chrome / Firefox freezing my laptop), not images. I’m also not as awesome (or knowledgeable!) as Jess, so will have to naturally settle for less…

So I am doing it in systemd!

Before proceeding, I want to warn y’all that I don’t entirely know what I am doing. Don’t take any of this as security advice, since I don’t entirely understand X’s security model. Works fine for me though!

GUI applications

I started out using a simple systemd templated service to launch GUI applications, but soon realized that systemd-run is probably the better way. So I’ve a simple script, /usr/local/bin/safeapp:

#!/bin/bash
exec sudo systemd-run  \
    -p CPUQuota=100% \
    -p MemoryMax=70% \
    -p WorkingDirectory=$(pwd) \
    -p PrivateTmp=yes \
    -p NoNewPrivileges=yes \
    --setenv DISPLAY=${DISPLAY} \
    --setenv DBUS_SESSION_BUS_ADDRESS=${DBUS_SESSION_BUS_ADDRESS} \
    --uid ${USER} \
    --gid ${USER} \
    --quiet \
    "$1"

I can run safeapp /opt/firefox/firefox now and it’ll start firefox inside a nice systemd unit with a 70% Memory usage cap and CPU usage of at most 1 CPU. There’s also other minimal security stuff applied - NoNewPrivileges being the most important one. I want to get ProtectSystem + ReadWriteDirectories going too, but there seems to be a bug in systemd-run that doesn’t let it parse ProtectSystem properly…

Also, there’s an annoying bug in systemd v231 (which is what my current system has) - you can’t set CPUQuotas over 100% (aka > 1 CPU core). This is annoying if you want to give each application 3 of your 4 cores (which is what I want). Next version of Ubuntu has v232, so my GUI applications will just have to do with an aggregate of 1 full core until then.

The two environment variables seem to be all that’s necessary for X applications to work.

And yes, this might ask you for your password. I’ll clean this up into a nice non-bash script hopefully soon, and make all of these better.

Anyway, it works! I can now open sketchy websites with scroll hijacking without fear it’ll kill my machine!

CLI

I wanted each tab in my terminal to be its own systemd service, so they all get an equitable amount of CPU time & can't crash the machine by themselves with OOM.

So I’ve this script as /usr/local/bin/safeshell

#!/bin/bash
exec sudo systemd-run \
    -p CPUQuota=100% \
    -p MemoryMax=70% \
    -p WorkingDirectory=$(pwd) \
    --uid yuvipanda \
    --gid yuvipanda \
    --quiet \
    --tty \
    /bin/bash -i

The --tty is magic here, and does the right things wrt passing the tty that GNOME terminal is passing in all the way to the shell. Now, my login command (set under profile preferences > command in gnome-terminal) is sudo /usr/local/bin/safeshell. In addition, I add the following line to /etc/sudoers:

%sudo ALL = (root) NOPASSWD:SETENV: /usr/local/bin/safeshell

This + just specifying the username directly in safeshell are both hacks that make me cringe a little. I need to either fully understand how sudo’s -E works, or use this as an opportunity to learn more Go and make a setuid binary.

To do

[ ] Generalize this to not need hacks (either with better sudo usage or a setuid binary)
[ ] Investigate adding more security related options.
[ ] Make these work with desktop / dock icons.

I'd normally have just never written this post, on account of 'oh no, it is imperfect' or something like that. However, that also seems to have come in the way of my ability to find joy in learning simple things :D So I shall follow b0rk's lead in spending time learning for fun again :)