Better, Faster, Stronger

Since we released our new coding interface, a number of users have had difficulty running code and loading exercises over the past week. I apologize for this. These issues are not caused by the new interface itself; they stem from deeper problems with how we evaluate code submissions. I wanted to share some background on what is happening and what we are doing to build a stronger code evaluation platform going forward.


It is possible, but not at all easy, to run languages like Ruby, Python, and PHP directly in the browser. In fact, we used to do exactly that by serving an interpreter for each language, compiled to JavaScript, to your browser!

We ultimately found this approach too fragile for production, and last summer we launched the Python language track on top of a server-side code evaluation service that we call Codex. Codex was originally designed to run Python code in a secure environment. As we added new language tracks like Ruby and PHP, we began to push it to its limits.

When we designed the new interface, we built it to be more responsive. Unfortunately, this put load on Codex in ways it was never designed to handle, resulting in spotty performance. Making Codex more robust will let the new interface deliver on its promise.

What we’re doing about it

Codex has gone through some significant growing pains as we’ve rapidly expanded both our user base and the ways we use the service.

We pushed a series of fixes to take it off life support, and we are doubling down on our investment in it as the backbone of our coding experience. We are refactoring core aspects of the service to handle much larger loads and adding support for multi-process environments (e.g. running MongoDB, Node.js, and Redis together in a single app).
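To give a concrete sense of what a multi-process environment looks like from a learner's point of view, here is a minimal sketch (not Codex code) of an app that uses all three services at once: a small Node.js server, written in TypeScript, that counts visits in Redis and logs each hit to MongoDB. The port numbers, database, and collection names are illustrative assumptions.

```typescript
// Minimal sketch: one app talking to two sibling processes (MongoDB and Redis).
// Assumes local default ports; names like "app" and "hits" are made up for the example.
import { createServer } from "http";
import { MongoClient } from "mongodb";
import { createClient } from "redis";

async function main() {
  // Connect to the MongoDB and Redis processes running alongside the app.
  const mongo = await MongoClient.connect("mongodb://localhost:27017");
  const redis = createClient({ url: "redis://localhost:6379" });
  await redis.connect();

  const server = createServer(async (_req, res) => {
    // Count visits in Redis and record each one in MongoDB.
    const visits = await redis.incr("visits");
    await mongo.db("app").collection("hits").insertOne({ at: new Date() });
    res.end(`Hello! You are visitor #${visits}\n`);
  });

  server.listen(3000);
}

main().catch(console.error);
```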

This will ensure that Codex is prepared to grow with users as they tackle an ever-increasing variety of topics and technologies. Thank you for your patience, and we’re excited to see what you build!

If you are experiencing these issues, please see our support article here.