Behind the Engine Cover

As Node.js continues to gain traction, offering qualities on par with, and often superior to, traditional technologies, optimizing our app/service becomes increasingly important. Whether we're building a monolithic server or a microservices-oriented solution, we'd like to squeeze more juice out of the underlying CPU/VM by running optimized code.

Putting aside well-known yet trivial approaches (such as favoring for loops over forEach with iterables), the next level is looking into the underlying V8 engine powering Node.js.
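As a quick illustration of that trivial approach (a sketch; the actual performance gap depends on the engine version and data):

```javascript
// Summing an array two ways; both are correct, but a plain for loop
// avoids the per-element callback invocation that forEach incurs.
const data = Array.from({ length: 1000000 }, (_, i) => i);

function sumForEach(arr) {
    let total = 0;
    arr.forEach((v) => { total += v; });
    return total;
}

function sumForLoop(arr) {
    let total = 0;
    for (let i = 0; i < arr.length; i++) total += arr[i];
    return total;
}

console.log(sumForEach(data) === sumForLoop(data)); // same result either way
```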

As of version 8.3.0, Node.js ships with the Ignition and TurboFan execution pipeline, Ignition being the interpreter and TurboFan being the optimizing compiler.

In a nutshell, Ignition takes the parsed JS code, compiles it into V8 bytecode (if you're keen on seeing it, just run node --print-bytecode index.js), a platform-independent abstraction, and interprets it. While our app executes, statistics are collected, and when certain conditions are met (e.g. a function is considered "hot") TurboFan kicks in, optimizes that part of the code and compiles it into actual machine code.

Some of the things TurboFan does are outright mind-blowing, such as continuing to run a loop from its current point after optimizing the code, known as on-stack replacement (OSR), among other amazing tricks.

An important thing to keep in mind is that TurboFan makes speculative assumptions based on observations collected while running the code, so when these assumptions no longer hold true TurboFan backtracks and de-optimizes that code.
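As a sketch of the kind of assumption-breaking change involved (a hypothetical example; the actual heuristics are V8-internal):

```javascript
function add(a, b) { return a + b; }

// While this loop runs, V8 only ever sees small integers here, so TurboFan
// can speculate that `+` is integer addition and optimize accordingly.
let total = 0;
for (let i = 0; i < 100000; i++) total += add(i, 1);

// The first call with strings invalidates that speculation; V8 must
// de-optimize add() and fall back to the generic (slower) path.
const label = add('de', 'opt');
console.log(total, label);
```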

So why is all this important for us as Node.js app developers? Because if we can peek into the process and detect what does or doesn't get optimized, and when hot code parts are de-optimized, we can rewrite those parts and help V8 run our code faster.

Our Tools

One of the first tools that comes to mind for gaining insight into how V8 processes our code is Turbolizer, a globally installed npm package that produces precious information when running our app with the --trace-turbo flag, and even provides a web UI to navigate the results. The bad news is that optimization information is currently hardly produced, so for the time being Turbolizer is not particularly useful for gaining optimization/de-optimization insights.

We will look at a simple, lower-level alternative: running our app instrumented with V8 natives syntax, a set of built-in functions we can use to peek under the hood, simply by running node with the --allow-natives-syntax flag. For the scope of this post we're going to look at the following functions:

  • %GetOptimizationStatus(func)
  • %OptimizeFunctionOnNextCall(func) // forces TurboFan to mark func for optimization

Note: you can find the complete list of functions here.
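As a quick smoke test of the natives syntax (a sketch: the file name natives-demo.js is arbitrary, and note that newer V8 versions also require a %PrepareFunctionForOptimization call before %OptimizeFunctionOnNextCall):

```shell
# Write a minimal script that uses V8 natives syntax
cat > natives-demo.js <<'EOF'
function add(a, b) { return a + b; }
%PrepareFunctionForOptimization(add); // required on newer V8 versions
add(1, 2);
add(3, 4);
%OptimizeFunctionOnNextCall(add);
add(5, 6);
console.log(%GetOptimizationStatus(add)); // prints a raw bit-flag integer
EOF

# Without the flag, the % calls are a SyntaxError; with it, they work
node --allow-natives-syntax natives-demo.js
```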

%GetOptimizationStatus(func)

This function returns an integer containing a set of bit flags. The list of flags can be found here (currently around line number 760, search for "enum class OptimizationStatus").

To easily interpret the returned optimization status, we can use the following utility functions:

function checkBit(test, bit) {
    return !!(test & (1 << bit));
}

function unpackStatus(optStatus) {
    return {
        IsFunction: checkBit(optStatus, 0),
        NeverOptimize: checkBit(optStatus, 1),
        AlwaysOptimize: checkBit(optStatus, 2),
        MaybeDeopted: checkBit(optStatus, 3),
        Optimized: checkBit(optStatus, 4),
        TurboFanned: checkBit(optStatus, 5),
        Interpreted: checkBit(optStatus, 6),
        MarkedForOptimization: checkBit(optStatus, 7),
        MarkedForConcurrentOptimization: checkBit(optStatus, 8),
        OptimizingConcurrently: checkBit(optStatus, 9),
        IsExecuting: checkBit(optStatus, 10),
        TopmostFrameIsTurboFanned: checkBit(optStatus, 11),
    };
}

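To sanity-check these helpers, here is a self-contained decode of a made-up status value, 0b110001, which has bits 0, 4 and 5 set:

```javascript
// Compact restatement of the helpers, for a self-contained demo
const checkBit = (test, bit) => !!(test & (1 << bit));
const FLAGS = [
    'IsFunction', 'NeverOptimize', 'AlwaysOptimize', 'MaybeDeopted',
    'Optimized', 'TurboFanned', 'Interpreted', 'MarkedForOptimization',
    'MarkedForConcurrentOptimization', 'OptimizingConcurrently',
    'IsExecuting', 'TopmostFrameIsTurboFanned',
];
const unpack = (status) =>
    Object.fromEntries(FLAGS.map((flag, i) => [flag, checkBit(status, i)]));

// 0b110001 = bit 0 (IsFunction) + bit 4 (Optimized) + bit 5 (TurboFanned)
console.log(unpack(0b110001));
```

Every other flag decodes to false, matching the shape of the status printouts we'll see below.
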
%OptimizeFunctionOnNextCall(func)

We will use this function to ensure the function we're interested in is marked for optimization early on, in case it is not called often enough in our test code (in which case TurboFan will never get to optimize it) or is simply not considered "hot" enough to optimize.

It is important to note that we need to call our function of interest at least once before invoking %OptimizeFunctionOnNextCall.

Note that sometimes we might be interested to see when TurboFan decides to optimize a function on its own, in which case we would not use %OptimizeFunctionOnNextCall and would just let TurboFan operate normally. For example, we might want to know how many times a function is called before it is considered "hot".

Checking Optimization Status

Equipped with this knowledge, let's run a simple app and gain some insights.

function sumArguments(...args) {
    return args.reduce((acc, cur) => acc + cur, 0);
}

function optimizationTest() {
    for (let i = 0; i < 1000000; i++) {
        sumArguments(42, 1, 99, 123, i);
        const optStatus = %GetOptimizationStatus(sumArguments);
        const unpackedStatus = unpackStatus(optStatus);
        if (unpackedStatus.Optimized) {
            console.log(`sumArguments has been optimized after ${i} iterations`, unpackedStatus);
            break;
        }
    }
}

optimizationTest();

Running the above app with the --allow-natives-syntax flag yields the following printout:

sumArguments has been optimized after 2054 iterations 
{ IsFunction: true,
  NeverOptimize: false,
  AlwaysOptimize: false,
  MaybeDeopted: false,
  Optimized: true,
  TurboFanned: true,
  Interpreted: false,
  MarkedForOptimization: false,
  MarkedForConcurrentOptimization: false,
  OptimizingConcurrently: false,
  IsExecuting: false,
  TopmostFrameIsTurboFanned: false }
We can clearly see that the function was optimized by TurboFan after 2,054 iterations. Note that the exact iteration count is not fixed: it depends on the arguments we pass (their number and types) and on the V8 version, so adding or changing arguments shifts the point at which the function is considered hot.

Marking for Optimization

Sometimes we’d like to have our code optimized early on, without waiting for TurboFan to decide the function is hot. In such cases we’ll use the OptimizeFunctionOnNextCall native function.

Adding the following couple of lines just before running our “app” will do just that:

sumArguments(1, 2, 3); // call at least once so V8 has feedback to work with
%OptimizeFunctionOnNextCall(sumArguments);
This yields the following output:

sumArguments has been optimized after 0 iterations 
{ ...
  Optimized: true,
  TurboFanned: true,
  ... }

We can now see optimization occurred immediately instead of after 2,054 iterations.

Note sumArguments must be called at least once sometime before marking it for optimization, otherwise this would not work.

Observing De-optimization

Try-catch statements are commonly cited as bad for optimization in JS. Let's use this as an example and see how throwing affects the optimization of our code.

function isFunctionOptimized(func) {
    const optStatus = %GetOptimizationStatus(func);
    const unpackedStatus = unpackStatus(optStatus);
    return unpackedStatus.Optimized;
}

function sumArguments(...args) {
    let total = args.reduce((acc, cur) => acc + cur, 0);
    if (args.some(val => val > 1000000)) {
        throw new Error('arg values limited to 1M');
    }
    return total;
}

let wasOptimized = false;

function optimizationTest() {
    for (let i = 0; i < 10000000; i++) {
        try {
            sumArguments(42, 1, 99, 123, i);
        } catch (err) {}

        const isOptimized = isFunctionOptimized(sumArguments);
        if (isOptimized && !wasOptimized) {
            console.log(`sumArguments is now optimized (${i} iterations)`);
            wasOptimized = true;
        } else if (wasOptimized && !isOptimized) {
            console.log(`sumArguments is now de-optimized (${i} iterations)`);
            break;
        }
    }
}

optimizationTest();

The output we get is:

sumArguments is now optimized (5784 iterations)
sumArguments is now de-optimized (1000002 iterations)
As we can see, our function was optimized after 5,784 iterations; however, as soon as it started throwing exceptions it was de-optimized.

It is interesting to see that TurboFan was more conservative in optimizing this time (5,784 iterations compared to 2,054 previously); however, as it observed that no exception was thrown for a while, it did optimize our function. Then, when that observation no longer held (the function started throwing), the function was de-optimized.

What Now?

The above is by no means a definitive guide to optimizing Node.js apps; there is plenty more to explore. For example, you could run any Node.js app with the --trace-opt --trace-deopt flags and get information pertaining to each optimization/de-optimization decision (location, reason).
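For instance (a sketch; hot.js and square are made-up names for illustration), the following shows the kind of trace you'd get:

```shell
cat > hot.js <<'EOF'
function square(n) { return n * n; }
let acc = 0;
for (let i = 0; i < 1000000; i++) acc += square(i % 100);
console.log('done:', acc);
EOF

# --trace-opt logs when a function is marked/optimized,
# --trace-deopt logs when (and why) it gets de-optimized
node --trace-opt --trace-deopt hot.js
```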

Happy Optimizing!