What the research is:
Recent research has demonstrated that binary optimization is important for achieving peak performance in a variety of applications. For instance, the state-of-the-art BOLT binary optimizer, developed at Meta and part of the LLVM project, significantly improves the performance of highly optimized binaries produced using compilers’ most aggressive optimizations, such as profile-guided and link-time optimizations.
In this research, we propose a novel approach to applying binary optimization without the need to profile the application. Our technique, called Vintage ESP Amended (VESPA), builds on a previous technique called evidence-based static prediction (ESP), which applies machine learning to statically infer the direction of branch instructions in a program.
VESPA extends ESP in several ways to make it useful in the context of binary optimizers, broadening the range of applications that can leverage these tools to improve their performance. In particular, it enables higher performance and a better user experience for many software applications that were previously out of reach of binary optimizers, such as end-user mobile applications.
How it works:
VESPA obtains the profile information that binary optimizers like BOLT require statically, i.e., without executing the target application to produce profile data. To achieve this, VESPA employs machine learning. First, during a training phase, VESPA is provided with a set of applications and their corresponding dynamic profiles. Using these, VESPA trains a neural network model that learns the probability that a branch instruction will be taken based on various static program characteristics (e.g., the condition code of the branch or whether the target block is a loop header).
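As a rough illustration, the sketch below trains a tiny classifier on synthetic branch data using scikit-learn. The feature set, labels, and network shape are hypothetical stand-ins rather than VESPA's actual design:

```python
# A minimal sketch of a VESPA-style training phase, assuming a small
# feed-forward network over hand-crafted static branch features. The
# features, encoding, and network shape are illustrative, not VESPA's own.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row encodes one branch from the training binaries (hypothetical features):
#   cond_code             - encoded condition code of the branch
#   target_is_loop_header - whether the branch target starts a loop
#   is_backward           - whether the branch jumps to a lower address
# Labels come from the dynamic profiles: 1 if the branch was mostly taken.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 3)).astype(float)  # stand-in feature matrix
y = X[:, 1].astype(int) | X[:, 2].astype(int)         # stand-in labels

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X, y)

# At inference time, predict_proba yields the taken-probability of an unseen branch.
p_taken = model.predict_proba(np.array([[1.0, 0.0, 1.0]]))[0, 1]
print(f"P(taken) = {p_taken:.2f}")
```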
After this model is produced, it can be used to infer the probability that branches in other programs will be taken. VESPA then transforms these probabilities into code frequencies: estimates of how often each part of the program (e.g., each basic block) will execute, the same kind of information that a binary optimizer normally obtains from dynamic profiles collected by executing an application. Once the static profile data produced by VESPA is injected into a binary optimizer, the tool proceeds with its optimization steps as usual, completely oblivious to how the profile data was computed. VESPA can therefore be integrated into existing binary optimizers with little effort, which we demonstrated by integrating it into Meta’s BOLT binary optimizer.
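To make the probability-to-frequency step concrete, one standard approach, in the spirit of Wu and Larus's classic frequency-propagation algorithm, pushes an entry frequency of 1.0 through the control-flow graph along the predicted edge probabilities. The sketch below uses a hypothetical acyclic CFG; a real derivation, such as VESPA's, must also handle loops:

```python
# A minimal sketch of deriving block execution frequencies from predicted
# branch probabilities, in the spirit of Wu-Larus frequency propagation.
# For simplicity it assumes an acyclic CFG.
from collections import defaultdict
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical CFG: each block maps to its successors and the model's
# predicted probability of taking each outgoing edge.
edges = {
    "entry": {"b1": 0.75, "b2": 0.25},
    "b1":    {"exit": 1.0},
    "b2":    {"exit": 1.0},
    "exit":  {},
}

def block_frequencies(edges, entry="entry"):
    # Invert the edge map so each block knows its predecessors.
    preds = defaultdict(dict)
    for src, succs in edges.items():
        for dst, prob in succs.items():
            preds[dst][src] = prob
    # Visit blocks in topological order so every predecessor's frequency
    # is final before it is propagated onward.
    freq = defaultdict(float)
    freq[entry] = 1.0  # normalize: the entry block executes once
    for block in TopologicalSorter({b: preds[b].keys() for b in edges}).static_order():
        for succ, prob in edges[block].items():
            freq[succ] += freq[block] * prob
    return dict(freq)

print(block_frequencies(edges))
# {'entry': 1.0, 'b1': 0.75, 'b2': 0.25, 'exit': 1.0}
```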
Compared to the seminal ESP technique that inspired our work, VESPA provides three main improvements:
- An enhanced neural-network model
- New program features to improve the model’s accuracy
- A technique to derive the code frequencies required by binary optimizations, rather than just branch directions
Why it matters:
BOLT can provide performance speedups of about 20 percent not only for many of Meta’s widely deployed server workloads, but also for other widely used open source applications such as compilers (e.g., GCC and Clang) and database systems (e.g., MySQL and PostgreSQL). To achieve these results, BOLT relies on very accurate dynamic profile data collected by executing the target applications on representative inputs. Unfortunately, collecting this profile data adds complexity and overhead to applications’ build processes, and sometimes it is not even possible, as in the case of mobile applications executing on user devices.
Using VESPA to derive static profiles for the BOLT binary optimizer, our work demonstrates that a 6 percent speedup can be achieved on top of highly optimized binaries built with Clang -O3, without dynamically profiling the application. As such, our research demonstrates that binary optimizations can be beneficial even in scenarios where dynamic profiling is impractical or impossible, opening new opportunities for binary optimizers in domains such as end-user mobile applications.