To understand BigPipe, it’s helpful to take a look at the problems with the existing dynamic web page serving system, which dates back to the early days of the World Wide Web and has not changed much since then. Modern websites have become dramatically more dynamic and interactive than they were 10 years ago, but the traditional page serving model has not kept up with the speed requirements of today’s Internet. In the traditional model, the life cycle of a user request is the following:
- Browser sends an HTTP request to web server.
- Web server parses the request, pulls data from the storage tier, then formulates an HTML document and sends it to the client in an HTTP response.
- HTTP response is transferred over the Internet to browser.
- Browser parses the response, constructs a DOM tree of the HTML document, and downloads the CSS resources it references.
- After downloading the CSS resources, browser parses them and applies them to the DOM tree.
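To make the sequential nature of this model concrete, here is a minimal sketch using Python’s standard http.server; it is an illustration, not any particular framework, and the fetch_* helpers are hypothetical stand-ins for storage-tier queries. Nothing reaches the browser until every query has finished and the complete document has been generated.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def fetch_navigation_data():
    # Hypothetical storage-tier query; the browser sees nothing until
    # every one of these calls has completed.
    return ["Home", "Profile", "Messages"]

def fetch_news_feed_data():
    # Another hypothetical storage-tier query.
    return ["story 1", "story 2"]

class TraditionalHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 1. Pull all the data the page needs from the storage tier.
        items = fetch_navigation_data() + fetch_news_feed_data()
        # 2. Only then generate the complete HTML document...
        body = ("<html><body><ul>"
                + "".join(f"<li>{item}</li>" for item in items)
                + "</ul></body></html>").encode()
        # 3. ...and ship it to the browser as one monolithic response.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8000), TraditionalHandler).serve_forever()
```

While the server is busy with steps 1 and 2, the browser is idle; while the browser is downloading CSS and building the DOM, the server is idle. That lack of overlap is exactly what BigPipe targets.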
How BigPipe works
To exploit the parallelism between web server and browser, BigPipe first breaks web pages into multiple chunks called pagelets. Just as a pipelining microprocessor divides an instruction’s life cycle into multiple stages (such as “instruction fetch”, “instruction decode”, “execution”, “register write back” etc.), BigPipe breaks the page generation process into several stages:
- Request parsing: web server parses and sanity checks the HTTP request.
- Data fetching: web server fetches data from storage tier.
- Markup generation: web server generates HTML markup for the response.
- Network transport: the response is transferred from web server to browser.
- CSS downloading: browser downloads CSS required by the page.
- DOM tree construction and CSS styling: browser constructs the DOM tree of the document, and then applies CSS rules to it.
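The sketch below illustrates this staging with Python’s standard http.server, purely as an illustration rather than Facebook’s actual implementation: the pagelet names, the render_* helpers, and the client-side onPageletArrive() function are assumptions made for the example. The key point is that the page skeleton and each pagelet are flushed to the browser as soon as they are ready, so CSS downloading and DOM construction in the browser overlap with data fetching and markup generation on the server.

```python
# A minimal sketch of BigPipe-style streaming (assumed helpers, not
# Facebook's PHP/JavaScript implementation).
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_navbar():
    # Hypothetical pagelet renderer; each pagelet can hit a different
    # backend with its own latency.
    time.sleep(0.1)
    return "<ul>navigation goes here</ul>"

def render_news_feed():
    # Hypothetical pagelet renderer for the slowest part of the page.
    time.sleep(0.5)
    return "<div>news feed stories go here</div>"

PAGELETS = [("navbar", render_navbar), ("news_feed", render_news_feed)]

class BigPipeStyleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        # 1. Flush the page skeleton immediately: the CSS link plus one empty
        #    placeholder <div> per pagelet, so the browser can start
        #    downloading CSS and building the DOM while the server keeps
        #    fetching data and generating markup.
        skeleton = '<html><head><link rel="stylesheet" href="/site.css"></head><body>'
        skeleton += "".join(f'<div id="{name}"></div>' for name, _ in PAGELETS)
        self.wfile.write(skeleton.encode())
        self.wfile.flush()
        # 2. As each pagelet's markup becomes ready, stream a small script
        #    that hands it to an assumed client-side function, which would
        #    inject the content into the matching placeholder.
        for name, render in PAGELETS:
            payload = json.dumps({"id": name, "content": render()})
            self.wfile.write(f"<script>onPageletArrive({payload});</script>".encode())
            self.wfile.flush()
        self.wfile.write(b"</body></html>")

HTTPServer(("", 8000), BigPipeStyleHandler).serve_forever()
```

In this sketch the response body is delimited by closing the connection (HTTP/1.0 style); a production server would more likely use chunked transfer encoding so the connection can be kept alive, and the assumed client-side onPageletArrive() function would insert each pagelet’s content and pull in whatever resources that pagelet still needs.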
Changhao Jiang is a Research Scientist at Facebook who enjoys making the site faster in innovative ways.