The general idea is to not let the headless browser do any extra work. This might include loading images, applying CSS rules, firing XHR requests, etc.
As with other tools, optimization of Puppeteer depends on the exact use case, so keep in mind that some of these ideas might not be suitable for your project. For instance, if we had avoided loading images in our first example, our screenshot might not have looked how we wanted.
Anyway, these optimizations can be accomplished either by caching the assets on the first request or by canceling the HTTP requests outright as they are initiated by the website.
Let’s see how caching works first.
You should be aware that when you launch a new headless browser instance, Puppeteer creates a temporary directory for its profile. That directory is removed when the browser is closed and is not available when you fire up a new instance, so all the images, CSS, cookies, and other objects it stored are no longer accessible.
We can force Puppeteer to use a custom path for storing data like cookies and cache, which will be reused on every subsequent run until the entries expire or are manually deleted:
const browser = await puppeteer.launch({
  userDataDir: './data',
});
This should give us a nice bump in performance, as lots of CSS and images will be cached in the data directory upon the first request, and Chrome won’t need to download them again and again.
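As a quick sanity check, a minimal sketch like the following can show the cache at work. The './data' path matches the snippet above, and the timing output is illustrative rather than a benchmark from this article. Run it twice and compare the two load times:

const puppeteer = require('puppeteer');

(async () => {
  // Reuse the same profile directory across runs so cached assets survive.
  const browser = await puppeteer.launch({ userDataDir: './data' });
  const page = await browser.newPage();

  const start = Date.now();
  await page.goto('https://news.ycombinator.com/');
  console.log(`Page loaded in ${Date.now() - start} ms`);

  await browser.close();
})();

On the second run, most static assets should come out of the cache in ./data instead of going over the network, which is where the speedup comes from.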
However, those assets will still be used when rendering the page. For our purposes, scraping Y Combinator news articles, we don’t need to worry about any visuals, including the images. We only care about the bare HTML output, so let’s try to block every request that isn’t the document itself.
Luckily, Puppeteer is pretty cool to work with here, because it comes with support for intercepting requests. We can provide an interceptor for every request and cancel the ones we don’t really need.
The interceptor can be defined in the following way:
await page.setRequestInterception(true);

page.on('request', (request) => {
  if (request.resourceType() === 'document') {
    request.continue();
  } else {
    request.abort();
  }
});
As you can see, we have full control over the requests that get initiated. We can write custom logic to allow or abort specific requests based on their resourceType(). We also have access to lots of other data, like request.url(), so we can block only specific URLs if we want.
In the above example, we only allow requests with the resource type of 'document' to get through our filter, meaning that we block all images, CSS, and everything else besides the original HTML response.
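For instance, a hypothetical variant of the interceptor could abort requests to specific third-party hosts rather than filtering by resource type. The hostnames below are made-up examples, not something the page necessarily loads:

await page.setRequestInterception(true);

page.on('request', (request) => {
  // Hypothetical blocklist; adjust to whatever the target page actually requests.
  const blockedHosts = ['google-analytics.com', 'doubleclick.net'];
  if (blockedHosts.some((host) => request.url().includes(host))) {
    request.abort();
  } else {
    request.continue();
  }
});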
Here’s our final code:
const puppeteer = require('puppeteer');

function run(pagesToScrape) {
  return new Promise(async (resolve, reject) => {
    try {
      if (!pagesToScrape) {
        pagesToScrape = 1;
      }
      const browser = await puppeteer.launch();
      const page = await browser.newPage();

      // Abort everything except the main HTML document.
      await page.setRequestInterception(true);
      page.on('request', (request) => {
        if (request.resourceType() === 'document') {
          request.continue();
        } else {
          request.abort();
        }
      });

      await page.goto("https://news.ycombinator.com/");
      let currentPage = 1;
      let urls = [];
      while (currentPage <= pagesToScrape) {
        await page.waitForSelector('a.storylink');
        // Collect the URL and title of every story link on the page.
        let newUrls = await page.evaluate(() => {
          let results = [];
          let items = document.querySelectorAll('a.storylink');
          items.forEach((item) => {
            results.push({
              url: item.getAttribute('href'),
              text: item.innerText,
            });
          });
          return results;
        });
        urls = urls.concat(newUrls);
        if (currentPage < pagesToScrape) {
          // Click "More" and wait for the next page of stories to render.
          await Promise.all([
            page.waitForSelector('a.morelink'),
            page.click('a.morelink'),
            page.waitForSelector('a.storylink'),
          ]);
        }
        currentPage++;
      }
      await browser.close();
      return resolve(urls);
    } catch (e) {
      return reject(e);
    }
  });
}

run(5).then(console.log).catch(console.error);