# node-crawler

The most powerful, popular and production-ready crawling/scraping package for Node.js. Happy hacking :)

Features:

  • server-side DOM & automatic jQuery insertion with Cheerio (default) or JSDOM
  • Configurable pool size and retries
  • Control rate limit
  • Priority queue of requests
  • forceUTF8 mode to let Crawler handle charset detection and conversion for you
  • Compatible with Node.js 4.x or newer

See the CHANGELOG for version history.

Thanks to Authuir, we have Chinese docs. Other languages are welcome!

Get started

Install

$ npm install crawler

Basic usage

var Crawler = require("crawler");

var c = new Crawler({
    maxConnections : 10,
    // This will be called for each crawled page
    callback : function (error, res, done) {
        if(error){
            console.log(error);
        }else{
            var $ = res.$;
            // $ is Cheerio by default
            //a lean implementation of core jQuery designed specifically for the server
            console.log($("title").text());
        }
        done();
    }
});

// Queue just one URL, with default callback
c.queue('http://www.amazon.com');

// Queue a list of URLs
c.queue(['http://www.google.com/','http://www.yahoo.com']);

// Queue URLs with custom callbacks & parameters
c.queue([{
    uri: 'http://parishackers.org/',
    jQuery: false,

    // The global callback won't be called
    callback: function (error, res, done) {
        if(error){
            console.log(error);
        }else{
            console.log('Grabbed', res.body.length, 'bytes');
        }
        done();
    }
}]);

// Queue some HTML code directly without grabbing (mostly for tests)
c.queue([{
    html: '<p>This is a <strong>test</strong></p>'
}]);

Slow down

Use rateLimit to slow down your requests when visiting websites.

var Crawler = require("crawler");

var c = new Crawler({
    rateLimit: 1000, // `maxConnections` will be forced to 1
    callback: function(err, res, done){
        console.log(res.$("title").text());
        done();
    }
});

c.queue(tasks); // between any two tasks, the minimum time gap is 1000 ms

Custom parameters

Sometimes you need to access variables from a previous request/response session. To do that, simply pass them in alongside the other options:

c.queue({
    uri:"http://www.google.com",
    parameter1:"value1",
    parameter2:"value2",
    parameter3:"value3"
})

Then access them in the callback via res.options:

console.log(res.options.parameter1);

Crawler only picks out the options that request needs, so don't worry about the redundancy.
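
Putting it together, here is a minimal sketch (parameter1 and its value are just placeholders):

var Crawler = require("crawler");

var c = new Crawler({
    callback: function(error, res, done){
        if(error){
            console.log(error);
        }else{
            // the custom parameter travels with the task and comes back on res.options
            console.log(res.options.parameter1); // prints "value1"
        }
        done();
    }
});

c.queue({
    uri: "http://www.google.com",
    parameter1: "value1"
});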

Raw body

If you are downloading files such as images, PDFs or Word documents, you have to save the raw response body, which means Crawler shouldn't convert it to a string. To make that happen, set encoding to null:

var Crawler = require("crawler");
var fs = require('fs');

var c = new Crawler({
    encoding: null,
    jQuery: false, // set false to suppress the warning message
    callback: function(err, res, done){
        if(err){
            console.error(err.stack);
        }else{
            fs.createWriteStream(res.options.filename).write(res.body);
        }
        done();
    }
});

c.queue({
    uri:"https://nodejs.org/static/images/logos/nodejs-1920x1200.png",
    filename:"nodejs-1920x1200.png"
});

preRequest

If you want to do something either synchronously or asynchronously before each request, you can try the code below. Note that direct requests won't trigger preRequest.

var c = new Crawler({
    preRequest: function(options, done) {
        // 'options' here is not the 'options' you pass to 'c.queue';
        // it's the options that will be passed to the 'request' module
        console.log(options);
        // when done is called, the request will start
        done();
    },
    callback: function(err, res, done) {
        if(err) {
            console.log(err);
        } else {
            console.log(res.statusCode);
        }
        done();
    }
});

c.queue({
    uri: 'http://www.google.com',
    // this will override the 'preRequest' defined in the crawler instance
    preRequest: function(options, done) {
        setTimeout(function() {
            console.log(options);
            done();
        }, 1000);
    }
});

Advanced

Send request directly

In case you want to send a request directly without going through the scheduler in Crawler, try the code below. direct takes the same options as queue; please refer to the options reference for details. The difference is that when calling direct, callback must be defined explicitly, with two arguments error and response, which are the same as those of the callback of the queue method.

crawler.direct({
    uri: 'http://www.google.com',
    skipEventRequest: false, // default to true, direct requests won't trigger Event:'request'
    callback: function(error, response) {
        if(error) {
            console.log(error)
        } else {
            console.log(response.statusCode);
        }
    }
});

Work with Http2

Node-crawler now supports HTTP/2 requests. Proxy functionality for HTTP/2 requests is not included yet; it will be added in the future.

crawler.queue({
    // the unit tests run against the httpbin HTTP/2 server; it can be used for testing
    uri: 'https://nghttp2.org/httpbin/status/200',
    method: 'GET',
    http2: true, // setting http2 to true sends the request over HTTP/2
    callback: (error, response, done) => {
        if(error) {
            console.error(error);
            return done();
        }

        console.log(`inside callback`);
        console.log(response.body);
        return done();
    }
});

Work with bottleneck

Control the rate limit by using limiter. All tasks submitted to a limiter will abide by its rateLimit and maxConnections restrictions. rateLimit is the minimum time gap between two tasks, and maxConnections is the maximum number of tasks that can run at the same time. Limiters are independent of each other; one common use case is setting a different limiter for each proxy. Note that when rateLimit is set to a non-zero value, maxConnections will be forced to 1.

var Crawler = require('crawler');

var c = new Crawler({
    rateLimit: 2000,
    maxConnections: 1,
    callback: function(error, res, done) {
        if(error) {
            console.log(error);
        } else {
            var $ = res.$;
            console.log($('title').text());
        }
        done();
    }
});

// if you want to crawl some website with 2000ms gap between requests
c.queue('http://www.somewebsite.com/page/1')
c.queue('http://www.somewebsite.com/page/2')
c.queue('http://www.somewebsite.com/page/3')

// if you want to crawl some website using proxy with 2000ms gap between requests for each proxy
c.queue({
    uri:'http://www.somewebsite.com/page/1',
    limiter:'proxy_1',
    proxy:'proxy_1'
})
c.queue({
    uri:'http://www.somewebsite.com/page/2',
    limiter:'proxy_2',
    proxy:'proxy_2'
})
c.queue({
    uri:'http://www.somewebsite.com/page/3',
    limiter:'proxy_3',
    proxy:'proxy_3'
})
c.queue({
    uri:'http://www.somewebsite.com/page/4',
    limiter:'proxy_1',
    proxy:'proxy_1'
})

Normally, all limiter instances in the limiter cluster of a crawler are instantiated with the options specified in the crawler constructor. You can change a property of any limiter with the code below. Currently, only the 'rateLimit' property can be changed. Note that the default limiter can be accessed via c.setLimiterProperty('default', 'rateLimit', 3000). We strongly recommend that you leave limiters unchanged after instantiation unless you know exactly what you are doing.

var c = new Crawler({});
c.setLimiterProperty('limiterName', 'propertyName', value)

Class:Crawler

Event: 'schedule'

Emitted when a task is being added to the scheduler.

crawler.on('schedule',function(options){
    options.proxy = "http://proxy:port";
});

Event: 'limiterChange'

Emitted when the limiter of a task has been changed.
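
A minimal listener sketch; note that the listener arguments shown here (the task options and the limiter name) are an assumption, so check the source for the exact signature:

crawler.on('limiterChange', function(options, limiter){
    // 'limiter' is assumed to be the name of the limiter the task now belongs to
    console.log('limiter changed for', options.uri, '->', limiter);
});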

Event: 'request'

Emitted when the crawler is ready to send a request.

If you want to modify options at the last stage before the request is sent, listen for this event.

crawler.on('request',function(options){
    options.qs.timestamp = new Date().getTime();
});

Event: 'drain'

Emitted when the queue is empty.

crawler.on('drain',function(){
    // For example, release a connection to database.
    db.end();// close connection to MySQL
});

crawler.queue(uri|options)

Enqueue a task and wait for it to be executed.

crawler.queueSize

Size of the queue, read-only.
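
For example, a minimal sketch that logs how many tasks are still queued after each crawled page:

var c = new Crawler({
    callback: function(error, res, done){
        console.log('tasks left in queue:', c.queueSize);
        done();
    }
});

c.queue(['http://www.google.com/', 'http://www.yahoo.com']);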

Options reference

You can pass these options to the Crawler() constructor if you want them to be global, or as items in the queue() calls if you want them to be specific to that item (overriding the global options).

This options list is a strict superset of mikeal's request options and will be directly passed to the request() method.
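
For example, a per-item option overrides the global one; a minimal sketch:

var c = new Crawler({
    jQuery: true, // global: inject a Cheerio selector into every response
    callback: function(error, res, done){
        done();
    }
});

// this task alone disables the selector, overriding the global option
c.queue({
    uri: 'http://www.google.com',
    jQuery: false
});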

Basic request options

Callbacks

  • callback(error, res, done): Function that will be called after a request is completed (see the sketch after this list)
    • error: Error
    • res: http.IncomingMessage A standard IncomingMessage response, extended with $ and options
      • res.statusCode: Number HTTP status code, e.g. 200
      • res.body: Buffer | String HTTP response content, which could be an HTML page, plain text or an XML document, for example
      • res.headers: Object HTTP response headers
      • res.request: Request An instance of Mikeal's Request instead of http.ClientRequest
        • res.request.uri: urlObject Parsed URL of the HTTP request entity
        • res.request.method: String HTTP request method, e.g. GET
        • res.request.headers: Object HTTP request headers
      • res.options: Options of this task
      • $: jQuery Selector A selector for the html or xml document
    • done: Function Must be called when you have finished your work in the callback.
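
A minimal sketch touching each of the fields listed above:

var c = new Crawler({
    callback: function(error, res, done){
        if(error){
            console.log(error);
        }else{
            console.log(res.statusCode);               // e.g. 200
            console.log(res.headers['content-type']);  // HTTP response headers
            console.log(res.request.method);           // e.g. GET
            console.log(res.options.uri);              // the options of this task
            console.log(res.$('title').text());        // the injected selector
        }
        done(); // always signal that the callback has finished
    }
});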

Schedule options

  • options.maxConnections: Number Size of the worker pool (Default 10).
  • options.rateLimit: Number Number of milliseconds to delay between each request (Default 0).
  • options.priorityRange: Number Range of acceptable priorities starting from 0 (Default 10).
  • options.priority: Number Priority of this request (Default 5). Lower values have higher priority (see the sketch after this list).
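
For example, with a single worker, a task with a lower priority value is served first once the worker is free; a minimal sketch (the URLs are placeholders):

var c = new Crawler({
    maxConnections: 1,
    callback: function(error, res, done){
        console.log(res.options.uri);
        done();
    }
});

c.queue({ uri: 'http://www.example.com/low-priority', priority: 9 });
// priority 0 jumps ahead of priority 9 in the waiting queue
c.queue({ uri: 'http://www.example.com/high-priority', priority: 0 });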

Retry options

  • options.retries: Number Number of retries if the request fails (Default 3).
  • options.retryTimeout: Number Number of milliseconds to wait before retrying (Default 10000).

Server-side DOM options

  • options.jQuery: Boolean|String|Object Use cheerio with its default configuration to inject the document if true or "cheerio", or pass an object with parser options to use a customized cheerio. Set false to disable injecting the jQuery selector. If you have a memory leak issue in your project, use "whacko", an alternative parser, to avoid it. (Default true)

Charset encoding

  • options.forceUTF8: Boolean If true, crawler will get the charset from the HTTP headers or the meta tag in the html and convert it to UTF-8 if necessary. Never worry about encoding anymore! (Default true)
  • options.incomingEncoding: String Use with forceUTF8: true to set the encoding manually (Default null), so that crawler does not have to detect the charset itself, e.g. incomingEncoding: 'windows-1255' (see the sketch after this list). See all supported encodings.
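
For example, forcing a known charset instead of letting Crawler detect it; a minimal sketch:

var c = new Crawler({
    forceUTF8: true,
    incomingEncoding: 'windows-1255', // skip detection, decode the body as windows-1255
    callback: function(error, res, done){
        console.log(res.$('title').text());
        done();
    }
});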

Cache

  • options.skipDuplicates: Boolean If true, skips URIs that were already crawled, without even calling callback() (Default false). This is not recommended; it's better to handle deduplication outside Crawler, e.g. with seenreq (see the sketch after this list).
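
A minimal sketch of the built-in behaviour:

var c = new Crawler({
    skipDuplicates: true,
    callback: function(error, res, done){
        console.log(res.options.uri);
        done();
    }
});

// the second task is skipped silently; the callback fires only once
c.queue('http://www.google.com');
c.queue('http://www.google.com');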

Http headers

  • options.rotateUA: Boolean If true, userAgent should be an array and crawler will rotate through it (Default false); see the sketch after this list.
  • options.userAgent: String|Array If rotateUA is false but userAgent is an array, crawler will use the first one.
  • options.referer: String If truthy sets the HTTP referer header
  • options.removeRefererHeader: Boolean If true preserves the set referer during redirects
  • options.headers: Object Raw key-value of http headers
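
For example, rotating between two user agents; a minimal sketch (the UA strings are placeholders):

var c = new Crawler({
    rotateUA: true,
    userAgent: [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6)'
    ],
    callback: function(error, res, done){
        console.log(res.statusCode);
        done();
    }
});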

Http2

  • options.http2: Boolean If true, request will be sent in http2 protocol (Default false)

Https socks5

const Agent = require('socks5-https-client/lib/Agent');
//...
var c = new Crawler({
    // rateLimit: 2000,
    maxConnections: 20,
    agentClass: Agent, //adding socks5 https agent
    method: 'GET',
    strictSSL: true,
    agentOptions: {
        socksHost: 'localhost',
        socksPort: 9050
    },
    // debug: true,
    callback: function (error, res, done) {
        if (error) {
            console.log(error);
        } else {
            //
        }
        done();
    }
}); 

Work with Cheerio or JSDOM

Crawler uses Cheerio by default instead of JSDOM. JSDOM is more robust; if you want to use JSDOM, you will have to require('jsdom') in your own script and pass it to Crawler.

Working with Cheerio

jQuery: true //(default)
//OR
jQuery: 'cheerio'
//OR
jQuery: {
    name: 'cheerio',
    options: {
        normalizeWhitespace: true,
        xmlMode: true
    }
}

These parsing options are taken directly from htmlparser2, therefore any options that can be used in htmlparser2 are valid in cheerio as well. The default options are:

{
    normalizeWhitespace: false,
    xmlMode: false,
    decodeEntities: true
}

For a full list of options and their effects, see the cheerio documentation and htmlparser2's options.

Work with JSDOM

In order to work with JSDOM you will have to install it in your project folder (npm install jsdom) and pass it to Crawler:

var jsdom = require('jsdom');
var Crawler = require('crawler');

var c = new Crawler({
    jQuery: jsdom
});

How to test

Crawler uses nock to mock HTTP requests, so the tests no longer rely on an HTTP server.

$ npm install
$ npm test
$ npm run cover # code coverage

Alternative: Docker

After installing Docker, you can run:

# Builds the local test environment
$ docker build -t node-crawler .

# Runs tests
$ docker run node-crawler sh -c "npm install && npm test"

# You can also open a shell inside the container for easier debugging
$ docker run -i -t node-crawler bash

Rough todolist

  • Introduce zombie to deal with pages using complex AJAX
  • Refactor the code to be more maintainable
  • Make Sizzle tests pass (JSDOM bug? https://github.com/tmpvar/jsdom/issues#issue/81)
  • Promise support
  • Commander support
  • Middleware support