.. |logo| image:: https://cdn.rawgit.com/nlevitt/brozzler/d1158ab2242815b28fe7bb066042b5b5982e4627/webconsole/static/brozzler.svg
   :width: 7%

brozzler |logo|
===============

"browser" \| "crawler" = "brozzler"

Brozzler is a distributed web crawler (爬虫) that uses a real browser
(Chrome or Chromium) to fetch pages and embedded URLs and to extract
links. It also uses `youtube-dl <https://github.com/rg3/youtube-dl>`__
to enhance media capture capabilities.

It is forked from https://github.com/internetarchive/umbra.

Brozzler is designed to work in conjunction with
`warcprox <https://github.com/internetarchive/warcprox>`__ for web
archiving.

Installation
------------

::

    # set up virtualenv if desired
    pip install brozzler

Brozzler also requires a RethinkDB deployment.
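
For development, a single local RethinkDB node is usually enough. A minimal
sketch, assuming RethinkDB is already installed and that brozzler's connection
settings point at a server on localhost:

::

    # start a single rethinkdb node in the current working directory; by
    # default it accepts client driver connections on port 28015 and
    # serves an admin web UI on port 8080
    rethinkdb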

Usage
-----

Launch one or more workers:

::

    brozzler-worker -e chromium

Submit jobs:

::

    brozzler-new-job myjob.yaml
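
For archiving, a warcprox instance should also be running, and each job's
``proxy`` setting (see below) should point at it. A minimal sketch, assuming a
stock warcprox install listening on its default port of 8000, matching the
example job configuration below (check ``warcprox --help`` for the options
your version supports):

::

    # run warcprox on its default port (8000)
    warcprox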

Job Configuration
-----------------

Jobs are defined using YAML files. Options may be specified either at the
top level or on individual seeds. A job id and at least one seed URL
must be specified; everything else is optional.

::

    id: myjob
    time_limit: 60 # seconds
    proxy: 127.0.0.1:8000 # point at warcprox for archiving
    ignore_robots: false
    enable_warcprox_features: false
    warcprox_meta: null
    metadata: {}
    seeds:
      - url: http://one.example.org/
      - url: http://two.example.org/
        time_limit: 30
      - url: http://three.example.org/
        time_limit: 10
        ignore_robots: true
        scope:
          surt: http://(org,example,
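
The ``surt`` value under ``scope`` is a SURT prefix; the idea (a sketch only,
see the brozzler source for the exact scoping rules) is that a discovered URL
stays in scope if its SURT form starts with that prefix:

::

    http://(org,example,                    # prefix from the job above
    http://(org,example,three,)/some/page   # SURT form of
                                            # http://three.example.org/some/page
                                            # -> in scope
    http://(com,example,)/                  # SURT form of http://example.com/
                                            # -> out of scope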

Fonts (for decent screenshots)
------------------------------

For decent screenshots, the browser needs fonts covering a wide range of
scripts. On Ubuntu 14.04 (trusty) I installed these packages:

xfonts-base ttf-mscorefonts-installer fonts-arphic-bkai00mp
fonts-arphic-bsmi00lp fonts-arphic-gbsn00lp fonts-arphic-gkai00mp
fonts-arphic-ukai fonts-farsiweb fonts-nafees fonts-sil-abyssinica
fonts-sil-ezra fonts-sil-padauk fonts-unfonts-extra fonts-unfonts-core
ttf-indic-fonts fonts-thai-tlwg fonts-lklug-sinhala
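
A sketch of installing them in one shot with apt (package names as listed
above; some have been renamed or dropped on newer Ubuntu releases, and
ttf-mscorefonts-installer prompts to accept a EULA):

::

    sudo apt-get install -y xfonts-base ttf-mscorefonts-installer \
        fonts-arphic-bkai00mp fonts-arphic-bsmi00lp fonts-arphic-gbsn00lp \
        fonts-arphic-gkai00mp fonts-arphic-ukai fonts-farsiweb fonts-nafees \
        fonts-sil-abyssinica fonts-sil-ezra fonts-sil-padauk \
        fonts-unfonts-extra fonts-unfonts-core ttf-indic-fonts \
        fonts-thai-tlwg fonts-lklug-sinhala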

License
-------

Copyright 2015-2016 Internet Archive

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this software except in compliance with the License. You may
obtain a copy of the License at

::

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.