.. |logo| image:: https://cdn.rawgit.com/nlevitt/brozzler/d1158ab2242815b28fe7bb066042b5b5982e4627/webconsole/static/brozzler.svg
   :width: 7%

brozzler |logo|
===============

"browser" \| "crawler" = "brozzler"

Brozzler is a distributed web crawler (爬虫) that uses a real browser
(chrome or chromium) to fetch pages and embedded urls and to extract
links. It also uses `youtube-dl <https://github.com/rg3/youtube-dl>`__
to enhance media capture capabilities.

It is forked from https://github.com/internetarchive/umbra.

Brozzler is designed to work in conjunction with
`warcprox <https://github.com/internetarchive/warcprox>`__ for web
archiving.

Installation
------------

Brozzler requires python 3.4 or later.

::

    # set up virtualenv if desired
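    # for example, assuming the virtualenv tool is available:
    #   virtualenv --python=python3 brozzler-ve
    #   source brozzler-ve/bin/activate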
    pip install brozzler

Brozzler also requires a rethinkdb deployment.
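Brozzler does not run rethinkdb for you. For development or a single-node
setup, starting a local server is enough (assuming rethinkdb is already
installed; see https://rethinkdb.com/docs/install/ for other options)::

    # a minimal sketch: start rethinkdb listening on its default ports
    # (28015 for client drivers, 29015 for clustering, 8080 for the web ui)
    rethinkdb --bind all

By default the brozzler command line tools look for rethinkdb on localhost;
run them with ``--help`` to see the options for pointing at other servers.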

Usage
-----

Launch one or more workers:

::

    brozzler-worker -e chromium

Submit jobs:

::

    brozzler-new-job myjob.yaml

Job Configuration
-----------------

Jobs are defined using yaml files. Options may be specified either at the
top level or on individual seeds. A job id and at least one seed url must be
specified; everything else is optional.

::

    id: myjob
    time_limit: 60 # seconds
    proxy: 127.0.0.1:8000 # point at warcprox for archiving
    ignore_robots: false
    enable_warcprox_features: false
    warcprox_meta: null
    metadata: {}
    seeds:
      - url: http://one.example.org/
      - url: http://two.example.org/
        time_limit: 30
      - url: http://three.example.org/
        time_limit: 10
        ignore_robots: true
        scope:
          surt: http://(org,example,
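
The ``surt`` scope rule is a url prefix in brozzler's style of surt. As a
rough illustration (simplified, not exact library output), urls are compared
in surt form, so the seed scope above covers example.org and its subdomains::

    http://example.org/about       =>  http://(org,example,)/about
    http://sub.example.org/page    =>  http://(org,example,sub,)/page

Both surt forms begin with the prefix ``http://(org,example,``, so both urls
are in scope for that seed.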

Submit a Site to Crawl Without Configuring a Job
------------------------------------------------

::

    brozzler-new-site --proxy=localhost:8000 --enable-warcprox-features \
        --time-limit=600 http://example.com/

Brozzler Web Console
--------------------

Brozzler comes with a rudimentary web application for viewing crawl job status.
To install brozzler with the dependencies required to run this app, run

::

    pip install brozzler[webconsole]


To start the app, run

::

    brozzler-webconsole


XXX configuration stuff

Fonts (for decent screenshots)
------------------------------

On ubuntu 14.04 (trusty) I installed these packages:

xfonts-base ttf-mscorefonts-installer fonts-arphic-bkai00mp
fonts-arphic-bsmi00lp fonts-arphic-gbsn00lp fonts-arphic-gkai00mp
fonts-arphic-ukai fonts-farsiweb fonts-nafees fonts-sil-abyssinica
fonts-sil-ezra fonts-sil-padauk fonts-unfonts-extra fonts-unfonts-core
ttf-indic-fonts fonts-thai-tlwg fonts-lklug-sinhala
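
Assuming a stock apt setup, they can all be installed in one shot, something
along these lines::

    sudo apt-get install xfonts-base ttf-mscorefonts-installer \
        fonts-arphic-bkai00mp fonts-arphic-bsmi00lp fonts-arphic-gbsn00lp \
        fonts-arphic-gkai00mp fonts-arphic-ukai fonts-farsiweb fonts-nafees \
        fonts-sil-abyssinica fonts-sil-ezra fonts-sil-padauk \
        fonts-unfonts-extra fonts-unfonts-core ttf-indic-fonts \
        fonts-thai-tlwg fonts-lklug-sinhala

(``ttf-mscorefonts-installer`` will prompt to accept a license agreement
during installation.)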

License
-------

Copyright 2015-2016 Internet Archive

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this software except in compliance with the License. You may
obtain a copy of the License at

::

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
