From e2cde6b83fc369175953b0a072526b873b3ca2ce Mon Sep 17 00:00:00 2001
From: raffaele messuti
Date: Mon, 12 Nov 2018 20:20:39 +0000
Subject: [PATCH] new tools: crawl and wasp (#54)

---
 README.md | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/README.md b/README.md
index ab18563..7bc10d7 100644
--- a/README.md
+++ b/README.md
@@ -71,6 +71,8 @@ This list of tools and software is intended to briefly describe some of the most
 
 * [Brozzler](https://github.com/internetarchive/brozzler) (Stable) - A distributed web crawler that uses a real browser (chrome or chromium) to fetch pages and embedded urls and to extract links.
 
+* [Crawl](https://git.autistici.org/ale/crawl) (Stable) - A simple web crawler written in Go.
+
 * [crocoite](https://github.com/promyloph/crocoite) (In Development) - Crawl websites using headless Google Chrome/Chromium and save resources, static DOM snapshot and page screenshots to WARC files.
 
 * [F(b)arc](https://github.com/justinlittman/fbarc) (Stable) - A commandline tool and Python library for archiving data from [Facebook](https://www.facebook.com/) using the [Graph API](https://developers.facebook.com/docs/graph-api).
@@ -131,6 +133,8 @@ This list of tools and software is intended to briefly describe some of the most
 
 * [Warclight](https://github.com/archivesunleashed/warclight) (In Development) - A Project Blacklight based Rails engine that supports the discovery of web archives held in the WARC and ARC formats.
 
+* [Wasp](https://github.com/webis-de/wasp) (In Development) - A fully functional prototype of a personal [web archive and search system](http://desires.dei.unipd.it/papers/paper4.pdf).
+
 * Other possible options for building a front-end are listed in the `webarchive-discovery` wiki, [here](https://github.com/ukwa/webarchive-discovery/wiki/Front-ends).
 
 #### Utilities