# Awesome Web Archiving

## Introduction

An Awesome List for getting started with web archiving. Inspired by the awesome list.

## Table of Contents

## Contribute

Please ensure your pull request adheres to the following guidelines:
- Use the following format: [Name](link) (Status: Stable or In Development) - Brief description of what the module does (see the example entry below).
- Make an individual pull request for each new item.
- Link additions should be inserted alphabetically into the relevant category.
- New categories or improvements to the existing categorization are welcome.
- Check your spelling and grammar.
- The pull request and commit should have a useful title.
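
For example, a new entry might look like this (the tool name and link are placeholders, not a real project):

```markdown
- [ExampleCrawler](https://example.org/examplecrawler) (In Development) - A brief description of what the module does.
```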
## License

To the extent possible under law, the owner has waived all copyright and related or neighboring rights to this work.

## The List

### Training/Documentation

### Tools & Software

#### Acquisition

- ArchiveFacebook (Stable) - A Mozilla Firefox add-on for individuals to archive their Facebook accounts.
- Brozzler (Stable) - A distributed web crawler (爬虫) that uses a real browser (Chrome or Chromium) to fetch pages and embedded URLs and to extract links.
- F(b)arc (Stable) - A command-line tool and Python library for archiving data from Facebook using the Graph API.
- Heritrix (Stable) - An open source, extensible, web-scale, archival-quality web crawler.
- grab-site (Stable) - The archivist's web crawler: WARC output, a dashboard for all crawls, and dynamic ignore patterns (see the sketch after this list).
- HTTrack (Stable) - An open source website copying utility.
- Lentil (Stable) - A Ruby on Rails Engine that supports harvesting images from Instagram and provides several browsing views, mechanisms for sharing, tools for users to select their favorite images, an administrative interface for moderating images, and a system for harvesting images and submitting donor agreements in preparation for ingest into external repositories.
- SiteStory (Stable) - A transactional archive that selectively captures and stores transactions that take place between a web client (browser) and a web server.
- twarc (Stable) - A command-line tool and Python library for archiving Twitter JSON data (see the sketch after this list).
- WARCreate (Stable) - A Google Chrome extension for archiving an individual webpage or website to a WARC file.
- WAIL (Stable) - A graphical user interface (GUI) atop multiple web archiving tools, intended as an easy way for anyone to preserve and replay web pages; Python, Electron.
- Webrecorder (Stable) - Create high-fidelity, interactive recordings of any web site you browse.
- Wget (Stable) - An open source file retrieval utility that, as of version 1.14, supports writing WARC files (see the sketch after this list).
- Wget-lua (Stable) - Wget with Lua extension.
- Wpull (Stable) - A Wget-compatible (or remake/clone/replacement/alternative) web downloader and crawler.
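
A minimal grab-site sketch, assuming grab-site is installed so that the `gs-server` and `grab-site` commands are available; the URL is a placeholder:

```sh
# Start the dashboard server, then launch a crawl that writes WARC output.
gs-server &
grab-site https://example.com/
```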
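
A minimal twarc sketch, assuming twarc is installed and Twitter API credentials have already been supplied via `twarc configure`; the query is a placeholder:

```sh
# Save matching tweets as line-oriented JSON for archiving.
twarc search "webarchiving" > tweets.jsonl
```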
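
A minimal sketch of the Wget WARC support noted above, assuming Wget 1.14 or later; the URL and output name are placeholders:

```sh
# Mirror a site and also record the crawl into example.warc.gz.
wget --mirror --page-requisites --warc-file=example https://example.com/
```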

#### Replay

- PyWb (Stable) - A Python (2 and 3) implementation of web archival replay tools, sometimes also known as a 'Wayback Machine' (see the sketch after this list).
- OpenWayback (Stable) - The open source project aimed at developing the Wayback Machine, the key software used by web archives worldwide to play back archived websites in the user's browser.
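
A minimal pywb replay sketch, assuming pywb is installed (e.g. via `pip install pywb`) and a WARC file is already at hand; the collection and file names are placeholders:

```sh
# Create a collection, add an existing WARC to it, and start the local replay server.
wb-manager init my-collection
wb-manager add my-collection crawl.warc.gz
wayback
```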

#### Utilities

- JWAT (Stable) - Libraries and tools for reading/writing/validating WARC/ARC/GZIP files.
- Warcat (Stable) - Tool and library for handling Web ARChive (WARC) files (see the sketch after this list).
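
A minimal Warcat sketch, assuming it is installed for Python 3 and a local WARC file exists; the file and directory names are placeholders:

```sh
# List the records in a WARC, then unpack its payloads into a directory.
python3 -m warcat list example.warc.gz
python3 -m warcat extract example.warc.gz --output-dir ./extracted --progress
```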

#### Analysis

- ArchiveSpark (Stable) - An Apache Spark framework (not only) for Web Archives that enables easy data processing, extraction, and derivation.
- warcbase (Stable) - An open-source platform for managing and analyzing web archives.

### Community Resources

#### Mailing Lists

#### Slack

- Ask @netpreserve for access to the IIPC Slack.

### Deprecated

- pywb Wayback Web Recorder (Archiver) (Sunsetted) - A bare-bones example of how to create a simple web recording and replay system.
- Warrick (Unknown) - An open source downloadable tool or web service for reconstructing websites from web archives, using Memento.