From af8c5bbc19e452e023c0bc9e9a11977732b26f83 Mon Sep 17 00:00:00 2001
From: Ross Spencer
Date: Tue, 11 Feb 2025 19:43:13 +0100
Subject: [PATCH] Add Community Archive (Twitter Archive and API) (#160)

Co-authored-by: Gabriel Chartier
---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index b0e4bbc..ef2ee94 100644
--- a/README.md
+++ b/README.md
@@ -73,6 +73,7 @@ This list of tools and software is intended to briefly describe some of the most
 * [Brozzler](https://github.com/internetarchive/brozzler) - A distributed web crawler (爬虫) that uses a real browser (Chrome or Chromium) to fetch pages and embedded urls and to extract links. *(Stable)*
 * [Cairn](https://github.com/wabarc/cairn) - A npm package and CLI tool for saving webpages. *(Stable)*
 * [Chronicler](https://github.com/CGamesPlay/chronicler) - Web browser with record and replay functionality. *(In Development)*
+* [Community Archive](https://www.community-archive.org/) - Open Twitter Database and API with tools and resources for building on archived Twitter data.
 * [crau](https://github.com/turicas/crau) - crau is the way (most) Brazilians pronounce crawl, it's the easiest command-line tool for archiving the Web and playing archives: you just need a list of URLs. *(Stable)*
 * [Crawl](https://git.autistici.org/ale/crawl) - A simple web crawler in Golang. *(Stable)*
 * [crocoite](https://github.com/promyloph/crocoite) - Crawl websites using headless Google Chrome/Chromium and save resources, static DOM snapshot and page screenshots to WARC files. *(In Development)*