mirror of https://github.com/iv-org/invidious.git synced 2024-12-24 06:49:23 -05:00

Merge branch 'master' into patch-1

Authored by Perflyst on 2020-11-12 17:06:38 +01:00; committed by GitHub
commit bb7d8735cb
123 changed files with 5663 additions and 4379 deletions


@ -1,5 +1,20 @@
 dist: bionic
+# Work around broken Travis Crystal image
+addons:
+apt:
+packages:
+- gcc
+- pkg-config
+- git
+- tzdata
+- libpcre3-dev
+- libevent-dev
+- libyaml-dev
+- libgmp-dev
+- libssl-dev
+- libxml2-dev
 jobs:
 include:
 - stage: build
@ -9,6 +24,7 @@ jobs:
 language: crystal
 crystal: latest
 before_install:
+- crystal --version
 - shards update
 - shards install
 install:
@ -28,7 +44,4 @@ jobs:
 - docker-compose build
 script:
 - docker-compose up -d
-- sleep 15 # Wait for cluster to become ready, TODO: do not sleep
-- HEADERS="$(curl -I -s http://localhost:3000/)"
-- STATUS="$(echo $HEADERS | head -n1)"
-- if [[ "$STATUS" != *"200 OK"* ]]; then echo "$HEADERS"; exit 1; fi
+- while curl -Isf http://localhost:3000; do sleep 1; done
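A similar readiness wait can be run against a local checkout. This is a sketch, not part of the diff: it assumes the docker-compose setup from this repository listening on port 3000, and it uses `until` so the loop exits once curl gets a successful response.

```bash
# Bring the cluster up in the background (same as the CI script)
docker-compose up -d

# Poll until the web server answers; -I sends a HEAD request, -s silences
# progress output, -f makes curl fail on HTTP error responses.
until curl -Isf http://localhost:3000 >/dev/null; do
    sleep 1
done
echo "Invidious is answering on port 3000"
```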


@ -400,7 +400,7 @@ An `/api/v1/stats` endpoint has been added with [#356](https://github.com/omarro
 ## For Developers
-`/api/v1/channels/:ucid` now provides an `autoGenerated` tag, which returns true for [topic channels](https://www.youtube.com/channel/UCE80FOXpJydkkMo-BYoJdEg), and larger [genre channels](https://www.youtube.com/channel/UC-9-kyTW8ZkZNDHQJ6FgpwQ) generated by YouTube. These channels don't have any videos of their own, so `latestVideos` will be empty. It is recommended instead to display a list of playlists generated by YouTube.
+`/api/v1/channels/:ucid` now provides an `autoGenerated` tag, which returns true for topic channels, and larger genre channels generated by YouTube. These channels don't have any videos of their own, so `latestVideos` will be empty. It is recommended instead to display a list of playlists generated by YouTube.
 You can now pull a list of playlists from a channel with `/api/v1/channels/playlists/:ucid`. Supported options are documented in the [wiki](https://github.com/omarroth/invidious/wiki/API#get-apiv1channelsplaylistsucid-apiv1channelsucidplaylists). Pagination is handled with a `continuation` token, which is generated on each call. Of note is that auto-generated channels currently have one page of results, and subsequent calls will be empty.
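The continuation-based pagination described above can be exercised with a short shell loop. This is an illustrative sketch, not part of the commit: the base URL and channel ID are placeholders, it assumes the endpoint returns a JSON object with `playlists` and `continuation` fields, and it requires `curl` and `jq`.

```bash
#!/bin/sh
# Walk a channel's playlists page by page (sketch).
BASE="https://invidious.example.org"   # assumption: base URL of a reachable instance
UCID="UCxxxxxxxxxxxxxxxxxxxxxxxx"      # assumption: placeholder channel ID
url="$BASE/api/v1/channels/playlists/$UCID"

while :; do
    page=$(curl -sf "$url") || break
    # Titles of the playlists on this page (assumes a `playlists` array in the response)
    printf '%s\n' "$page" | jq -r '.playlists[].title'
    # A new `continuation` token is generated on each call; stop when it is absent
    cont=$(printf '%s\n' "$page" | jq -r '.continuation // empty')
    [ -n "$cont" ] || break
    url="$BASE/api/v1/channels/playlists/$UCID?continuation=$cont"
done
```

For an auto-generated channel the loop ends after the first page, matching the note above that subsequent calls will be empty.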

README.md

@ -1,14 +1,18 @@
 # Invidious
-[![Build Status](https://travis-ci.org/omarroth/invidious.svg?branch=master)](https://travis-ci.org/omarroth/invidious)
+[![Build Status](https://travis-ci.org/iv-org/invidious.svg?branch=master)](https://travis-ci.org/github/iv-org/invidious) [![Translation Status](https://hosted.weblate.org/widgets/invidious/-/translations/svg-badge.svg)](https://hosted.weblate.org/engage/invidious/)
 ## Invidious is an alternative front-end to YouTube
+## Invidious instances:
+[Public Invidious instances are listed here.](https://github.com/iv-org/invidious/wiki/Invidious-Instances)
+## Invidious features:
+- [Copylefted libre software](https://github.com/iv-org/invidious) (AGPLv3+ licensed)
 - Audio-only mode (and no need to keep window open on mobile)
-- [Free software](https://github.com/omarroth/invidious) (AGPLv3 licensed)
-- Lightweight (homepage is ~4 KB compressed)
+- Lightweight (the homepage is ~4 KB compressed)
+- No ads
+- No need to create a Google account to save subscriptions
 - Tools for managing subscriptions:
 - Only show unseen videos
 - Only show latest (or latest unseen) video from each channel
@ -18,37 +22,33 @@
 - Dark mode
 - Embed support
 - Set default player options (speed, quality, autoplay, loop)
-- Does not require JS to play videos
-- Support for Reddit comments in place of YT comments
+- Support for Reddit comments in place of YouTube comments
 - Import/Export subscriptions, watch history, preferences
+- [Developer API](https://github.com/iv-org/invidious/wiki/API)
 - Does not use any of the official YouTube APIs
-- Developer [API](https://github.com/omarroth/invidious/wiki/API)
-- No need to create a Google account to save subscriptions
-- No ads
-- No CoC
-- No CLA
+- Does not require JavaScript to play videos
+- [Multilingual](https://hosted.weblate.org/projects/invidious/#languages) (translated into many languages)
-Liberapay: https://liberapay.com/omarroth
-BTC: 356DpZyMXu6rYd55Yqzjs29n79kGKWcYrY
-BCH: qq4ptclkzej5eza6a50et5ggc58hxsq5aylqut2npk
-## Invidious Instances
-See [Invidious Instances](https://github.com/omarroth/invidious/wiki/Invidious-Instances) for a full list of publicly available instances.
-### Official Instances
-- [invidio.us](https://invidio.us) 🇺🇸
-Issuer: Let's Encrypt, [SSLLabs Verification](https://www.ssllabs.com/ssltest/analyze.html?d=invidio.us)
-- [kgg2m7yk5aybusll.onion](http://kgg2m7yk5aybusll.onion)
-- [axqzx4s6s54s32yentfqojs3x5i7faxza6xo3ehd4bzzsg2ii4fv2iid.onion](http://axqzx4s6s54s32yentfqojs3x5i7faxza6xo3ehd4bzzsg2ii4fv2iid.onion)
-## Screenshots
+## Screenshots:
 | Player | Preferences | Subscriptions |
 | ------------------------- | ------------------------- | ------------------------- |
 | [<img src="screenshots/01_player.png?raw=true" height="140" width="280">](screenshots/01_player.png?raw=true) | [<img src="screenshots/02_preferences.png?raw=true" height="140" width="280">](screenshots/02_preferences.png?raw=true) | [<img src="screenshots/03_subscriptions.png?raw=true" height="140" width="280">](screenshots/03_subscriptions.png?raw=true) |
 | [<img src="screenshots/04_description.png?raw=true" height="140" width="280">](screenshots/04_description.png?raw=true) | [<img src="screenshots/05_preferences.png?raw=true" height="140" width="280">](screenshots/05_preferences.png?raw=true) | [<img src="screenshots/06_subscriptions.png?raw=true" height="140" width="280">](screenshots/06_subscriptions.png?raw=true) |
-## Installation
+## Installation:
-See [Invidious-Updater](https://github.com/tmiland/Invidious-Updater) for a self-contained script that can automatically install and update Invidious.
+To manually compile invidious you need at least 2GB of RAM. If you have less you can setup SWAP to have a combined amount of 2 GB or use Docker instead.
+After installation take a look at the [Post-install steps](#post-install-configuration).
+### Automated installation:
+[Invidious-Updater](https://github.com/tmiland/Invidious-Updater) is a self-contained script that can automatically install and update Invidious.
 ### Docker:
@ -58,7 +58,7 @@ See [Invidious-Updater](https://github.com/tmiland/Invidious-Updater) for a self
 $ docker-compose up
 ```
-And visit `localhost:3000` in your browser.
+Then visit `localhost:3000` in your browser.
 #### Rebuild cluster:
@ -73,9 +73,11 @@ $ docker volume rm invidious_postgresdata
 $ docker-compose build
 ```
+### Manual installation:
 ### Linux:
-#### Install dependencies
+#### Install the dependencies
 ```bash
 # Arch Linux
@ -88,23 +90,22 @@ $ curl -sSL https://dist.crystal-lang.org/apt/setup.sh | sudo bash
 $ curl -sL "https://keybase.io/crystal/pgp_keys.asc" | sudo apt-key add -
 $ echo "deb https://dist.crystal-lang.org/apt crystal main" | sudo tee /etc/apt/sources.list.d/crystal.list
 $ sudo apt-get update
-$ sudo apt install crystal libssl-dev libxml2-dev libyaml-dev libgmp-dev libreadline-dev postgresql librsvg2-bin libsqlite3-dev
+$ sudo apt install crystal libssl-dev libxml2-dev libyaml-dev libgmp-dev libreadline-dev postgresql librsvg2-bin libsqlite3-dev zlib1g-dev
 ```
-#### Add invidious user and clone repository
+#### Add an Invidious user and clone the repository
 ```bash
 $ useradd -m invidious
 $ sudo -i -u invidious
-$ git clone https://github.com/omarroth/invidious
+$ git clone https://github.com/iv-org/invidious
 $ exit
 ```
-#### Setup PostgresSQL
+#### Set up PostgresSQL
 ```bash
-$ sudo systemctl enable postgresql
-$ sudo systemctl start postgresql
+$ sudo systemctl enable --now postgresql
 $ sudo -i -u postgres
 $ psql -c "CREATE USER kemal WITH PASSWORD 'kemal';" # Change 'kemal' here to a stronger password, and update `password` in config/config.yml
 $ createdb -O kemal invidious
@ -115,10 +116,12 @@ $ psql invidious kemal < /home/invidious/invidious/config/sql/users.sql
 $ psql invidious kemal < /home/invidious/invidious/config/sql/session_ids.sql
 $ psql invidious kemal < /home/invidious/invidious/config/sql/nonces.sql
 $ psql invidious kemal < /home/invidious/invidious/config/sql/annotations.sql
+$ psql invidious kemal < /home/invidious/invidious/config/sql/playlists.sql
+$ psql invidious kemal < /home/invidious/invidious/config/sql/playlist_videos.sql
 $ exit
 ```
-#### Setup Invidious
+#### Set up Invidious
 ```bash
 $ sudo -i -u invidious
@ -130,23 +133,36 @@ $ ./invidious # stop with ctrl c
 $ exit
 ```
-#### systemd service
+#### Systemd service:
 ```bash
 $ sudo cp /home/invidious/invidious/invidious.service /etc/systemd/system/invidious.service
-$ sudo systemctl enable invidious.service
-$ sudo systemctl start invidious.service
+$ sudo systemctl enable --now invidious.service
 ```
-### OSX:
+#### Logrotate:
+```bash
+$ sudo echo "/home/invidious/invidious/invidious.log {
+rotate 4
+weekly
+notifempty
+missingok
+compress
+minsize 1048576
+}" | tee /etc/logrotate.d/invidious.logrotate
+$ sudo chmod 0644 /etc/logrotate.d/invidious.logrotate
+```
+### MacOS:
 ```bash
 # Install dependencies
 $ brew update
 $ brew install shards crystal postgres imagemagick librsvg
-# Clone repository and setup postgres database
-$ git clone https://github.com/omarroth/invidious
+# Clone the repository and set up a PostgreSQL database
+$ git clone https://github.com/iv-org/invidious
 $ cd invidious
 $ brew services start postgresql
 $ psql -c "CREATE ROLE kemal WITH PASSWORD 'kemal';" # Change 'kemal' here to a stronger password, and update `password` in config/config.yml
@ -158,15 +174,30 @@ $ psql invidious kemal < config/sql/users.sql
 $ psql invidious kemal < config/sql/session_ids.sql
 $ psql invidious kemal < config/sql/nonces.sql
 $ psql invidious kemal < config/sql/annotations.sql
-$ psql invidious kemal < config/sql/privacy.sql
+$ psql invidious kemal < config/sql/playlists.sql
+$ psql invidious kemal < config/sql/playlist_videos.sql
-# Setup Invidious
+# Set up Invidious
 $ shards update && shards install
 $ crystal build src/invidious.cr --release
 ```
+## Post-install configuration:
+Detailed configuration available in the [configuration guide](https://github.com/iv-org/invidious/wiki/Configuration).
+If you use a reverse proxy, you **must** configure invidious to properly serve request through it:
+`https_only: true` : if your are serving your instance via https, set it to true
+`domain: domain.ext`: if you are serving your instance via a domain name, set it here
+`external_port: 443`: if your are serving your instance via https, set it to 443
 ## Update Invidious
-You can see how to update Invidious [here](https://github.com/omarroth/invidious/wiki/Updating).
+Instructions are available in the [updating guide](https://github.com/iv-org/invidious/wiki/Updating).
 ## Usage:
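The three post-install keys added in the hunk above end up in `config/config.yml`. A minimal sketch of applying them for an instance behind a reverse proxy, not part of the commit; the path matches the manual install layout used earlier in this README, and the domain is a placeholder:

```bash
# Append the reverse-proxy settings to the Invidious config
# (adjust the path and domain for your own deployment).
cat >> /home/invidious/invidious/config/config.yml <<'EOF'
https_only: true
domain: invidious.example.org
external_port: 443
EOF
```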
@ -197,39 +228,55 @@ $ ./sentry
 ## Documentation
-[Documentation](https://github.com/omarroth/invidious/wiki) can be found in the wiki.
+[Documentation](https://github.com/iv-org/invidious/wiki) can be found in the wiki.
 ## Extensions
-[Extensions](https://github.com/omarroth/invidious/wiki/Extensions) can be found in the wiki, as well as documentation for integrating it into other projects.
+[Extensions](https://github.com/iv-org/invidious/wiki/Extensions) can be found in the wiki, as well as documentation for integrating it into other projects.
 ## Made with Invidious
-- [FreeTube](https://github.com/FreeTubeApp/FreeTube): An Open Source YouTube app for privacy.
-- [CloudTube](https://cadence.moe/cloudtube/subscriptions): A JS-rich alternate YouTube player
-- [PeerTubeify](https://gitlab.com/Ealhad/peertubeify): On YouTube, displays a link to the same video on PeerTube, if it exists.
-- [MusicPiped](https://github.com/deep-gaurav/MusicPiped): A materialistic music player that streams music from YouTube.
+- [FreeTube](https://github.com/FreeTubeApp/FreeTube): A libre software YouTube app for privacy.
+- [CloudTube](https://cadence.moe/cloudtube/subscriptions): A JavaScript-rich alternate YouTube player
+- [PeerTubeify](https://gitlab.com/Cha_deL/peertubeify): On YouTube, displays a link to the same video on PeerTube, if it exists.
+- [MusicPiped](https://github.com/deep-gaurav/MusicPiped): A material design music player that streams music from YouTube.
+- [LapisTube](https://github.com/blubbll/lapis-tube): A fancy and advanced (experimental) YouTube front-end. Combined streams & custom YT features.
+- [HoloPlay](https://github.com/stephane-r/HoloPlay): Funny Android application connecting on Invidious API's with search, playlists and favoris.
 ## Contributing
-1. Fork it ( https://github.com/omarroth/invidious/fork )
+1. Fork it ( https://github.com/iv-org/invidious/fork )
 2. Create your feature branch (git checkout -b my-new-feature)
 3. Commit your changes (git commit -am 'Add some feature')
 4. Push to the branch (git push origin my-new-feature)
-5. Create a new Pull Request
+5. Create a new pull request
+#### Translation
+- Log in with an account you have elsewhere, or register an account and start translating at [Hosted Weblate](https://hosted.weblate.org/engage/invidious/).
+## Donate:
+Liberapay: https://liberapay.com/iv-org/
 ## Contact
-Feel free to send an email to omarroth@protonmail.com or join our [Matrix Server](https://riot.im/app/#/room/#invidious:matrix.org), or #invidious on Freenode.
-You can also view release notes on the [releases](https://github.com/omarroth/invidious/releases) page or in the CHANGELOG.md included in the repository.
-## License
-[![GNU AGPLv3 Image](https://www.gnu.org/graphics/agplv3-155x51.png)](http://www.gnu.org/licenses/agpl-3.0.en.html)
-Invidious is Free Software: You can use, study share and improve it at your
-will. Specifically you can redistribute and/or modify it under the terms of the
-[GNU Affero General Public License](https://www.gnu.org/licenses/agpl.html) as
-published by the Free Software Foundation, either version 3 of the License, or
-(at your option) any later version.
+Feel free to join our [Matrix room](https://matrix.to/#/#invidious:matrix.org), or #invidious on freenode. Both platforms are bridged together.
+## Liability
+We take no responsibility for the use of our tool, or external instances provided by third parties. We strongly recommend you abide by the valid official regulations in your country. Furthermore, we refuse liability for any inappropriate use of Invidious, such as illegal downloading. This tool is provided to you in the spirit of free, open software.
+You may view the LICENSE in which this software is provided to you [here](./LICENSE).
+> 16. Limitation of Liability.
+>
+> IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
+THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
+GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
+USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
+DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
+PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
+EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
+SUCH DAMAGES.

TRANSLATION Normal file

@ -0,0 +1 @@
https://hosted.weblate.org/projects/invidious/


@ -60,6 +60,22 @@ body {
 color: rgb(255, 0, 0);
 }
+.feed-menu {
+display: flex;
+justify-content: center;
+flex-wrap: wrap;
+}
+.feed-menu-item {
+text-align: center;
+}
+@media screen and (max-width: 640px) {
+.feed-menu-item {
+flex: 0 0 40%;
+}
+}
 .h-box {
 padding-left: 1em;
 padding-right: 1em;

assets/css/embed.css Normal file

@ -0,0 +1,10 @@
#player {
position: fixed;
right: 0;
bottom: 0;
min-width: 100%;
min-height: 100%;
width: auto;
height: auto;
z-index: -100;
}


@ -0,0 +1,3 @@
.video-js .vjs-vtt-thumbnail-display {
max-width: 158px;
}


@ -1,3 +1,5 @@
+var community_data = JSON.parse(document.getElementById('community_data').innerHTML);
 String.prototype.supplant = function (o) {
 return this.replace(/{([^{}]*)}/g, function (a, b) {
 var r = o[b];


@ -1,3 +1,5 @@
+var video_data = JSON.parse(document.getElementById('video_data').innerHTML);
 function get_playlist(plid, retries) {
 if (retries == undefined) retries = 5;

assets/js/global.js Normal file

@ -0,0 +1,3 @@
// Disable Web Workers. Fixes Video.js CSP violation (created by `new Worker(objURL)`):
// Refused to create a worker from 'blob:http://host/id' because it violates the following Content Security Policy directive: "worker-src 'self'".
window.Worker = undefined;
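The workaround above exists because of the Content Security Policy quoted in the comment. A quick way to inspect that policy on a running instance, as a sketch only (assumes a local instance on port 3000 and that the server actually sends the header described above):

```bash
# Show the Content-Security-Policy header served with the page; a
# `worker-src 'self'` directive is what blocks the blob: worker Video.js creates.
curl -sI http://localhost:3000/ | grep -i content-security-policy
```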

assets/js/handlers.js Normal file

@ -0,0 +1,144 @@
'use strict';
(function () {
var n2a = function (n) { return Array.prototype.slice.call(n); };
var video_player = document.getElementById('player_html5_api');
if (video_player) {
video_player.onmouseenter = function () { video_player['data-title'] = video_player['title']; video_player['title'] = ''; };
video_player.onmouseleave = function () { video_player['title'] = video_player['data-title']; video_player['data-title'] = ''; };
video_player.oncontextmenu = function () { video_player['title'] = video_player['data-title']; };
}
// For dynamically inserted elements
document.addEventListener('click', function (e) {
if (!e || !e.target) { return; }
e = e.target;
var handler_name = e.getAttribute('data-onclick');
switch (handler_name) {
case 'jump_to_time':
var time = e.getAttribute('data-jump-time');
player.currentTime(time);
break;
case 'get_youtube_replies':
var load_more = e.getAttribute('data-load-more') !== null;
get_youtube_replies(e, load_more);
break;
case 'toggle_parent':
toggle_parent(e);
break;
default:
break;
}
});
n2a(document.querySelectorAll('[data-mouse="switch_classes"]')).forEach(function (e) {
var classes = e.getAttribute('data-switch-classes').split(',');
var ec = classes[0];
var lc = classes[1];
var onoff = function (on, off) {
var cs = e.getAttribute('class');
cs = cs.split(off).join(on);
e.setAttribute('class', cs);
};
e.onmouseenter = function () { onoff(ec, lc); };
e.onmouseleave = function () { onoff(lc, ec); };
});
n2a(document.querySelectorAll('[data-onsubmit="return_false"]')).forEach(function (e) {
e.onsubmit = function () { return false; };
});
n2a(document.querySelectorAll('[data-onclick="mark_watched"]')).forEach(function (e) {
e.onclick = function () { mark_watched(e); };
});
n2a(document.querySelectorAll('[data-onclick="mark_unwatched"]')).forEach(function (e) {
e.onclick = function () { mark_unwatched(e); };
});
n2a(document.querySelectorAll('[data-onclick="add_playlist_video"]')).forEach(function (e) {
e.onclick = function () { add_playlist_video(e); };
});
n2a(document.querySelectorAll('[data-onclick="add_playlist_item"]')).forEach(function (e) {
e.onclick = function () { add_playlist_item(e); };
});
n2a(document.querySelectorAll('[data-onclick="remove_playlist_item"]')).forEach(function (e) {
e.onclick = function () { remove_playlist_item(e); };
});
n2a(document.querySelectorAll('[data-onclick="revoke_token"]')).forEach(function (e) {
e.onclick = function () { revoke_token(e); };
});
n2a(document.querySelectorAll('[data-onclick="remove_subscription"]')).forEach(function (e) {
e.onclick = function () { remove_subscription(e); };
});
n2a(document.querySelectorAll('[data-onclick="notification_requestPermission"]')).forEach(function (e) {
e.onclick = function () { Notification.requestPermission(); };
});
n2a(document.querySelectorAll('[data-onrange="update_volume_value"]')).forEach(function (e) {
var cb = function () { update_volume_value(e); }
e.oninput = cb;
e.onchange = cb;
});
function update_volume_value(element) {
document.getElementById('volume-value').innerText = element.value;
}
function revoke_token(target) {
var row = target.parentNode.parentNode.parentNode.parentNode.parentNode;
row.style.display = 'none';
var count = document.getElementById('count');
count.innerText = count.innerText - 1;
var referer = window.encodeURIComponent(document.location.href);
var url = '/token_ajax?action_revoke_token=1&redirect=false' +
'&referer=' + referer +
'&session=' + target.getAttribute('data-session');
var xhr = new XMLHttpRequest();
xhr.responseType = 'json';
xhr.timeout = 10000;
xhr.open('POST', url, true);
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.onreadystatechange = function () {
if (xhr.readyState == 4) {
if (xhr.status != 200) {
count.innerText = parseInt(count.innerText) + 1;
row.style.display = '';
}
}
}
var csrf_token = target.parentNode.querySelector('input[name="csrf_token"]').value;
xhr.send('csrf_token=' + csrf_token);
}
function remove_subscription(target) {
var row = target.parentNode.parentNode.parentNode.parentNode.parentNode;
row.style.display = 'none';
var count = document.getElementById('count');
count.innerText = count.innerText - 1;
var referer = window.encodeURIComponent(document.location.href);
var url = '/subscription_ajax?action_remove_subscriptions=1&redirect=false' +
'&referer=' + referer +
'&c=' + target.getAttribute('data-ucid');
var xhr = new XMLHttpRequest();
xhr.responseType = 'json';
xhr.timeout = 10000;
xhr.open('POST', url, true);
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.onreadystatechange = function () {
if (xhr.readyState == 4) {
if (xhr.status != 200) {
count.innerText = parseInt(count.innerText) + 1;
row.style.display = '';
}
}
}
var csrf_token = target.parentNode.querySelector('input[name="csrf_token"]').value;
xhr.send('csrf_token=' + csrf_token);
}
})();


@ -1,3 +1,5 @@
+var notification_data = JSON.parse(document.getElementById('notification_data').innerHTML);
 var notifications, delivered;
 function get_subscriptions(callback, retries) {


@ -1,3 +1,6 @@
+var player_data = JSON.parse(document.getElementById('player_data').innerHTML);
+var video_data = JSON.parse(document.getElementById('video_data').innerHTML);
 var options = {
 preload: 'auto',
 liveui: true,
@ -35,7 +38,7 @@ var shareOptions = {
 title: player_data.title,
 description: player_data.description,
 image: player_data.thumbnail,
-embedCode: "<iframe id='ivplayer' type='text/html' width='640' height='360' src='" + embed_url + "' frameborder='0'></iframe>"
+embedCode: "<iframe id='ivplayer' width='640' height='360' src='" + embed_url + "' style='border:none;'></iframe>"
 }
 var player = videojs('player', options);
@ -146,7 +149,8 @@ if (!video_data.params.listen && video_data.params.quality === 'dash') {
 }
 player.vttThumbnails({
-src: location.origin + '/api/v1/storyboards/' + video_data.id + '?height=90'
+src: location.origin + '/api/v1/storyboards/' + video_data.id + '?height=90',
+showTimestamp: true
 });
 // Enable annotations
@ -228,11 +232,24 @@ function set_time_percent(percent) {
 player.currentTime(newTime);
 }
+function play() {
+player.play();
+}
+function pause() {
+player.pause();
+}
+function stop() {
+player.pause();
+player.currentTime(0);
+}
 function toggle_play() {
 if (player.paused()) {
-player.play();
+play();
 } else {
-player.pause();
+pause();
 }
 }
@ -338,9 +355,22 @@ window.addEventListener('keydown', e => {
 switch (decoratedKey) {
 case ' ':
 case 'k':
+case 'MediaPlayPause':
 action = toggle_play;
 break;
+case 'MediaPlay':
+action = play;
+break;
+case 'MediaPause':
+action = pause;
+break;
+case 'MediaStop':
+action = stop;
+break;
 case 'ArrowUp':
 if (isPlayerFocused) {
 action = increase_volume.bind(this, 0.1);
@ -357,9 +387,11 @@ window.addEventListener('keydown', e => {
 break;
 case 'ArrowRight':
+case 'MediaFastForward':
 action = skip_seconds.bind(this, 5);
 break;
 case 'ArrowLeft':
+case 'MediaTrackPrevious':
 action = skip_seconds.bind(this, -5);
 break;
 case 'l':
@ -391,9 +423,11 @@ window.addEventListener('keydown', e => {
 break;
 case 'N':
+case 'MediaTrackNext':
 action = next_video;
 break;
 case 'P':
+case 'MediaTrackPrevious':
 // TODO: Add support to play back previous video.
 break;


@ -1,3 +1,29 @@
+var playlist_data = JSON.parse(document.getElementById('playlist_data').innerHTML);
+function add_playlist_video(target) {
+var select = target.parentNode.children[0].children[1];
+var option = select.children[select.selectedIndex];
+var url = '/playlist_ajax?action_add_video=1&redirect=false' +
+'&video_id=' + target.getAttribute('data-id') +
+'&playlist_id=' + option.getAttribute('data-plid');
+var xhr = new XMLHttpRequest();
+xhr.responseType = 'json';
+xhr.timeout = 10000;
+xhr.open('POST', url, true);
+xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
+xhr.onreadystatechange = function () {
+if (xhr.readyState == 4) {
+if (xhr.status == 200) {
+option.innerText = '✓' + option.innerText;
+}
+}
+}
+xhr.send('csrf_token=' + playlist_data.csrf_token);
+}
 function add_playlist_item(target) {
 var tile = target.parentNode.parentNode.parentNode.parentNode.parentNode;
 tile.style.display = 'none';

File diff suppressed because one or more lines are too long


@ -1,3 +1,5 @@
+var subscribe_data = JSON.parse(document.getElementById('subscribe_data').innerHTML);
 var subscribe_button = document.getElementById('subscribe');
 subscribe_button.parentNode['action'] = 'javascript:void(0)';


@ -28,6 +28,27 @@ window.addEventListener('load', function () {
 update_mode(window.localStorage.dark_mode);
 });
+var darkScheme = window.matchMedia('(prefers-color-scheme: dark)');
+var lightScheme = window.matchMedia('(prefers-color-scheme: light)');
+darkScheme.addListener(scheme_switch);
+lightScheme.addListener(scheme_switch);
+function scheme_switch (e) {
+// ignore this method if we have a preference set
+if (localStorage.getItem('dark_mode')) {
+return;
+}
+if (e.matches) {
+if (e.media.includes("dark")) {
+set_mode(true);
+} else if (e.media.includes("light")) {
+set_mode(false);
+}
+}
+}
 function set_mode (bool) {
 document.getElementById('dark_theme').media = !bool ? 'none' : '';
 document.getElementById('light_theme').media = bool ? 'none' : '';

File diff suppressed because one or more lines are too long


@ -1,3 +1,5 @@
+var video_data = JSON.parse(document.getElementById('video_data').innerHTML);
 String.prototype.supplant = function (o) {
 return this.replace(/{([^{}]*)}/g, function (a, b) {
 var r = o[b];


@ -1,3 +1,5 @@
+var watched_data = JSON.parse(document.getElementById('watched_data').innerHTML);
 function mark_watched(target) {
 var tile = target.parentNode.parentNode.parentNode.parentNode.parentNode;
 tile.style.display = 'none';


@ -0,0 +1,19 @@
#!/bin/sh
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN title CASCADE"
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN views CASCADE"
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN likes CASCADE"
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN dislikes CASCADE"
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN wilson_score CASCADE"
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN published CASCADE"
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN description CASCADE"
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN language CASCADE"
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN author CASCADE"
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN ucid CASCADE"
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN allowed_regions CASCADE"
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN is_family_friendly CASCADE"
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN genre CASCADE"
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN genre_url CASCADE"
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN license CASCADE"
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN sub_count_text CASCADE"
psql invidious kemal -c "ALTER TABLE videos DROP COLUMN author_thumbnail CASCADE"


@ -1,3 +1,14 @@
+-- Type: public.privacy
+-- DROP TYPE public.privacy;
+CREATE TYPE public.privacy AS ENUM
+(
+'Public',
+'Unlisted',
+'Private'
+);
 -- Table: public.playlists
 -- DROP TABLE public.playlists;


@ -1,10 +0,0 @@
-- Type: public.privacy
-- DROP TYPE public.privacy;
CREATE TYPE public.privacy AS ENUM
(
'Public',
'Unlisted',
'Private'
);


@ -7,23 +7,6 @@ CREATE TABLE public.videos
 id text NOT NULL,
 info text,
 updated timestamp with time zone,
-title text,
-views bigint,
-likes integer,
-dislikes integer,
-wilson_score double precision,
-published timestamp with time zone,
-description text,
-language text,
-author text,
-ucid text,
-allowed_regions text[],
-is_family_friendly boolean,
-genre text,
-genre_url text,
-license text,
-sub_count_text text,
-author_thumbnail text,
 CONSTRAINT videos_pkey PRIMARY KEY (id)
 );


@ -1,12 +1,16 @@
 version: '3'
 services:
 postgres:
-build:
-context: .
-dockerfile: docker/Dockerfile.postgres
+image: postgres:10
 restart: unless-stopped
 volumes:
 - postgresdata:/var/lib/postgresql/data
+- ./config/sql:/config/sql
+- ./docker/init-invidious-db.sh:/docker-entrypoint-initdb.d/init-invidious-db.sh
+environment:
+POSTGRES_DB: invidious
+POSTGRES_PASSWORD: kemal
+POSTGRES_USER: kemal
 healthcheck:
 test: ["CMD", "pg_isready", "-U", "postgres"]
 invidious:
@ -16,6 +20,21 @@ services:
 restart: unless-stopped
 ports:
 - "127.0.0.1:3000:3000"
+environment:
+# Adapted from ./config/config.yml
+INVIDIOUS_CONFIG: |
+channel_threads: 1
+check_tables: true
+feed_threads: 1
+db:
+user: kemal
+password: kemal
+host: postgres
+port: 5432
+dbname: invidious
+full_refresh: false
+https_only: false
+domain:
 depends_on:
 - postgres
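Because the database bootstrap moves from a custom Postgres image to an init script mounted into `docker-entrypoint-initdb.d`, an existing data volume created by the old setup will not be re-initialised. A hedged usage sketch for starting fresh with the new compose file (destructive; the volume name matches the `docker volume rm invidious_postgresdata` step quoted in the README hunk above but may differ in your project):

```bash
# Stop the old cluster and discard its database volume (DESTROYS local data)
docker-compose down
docker volume rm invidious_postgresdata

# Rebuild the invidious image and start both services; on first start the
# mounted ./docker/init-invidious-db.sh creates the schema from ./config/sql.
docker-compose build
docker-compose up -d
```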


@ -1,27 +1,20 @@
-FROM alpine:edge AS builder
-RUN apk add --no-cache crystal shards libc-dev \
-yaml-dev libxml2-dev sqlite-dev zlib-dev openssl-dev \
-sqlite-static zlib-static openssl-libs-static
+FROM crystallang/crystal:0.35.1-alpine AS builder
+RUN apk add --no-cache curl sqlite-static
 WORKDIR /invidious
 COPY ./shard.yml ./shard.yml
-RUN shards update && shards install
-RUN apk add --no-cache curl && \
-curl -Lo /etc/apk/keys/omarroth.rsa.pub https://github.com/omarroth/boringssl-alpine/releases/download/1.1.0-r0/omarroth.rsa.pub && \
-curl -Lo boringssl-dev.apk https://github.com/omarroth/boringssl-alpine/releases/download/1.1.0-r0/boringssl-dev-1.1.0-r0.apk && \
-curl -Lo lsquic.apk https://github.com/omarroth/lsquic-alpine/releases/download/2.6.3-r0/lsquic-2.6.3-r0.apk && \
-tar -xf boringssl-dev.apk && \
-tar -xf lsquic.apk
-RUN mv ./usr/lib/libcrypto.a ./lib/lsquic/src/lsquic/ext/libcrypto.a && \
-mv ./usr/lib/libssl.a ./lib/lsquic/src/lsquic/ext/libssl.a && \
-mv ./usr/lib/liblsquic.a ./lib/lsquic/src/lsquic/ext/liblsquic.a
+RUN shards update && shards install && \
+# TODO: Document build instructions
+# See https://github.com/omarroth/boringssl-alpine/blob/master/APKBUILD,
+# https://github.com/omarroth/lsquic-alpine/blob/master/APKBUILD,
+# https://github.com/omarroth/lsquic.cr/issues/1#issuecomment-631610081
+# for details building static lib
+curl -Lo ./lib/lsquic/src/lsquic/ext/liblsquic.a https://omar.yt/lsquic/liblsquic-v2.18.1.a
 COPY ./src/ ./src/
 # TODO: .git folder is required for building this is destructive.
 # See definition of CURRENT_BRANCH, CURRENT_COMMIT and CURRENT_VERSION.
 COPY ./.git/ ./.git/
 RUN crystal build ./src/invidious.cr \
---static --warnings all --error-on-warnings \
-# TODO: Remove next line, see https://github.com/crystal-lang/crystal/issues/7946
--Dmusl \
+--static --warnings all \
 --link-flags "-lxml2 -llzma"
 FROM alpine:latest
@ -30,10 +23,11 @@ WORKDIR /invidious
 RUN addgroup -g 1000 -S invidious && \
 adduser -u 1000 -S invidious -G invidious
 COPY ./assets/ ./assets/
-COPY ./config/config.yml ./config/config.yml
-RUN sed -i 's/host: \(127.0.0.1\|localhost\)/host: postgres/' config/config.yml
+COPY --chown=invidious ./config/config.yml ./config/config.yml
 COPY ./config/sql/ ./config/sql/
 COPY ./locales/ ./locales/
+RUN sed -i 's/host: \(127.0.0.1\|localhost\)/host: postgres/' config/config.yml
 COPY --from=builder /invidious/invidious .
 USER invidious
 CMD [ "/invidious/invidious" ]


@ -1,9 +0,0 @@
FROM postgres:10
ENV POSTGRES_USER postgres
ADD ./config/sql /config/sql
ADD ./docker/entrypoint.postgres.sh /entrypoint.sh
ENTRYPOINT [ "/entrypoint.sh" ]
CMD [ "postgres" ]


@ -1,31 +0,0 @@
#!/usr/bin/env bash
CMD="$@"
if [ ! -f /var/lib/postgresql/data/setupFinished ]; then
echo "### first run - setting up invidious database"
/usr/local/bin/docker-entrypoint.sh postgres &
sleep 10
until runuser -l postgres -c 'pg_isready' 2>/dev/null; do
>&2 echo "### Postgres is unavailable - waiting"
sleep 5
done
>&2 echo "### importing table schemas"
su postgres -c 'createdb invidious'
su postgres -c 'psql -c "CREATE USER kemal WITH PASSWORD '"'kemal'"'"'
su postgres -c 'psql invidious kemal < config/sql/channels.sql'
su postgres -c 'psql invidious kemal < config/sql/videos.sql'
su postgres -c 'psql invidious kemal < config/sql/channel_videos.sql'
su postgres -c 'psql invidious kemal < config/sql/users.sql'
su postgres -c 'psql invidious kemal < config/sql/session_ids.sql'
su postgres -c 'psql invidious kemal < config/sql/nonces.sql'
su postgres -c 'psql invidious kemal < config/sql/annotations.sql'
su postgres -c 'psql invidious kemal < config/sql/playlists.sql'
su postgres -c 'psql invidious kemal < config/sql/playlist_videos.sql'
su postgres -c 'psql invidious kemal < config/sql/privacy.sql'
touch /var/lib/postgresql/data/setupFinished
echo "### invidious database setup finished"
exit
fi
echo "running postgres /usr/local/bin/docker-entrypoint.sh $CMD"
exec /usr/local/bin/docker-entrypoint.sh $CMD

docker/init-invidious-db.sh Executable file

@ -0,0 +1,16 @@
#!/bin/bash
set -eou pipefail
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE USER postgres;
EOSQL
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" < config/sql/channels.sql
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" < config/sql/videos.sql
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" < config/sql/channel_videos.sql
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" < config/sql/users.sql
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" < config/sql/session_ids.sql
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" < config/sql/nonces.sql
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" < config/sql/annotations.sql
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" < config/sql/playlists.sql
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" < config/sql/playlist_videos.sql

kubernetes/.gitignore vendored Normal file

@ -0,0 +1 @@
/charts/*.tgz

kubernetes/Chart.lock Normal file

@ -0,0 +1,6 @@
dependencies:
- name: postgresql
repository: https://kubernetes-charts.storage.googleapis.com/
version: 8.3.0
digest: sha256:1feec3c396cbf27573dc201831ccd3376a4a6b58b2e7618ce30a89b8f5d707fd
generated: "2020-02-07T13:39:38.624846+01:00"

kubernetes/Chart.yaml Normal file

@ -0,0 +1,22 @@
apiVersion: v2
name: invidious
description: Invidious is an alternative front-end to YouTube
version: 1.1.0
appVersion: 0.20.1
keywords:
- youtube
- proxy
- video
- privacy
home: https://invidio.us/
icon: https://raw.githubusercontent.com/omarroth/invidious/05988c1c49851b7d0094fca16aeaf6382a7f64ab/assets/favicon-32x32.png
sources:
- https://github.com/omarroth/invidious
maintainers:
- name: Leon Klingele
email: mail@leonklingele.de
dependencies:
- name: postgresql
version: ~8.3.0
repository: "https://kubernetes-charts.storage.googleapis.com/"
engine: gotpl

kubernetes/README.md Normal file

@ -0,0 +1,41 @@
# Invidious Helm chart
Easily deploy Invidious to Kubernetes.
## Installing Helm chart
```sh
# Build Helm dependencies
$ helm dep build
# Add PostgreSQL init scripts
$ kubectl create configmap invidious-postgresql-init \
--from-file=../config/sql/channels.sql \
--from-file=../config/sql/videos.sql \
--from-file=../config/sql/channel_videos.sql \
--from-file=../config/sql/users.sql \
--from-file=../config/sql/session_ids.sql \
--from-file=../config/sql/nonces.sql \
--from-file=../config/sql/annotations.sql \
--from-file=../config/sql/playlists.sql \
--from-file=../config/sql/playlist_videos.sql
# Install Helm app to your Kubernetes cluster
$ helm install invidious ./
```
## Upgrading
```sh
# Upgrading is easy, too!
$ helm upgrade invidious ./
```
## Uninstall
```sh
# Get rid of everything (except database)
$ helm delete invidious
# To also delete the database, remove all invidious-postgresql PVCs
```


@ -0,0 +1,16 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "invidious.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "invidious.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}


@ -0,0 +1,11 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "invidious.fullname" . }}
labels:
app: {{ template "invidious.name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: {{ .Release.Name }}
data:
INVIDIOUS_CONFIG: |
{{ toYaml .Values.config | indent 4 }}


@ -0,0 +1,61 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "invidious.fullname" . }}
labels:
app: {{ template "invidious.name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: {{ .Release.Name }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: {{ template "invidious.name" . }}
release: {{ .Release.Name }}
template:
metadata:
labels:
app: {{ template "invidious.name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: {{ .Release.Name }}
spec:
securityContext:
runAsUser: {{ .Values.securityContext.runAsUser }}
runAsGroup: {{ .Values.securityContext.runAsGroup }}
fsGroup: {{ .Values.securityContext.fsGroup }}
initContainers:
- name: wait-for-postgresql
image: postgres
args:
- /bin/sh
- -c
- until pg_isready -h {{ .Values.config.db.host }} -p {{ .Values.config.db.port }} -U {{ .Values.config.db.user }}; do echo waiting for database; sleep 2; done;
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: 3000
env:
- name: INVIDIOUS_CONFIG
valueFrom:
configMapKeyRef:
key: INVIDIOUS_CONFIG
name: {{ template "invidious.fullname" . }}
securityContext:
allowPrivilegeEscalation: {{ .Values.securityContext.allowPrivilegeEscalation }}
capabilities:
drop:
- ALL
resources:
{{ toYaml .Values.resources | indent 10 }}
readinessProbe:
httpGet:
port: 3000
path: /
livenessProbe:
httpGet:
port: 3000
path: /
initialDelaySeconds: 15
restartPolicy: Always


@ -0,0 +1,18 @@
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: {{ template "invidious.fullname" . }}
labels:
app: {{ template "invidious.name" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: {{ .Release.Name }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ template "invidious.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
targetCPUUtilizationPercentage: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}


@ -0,0 +1,20 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "invidious.fullname" . }}
labels:
app: {{ template "invidious.name" . }}
chart: {{ .Chart.Name }}
release: {{ .Release.Name }}
spec:
type: {{ .Values.service.type }}
ports:
- name: http
port: {{ .Values.service.port }}
targetPort: 3000
selector:
app: {{ template "invidious.name" . }}
release: {{ .Release.Name }}
{{- if .Values.service.loadBalancerIP }}
loadBalancerIP: {{ .Values.service.loadBalancerIP }}
{{- end }}

kubernetes/values.yaml Normal file

@ -0,0 +1,56 @@
name: invidious
image:
repository: omarroth/invidious
tag: latest
pullPolicy: Always
replicaCount: 1
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 16
targetCPUUtilizationPercentage: 50
service:
type: clusterIP
port: 3000
#loadBalancerIP:
resources: {}
#requests:
# cpu: 100m
# memory: 64Mi
#limits:
# cpu: 800m
# memory: 512Mi
securityContext:
allowPrivilegeEscalation: false
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
# See https://github.com/helm/charts/tree/master/stable/postgresql
postgresql:
postgresqlUsername: kemal
postgresqlPassword: kemal
postgresqlDatabase: invidious
initdbUsername: kemal
initdbPassword: kemal
initdbScriptsConfigMap: invidious-postgresql-init
# Adapted from ../config/config.yml
config:
channel_threads: 1
feed_threads: 1
db:
user: kemal
password: kemal
host: invidious-postgresql
port: 5432
dbname: invidious
full_refresh: false
https_only: false
domain:
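Any of the values above can be overridden at install time instead of editing the file. A hedged example — the release name matches the chart's README, while the specific overrides are illustrative:

```bash
# Install the chart with a couple of overrides from kubernetes/values.yaml
helm install invidious ./ \
  --set replicaCount=2 \
  --set config.db.password=changeme
```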


@ -1,7 +1,7 @@
 {
 "`x` subscribers": "`x` Abonnenten",
 "`x` videos": "`x` Videos",
-"`x` playlists": "",
+"`x` playlists": "`x` Wiedergabelisten",
 "LIVE": "LIVE",
 "Shared `x` ago": "Vor `x` geteilt",
 "Unsubscribe": "Abbestellen",
@ -127,17 +127,17 @@
 "View JavaScript license information.": "Javascript Lizenzinformationen anzeigen.",
 "View privacy policy.": "Datenschutzerklärung einsehen.",
 "Trending": "Trending",
-"Public": "",
+"Public": "Öffentlich",
 "Unlisted": "Nicht aufgeführt",
-"Private": "",
+"Private": "Privat",
-"View all playlists": "",
+"View all playlists": "Alle Wiedergabelisten anzeigen",
-"Updated `x` ago": "",
+"Updated `x` ago": "Aktualisiert `x` vor",
-"Delete playlist `x`?": "",
+"Delete playlist `x`?": "Wiedergabeliste löschen `x`?",
-"Delete playlist": "",
+"Delete playlist": "Wiedergabeliste löschen",
-"Create playlist": "",
+"Create playlist": "Wiedergabeliste erstellen",
-"Title": "",
+"Title": "Titel",
-"Playlist privacy": "",
+"Playlist privacy": "Vertrauliche Wiedergabeliste",
-"Editing playlist `x`": "",
+"Editing playlist `x`": "Wiedergabeliste bearbeiten `x`",
 "Watch on YouTube": "Video auf YouTube ansehen",
 "Hide annotations": "Anmerkungen ausblenden",
 "Show annotations": "Anmerkungen anzeigen",


@ -8,7 +8,7 @@
 "": "`x` videos"
 },
 "`x` playlists": {
-"(\\D|^)1(\\D|$)": "`x` playlist",
+"([^.,0-9]|^)1([^.,0-9]|$)": "`x` playlist",
 "": "`x` playlists"
 },
 "LIVE": "LIVE",
@ -177,7 +177,7 @@
 "View YouTube comments": "View YouTube comments",
 "View more comments on Reddit": "View more comments on Reddit",
 "View `x` comments": {
-"(\\D|^)1(\\D|$)": "View `x` comment",
+"([^.,0-9]|^)1([^.,0-9]|$)": "View `x` comment",
 "": "View `x` comments"
 },
 "View Reddit comments": "View Reddit comments",


@ -1,13 +1,13 @@
 {
 "`x` subscribers": "`x` harpidedun",
 "`x` videos": "`x` bideo",
-"`x` playlists": "",
+"`x` playlists": "`x` erreprodukzio-zerrenda",
 "LIVE": "ZUZENEAN",
 "Shared `x` ago": "Duela `x` partekatua",
 "Unsubscribe": "Harpidetza kendu",
 "Subscribe": "Harpidetu",
 "View channel on YouTube": "Ikusi kanala YouTuben",
-"View playlist on YouTube": "",
+"View playlist on YouTube": "Ikusi erreprodukzio-zerrenda YouTuben",
 "newest": "berrienak",
 "oldest": "zaharrenak",
 "popular": "ospetsuenak",
@ -16,66 +16,66 @@
 "Previous page": "Aurreko orria",
 "Clear watch history?": "Garbitu ikusitakoen historia?",
 "New password": "Pasahitz berria",
-"New passwords must match": "",
+"New passwords must match": "Pasahitza berriek bat egin behar dute",
-"Cannot change password for Google accounts": "",
+"Cannot change password for Google accounts": "Ezin da pasahitza aldatu Google kontuetan",
-"Authorize token?": "",
+"Authorize token?": "Baimendu tokena?",
 "Authorize token for `x`?": "",
 "Yes": "Bai",
 "No": "Ez",
 "Import and Export Data": "Datuak inportatu eta esportatu",
 "Import": "Inportatu",
-"Import Invidious data": "Invidiouseko datuak inportatu",
+"Import Invidious data": "Inportatu Invidiouseko datuak",
-"Import YouTube subscriptions": "YouTubeko harpidetzak inportatu",
+"Import YouTube subscriptions": "Inportatu YouTubeko harpidetzak",
-"Import FreeTube subscriptions (.db)": "FreeTubeko harpidetzak inportatu (.db)",
+"Import FreeTube subscriptions (.db)": "Inportatu FreeTubeko harpidetzak (.db)",
-"Import NewPipe subscriptions (.json)": "NewPipeko harpidetzak inportatu (.json)",
+"Import NewPipe subscriptions (.json)": "Inportatu NewPipeko harpidetzak (.json)",
-"Import NewPipe data (.zip)": "NewPipeko datuak inportatu (.zip)",
+"Import NewPipe data (.zip)": "Inportatu NewPipeko datuak (.zip)",
 "Export": "Esportatu",
 "Export subscriptions as OPML": "Esportatu harpidetzak OPML bezala",
-"Export subscriptions as OPML (for NewPipe & FreeTube)": "Harpidetzak OPML bezala esportatu (NewPipe eta FreeTuberako)",
+"Export subscriptions as OPML (for NewPipe & FreeTube)": "Esportatu harpidetzak OPML bezala (NewPipe eta FreeTuberako)",
-"Export data as JSON": "Datuak JSON bezala esportatu",
+"Export data as JSON": "Esportatu datuak JSON bezala",
 "Delete account?": "Kontua ezabatu?",
 "History": "Historia",
 "An alternative front-end to YouTube": "YouTuberako interfaze alternatibo bat",
 "JavaScript license information": "JavaScript lizentzia informazioa",
 "source": "iturburua",
 "Log in": "Saioa hasi",
-"Log in/register": "Saioa hasi/Izena eman",
+"Log in/register": "Hasi saioa / Eman izena",
-"Log in with Google": "Googlekin hasi saioa",
+"Log in with Google": "Hasi saioa Googlekin",
 "User ID": "Erabiltzaile IDa",
 "Password": "Pasahitza",
-"Time (h:mm:ss):": "Denbora (o:mm:ss):",
+"Time (h:mm:ss):": "Denbora (h:mm:ss):",
-"Text CAPTCHA": "Testu CAPTCHA",
+"Text CAPTCHA": "CAPTCHA testua",
-"Image CAPTCHA": "Irudi CAPTCHA",
+"Image CAPTCHA": "CAPTCHA irudia",
-"Sign In": "",
+"Sign In": "Hasi saioa",
-"Register": "",
+"Register": "Eman izena",
-"E-mail": "",
+"E-mail": "E-posta",
 "Google verification code": "",
-"Preferences": "",
+"Preferences": "Hobespenak",
-"Player preferences": "",
+"Player preferences": "Erreproduzigailuaren hobespenak",
 "Always loop: ": "",
-"Autoplay: ": "",
+"Autoplay: ": "Automatikoki erreproduzitu: ",
 "Play next by default: ": "",
-"Autoplay next video: ": "",
+"Autoplay next video: ": "Erreproduzitu automatikoki hurrengo bideoa: ",
 "Listen by default: ": "",
 "Proxy videos: ": "",
 "Default speed: ": "",
-"Preferred video quality: ": "",
+"Preferred video quality: ": "Hobetsitako bideoaren kalitatea: ",
-"Player volume: ": "",
+"Player volume: ": "Erreproduzigailuaren bolumena: ",
-"Default comments: ": "",
+"Default comments: ": "Lehenetsitako iruzkinak: ",
-"youtube": "",
+"youtube": "youtube",
-"reddit": "",
+"reddit": "reddit",
-"Default captions: ": "",
+"Default captions: ": "Lehenetsitako azpitituluak: ",
 "Fallback captions: ": "",
-"Show related videos: ": "",
+"Show related videos: ": "Erakutsi erlazionatutako bideoak: ",
-"Show annotations by default: ": "",
+"Show annotations by default: ": "Erakutsi oharrak modu lehenetsian: ",
-"Visual preferences": "",
+"Visual preferences": "Hobespen bisualak",
-"Player style: ": "",
+"Player style: ": "Erreproduzigailu mota: ",
-"Dark mode: ": "",
+"Dark mode: ": "Gai iluna: ",
-"Theme: ": "",
+"Theme: ": "Gaia: ",
-"dark": "",
+"dark": "iluna",
-"light": "",
+"light": "argia",
 "Thin mode: ": "",
-"Subscription preferences": "",
+"Subscription preferences": "Harpidetzen hobespenak",
 "Show annotations by default for subscribed channels: ": "",
 "Redirect homepage to feed: ": "",
 "Number of videos shown in feed: ": "",

locales/hu-HU.json Normal file

@ -0,0 +1,335 @@
{
"`x` subscribers": "`x` feliratkozó",
"`x` videos": "`x` videó",
"`x` playlists": "`x` playlist",
"LIVE": "ÉLŐ",
"Shared `x` ago": "`x` óta megosztva",
"Unsubscribe": "Leiratkozás",
"Subscribe": "Feliratkozás",
"View channel on YouTube": "Csatokrna megtekintése a YouTube-on",
"View playlist on YouTube": "Playlist megtekintése a YouTube-on",
"newest": "legújabb",
"oldest": "legrégibb",
"popular": "népszerű",
"last": "utolsó",
"Next page": "Következő oldal",
"Previous page": "Előző oldal",
"Clear watch history?": "Megtekintési napló törlése?",
"New password": "Új jelszó",
"New passwords must match": "Az új jelszavaknak egyezniük kell",
"Cannot change password for Google accounts": "Google fiók jelszavát nem lehet cserélni",
"Authorize token?": "Token felhatalmazása?",
"Authorize token for `x`?": "Token felhatalmazása `x`-ra?",
"Yes": "Igen",
"No": "Nem",
"Import and Export Data": "Adatok importálása és exportálása",
"Import": "Importálás",
"Import Invidious data": "Invidious adatainak importálása",
"Import YouTube subscriptions": "YouTube feliratkozások importálása",
"Import FreeTube subscriptions (.db)": "FreeTube feliratkozások importálása (.db)",
"Import NewPipe subscriptions (.json)": "NewPipe feliratkozások importálása (.json)",
"Import NewPipe data (.zip)": "NewPipe adatainak importálása (.zip)",
"Export": "Exportálás",
"Export subscriptions as OPML": "Feliratkozások exportálása OPML-ként",
"Export subscriptions as OPML (for NewPipe & FreeTube)": "Feliratkozások exportálása OPML-ként (NewPipe és FreeTube számára)",
"Export data as JSON": "Adat exportálása JSON-ként",
"Delete account?": "Fiók törlése?",
"History": "Megtekintési napló",
"An alternative front-end to YouTube": "Alternatív YouTube front-end",
"JavaScript license information": "JavaScript licensz információ",
"source": "forrás",
"Log in": "Bejelentkezés",
"Log in/register": "Bejelentkezés/Regisztráció",
"Log in with Google": "Bejelentkezés Google fiókkal",
"User ID": "Felhasználó-ID",
"Password": "Jelszó",
"Time (h:mm:ss):": "Idő (h:mm:ss):",
"Text CAPTCHA": "Szöveg-CAPTCHA",
"Image CAPTCHA": "Kép-CAPTCHA",
"Sign In": "Bejelentkezés",
"Register": "Regisztráció",
"E-mail": "E-mail",
"Google verification code": "Google verifikációs kód",
"Preferences": "Beállítások",
"Player preferences": "Lejátszó beállítások",
"Always loop: ": "Mindig loop-ol: ",
"Autoplay: ": "Automatikus lejátszás: ",
"Play next by default: ": "Következő lejátszása alapértelmezésben: ",
"Autoplay next video: ": "Következő automatikus lejátszása: ",
"Listen by default: ": "Hallgatás alapértelmezésben: ",
"Proxy videos: ": "Proxy videók: ",
"Default speed: ": "Alapértelmezett sebesség: ",
"Preferred video quality: ": "Kívánt video minőség: ",
"Player volume: ": "Hangerő: ",
"Default comments: ": "Alapértelmezett kommentek: ",
"youtube": "YouTube",
"reddit": "Reddit",
"Default captions: ": "Alapértelmezett feliratok: ",
"Fallback captions: ": "Másodlagos feliratok: ",
"Show related videos: ": "Kapcsolódó videók mutatása: ",
"Show annotations by default: ": "Annotációk mutatása alapértelmetésben: ",
"Visual preferences": "Vizuális preferenciák",
"Player style: ": "Lejátszó stílusa: ",
"Dark mode: ": "Sötét mód: ",
"Theme: ": "Téma: ",
"dark": "Sötét",
"light": "Világos",
"Thin mode: ": "Vékony mód: ",
"Subscription preferences": "Feliratkozási beállítások",
"Show annotations by default for subscribed channels: ": "Annotációk mutatása alapértelmezésben feliratkozott csatornák esetében: ",
"Redirect homepage to feed: ": "Kezdő oldal átirányitása a feed-re: ",
"Number of videos shown in feed: ": "Feed-ben mutatott videók száma: ",
"Sort videos by: ": "Videók sorrendje: ",
"published": "közzétéve",
"published - reverse": "közzétéve (ford.)",
"alphabetically": "ABC sorrend",
"alphabetically - reverse": "ABC sorrend (ford.)",
"channel name": "csatorna neve",
"channel name - reverse": "csatorna neve (ford.)",
"Only show latest video from channel: ": "Csak a legutolsó videó mutatása a csatornából: ",
"Only show latest unwatched video from channel: ": "Csak a legutolsó nem megtekintett videó mutatása a csatornából: ",
"Only show unwatched: ": "Csak a nem megtekintettek mutatása: ",
"Only show notifications (if there are any): ": "Csak értesítések mutatása (ha van): ",
"Enable web notifications": "Web értesítések bekapcsolása",
"`x` uploaded a video": "`x` feltöltött egy videót",
"`x` is live": "`x` élő",
"Data preferences": "Adat beállítások",
"Clear watch history": "Megtekintési napló törlése",
"Import/export data": "Adat Import/Export",
"Change password": "Jelszócsere",
"Manage subscriptions": "Feliratkozások kezelése",
"Manage tokens": "Tokenek kezelése",
"Watch history": "Megtekintési napló",
"Delete account": "Fiók törlése",
"Administrator preferences": "Adminisztrátor beállítások",
"Default homepage: ": "Alapértelmezett honlap: ",
"Feed menu: ": "Feed menü: ",
"Top enabled: ": "Top lista engedélyezve: ",
"CAPTCHA enabled: ": "CAPTCHA engedélyezve: ",
"Login enabled: ": "Bejelentkezés engedélyezve: ",
"Registration enabled: ": "Registztráció engedélyezve: ",
"Report statistics: ": "Statisztikák gyűjtése: ",
"Save preferences": "Beállítások mentése",
"Subscription manager": "Feliratkozás kezelő",
"Token manager": "Token kezelő",
"Token": "Token",
"`x` subscriptions": "`x` feliratkozás",
"`x` tokens": "`x` token",
"Import/export": "Import/export",
"unsubscribe": "leiratkozás",
"revoke": "visszavonás",
"Subscriptions": "Feliratkozások",
"`x` unseen notifications": "`x` kimaradt érdesítés",
"search": "keresés",
"Log out": "Kijelentkezés",
"Released under the AGPLv3 by Omar Roth.": "Omar Roth által release-elve AGPLv3 licensz alatt.",
"Source available here.": "Forrás elérhető itt.",
"View JavaScript license information.": "JavaScript licensz inforkációk megtekintése.",
"View privacy policy.": "Adatvédelem irányelv megtekintése.",
"Trending": "Trending",
"Public": "Nyilvános",
"Unlisted": "Nem nyilvános",
"Private": "Privát",
"View all playlists": "Minden playlist megtekintése",
"Updated `x` ago": "Frissitve `x`",
"Delete playlist `x`?": "`x` playlist törlése?",
"Delete playlist": "Playlist törlése",
"Create playlist": "Playlist létrehozása",
"Title": "Címe",
"Playlist privacy": "Playlist láthatósága",
"Editing playlist `x`": "`x` playlist szerkesztése",
"Watch on YouTube": "Megtekintés a YouTube-on",
"Hide annotations": "Annotációk elrejtése",
"Show annotations": "Annotációk mutatása",
"Genre: ": "Zsáner: ",
"License: ": "Licensz: ",
"Family friendly? ": "Családbarát? ",
"Wilson score: ": "Wilson-ponstszém: ",
"Engagement: ": "Engagement: ",
"Whitelisted regions: ": "Engedélyezett régiók: ",
"Blacklisted regions: ": "Tiltott régiók: ",
"Shared `x`": "Megosztva `x`",
"`x` views": "`x` megtekintés",
"Premieres in `x`": "Premier `x`",
"Premieres `x`": "Premier `x`",
"Hi! Looks like you have JavaScript turned off. Click here to view comments, keep in mind they may take a bit longer to load.": "Hi! Looks like you have JavaScript turned off. Click here to view comments, keep in mind they may take a bit longer to load.",
"View YouTube comments": "YouTube kommentek megtekintése",
"View more comments on Reddit": "További Reddit kommentek megtekintése",
"View `x` comments": "`x` komment megtekintése",
"View Reddit comments": "Reddit kommentek megtekintése",
"Hide replies": "Válaszok elrejtése",
"Show replies": "Válaszok mutatása",
"Incorrect password": "Helytelen jelszó",
"Quota exceeded, try again in a few hours": "Kvóta túllépve, próbálkozz pár órával később",
"Unable to log in, make sure two-factor authentication (Authenticator or SMS) is turned on.": "Sikertelen belépés, győződj meg róla hogy a 2FA (Authenticator vagy SMS) engedélyezve van.",
"Login failed. This may be because two-factor authentication is not turned on for your account.": "Sikertelen belépés, győződj meg róla hogy a 2FA (Authenticator vagy SMS) engedélyezve van.",
"Wrong answer": "Rossz válasz",
"Erroneous CAPTCHA": "Hibás CAPTCHA",
"CAPTCHA is a required field": "A CAPTCHA kötelező",
"User ID is a required field": "A felhasználó-ID kötelező",
"Password is a required field": "A jelszó kötelező",
"Wrong username or password": "Rossz felhasználónév vagy jelszó",
"Please sign in using 'Log in with Google'": "Kérem, jelentkezzen be a \"Bejelentkezés Google-el\"",
"Password cannot be empty": "A jelszó nem lehet üres",
"Password cannot be longer than 55 characters": "A jelszó nem lehet hosszabb 55 betűnél",
"Please log in": "Kérem lépjen be",
"Invidious Private Feed for `x`": "`x` Invidious privát feed-je",
"channel:`x`": "`x` csatorna",
"Deleted or invalid channel": "Törölt vagy nemlétező csatorna",
"This channel does not exist.": "Ez a csatorna nem létezik.",
"Could not get channel info.": "Nem megszerezhető a csatorna információ.",
"Could not fetch comments": "Nem megszerezhetőek a kommentek",
"View `x` replies": "`x` válasz megtekintése",
"`x` ago": "`x` óta",
"Load more": "További betöltése",
"`x` points": "`x` pont",
"Could not create mix.": "Nem tudok mix-et készíteni.",
"Empty playlist": "Üres playlist",
"Not a playlist.": "Nem playlist.",
"Playlist does not exist.": "Nem létező playlist.",
"Could not pull trending pages.": "Nem tudom letölteni a trendek adatait.",
"Hidden field \"challenge\" is a required field": "A rejtett \"challenge\" mező kötelező",
"Hidden field \"token\" is a required field": "A rejtett \"token\" mező kötelező",
"Erroneous challenge": "Hibás challenge",
"Erroneous token": "Hibás token",
"No such user": "Nincs ilyen felhasználó",
"Token is expired, please try again": "Lejárt token, kérem próbáld újra",
"English": "",
"English (auto-generated)": "English (auto-genererat)",
"Afrikaans": "",
"Albanian": "",
"Amharic": "",
"Arabic": "",
"Armenian": "",
"Azerbaijani": "",
"Bangla": "",
"Basque": "",
"Belarusian": "",
"Bosnian": "",
"Bulgarian": "",
"Burmese": "",
"Catalan": "",
"Cebuano": "",
"Chinese (Simplified)": "",
"Chinese (Traditional)": "",
"Corsican": "",
"Croatian": "",
"Czech": "",
"Danish": "",
"Dutch": "",
"Esperanto": "",
"Estonian": "",
"Filipino": "",
"Finnish": "",
"French": "",
"Galician": "",
"Georgian": "",
"German": "",
"Greek": "",
"Gujarati": "",
"Haitian Creole": "",
"Hausa": "",
"Hawaiian": "",
"Hebrew": "",
"Hindi": "",
"Hmong": "",
"Hungarian": "",
"Icelandic": "",
"Igbo": "",
"Indonesian": "",
"Irish": "",
"Italian": "",
"Japanese": "",
"Javanese": "",
"Kannada": "",
"Kazakh": "",
"Khmer": "",
"Korean": "",
"Kurdish": "",
"Kyrgyz": "",
"Lao": "",
"Latin": "",
"Latvian": "",
"Lithuanian": "",
"Luxembourgish": "",
"Macedonian": "",
"Malagasy": "",
"Malay": "",
"Malayalam": "",
"Maltese": "",
"Maori": "",
"Marathi": "",
"Mongolian": "",
"Nepali": "",
"Norwegian Bokmål": "",
"Nyanja": "",
"Pashto": "",
"Persian": "",
"Polish": "",
"Portuguese": "",
"Punjabi": "",
"Romanian": "",
"Russian": "",
"Samoan": "",
"Scottish Gaelic": "",
"Serbian": "",
"Shona": "",
"Sindhi": "",
"Sinhala": "",
"Slovak": "",
"Slovenian": "",
"Somali": "",
"Southern Sotho": "",
"Spanish": "",
"Spanish (Latin America)": "",
"Sundanese": "",
"Swahili": "",
"Swedish": "",
"Tajik": "",
"Tamil": "",
"Telugu": "",
"Thai": "",
"Turkish": "",
"Ukrainian": "",
"Urdu": "",
"Uzbek": "",
"Vietnamese": "",
"Welsh": "",
"Western Frisian": "",
"Xhosa": "",
"Yiddish": "",
"Yoruba": "",
"Zulu": "",
"`x` years": "`x` év",
"`x` months": "`x` hónap",
"`x` weeks": "`x` hét",
"`x` days": "`x` nap",
"`x` hours": "`x` óra",
"`x` minutes": "`x` perc",
"`x` seconds": "`x` másodperc",
"Fallback comments: ": "Másodlagos kommentek: ",
"Popular": "Népszerű",
"Top": "Top",
"About": "Leírás",
"Rating: ": "Besorolás: ",
"Language: ": "Nyelv: ",
"View as playlist": "Megtekintés playlist-ként",
"Default": "Alapértelmezett",
"Music": "Zene",
"Gaming": "Játékok",
"News": "Hírek",
"Movies": "Filmek",
"Download": "Letöltés",
"Download as: ": "Letöltés mint: ",
"%A %B %-d, %Y": "",
"(edited)": "(szerkesztve)",
"YouTube comment permalink": "YouTube komment permalink",
"permalink": "permalink",
"`x` marked it with a ❤": "`x` jelölte ❤-vel",
"Audio mode": "Audio mód",
"Video mode": "Video mód",
"Videos": "Videók",
"Playlists": "Playlistek",
"Community": "Közösség",
"Current version: ": "Jelenlegi verzió: "
}

View File

@ -1,13 +1,13 @@
{ {
"`x` subscribers": { "`x` subscribers.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` iscritto", "([^.,0-9]|^)1([^.,0-9]|$)": "`x` iscritto",
"": "`x` iscritti" "": "`x` iscritti."
}, },
"`x` videos": { "`x` videos.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` video", "([^.,0-9]|^)1([^.,0-9]|$)": "`x` video",
"": "`x` video" "": "`x` video."
}, },
"`x` playlists": "", "`x` playlists": "`x` playlist",
"LIVE": "IN DIRETTA", "LIVE": "IN DIRETTA",
"Shared `x` ago": "Condiviso `x` fa", "Shared `x` ago": "Condiviso `x` fa",
"Unsubscribe": "Disiscriviti", "Unsubscribe": "Disiscriviti",
@ -75,9 +75,9 @@
"Show related videos: ": "Mostra video correlati: ", "Show related videos: ": "Mostra video correlati: ",
"Show annotations by default: ": "Mostra le annotazioni in modo predefinito: ", "Show annotations by default: ": "Mostra le annotazioni in modo predefinito: ",
"Visual preferences": "Preferenze grafiche", "Visual preferences": "Preferenze grafiche",
"Player style: ": "Stile riproduttore", "Player style: ": "Stile riproduttore: ",
"Dark mode: ": "Tema scuro: ", "Dark mode: ": "Tema scuro: ",
"Theme: ": "Tema", "Theme: ": "Tema: ",
"dark": "scuro", "dark": "scuro",
"light": "chiaro", "light": "chiaro",
"Thin mode: ": "Modalità per connessioni lente: ", "Thin mode: ": "Modalità per connessioni lente: ",
@ -110,7 +110,7 @@
"Administrator preferences": "Preferenze amministratore", "Administrator preferences": "Preferenze amministratore",
"Default homepage: ": "Pagina principale predefinita: ", "Default homepage: ": "Pagina principale predefinita: ",
"Feed menu: ": "Menu iscrizioni: ", "Feed menu: ": "Menu iscrizioni: ",
"Top enabled: ": "", "Top enabled: ": "Top abilitato: ",
"CAPTCHA enabled: ": "CAPTCHA attivati: ", "CAPTCHA enabled: ": "CAPTCHA attivati: ",
"Login enabled: ": "Accesso attivato: ", "Login enabled: ": "Accesso attivato: ",
"Registration enabled: ": "Registrazione attivata: ", "Registration enabled: ": "Registrazione attivata: ",
@ -119,40 +119,40 @@
"Subscription manager": "Gestione delle iscrizioni", "Subscription manager": "Gestione delle iscrizioni",
"Token manager": "Gestione dei gettoni", "Token manager": "Gestione dei gettoni",
"Token": "Gettone", "Token": "Gettone",
"`x` subscriptions": { "`x` subscriptions.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` iscrizione", "([^.,0-9]|^)1([^.,0-9]|$)": "`x` iscrizione",
"": "`x` iscrizioni" "": "`x` iscrizioni."
}, },
"`x` tokens": { "`x` tokens.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` gettone", "([^.,0-9]|^)1([^.,0-9]|$)": "`x` gettone",
"": "`x` gettoni" "": "`x` gettoni."
}, },
"Import/export": "Importa/esporta", "Import/export": "Importa/esporta",
"unsubscribe": "disiscriviti", "unsubscribe": "disiscriviti",
"revoke": "revoca", "revoke": "revoca",
"Subscriptions": "Iscrizioni", "Subscriptions": "Iscrizioni",
"`x` unseen notifications": { "`x` unseen notifications.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` notifica non visualizzata", "([^.,0-9]|^)1([^.,0-9]|$)": "`x` notifica non visualizzata",
"": "`x` notifiche non visualizzate" "": "`x` notifiche non visualizzate."
}, },
"search": "Cerca", "search": "Cerca",
"Log out": "Esci", "Log out": "Esci",
"Released under the AGPLv3 by Omar Roth.": "Pubblicato con licenza AGPLv3 da Omar Roth.", "Released under the AGPLv3 by Omar Roth.": "Pubblicato con licenza AGPLv3 da Omar Roth.",
"Source available here.": "Codice sorgente.", "Source available here.": "Codice sorgente.",
"View JavaScript license information.": "Guarda le informazioni di licenza del codice JavaScript.", "View JavaScript license information.": "Guarda le informazioni di licenza del codice JavaScript.",
"View privacy policy.": "Vedi la politica sulla privacy", "View privacy policy.": "Vedi la politica sulla privacy.",
"Trending": "Tendenze", "Trending": "Tendenze",
"Public": "", "Public": "Pubblico",
"Unlisted": "Non elencati", "Unlisted": "Non elencati",
"Private": "", "Private": "Privato",
"View all playlists": "", "View all playlists": "Visualizza tutte le playlist",
"Updated `x` ago": "", "Updated `x` ago": "Aggiornato `x` fa",
"Delete playlist `x`?": "", "Delete playlist `x`?": "Eliminare la playlist `x`?",
"Delete playlist": "", "Delete playlist": "Elimina playlist",
"Create playlist": "", "Create playlist": "Crea playlist",
"Title": "", "Title": "Titolo",
"Playlist privacy": "", "Playlist privacy": "Privacy playlist",
"Editing playlist `x`": "", "Editing playlist `x`": "Modificando la playlist `x`",
"Watch on YouTube": "Guarda su YouTube", "Watch on YouTube": "Guarda su YouTube",
"Hide annotations": "Nascondi annotazioni", "Hide annotations": "Nascondi annotazioni",
"Show annotations": "Mostra annotazioni", "Show annotations": "Mostra annotazioni",
@ -164,12 +164,12 @@
"Whitelisted regions: ": "Regioni in lista bianca: ", "Whitelisted regions: ": "Regioni in lista bianca: ",
"Blacklisted regions: ": "Regioni in lista nera: ", "Blacklisted regions: ": "Regioni in lista nera: ",
"Shared `x`": "Condiviso `x`", "Shared `x`": "Condiviso `x`",
"`x` views": { "`x` views.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` visualizzazione", "([^.,0-9]|^)1([^.,0-9]|$)": "`x` visualizzazione",
"": "`x` visualizzazioni" "": "`x` visualizzazioni."
}, },
"Premieres in `x`": "", "Premieres in `x`": "In anteprima in `x`",
"Premieres `x`": "", "Premieres `x`": "In anteprima `x`",
"Hi! Looks like you have JavaScript turned off. Click here to view comments, keep in mind they may take a bit longer to load.": "Ciao! Sembra che tu abbia disattivato JavaScript. Clicca qui per visualizzare i commenti. Considera che potrebbe volerci più tempo.", "Hi! Looks like you have JavaScript turned off. Click here to view comments, keep in mind they may take a bit longer to load.": "Ciao! Sembra che tu abbia disattivato JavaScript. Clicca qui per visualizzare i commenti. Considera che potrebbe volerci più tempo.",
"View YouTube comments": "Visualizza i commenti da YouTube", "View YouTube comments": "Visualizza i commenti da YouTube",
"View more comments on Reddit": "Visualizza più commenti su Reddit", "View more comments on Reddit": "Visualizza più commenti su Reddit",
@ -198,15 +198,15 @@
"This channel does not exist.": "Questo canale non esiste.", "This channel does not exist.": "Questo canale non esiste.",
"Could not get channel info.": "Impossibile ottenere le informazioni del canale.", "Could not get channel info.": "Impossibile ottenere le informazioni del canale.",
"Could not fetch comments": "Impossibile recuperare i commenti", "Could not fetch comments": "Impossibile recuperare i commenti",
"View `x` replies": { "View `x` replies.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "Visualizza `x` risposta", "([^.,0-9]|^)1([^.,0-9]|$)": "Visualizza `x` risposta",
"": "Visualizza `x` risposte" "": "Visualizza `x` risposte."
}, },
"`x` ago": "`x` fa", "`x` ago": "`x` fa",
"Load more": "Carica altro", "Load more": "Carica altro",
"`x` points": { "`x` points.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` punto", "([^.,0-9]|^)1([^.,0-9]|$)": "`x` punto",
"": "`x` punti" "": "`x` punti."
}, },
"Could not create mix.": "Impossibile creare il mix.", "Could not create mix.": "Impossibile creare il mix.",
"Empty playlist": "Playlist vuota", "Empty playlist": "Playlist vuota",
@ -325,33 +325,33 @@
"Yiddish": "Yiddish", "Yiddish": "Yiddish",
"Yoruba": "Yoruba", "Yoruba": "Yoruba",
"Zulu": "Zulu", "Zulu": "Zulu",
"`x` years": { "`x` years.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` anno", "([^.,0-9]|^)1([^.,0-9]|$)": "`x` anno",
"": "`x` anni" "": "`x` anni."
}, },
"`x` months": { "`x` months.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` mese", "([^.,0-9]|^)1([^.,0-9]|$)": "`x` mese",
"": "`x` mesi" "": "`x` mesi."
}, },
"`x` weeks": { "`x` weeks.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` settimana", "([^.,0-9]|^)1([^.,0-9]|$)": "`x` settimana",
"": "`x` settimane" "": "`x` settimane."
}, },
"`x` days": { "`x` days.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` giorno", "([^.,0-9]|^)1([^.,0-9]|$)": "`x` giorno",
"": "`x` giorni" "": "`x` giorni."
}, },
"`x` hours": { "`x` hours.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` ora", "([^.,0-9]|^)1([^.,0-9]|$)": "`x` ora",
"": "`x` ore" "": "`x` ore."
}, },
"`x` minutes": { "`x` minutes.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` minuto", "([^.,0-9]|^)1([^.,0-9]|$)": "`x` minuto",
"": "`x` minuti" "": "`x` minuti."
}, },
"`x` seconds": { "`x` seconds.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` secondo", "([^.,0-9]|^)1([^.,0-9]|$)": "`x` secondo",
"": "`x` secondi" "": "`x` secondi."
}, },
"Fallback comments: ": "Commenti alternativi: ", "Fallback comments: ": "Commenti alternativi: ",
"Popular": "Popolare", "Popular": "Popolare",
@ -370,7 +370,7 @@
"%A %B %-d, %Y": "%A %-d %B %Y", "%A %B %-d, %Y": "%A %-d %B %Y",
"(edited)": "(modificato)", "(edited)": "(modificato)",
"YouTube comment permalink": "Link permanente al commento di YouTube", "YouTube comment permalink": "Link permanente al commento di YouTube",
"permalink": "", "permalink": "permalink",
"`x` marked it with a ❤": "`x` l'ha contrassegnato con un ❤", "`x` marked it with a ❤": "`x` l'ha contrassegnato con un ❤",
"Audio mode": "Modalità audio", "Audio mode": "Modalità audio",
"Video mode": "Modalità video", "Video mode": "Modalità video",

View File

@ -8,7 +8,7 @@
"": "`x` 個の動画" "": "`x` 個の動画"
}, },
"`x` playlists": { "`x` playlists": {
"(\\D|^)1(\\D|$)": "`x` 個の再生リスト", "([^.,0-9]|^)1([^.,0-9]|$)": "`x` 個の再生リスト",
"": "`x` 個の再生リスト" "": "`x` 個の再生リスト"
}, },
"LIVE": "ライブ", "LIVE": "ライブ",
@ -177,7 +177,7 @@
"View YouTube comments": "YouTube のコメントを見る", "View YouTube comments": "YouTube のコメントを見る",
"View more comments on Reddit": "Reddit でコメントをもっと見る", "View more comments on Reddit": "Reddit でコメントをもっと見る",
"View `x` comments": { "View `x` comments": {
"(\\D|^)1(\\D|$)": "`x` 件のコメントを見る", "([^.,0-9]|^)1([^.,0-9]|$)": "`x` 件のコメントを見る",
"": "`x` 件のコメントを見る" "": "`x` 件のコメントを見る"
}, },
"View Reddit comments": "Reddit のコメントを見る", "View Reddit comments": "Reddit のコメントを見る",

View File

@ -25,13 +25,13 @@
"Import and Export Data": "Importer- og eksporter data", "Import and Export Data": "Importer- og eksporter data",
"Import": "Importer", "Import": "Importer",
"Import Invidious data": "Importer Invidious-data", "Import Invidious data": "Importer Invidious-data",
"Import YouTube subscriptions": "Importer YouTube-abonnenter", "Import YouTube subscriptions": "Importer YouTube-abonnementer",
"Import FreeTube subscriptions (.db)": "Importer FreeTube-abonnenter (.db)", "Import FreeTube subscriptions (.db)": "Importer FreeTube-abonnementer (.db)",
"Import NewPipe subscriptions (.json)": "Importer NewPipe-abonnenter (.json)", "Import NewPipe subscriptions (.json)": "Importer NewPipe-abonnementer (.json)",
"Import NewPipe data (.zip)": "Importer NewPipe-data (.zip)", "Import NewPipe data (.zip)": "Importer NewPipe-data (.zip)",
"Export": "Eksporter", "Export": "Eksporter",
"Export subscriptions as OPML": "Eksporter abonnenter som OPML", "Export subscriptions as OPML": "Eksporter abonnementer som OPML",
"Export subscriptions as OPML (for NewPipe & FreeTube)": "Eksporter abonnenter som OPML (for NewPipe og FreeTube)", "Export subscriptions as OPML (for NewPipe & FreeTube)": "Eksporter abonnementer som OPML (for NewPipe og FreeTube)",
"Export data as JSON": "Eksporter data som JSON", "Export data as JSON": "Eksporter data som JSON",
"Delete account?": "Slett konto?", "Delete account?": "Slett konto?",
"History": "Historikk", "History": "Historikk",

View File

@ -19,7 +19,7 @@
"New passwords must match": "Nowe hasła muszą być identyczne", "New passwords must match": "Nowe hasła muszą być identyczne",
"Cannot change password for Google accounts": "Nie można zmienić hasła do konta Google", "Cannot change password for Google accounts": "Nie można zmienić hasła do konta Google",
"Authorize token?": "Autoryzować token?", "Authorize token?": "Autoryzować token?",
"Authorize token for `x`?": "", "Authorize token for `x`?": "Autoryzować token dla `x`?",
"Yes": "Tak", "Yes": "Tak",
"No": "Nie", "No": "Nie",
"Import and Export Data": "Import i eksport danych", "Import and Export Data": "Import i eksport danych",

View File

@ -1,7 +1,7 @@
{ {
"`x` subscribers": "`x` inscritos", "`x` subscribers": "`x` inscritos",
"`x` videos": "`x` videos", "`x` videos": "`x` videos",
"`x` playlists": "", "`x` playlists": "`x` lista de reprodução",
"LIVE": "AO VIVO", "LIVE": "AO VIVO",
"Shared `x` ago": "Compartilhado `x` atrás", "Shared `x` ago": "Compartilhado `x` atrás",
"Unsubscribe": "Desinscrever-se", "Unsubscribe": "Desinscrever-se",
@ -325,11 +325,11 @@
"%A %B %-d, %Y": "%A %-d %B %Y", "%A %B %-d, %Y": "%A %-d %B %Y",
"(edited)": "(editado)", "(edited)": "(editado)",
"YouTube comment permalink": "Link permanente do comentário do YouTube", "YouTube comment permalink": "Link permanente do comentário do YouTube",
"permalink": "", "permalink": "Link permanente",
"`x` marked it with a ❤": "`x` foi marcado como ❤", "`x` marked it with a ❤": "`x` foi marcado como ❤",
"Audio mode": "Modo de audio", "Audio mode": "Modo de audio",
"Video mode": "Modo de video", "Video mode": "Modo de video",
"Videos": "Videos", "Videos": "Vídeos",
"Playlists": "Listas de reprodução", "Playlists": "Listas de reprodução",
"Community": "Comunidade", "Community": "Comunidade",
"Current version: ": "Versão atual: " "Current version: ": "Versão atual: "

387
locales/pt-PT.json Normal file
View File

@ -0,0 +1,387 @@
{
"`x` subscribers.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` subscritores.",
"": "`x` subscritores."
},
"`x` videos.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` vídeos.",
"": "`x` vídeos."
},
"`x` playlists.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` listas de reprodução.",
"": "`x` listas de reprodução."
},
"LIVE": "Em direto",
"Shared `x` ago": "Partilhado `x` atrás",
"Unsubscribe": "Anular subscrição",
"Subscribe": "Subscrever",
"View channel on YouTube": "Ver canal no YouTube",
"View playlist on YouTube": "Ver lista de reprodução no YouTube",
"newest": "mais recentes",
"oldest": "mais antigos",
"popular": "popular",
"last": "últimos",
"Next page": "Próxima página",
"Previous page": "Página anterior",
"Clear watch history?": "Limpar histórico de reprodução?",
"New password": "Nova palavra-chave",
"New passwords must match": "As novas palavra-chaves devem corresponder",
"Cannot change password for Google accounts": "Não é possível alterar palavra-chave para contas do Google",
"Authorize token?": "Autorizar token?",
"Authorize token for `x`?": "Autorizar token para `x`?",
"Yes": "Sim",
"No": "Não",
"Import and Export Data": "Importar e Exportar Dados",
"Import": "Importar",
"Import Invidious data": "Importar dados do Invidious",
"Import YouTube subscriptions": "Importar subscrições do YouTube",
"Import FreeTube subscriptions (.db)": "Importar subscrições do FreeTube (.db)",
"Import NewPipe subscriptions (.json)": "Importar subscrições do NewPipe (.json)",
"Import NewPipe data (.zip)": "Importar dados do NewPipe (.zip)",
"Export": "Exportar",
"Export subscriptions as OPML": "Exportar subscrições como OPML",
"Export subscriptions as OPML (for NewPipe & FreeTube)": "Exportar subscrições como OPML (para NewPipe e FreeTube)",
"Export data as JSON": "Exportar dados como JSON",
"Delete account?": "Eliminar conta?",
"History": "Histórico",
"An alternative front-end to YouTube": "Uma interface alternativa para o YouTube",
"JavaScript license information": "Informação de licença do JavaScript",
"source": "código-fonte",
"Log in": "Iniciar sessão",
"Log in/register": "Iniciar sessão/Registar",
"Log in with Google": "Iniciar sessão com o Google",
"User ID": "Utilizador",
"Password": "Palavra-chave",
"Time (h:mm:ss):": "Tempo (h:mm:ss):",
"Text CAPTCHA": "Texto CAPTCHA",
"Image CAPTCHA": "Imagem CAPTCHA",
"Sign In": "Iniciar Sessão",
"Register": "Registar",
"E-mail": "E-mail",
"Google verification code": "Código de verificação do Google",
"Preferences": "Preferências",
"Player preferences": "Preferências do reprodutor",
"Always loop: ": "Repetir sempre: ",
"Autoplay: ": "Reprodução automática: ",
"Play next by default: ": "Sempre reproduzir próximo: ",
"Autoplay next video: ": "Reproduzir próximo vídeo automaticamente: ",
"Listen by default: ": "Apenas áudio: ",
"Proxy videos: ": "Usar proxy nos vídeos: ",
"Default speed: ": "Velocidade preferida: ",
"Preferred video quality: ": "Qualidade de vídeo preferida: ",
"Player volume: ": "Volume da reprodução: ",
"Default comments: ": "Preferência dos comentários: ",
"youtube": "youtube",
"reddit": "reddit",
"Default captions: ": "Legendas predefinidas: ",
"Fallback captions: ": "Legendas alternativas: ",
"Show related videos: ": "Mostrar vídeos relacionados: ",
"Show annotations by default: ": "Mostrar sempre anotações: ",
"Visual preferences": "Preferências visuais",
"Player style: ": "Estilo do reprodutor: ",
"Dark mode: ": "Modo escuro: ",
"Theme: ": "Tema: ",
"dark": "escuro",
"light": "claro",
"Thin mode: ": "Modo compacto: ",
"Subscription preferences": "Preferências de subscrições",
"Show annotations by default for subscribed channels: ": "Mostrar sempre anotações para os canais subscritos: ",
"Redirect homepage to feed: ": "Redirecionar página inicial para subscrições: ",
"Number of videos shown in feed: ": "Número de vídeos nas subscrições: ",
"Sort videos by: ": "Ordenar vídeos por: ",
"published": "publicado",
"published - reverse": "publicado - inverso",
"alphabetically": "alfabeticamente",
"alphabetically - reverse": "alfabeticamente - inverso",
"channel name": "nome do canal",
"channel name - reverse": "nome do canal - inverso",
"Only show latest video from channel: ": "Mostrar apenas o vídeo mais recente do canal: ",
"Only show latest unwatched video from channel: ": "Mostrar apenas vídeos mais recentes não visualizados do canal: ",
"Only show unwatched: ": "Mostrar apenas vídeos não visualizados: ",
"Only show notifications (if there are any): ": "Mostrar apenas notificações (se existirem): ",
"Enable web notifications": "Ativar notificações pela web",
"`x` uploaded a video": "`x` publicou um novo vídeo",
"`x` is live": "`x` está em direto",
"Data preferences": "Preferências de dados",
"Clear watch history": "Limpar histórico de reprodução",
"Import/export data": "Importar/Exportar dados",
"Change password": "Alterar palavra-chave",
"Manage subscriptions": "Gerir as subscrições",
"Manage tokens": "Gerir tokens",
"Watch history": "Histórico de reprodução",
"Delete account": "Eliminar conta",
"Administrator preferences": "Preferências de administrador",
"Default homepage: ": "Página inicial padrão: ",
"Feed menu: ": "Menu de subscrições: ",
"Top enabled: ": "Top ativado: ",
"CAPTCHA enabled: ": "CAPTCHA ativado: ",
"Login enabled: ": "Iniciar sessão ativado: ",
"Registration enabled: ": "Registar ativado: ",
"Report statistics: ": "Relatório de estatísticas: ",
"Save preferences": "Gravar preferências",
"Subscription manager": "Gerir subscrições",
"Token manager": "Gerir tokens",
"Token": "Token",
"`x` subscriptions.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` subscrições.",
"": "`x` subscrições."
},
"`x` tokens.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` tokens.",
"": "`x` tokens."
},
"Import/export": "Importar/Exportar",
"unsubscribe": "Anular subscrição",
"revoke": "revogar",
"Subscriptions": "Subscrições",
"`x` unseen notifications.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` notificações não vistas.",
"": "`x` notificações não vistas."
},
"search": "Pesquisar",
"Log out": "Terminar sessão",
"Released under the AGPLv3 by Omar Roth.": "Publicado sob a licença AGPLv3, por Omar Roth.",
"Source available here.": "Código-fonte disponível aqui.",
"View JavaScript license information.": "Ver informações da licença do JavaScript.",
"View privacy policy.": "Ver a política de privacidade.",
"Trending": "Tendências",
"Public": "Público",
"Unlisted": "Não listado",
"Private": "Privado",
"View all playlists": "Ver todas as listas de reprodução",
"Updated `x` ago": "Atualizado `x` atrás",
"Delete playlist `x`?": "Eliminar a lista de reprodução 'x'?",
"Delete playlist": "Eliminar lista de reprodução",
"Create playlist": "Criar lista de reprodução",
"Title": "Título",
"Playlist privacy": "Privacidade da lista de reprodução",
"Editing playlist `x`": "A editar lista de reprodução 'x'",
"Watch on YouTube": "Ver no YouTube",
"Hide annotations": "Ocultar anotações",
"Show annotations": "Mostrar anotações",
"Genre: ": "Género: ",
"License: ": "Licença: ",
"Family friendly? ": "Filtrar conteúdo impróprio: ",
"Wilson score: ": "Pontuação de Wilson: ",
"Engagement: ": "Compromisso: ",
"Whitelisted regions: ": "Regiões permitidas: ",
"Blacklisted regions: ": "Regiões bloqueadas: ",
"Shared `x`": "Partilhado `x`",
"`x` views.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` visualizações.",
"": "`x` visualizações."
},
"Premieres in `x`": "Estreias em 'x'",
"Premieres `x`": "Estreias 'x'",
"Hi! Looks like you have JavaScript turned off. Click here to view comments, keep in mind they may take a bit longer to load.": "Oi! Parece que JavaScript está desativado. Clique aqui para ver os comentários, entretanto eles podem levar mais tempo para carregar.",
"View YouTube comments": "Ver comentários do YouTube",
"View more comments on Reddit": "Ver mais comentários no Reddit",
"View `x` comments.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "Ver `x` comentários.",
"": "Ver `x` comentários."
},
"View Reddit comments": "Ver comentários do Reddit",
"Hide replies": "Ocultar respostas",
"Show replies": "Mostrar respostas",
"Incorrect password": "Palavra-chave incorreta",
"Quota exceeded, try again in a few hours": "Cota excedida. Tente novamente dentro de algumas horas",
"Unable to log in, make sure two-factor authentication (Authenticator or SMS) is turned on.": "Não é possível iniciar sessão, certifique-se de que a autenticação de dois fatores (Autenticador ou SMS) está ativada.",
"Invalid TFA code": "Código TFA inválido",
"Login failed. This may be because two-factor authentication is not turned on for your account.": "Falhou o início de sessão. Isto pode ser devido a dois fatores de autenticação não está ativado para sua conta.",
"Wrong answer": "Resposta errada",
"Erroneous CAPTCHA": "CAPTCHA inválido",
"CAPTCHA is a required field": "CAPTCHA é um campo obrigatório",
"User ID is a required field": "O nome de utilizador é um campo obrigatório",
"Password is a required field": "Palavra-chave é um campo obrigatório",
"Wrong username or password": "Nome de utilizador ou palavra-chave incorreto",
"Please sign in using 'Log in with Google'": "Por favor, inicie sessão usando 'Iniciar sessão com o Google'",
"Password cannot be empty": "A palavra-chave não pode estar vazia",
"Password cannot be longer than 55 characters": "A palavra-chave não pode ser superior a 55 caracteres",
"Please log in": "Por favor, inicie sessão",
"Invidious Private Feed for `x`": "Feed Privado do Invidious para `x`",
"channel:`x`": "canal:'x'",
"Deleted or invalid channel": "Canal apagado ou inválido",
"This channel does not exist.": "Este canal não existe.",
"Could not get channel info.": "Não foi possível obter as informações do canal.",
"Could not fetch comments": "Não foi possível obter os comentários",
"View `x` replies.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "Ver `x` respostas.",
"": "Ver `x` respostas."
},
"`x` ago": "`x` atrás",
"Load more": "Carregar mais",
"`x` points.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "'x' pontos.",
"": "'x' pontos."
},
"Could not create mix.": "Não foi possível criar mistura.",
"Empty playlist": "Lista de reprodução vazia",
"Not a playlist.": "Não é uma lista de reprodução.",
"Playlist does not exist.": "A lista de reprodução não existe.",
"Could not pull trending pages.": "Não foi possível obter páginas de tendências.",
"Hidden field \"challenge\" is a required field": "O campo oculto \"desafio\" é obrigatório",
"Hidden field \"token\" is a required field": "O campo oculto \"token\" é um campo obrigatório",
"Erroneous challenge": "Desafio inválido",
"Erroneous token": "Token inválido",
"No such user": "Utilizador inválido",
"Token is expired, please try again": "Token expirou, tente novamente",
"English": "Inglês",
"English (auto-generated)": "Inglês (auto-gerado)",
"Afrikaans": "Africano",
"Albanian": "Albanês",
"Amharic": "Amárico",
"Arabic": "Árabe",
"Armenian": "Arménio",
"Azerbaijani": "Azerbaijano",
"Bangla": "Bangla",
"Basque": "Basco",
"Belarusian": "Bielorrusso",
"Bosnian": "Bósnio",
"Bulgarian": "Búlgaro",
"Burmese": "Birmanês",
"Catalan": "Catalão",
"Cebuano": "Cebuano",
"Chinese (Simplified)": "Chinês (Simplificado)",
"Chinese (Traditional)": "Chinês (Tradicional)",
"Corsican": "Corso",
"Croatian": "Croata",
"Czech": "Checo",
"Danish": "Dinamarquês",
"Dutch": "Holandês",
"Esperanto": "Esperanto",
"Estonian": "Estónio",
"Filipino": "Filipino",
"Finnish": "Finlandês",
"French": "Francês",
"Galician": "Galego",
"Georgian": "Georgiano",
"German": "Alemão",
"Greek": "Grego",
"Gujarati": "Guzerate",
"Haitian Creole": "Crioulo haitiano",
"Hausa": "Hauçá",
"Hawaiian": "Havaiano",
"Hebrew": "Hebraico",
"Hindi": "Hindi",
"Hmong": "Hmong",
"Hungarian": "Húngaro",
"Icelandic": "Islandês",
"Igbo": "Igbo",
"Indonesian": "Indonésio",
"Irish": "Irlandês",
"Italian": "Italiano",
"Japanese": "Japonês",
"Javanese": "Javanês",
"Kannada": "Canarim",
"Kazakh": "Cazaque",
"Khmer": "Khmer",
"Korean": "Coreano",
"Kurdish": "Curdo",
"Kyrgyz": "Quirguiz",
"Lao": "Laosiano",
"Latin": "Latim",
"Latvian": "Letão",
"Lithuanian": "Lituano",
"Luxembourgish": "Luxemburguês",
"Macedonian": "Macedónio",
"Malagasy": "Malgaxe",
"Malay": "Malaio",
"Malayalam": "Malaiala",
"Maltese": "Maltês",
"Maori": "Maori",
"Marathi": "Marathi",
"Mongolian": "Mongol",
"Nepali": "Nepalês",
"Norwegian Bokmål": "Bokmål norueguês",
"Nyanja": "Nyanja",
"Pashto": "Pashto",
"Persian": "Persa",
"Polish": "Polaco",
"Portuguese": "Português",
"Punjabi": "Punjabi",
"Romanian": "Romeno",
"Russian": "Russo",
"Samoan": "Samoano",
"Scottish Gaelic": "Gaélico escocês",
"Serbian": "Sérvio",
"Shona": "Shona",
"Sindhi": "Sindhi",
"Sinhala": "Cingalês",
"Slovak": "Eslovaco",
"Slovenian": "Esloveno",
"Somali": "Somali",
"Southern Sotho": "Sotho do Sul",
"Spanish": "Espanhol",
"Spanish (Latin America)": "Espanhol (América Latina)",
"Sundanese": "Sudanês",
"Swahili": "Suaíli",
"Swedish": "Sueco",
"Tajik": "Tajique",
"Tamil": "Tâmil",
"Telugu": "Telugu",
"Thai": "Tailandês",
"Turkish": "Turco",
"Ukrainian": "Ucraniano",
"Urdu": "Urdu",
"Uzbek": "Uzbeque",
"Vietnamese": "Vietnamita",
"Welsh": "Galês",
"Western Frisian": "Frísio Ocidental",
"Xhosa": "Xhosa",
"Yiddish": "Iídiche",
"Yoruba": "Ioruba",
"Zulu": "Zulu",
"`x` years.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` anos.",
"": "`x` anos."
},
"`x` months.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` meses.",
"": "`x` meses."
},
"`x` weeks.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` semanas.",
"": "`x` semanas."
},
"`x` days.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` dias.",
"": "`x` dias."
},
"`x` hours.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` horas.",
"": "`x` horas."
},
"`x` minutes.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` minutos.",
"": "`x` minutos."
},
"`x` seconds.": {
"([^.,0-9]|^)1([^.,0-9]|$)": "`x` segundos.",
"": "`x` segundos."
},
"Fallback comments: ": "Comentários alternativos: ",
"Popular": "Popular",
"Top": "Top",
"About": "Sobre",
"Rating: ": "Avaliação: ",
"Language: ": "Idioma: ",
"View as playlist": "Ver como lista de reprodução",
"Default": "Predefinição",
"Music": "Música",
"Gaming": "Jogos",
"News": "Notícias",
"Movies": "Filmes",
"Download": "Transferir",
"Download as: ": "Transferir como: ",
"%A %B %-d, %Y": "%A %B %-d, %Y",
"(edited)": "(editado)",
"YouTube comment permalink": "Link permanente do comentário do YouTube",
"permalink": "ligação permanente",
"`x` marked it with a ❤": "`x` foi marcado como ❤",
"Audio mode": "Modo de áudio",
"Video mode": "Modo de vídeo",
"Videos": "Vídeos",
"Playlists": "Listas de reprodução",
"Community": "Comunidade",
"Current version: ": "Versão atual: "
}

View File

@ -326,7 +326,7 @@
"(edited)": "(editat)", "(edited)": "(editat)",
"YouTube comment permalink": "Permalink pentru comentariul de pe YouTube", "YouTube comment permalink": "Permalink pentru comentariul de pe YouTube",
"permalink": "permalink", "permalink": "permalink",
"`x` marked it with a ❤": "`x` l-a marcat cu o ❤", "`x` marked it with a ❤": "`x` l-a marcat cu o ❤",
"Audio mode": "Mod audio", "Audio mode": "Mod audio",
"Video mode": "Mod video", "Video mode": "Mod video",
"Videos": "Videoclipuri", "Videos": "Videoclipuri",

View File

@ -1,7 +1,7 @@
{ {
"`x` subscribers": "`x` подписчиков", "`x` subscribers": "`x` подписчиков",
"`x` videos": "`x` видео", "`x` videos": "`x` видео",
"`x` playlists": "", "`x` playlists": "`x` плейлистов",
"LIVE": "ПРЯМОЙ ЭФИР", "LIVE": "ПРЯМОЙ ЭФИР",
"Shared `x` ago": "Опубликовано `x` назад", "Shared `x` ago": "Опубликовано `x` назад",
"Unsubscribe": "Отписаться", "Unsubscribe": "Отписаться",
@ -69,7 +69,7 @@
"Show related videos: ": "Показывать похожие видео? ", "Show related videos: ": "Показывать похожие видео? ",
"Show annotations by default: ": "Всегда показывать аннотации? ", "Show annotations by default: ": "Всегда показывать аннотации? ",
"Visual preferences": "Настройки сайта", "Visual preferences": "Настройки сайта",
"Player style: ": "", "Player style: ": "Стиль проигрывателя: ",
"Dark mode: ": "Тёмное оформление: ", "Dark mode: ": "Тёмное оформление: ",
"Theme: ": "Тема: ", "Theme: ": "Тема: ",
"dark": "темная", "dark": "темная",
@ -130,14 +130,14 @@
"Public": "Публичный", "Public": "Публичный",
"Unlisted": "Нет в списке", "Unlisted": "Нет в списке",
"Private": "Приватный", "Private": "Приватный",
"View all playlists": "", "View all playlists": "Посмотреть все плейлисты",
"Updated `x` ago": "", "Updated `x` ago": "Обновлено `x` назад",
"Delete playlist `x`?": "Удалить плейлист `x`?", "Delete playlist `x`?": "Удалить плейлист `x`?",
"Delete playlist": "Удалить плейлист", "Delete playlist": "Удалить плейлист",
"Create playlist": "Создать плейлист", "Create playlist": "Создать плейлист",
"Title": "", "Title": "Заголовок",
"Playlist privacy": "", "Playlist privacy": "Конфиденциальность плейлиста",
"Editing playlist `x`": "", "Editing playlist `x`": "Редактирование плейлиста `x`",
"Watch on YouTube": "Смотреть на YouTube", "Watch on YouTube": "Смотреть на YouTube",
"Hide annotations": "Скрыть аннотации", "Hide annotations": "Скрыть аннотации",
"Show annotations": "Показать аннотации", "Show annotations": "Показать аннотации",
@ -325,12 +325,12 @@
"%A %B %-d, %Y": "%-d %B %Y, %A", "%A %B %-d, %Y": "%-d %B %Y, %A",
"(edited)": "(изменено)", "(edited)": "(изменено)",
"YouTube comment permalink": "Прямая ссылка на YouTube", "YouTube comment permalink": "Прямая ссылка на YouTube",
"permalink": "", "permalink": "постоянная ссылка",
"`x` marked it with a ❤": "❤ от автора канала \"`x`\"", "`x` marked it with a ❤": "❤ от автора канала \"`x`\"",
"Audio mode": "Аудио режим", "Audio mode": "Аудио режим",
"Video mode": "Видео режим", "Video mode": "Видео режим",
"Videos": "Видео", "Videos": "Видео",
"Playlists": "Плейлисты", "Playlists": "Плейлисты",
"Community": "", "Community": "Сообщество",
"Current version: ": "Текущая версия: " "Current version: ": "Текущая версия: "
} }

336
locales/sr_Cyrl.json Normal file
View File

@ -0,0 +1,336 @@
{
"`x` subscribers.": "",
"`x` videos.": "",
"`x` playlists.": "",
"LIVE": "",
"Shared `x` ago": "",
"Unsubscribe": "",
"Subscribe": "Пратите",
"View channel on YouTube": "Погледајте канал на YouTube-у",
"View playlist on YouTube": "Погледајте плејлисту на YouTube-у",
"newest": "",
"oldest": "",
"popular": "",
"last": "",
"Next page": "",
"Previous page": "",
"Clear watch history?": "",
"New password": "",
"New passwords must match": "",
"Cannot change password for Google accounts": "",
"Authorize token?": "",
"Authorize token for `x`?": "",
"Yes": "",
"No": "",
"Import and Export Data": "",
"Import": "",
"Import Invidious data": "",
"Import YouTube subscriptions": "",
"Import FreeTube subscriptions (.db)": "",
"Import NewPipe subscriptions (.json)": "",
"Import NewPipe data (.zip)": "",
"Export": "",
"Export subscriptions as OPML": "",
"Export subscriptions as OPML (for NewPipe & FreeTube)": "",
"Export data as JSON": "",
"Delete account?": "",
"History": "",
"An alternative front-end to YouTube": "",
"JavaScript license information": "",
"source": "",
"Log in": "",
"Log in/register": "",
"Log in with Google": "",
"User ID": "",
"Password": "",
"Time (h:mm:ss):": "",
"Text CAPTCHA": "",
"Image CAPTCHA": "",
"Sign In": "",
"Register": "",
"E-mail": "",
"Google verification code": "",
"Preferences": "",
"Player preferences": "",
"Always loop: ": "",
"Autoplay: ": "",
"Play next by default: ": "",
"Autoplay next video: ": "",
"Listen by default: ": "",
"Proxy videos: ": "",
"Default speed: ": "",
"Preferred video quality: ": "",
"Player volume: ": "",
"Default comments: ": "",
"youtube": "",
"reddit": "",
"Default captions: ": "",
"Fallback captions: ": "",
"Show related videos: ": "",
"Show annotations by default: ": "",
"Visual preferences": "",
"Player style: ": "",
"Dark mode: ": "",
"Theme: ": "",
"dark": "",
"light": "",
"Thin mode: ": "",
"Subscription preferences": "",
"Show annotations by default for subscribed channels: ": "",
"Redirect homepage to feed: ": "",
"Number of videos shown in feed: ": "",
"Sort videos by: ": "",
"published": "",
"published - reverse": "",
"alphabetically": "",
"alphabetically - reverse": "",
"channel name": "",
"channel name - reverse": "",
"Only show latest video from channel: ": "",
"Only show latest unwatched video from channel: ": "",
"Only show unwatched: ": "",
"Only show notifications (if there are any): ": "",
"Enable web notifications": "",
"`x` uploaded a video": "",
"`x` is live": "",
"Data preferences": "",
"Clear watch history": "",
"Import/export data": "",
"Change password": "",
"Manage subscriptions": "",
"Manage tokens": "",
"Watch history": "",
"Delete account": "",
"Administrator preferences": "",
"Default homepage: ": "",
"Feed menu: ": "",
"Top enabled: ": "",
"CAPTCHA enabled: ": "",
"Login enabled: ": "",
"Registration enabled: ": "",
"Report statistics: ": "",
"Save preferences": "",
"Subscription manager": "",
"Token manager": "",
"Token": "",
"`x` subscriptions.": "",
"`x` tokens.": "",
"Import/export": "",
"unsubscribe": "",
"revoke": "",
"Subscriptions": "",
"`x` unseen notifications.": "",
"search": "",
"Log out": "",
"Released under the AGPLv3 by Omar Roth.": "",
"Source available here.": "",
"View JavaScript license information.": "",
"View privacy policy.": "",
"Trending": "",
"Public": "",
"Unlisted": "",
"Private": "",
"View all playlists": "",
"Updated `x` ago": "",
"Delete playlist `x`?": "",
"Delete playlist": "",
"Create playlist": "",
"Title": "",
"Playlist privacy": "",
"Editing playlist `x`": "",
"Watch on YouTube": "",
"Hide annotations": "",
"Show annotations": "",
"Genre: ": "",
"License: ": "",
"Family friendly? ": "",
"Wilson score: ": "",
"Engagement: ": "",
"Whitelisted regions: ": "",
"Blacklisted regions: ": "",
"Shared `x`": "",
"`x` views.": "",
"Premieres in `x`": "",
"Premieres `x`": "",
"Hi! Looks like you have JavaScript turned off. Click here to view comments, keep in mind they may take a bit longer to load.": "",
"View YouTube comments": "",
"View more comments on Reddit": "",
"View `x` comments.": "",
"View Reddit comments": "",
"Hide replies": "",
"Show replies": "",
"Incorrect password": "",
"Quota exceeded, try again in a few hours": "",
"Unable to log in, make sure two-factor authentication (Authenticator or SMS) is turned on.": "",
"Invalid TFA code": "",
"Login failed. This may be because two-factor authentication is not turned on for your account.": "",
"Wrong answer": "",
"Erroneous CAPTCHA": "",
"CAPTCHA is a required field": "",
"User ID is a required field": "",
"Password is a required field": "",
"Wrong username or password": "",
"Please sign in using 'Log in with Google'": "",
"Password cannot be empty": "",
"Password cannot be longer than 55 characters": "",
"Please log in": "",
"Invidious Private Feed for `x`": "",
"channel:`x`": "",
"Deleted or invalid channel": "",
"This channel does not exist.": "",
"Could not get channel info.": "",
"Could not fetch comments": "",
"View `x` replies.": "",
"`x` ago": "",
"Load more": "",
"`x` points.": "",
"Could not create mix.": "",
"Empty playlist": "",
"Not a playlist.": "",
"Playlist does not exist.": "",
"Could not pull trending pages.": "",
"Hidden field \"challenge\" is a required field": "",
"Hidden field \"token\" is a required field": "",
"Erroneous challenge": "",
"Erroneous token": "",
"No such user": "",
"Token is expired, please try again": "",
"English": "",
"English (auto-generated)": "",
"Afrikaans": "",
"Albanian": "",
"Amharic": "",
"Arabic": "",
"Armenian": "",
"Azerbaijani": "",
"Bangla": "",
"Basque": "",
"Belarusian": "",
"Bosnian": "",
"Bulgarian": "",
"Burmese": "",
"Catalan": "",
"Cebuano": "",
"Chinese (Simplified)": "",
"Chinese (Traditional)": "",
"Corsican": "",
"Croatian": "",
"Czech": "",
"Danish": "",
"Dutch": "",
"Esperanto": "",
"Estonian": "",
"Filipino": "",
"Finnish": "",
"French": "",
"Galician": "",
"Georgian": "",
"German": "",
"Greek": "",
"Gujarati": "",
"Haitian Creole": "",
"Hausa": "",
"Hawaiian": "",
"Hebrew": "",
"Hindi": "",
"Hmong": "",
"Hungarian": "",
"Icelandic": "",
"Igbo": "",
"Indonesian": "",
"Irish": "",
"Italian": "",
"Japanese": "",
"Javanese": "",
"Kannada": "",
"Kazakh": "",
"Khmer": "",
"Korean": "",
"Kurdish": "",
"Kyrgyz": "",
"Lao": "",
"Latin": "",
"Latvian": "",
"Lithuanian": "",
"Luxembourgish": "",
"Macedonian": "",
"Malagasy": "",
"Malay": "",
"Malayalam": "",
"Maltese": "",
"Maori": "",
"Marathi": "",
"Mongolian": "",
"Nepali": "",
"Norwegian Bokmål": "",
"Nyanja": "",
"Pashto": "",
"Persian": "",
"Polish": "",
"Portuguese": "",
"Punjabi": "",
"Romanian": "",
"Russian": "",
"Samoan": "",
"Scottish Gaelic": "",
"Serbian": "",
"Shona": "",
"Sindhi": "",
"Sinhala": "",
"Slovak": "",
"Slovenian": "",
"Somali": "",
"Southern Sotho": "",
"Spanish": "",
"Spanish (Latin America)": "",
"Sundanese": "",
"Swahili": "",
"Swedish": "",
"Tajik": "",
"Tamil": "",
"Telugu": "",
"Thai": "",
"Turkish": "",
"Ukrainian": "",
"Urdu": "",
"Uzbek": "",
"Vietnamese": "",
"Welsh": "",
"Western Frisian": "",
"Xhosa": "",
"Yiddish": "",
"Yoruba": "",
"Zulu": "",
"`x` years.": "",
"`x` months.": "",
"`x` weeks.": "",
"`x` days.": "",
"`x` hours.": "",
"`x` minutes.": "",
"`x` seconds.": "",
"Fallback comments: ": "",
"Popular": "",
"Top": "",
"About": "",
"Rating: ": "",
"Language: ": "",
"View as playlist": "",
"Default": "",
"Music": "",
"Gaming": "",
"News": "",
"Movies": "",
"Download": "",
"Download as: ": "",
"%A %B %-d, %Y": "",
"(edited)": "",
"YouTube comment permalink": "",
"permalink": "",
"`x` marked it with a ❤": "",
"Audio mode": "",
"Video mode": "",
"Videos": "",
"Playlists": "",
"Community": "",
"Current version: ": "Тренутна верзија: "
}

336
locales/sv-SE.json Normal file
View File

@ -0,0 +1,336 @@
{
"`x` subscribers": "`x` prenumeranter",
"`x` videos": "`x` videor",
"`x` playlists": "`x` spellistor",
"LIVE": "LIVE",
"Shared `x` ago": "Delad `x` sedan",
"Unsubscribe": "Avprenumerera",
"Subscribe": "Prenumerera",
"View channel on YouTube": "Visa kanalen på YouTube",
"View playlist on YouTube": "Visa spellistan på YouTube",
"newest": "nyaste",
"oldest": "äldsta",
"popular": "populärt",
"last": "sista",
"Next page": "Nästa sida",
"Previous page": "Tidigare sida",
"Clear watch history?": "Töm visningshistorik?",
"New password": "Nytt lösenord",
"New passwords must match": "Nya lösenord måste stämma överens",
"Cannot change password for Google accounts": "Kan inte ändra lösenord på Google-konton",
"Authorize token?": "Auktorisera åtkomsttoken?",
"Authorize token for `x`?": "Auktorisera åtkomsttoken för `x`?",
"Yes": "Ja",
"No": "Nej",
"Import and Export Data": "Importera och exportera data",
"Import": "Importera",
"Import Invidious data": "Importera Invidious-data",
"Import YouTube subscriptions": "Importera YouTube-prenumerationer",
"Import FreeTube subscriptions (.db)": "Importera FreeTube-prenumerationer (.db)",
"Import NewPipe subscriptions (.json)": "Importera NewPipe-prenumerationer (.json)",
"Import NewPipe data (.zip)": "Importera NewPipe-data (.zip)",
"Export": "Exportera",
"Export subscriptions as OPML": "Exportera prenumerationer som OPML",
"Export subscriptions as OPML (for NewPipe & FreeTube)": "Exportera prenumerationer som OPML (för NewPipe och FreeTube)",
"Export data as JSON": "Exportera data som JSON",
"Delete account?": "Radera konto?",
"History": "Historik",
"An alternative front-end to YouTube": "Ett alternativt gränssnitt till YouTube",
"JavaScript license information": "JavaScript-licensinformation",
"source": "källa",
"Log in": "Logga in",
"Log in/register": "Logga in/registrera",
"Log in with Google": "Logga in med Google",
"User ID": "Användar-ID",
"Password": "Lösenord",
"Time (h:mm:ss):": "Tid (h:mm:ss):",
"Text CAPTCHA": "Text-CAPTCHA",
"Image CAPTCHA": "Bild-CAPTCHA",
"Sign In": "Inloggning",
"Register": "Registrera",
"E-mail": "E-post",
"Google verification code": "Google-bekräftelsekod",
"Preferences": "Inställningar",
"Player preferences": "Spelarinställningar",
"Always loop: ": "Loopa alltid: ",
"Autoplay: ": "Autouppspelning: ",
"Play next by default: ": "Spela nästa som förval: ",
"Autoplay next video: ": "Autouppspela nästa video: ",
"Listen by default: ": "Lyssna som förval: ",
"Proxy videos: ": "Proxy:a videor: ",
"Default speed: ": "Förvald hastighet: ",
"Preferred video quality: ": "Föredragen videokvalitet: ",
"Player volume: ": "Volym: ",
"Default comments: ": "Förvalda kommentarer: ",
"youtube": "YouTube",
"reddit": "Reddit",
"Default captions: ": "Förvalda undertexter: ",
"Fallback captions: ": "Ersättningsundertexter: ",
"Show related videos: ": "Visa relaterade videor? ",
"Show annotations by default: ": "Visa länkar-i-videon som förval? ",
"Visual preferences": "Visuella inställningar",
"Player style: ": "Spelarstil: ",
"Dark mode: ": "Mörkt läge: ",
"Theme: ": "Tema: ",
"dark": "Mörkt",
"light": "Ljust",
"Thin mode: ": "Lättviktigt läge: ",
"Subscription preferences": "Prenumerationsinställningar",
"Show annotations by default for subscribed channels: ": "Visa länkar-i-videor som förval för kanaler som prenumereras på? ",
"Redirect homepage to feed: ": "Omdirigera hemsida till flöde: ",
"Number of videos shown in feed: ": "Antal videor att visa i flödet: ",
"Sort videos by: ": "Sortera videor: ",
"published": "publicering",
"published - reverse": "publicering - omvänd",
"alphabetically": "alfabetiskt",
"alphabetically - reverse": "alfabetiskt - omvänd",
"channel name": "kanalnamn",
"channel name - reverse": "kanalnamn - omvänd",
"Only show latest video from channel: ": "Visa bara senaste videon från kanal: ",
"Only show latest unwatched video from channel: ": "Visa bara senaste osedda videon från kanal: ",
"Only show unwatched: ": "Visa bara osedda: ",
"Only show notifications (if there are any): ": "Visa endast aviseringar (om det finns några): ",
"Enable web notifications": "Slå på aviseringar",
"`x` uploaded a video": "`x` laddade upp en video",
"`x` is live": "`x` sänder live",
"Data preferences": "Datainställningar",
"Clear watch history": "Töm visningshistorik",
"Import/export data": "Importera/Exportera data",
"Change password": "Byt lösenord",
"Manage subscriptions": "Hantera prenumerationer",
"Manage tokens": "Hantera åtkomst-tokens",
"Watch history": "Visningshistorik",
"Delete account": "Radera konto",
"Administrator preferences": "Administratörsinställningar",
"Default homepage: ": "Förvald hemsida: ",
"Feed menu: ": "Flödesmeny: ",
"Top enabled: ": "Topp påslaget? ",
"CAPTCHA enabled: ": "CAPTCHA påslaget? ",
"Login enabled: ": "Inloggning påslaget? ",
"Registration enabled: ": "Registrering påslaget? ",
"Report statistics: ": "Rapportera in statistik? ",
"Save preferences": "Spara inställningar",
"Subscription manager": "Prenumerationshanterare",
"Token manager": "Åtkomst-token-hanterare",
"Token": "Åtkomst-token",
"`x` subscriptions": "`x` prenumerationer",
"`x` tokens": "`x` åtkomst-token",
"Import/export": "Importera/exportera",
"unsubscribe": "avprenumerera",
"revoke": "återkalla",
"Subscriptions": "Prenumerationer",
"`x` unseen notifications": "`x` osedda aviseringar",
"search": "sök",
"Log out": "Logga ut",
"Released under the AGPLv3 by Omar Roth.": "Utgiven under AGPLv3-licens av Omar Roth.",
"Source available here.": "Källkod tillgänglig här.",
"View JavaScript license information.": "Visa JavaScript-licensinformation.",
"View privacy policy.": "Visa privatlivspolicy.",
"Trending": "Trendar",
"Public": "Offentlig",
"Unlisted": "Olistad",
"Private": "Privat",
"View all playlists": "Visa alla spellistor",
"Updated `x` ago": "Uppdaterad `x` sedan",
"Delete playlist `x`?": "Radera spellistan `x`?",
"Delete playlist": "Radera spellista",
"Create playlist": "Skapa spellista",
"Title": "Titel",
"Playlist privacy": "Privatläge på spellista",
"Editing playlist `x`": "Redigerer spellistan `x`",
"Watch on YouTube": "Titta på YouTube",
"Hide annotations": "Dölj länkar-i-video",
"Show annotations": "Visa länkar-i-video",
"Genre: ": "Genre: ",
"License: ": "Licens: ",
"Family friendly? ": "Familjevänlig? ",
"Wilson score: ": "Wilson-poängsumma: ",
"Engagement: ": "Engagement: ",
"Whitelisted regions: ": "Vitlistade regioner: ",
"Blacklisted regions: ": "Svartlistade regioner: ",
"Shared `x`": "Delade `x`",
"`x` views": "`x` visningar",
"Premieres in `x`": "Premiär om `x`",
"Premieres `x`": "Premiär av `x`",
"Hi! Looks like you have JavaScript turned off. Click here to view comments, keep in mind they may take a bit longer to load.": "Hej. Det ser ut som att du har JavaScript avstängt. Klicka här för att visa kommentarer, ha i åtanke att nedladdning tar längre tid.",
"View YouTube comments": "Visa YouTube-kommentarer",
"View more comments on Reddit": "Visa flera kommentarer på Reddit",
"View `x` comments": "Visa `x` kommentarer",
"View Reddit comments": "Visa Reddit-kommentarer",
"Hide replies": "Dölj svar",
"Show replies": "Visa svar",
"Incorrect password": "Fel lösenord",
"Quota exceeded, try again in a few hours": "Kvoten överskriden, försök igen om ett par timmar",
"Unable to log in, make sure two-factor authentication (Authenticator or SMS) is turned on.": "Kunde inte logga in, försäkra dig om att tvåfaktors-autentisering (Authenticator eller SMS) är påslagen.",
"Invalid TFA code": "Ogiltig tvåfaktor-kod",
"Login failed. This may be because two-factor authentication is not turned on for your account.": "Inloggning misslyckades. Detta kan vara för att tvåfaktors-autentisering inte är påslaget på ditt konto.",
"Wrong answer": "Fel svar",
"Erroneous CAPTCHA": "Ogiltig CAPTCHA",
"CAPTCHA is a required field": "CAPTCHA är ett obligatoriskt fält",
"User ID is a required field": "Användar-ID är ett obligatoriskt fält",
"Password is a required field": "Lösenord är ett obligatoriskt fält",
"Wrong username or password": "Ogiltigt användarnamn eller lösenord",
"Please sign in using 'Log in with Google'": "Logga in genom \"Google-inloggning\"",
"Password cannot be empty": "Lösenordet kan inte vara tomt",
"Password cannot be longer than 55 characters": "Lösenordet kan inte vara längre än 55 tecken",
"Please log in": "Logga in",
"Invidious Private Feed for `x`": "Ogiltig privat flöde för `x`",
"channel:`x`": "kanal `x`",
"Deleted or invalid channel": "Raderad eller ogiltig kanal",
"This channel does not exist.": "Denna kanal finns inte.",
"Could not get channel info.": "Kunde inte hämta kanalinfo.",
"Could not fetch comments": "Kunde inte hämta kommentarer",
"View `x` replies": "Visa `x` svar",
"`x` ago": "`x` sedan",
"Load more": "Ladda fler",
"`x` points": "`x` poäng",
"Could not create mix.": "Kunde inte skapa mix.",
"Empty playlist": "Spellistan är tom",
"Not a playlist.": "Ogiltig spellista.",
"Playlist does not exist.": "Spellistan finns inte.",
"Could not pull trending pages.": "Kunde inte hämta trendande sidor.",
"Hidden field \"challenge\" is a required field": "Dolt fält \"challenge\" är ett obligatoriskt fält",
"Hidden field \"token\" is a required field": "Dolt fält \"token\" är ett obligatoriskt fält",
"Erroneous challenge": "Felaktig challenge",
"Erroneous token": "Felaktig token",
"No such user": "Ogiltig användare",
"Token is expired, please try again": "Token föråldrad, försök igen",
"English": "",
"English (auto-generated)": "English (auto-genererat)",
"Afrikaans": "",
"Albanian": "",
"Amharic": "",
"Arabic": "",
"Armenian": "",
"Azerbaijani": "",
"Bangla": "",
"Basque": "",
"Belarusian": "",
"Bosnian": "",
"Bulgarian": "",
"Burmese": "",
"Catalan": "",
"Cebuano": "",
"Chinese (Simplified)": "",
"Chinese (Traditional)": "",
"Corsican": "",
"Croatian": "",
"Czech": "",
"Danish": "",
"Dutch": "",
"Esperanto": "",
"Estonian": "",
"Filipino": "",
"Finnish": "",
"French": "",
"Galician": "",
"Georgian": "",
"German": "",
"Greek": "",
"Gujarati": "",
"Haitian Creole": "",
"Hausa": "",
"Hawaiian": "",
"Hebrew": "",
"Hindi": "",
"Hmong": "",
"Hungarian": "",
"Icelandic": "",
"Igbo": "",
"Indonesian": "",
"Irish": "",
"Italian": "",
"Japanese": "",
"Javanese": "",
"Kannada": "",
"Kazakh": "",
"Khmer": "",
"Korean": "",
"Kurdish": "",
"Kyrgyz": "",
"Lao": "",
"Latin": "",
"Latvian": "",
"Lithuanian": "",
"Luxembourgish": "",
"Macedonian": "",
"Malagasy": "",
"Malay": "",
"Malayalam": "",
"Maltese": "",
"Maori": "",
"Marathi": "",
"Mongolian": "",
"Nepali": "",
"Norwegian Bokmål": "",
"Nyanja": "",
"Pashto": "",
"Persian": "",
"Polish": "",
"Portuguese": "",
"Punjabi": "",
"Romanian": "",
"Russian": "",
"Samoan": "",
"Scottish Gaelic": "",
"Serbian": "",
"Shona": "",
"Sindhi": "",
"Sinhala": "",
"Slovak": "",
"Slovenian": "",
"Somali": "",
"Southern Sotho": "",
"Spanish": "",
"Spanish (Latin America)": "",
"Sundanese": "",
"Swahili": "",
"Swedish": "",
"Tajik": "",
"Tamil": "",
"Telugu": "",
"Thai": "",
"Turkish": "",
"Ukrainian": "",
"Urdu": "",
"Uzbek": "",
"Vietnamese": "",
"Welsh": "",
"Western Frisian": "",
"Xhosa": "",
"Yiddish": "",
"Yoruba": "",
"Zulu": "",
"`x` years": "`x` år",
"`x` months": "`x` månader",
"`x` weeks": "`x` veckor",
"`x` days": "`x` dagar",
"`x` hours": "`x` timmar",
"`x` minutes": "`x` minuter",
"`x` seconds": "`x` sekunder",
"Fallback comments: ": "Fallback-kommentarer: ",
"Popular": "Populärt",
"Top": "Topp",
"About": "Om",
"Rating: ": "Betyg: ",
"Language: ": "Språk: ",
"View as playlist": "Visa som spellista",
"Default": "Förvalt",
"Music": "Musik",
"Gaming": "Spel",
"News": "Nyheter",
"Movies": "Filmer",
"Download": "Ladda ned",
"Download as: ": "Ladda ned som: ",
"%A %B %-d, %Y": "",
"(edited)": "(redigerad)",
"YouTube comment permalink": "Permanent YouTube-länk till innehållet",
"permalink": "permalänk",
"`x` marked it with a ❤": "`x` lämnade ett ❤",
"Audio mode": "Ljudläge",
"Video mode": "Videoläge",
"Videos": "Videor",
"Playlists": "Spellistor",
"Community": "Gemenskap",
"Current version: ": "Nuvarande version: "
}

View File

@ -56,20 +56,20 @@
"Player preferences": "Oynatıcı tercihleri", "Player preferences": "Oynatıcı tercihleri",
"Always loop: ": "Sürekli döngü: ", "Always loop: ": "Sürekli döngü: ",
"Autoplay: ": "Otomatik oynat: ", "Autoplay: ": "Otomatik oynat: ",
"Play next by default: ": "Varsayılan olarak sonrakini oynat: ", "Play next by default: ": "Öntanımlı olarak sonrakini oynat: ",
"Autoplay next video: ": "Sonraki videoyu otomatik oynat: ", "Autoplay next video: ": "Sonraki videoyu otomatik oynat: ",
"Listen by default: ": "Varsayılan olarak dinle: ", "Listen by default: ": "Öntanımlı olarak dinle: ",
"Proxy videos: ": "Videoları proxy'le: ", "Proxy videos: ": "Videoları proxy'le: ",
"Default speed: ": "Varsayılan hız: ", "Default speed: ": "Öntanımlı hız: ",
"Preferred video quality: ": "Tercih edilen video kalitesi: ", "Preferred video quality: ": "Tercih edilen video kalitesi: ",
"Player volume: ": "Oynatıcı ses seviyesi: ", "Player volume: ": "Oynatıcı ses seviyesi: ",
"Default comments: ": "Varsayılan yorumlar: ", "Default comments: ": "Öntanımlı yorumlar: ",
"youtube": "youtube", "youtube": "youtube",
"reddit": "reddit", "reddit": "reddit",
"Default captions: ": "Varsayılan altyazılar: ", "Default captions: ": "Öntanımlı altyazılar: ",
"Fallback captions: ": "Yedek altyazılar: ", "Fallback captions: ": "Yedek altyazılar: ",
"Show related videos: ": "İlgili videoları göster: ", "Show related videos: ": "İlgili videoları göster: ",
"Show annotations by default: ": "Varsayılan olarak ek açıklamaları göster: ", "Show annotations by default: ": "Öntanımlı olarak ek açıklamaları göster: ",
"Visual preferences": "Görsel tercihler", "Visual preferences": "Görsel tercihler",
"Player style: ": "Oynatıcı biçimi: ", "Player style: ": "Oynatıcı biçimi: ",
"Dark mode: ": "Karanlık mod: ", "Dark mode: ": "Karanlık mod: ",
@ -78,7 +78,7 @@
"light": "aydınlık", "light": "aydınlık",
"Thin mode: ": "İnce mod: ", "Thin mode: ": "İnce mod: ",
"Subscription preferences": "Abonelik tercihleri", "Subscription preferences": "Abonelik tercihleri",
"Show annotations by default for subscribed channels: ": "Abone olunan kanallar için ek açıklamaları varsayılan olarak göster: ", "Show annotations by default for subscribed channels: ": "Abone olunan kanallar için ek açıklamaları öntanımlı olarak göster: ",
"Redirect homepage to feed: ": "Ana sayfayı akışa yönlendir: ", "Redirect homepage to feed: ": "Ana sayfayı akışa yönlendir: ",
"Number of videos shown in feed: ": "Akışta gösterilen video sayısı: ", "Number of videos shown in feed: ": "Akışta gösterilen video sayısı: ",
"Sort videos by: ": "Videoları sıralama kriteri: ", "Sort videos by: ": "Videoları sıralama kriteri: ",
@ -104,7 +104,7 @@
"Watch history": "İzleme geçmişi", "Watch history": "İzleme geçmişi",
"Delete account": "Hesap silme", "Delete account": "Hesap silme",
"Administrator preferences": "Yönetici tercihleri", "Administrator preferences": "Yönetici tercihleri",
"Default homepage: ": "Varsayılan ana sayfa: ", "Default homepage: ": "Öntanımlı ana sayfa: ",
"Feed menu: ": "Akış menüsü: ", "Feed menu: ": "Akış menüsü: ",
"Top enabled: ": "Top etkin: ", "Top enabled: ": "Top etkin: ",
"CAPTCHA enabled: ": "CAPTCHA etkin: ", "CAPTCHA enabled: ": "CAPTCHA etkin: ",
@ -138,7 +138,7 @@
"Title": "Başlık", "Title": "Başlık",
"Playlist privacy": "Çalma listesi gizliliği", "Playlist privacy": "Çalma listesi gizliliği",
"Editing playlist `x`": "`x` çalma listesi düzenleniyor", "Editing playlist `x`": "`x` çalma listesi düzenleniyor",
"Source available here.": "Kaynak kodu burada mevcut.", "Source available here.": "Kaynak kodları burada bulunabilir.",
"View JavaScript license information.": "JavaScript lisans bilgilerini görüntüle.", "View JavaScript license information.": "JavaScript lisans bilgilerini görüntüle.",
"View privacy policy.": "Gizlilik politikasını görüntüle.", "View privacy policy.": "Gizlilik politikasını görüntüle.",
"Trending": "Trendler", "Trending": "Trendler",
@ -323,7 +323,7 @@
"Rating: ": "Değerlendirme: ", "Rating: ": "Değerlendirme: ",
"Language: ": "Dil: ", "Language: ": "Dil: ",
"View as playlist": "Oynatma listesi olarak görüntüle", "View as playlist": "Oynatma listesi olarak görüntüle",
"Default": "Varsayılan", "Default": "Öntanımlı",
"Music": "Müzik", "Music": "Müzik",
"Gaming": "Oyun", "Gaming": "Oyun",
"News": "Haberler", "News": "Haberler",
@ -340,5 +340,5 @@
"Videos": "Videolar", "Videos": "Videolar",
"Playlists": "Oynatma listeleri", "Playlists": "Oynatma listeleri",
"Community": "Topluluk", "Community": "Topluluk",
"Current version: ": "Şu anki versiyon: " "Current version: ": "Şu anki sürüm: "
} }

View File

@ -1,7 +1,7 @@
{ {
"`x` subscribers": "`x` підписників", "`x` subscribers": "`x` підписників",
"`x` videos": "`x` відео", "`x` videos": "`x` відео",
"`x` playlists": "", "`x` playlists": "списки відтворення \"x\"",
"LIVE": "ПРЯМИЙ ЕФІР", "LIVE": "ПРЯМИЙ ЕФІР",
"Shared `x` ago": "Розміщено `x` назад", "Shared `x` ago": "Розміщено `x` назад",
"Unsubscribe": "Відписатися", "Unsubscribe": "Відписатися",
@ -69,11 +69,11 @@
"Show related videos: ": "Показувати схожі відео? ", "Show related videos: ": "Показувати схожі відео? ",
"Show annotations by default: ": "Завжди показувати анотації? ", "Show annotations by default: ": "Завжди показувати анотації? ",
"Visual preferences": "Налаштування сайту", "Visual preferences": "Налаштування сайту",
"Player style: ": "", "Player style: ": "Стиль програвача: ",
"Dark mode: ": "Темне оформлення: ", "Dark mode: ": "Темне оформлення: ",
"Theme: ": "", "Theme: ": "Тема: ",
"dark": "", "dark": "темна",
"light": "", "light": "Світла",
"Thin mode: ": "Полегшене оформлення: ", "Thin mode: ": "Полегшене оформлення: ",
"Subscription preferences": "Налаштування підписок", "Subscription preferences": "Налаштування підписок",
"Show annotations by default for subscribed channels: ": "Завжди показувати анотації у відео каналів, на які ви підписані? ", "Show annotations by default for subscribed channels: ": "Завжди показувати анотації у відео каналів, на які ви підписані? ",
@ -127,17 +127,17 @@
"View JavaScript license information.": "Переглянути інформацію щодо ліцензії JavaScript.", "View JavaScript license information.": "Переглянути інформацію щодо ліцензії JavaScript.",
"View privacy policy.": "Переглянути політику приватності.", "View privacy policy.": "Переглянути політику приватності.",
"Trending": "У тренді", "Trending": "У тренді",
"Public": "", "Public": "Прилюдний",
"Unlisted": "Немає в списку", "Unlisted": "Немає в списку",
"Private": "", "Private": "Особистий",
"View all playlists": "", "View all playlists": "Переглянути всі списки відтворення",
"Updated `x` ago": "", "Updated `x` ago": "Оновлено `x` тому",
"Delete playlist `x`?": "", "Delete playlist `x`?": "Видалити список відтворення \"x\"?",
"Delete playlist": "", "Delete playlist": "Видалити список відтворення",
"Create playlist": "", "Create playlist": "Створити список відтворення",
"Title": "", "Title": "Заголовок",
"Playlist privacy": "", "Playlist privacy": "Конфіденційність списку відтворення",
"Editing playlist `x`": "", "Editing playlist `x`": "Редагування списку відтворення \"x\"",
"Watch on YouTube": "Дивитися на YouTube", "Watch on YouTube": "Дивитися на YouTube",
"Hide annotations": "Приховати анотації", "Hide annotations": "Приховати анотації",
"Show annotations": "Показати анотації", "Show annotations": "Показати анотації",
@ -325,12 +325,12 @@
"%A %B %-d, %Y": "%-d %B %Y, %A", "%A %B %-d, %Y": "%-d %B %Y, %A",
"(edited)": "(змінено)", "(edited)": "(змінено)",
"YouTube comment permalink": "Пряме посилання на коментар в YouTube", "YouTube comment permalink": "Пряме посилання на коментар в YouTube",
"permalink": "", "permalink": "постійне посилання",
"`x` marked it with a ❤": "❤ цьому від каналу `x`", "`x` marked it with a ❤": "❤ цьому від каналу `x`",
"Audio mode": "Аудіорежим", "Audio mode": "Аудіорежим",
"Video mode": "Відеорежим", "Video mode": "Відеорежим",
"Videos": "Відео", "Videos": "Відео",
"Playlists": "Плейлисти", "Playlists": "Плейлисти",
"Community": "", "Community": "Спільнота",
"Current version: ": "Поточна версія: " "Current version: ": "Поточна версія: "
} }

View File

@ -1,7 +1,7 @@
{ {
"`x` subscribers": "`x` 订阅者", "`x` subscribers": "`x` 订阅者",
"`x` videos": "`x` 视频", "`x` videos": "`x` 视频",
"`x` playlists": "", "`x` playlists": "`x` 个播放列表",
"LIVE": "直播", "LIVE": "直播",
"Shared `x` ago": "`x` 前分享", "Shared `x` ago": "`x` 前分享",
"Unsubscribe": "取消订阅", "Unsubscribe": "取消订阅",
@ -69,11 +69,11 @@
"Show related videos: ": "显示相关视频?", "Show related videos: ": "显示相关视频?",
"Show annotations by default: ": "默认显示视频注释?", "Show annotations by default: ": "默认显示视频注释?",
"Visual preferences": "视觉选项", "Visual preferences": "视觉选项",
"Player style: ": "", "Player style: ": "播放器样式:",
"Dark mode: ": "暗色模式:", "Dark mode: ": "暗色模式:",
"Theme: ": "", "Theme: ": "主题",
"dark": "", "dark": "暗色",
"light": "", "light": "亮色",
"Thin mode: ": "窄页模式:", "Thin mode: ": "窄页模式:",
"Subscription preferences": "订阅设置", "Subscription preferences": "订阅设置",
"Show annotations by default for subscribed channels: ": "在订阅频道的视频默认显示注释?", "Show annotations by default for subscribed channels: ": "在订阅频道的视频默认显示注释?",
@ -129,15 +129,15 @@
"Trending": "时下流行", "Trending": "时下流行",
"Public": "公开", "Public": "公开",
"Unlisted": "不公开", "Unlisted": "不公开",
"Private": "", "Private": "私享",
"View all playlists": "", "View all playlists": "查看所有播放列表",
"Updated `x` ago": "", "Updated `x` ago": "`x` 前更新",
"Delete playlist `x`?": "", "Delete playlist `x`?": "是否删除播放列表 `x`",
"Delete playlist": "", "Delete playlist": "删除播放列表",
"Create playlist": "", "Create playlist": "创建播放列表",
"Title": "", "Title": "标题",
"Playlist privacy": "", "Playlist privacy": "播放列表隐私设置",
"Editing playlist `x`": "", "Editing playlist `x`": "正在编辑播放列表 `x`",
"Watch on YouTube": "在 YouTube 观看", "Watch on YouTube": "在 YouTube 观看",
"Hide annotations": "隐藏注释", "Hide annotations": "隐藏注释",
"Show annotations": "显示注释", "Show annotations": "显示注释",
@ -325,12 +325,12 @@
"%A %B %-d, %Y": "%Y年%-m月%-d日 %a", "%A %B %-d, %Y": "%Y年%-m月%-d日 %a",
"(edited)": "(已编辑)", "(edited)": "(已编辑)",
"YouTube comment permalink": "YouTube 评论永久链接", "YouTube comment permalink": "YouTube 评论永久链接",
"permalink": "", "permalink": "永久链接",
"`x` marked it with a ❤": "`x` 为此加 ❤", "`x` marked it with a ❤": "`x` 为此加 ❤",
"Audio mode": "音频模式", "Audio mode": "音频模式",
"Video mode": "视频模式", "Video mode": "视频模式",
"Videos": "视频", "Videos": "视频",
"Playlists": "播放列表", "Playlists": "播放列表",
"Community": "", "Community": "社区",
"Current version: ": "当前版本:" "Current version: ": "当前版本:"
} }

Binary file not shown.

View File

@ -11,13 +11,13 @@ targets:
dependencies: dependencies:
pg: pg:
github: will/crystal-pg github: will/crystal-pg
version: ~> 0.19.0 version: ~> 0.21.1
sqlite3: sqlite3:
github: crystal-lang/crystal-sqlite3 github: crystal-lang/crystal-sqlite3
version: ~> 0.14.0 version: ~> 0.16.0
kemal: kemal:
github: kemalcr/kemal github: kemalcr/kemal
version: ~> 0.26.1 commit: dfe7dca08f4c9a9456d6132af5f6b59fcd6865e4
pool: pool:
github: ysbaddaden/pool github: ysbaddaden/pool
version: ~> 0.2.3 version: ~> 0.2.3
@ -25,9 +25,9 @@ dependencies:
github: omarroth/protodec github: omarroth/protodec
version: ~> 0.1.2 version: ~> 0.1.2
lsquic: lsquic:
github: omarroth/lsquic.cr github: iv-org/lsquic.cr
version: ~> 0.1.8 version: ~> 2.18.1-1
crystal: 0.32.0 crystal: 0.35.1
license: AGPLv3 license: AGPLv3

View File

@ -9,8 +9,11 @@ require "../src/invidious/channels"
require "../src/invidious/comments" require "../src/invidious/comments"
require "../src/invidious/playlists" require "../src/invidious/playlists"
require "../src/invidious/search" require "../src/invidious/search"
require "../src/invidious/trending"
require "../src/invidious/users" require "../src/invidious/users"
CONFIG = Config.from_yaml(File.open("config/config.yml"))
describe "Helper" do describe "Helper" do
describe "#produce_channel_videos_url" do describe "#produce_channel_videos_url" do
it "correctly produces url for requesting page `x` of a channel's videos" do it "correctly produces url for requesting page `x` of a channel's videos" do
@ -26,9 +29,9 @@ describe "Helper" do
describe "#produce_channel_search_url" do describe "#produce_channel_search_url" do
it "correctly produces token for searching a specific channel" do it "correctly produces token for searching a specific channel" do
produce_channel_search_url("UCXuqSBlHAE6Xw-yeJA0Tunw", "", 100).should eq("/browse_ajax?continuation=4qmFsgI-EhhVQ1h1cVNCbEhBRTZYdy15ZUpBMFR1bncaIEVnWnpaV0Z5WTJnd0FqZ0JZQUZxQUxnQkFIb0RNVEF3WgA%3D&gl=US&hl=en") produce_channel_search_url("UCXuqSBlHAE6Xw-yeJA0Tunw", "", 100).should eq("/browse_ajax?continuation=4qmFsgI2EhhVQ1h1cVNCbEhBRTZYdy15ZUpBMFR1bncaGEVnWnpaV0Z5WTJnNEFYb0RNVEF3dUFFQVoA&gl=US&hl=en")
produce_channel_search_url("UCXuqSBlHAE6Xw-yeJA0Tunw", "По ожиशुपतिरपि子而時ஸ்றீனி", 0).should eq("/browse_ajax?continuation=4qmFsgJ8EhhVQ1h1cVNCbEhBRTZYdy15ZUpBMFR1bncaIEVnWnpaV0Z5WTJnd0FqZ0JZQUZxQUxnQkFIb0JNQT09Wj7Qn9C-INC-0LbQuOCktuClgeCkquCkpOCkv-CksOCkquCkv-WtkOiAjOaZguCuuOCvjeCuseCvgOCuqeCuvw%3D%3D&gl=US&hl=en") produce_channel_search_url("UCXuqSBlHAE6Xw-yeJA0Tunw", "По ожиशुपतिरपि子而時ஸ்றீனி", 0).should eq("/browse_ajax?continuation=4qmFsgJ0EhhVQ1h1cVNCbEhBRTZYdy15ZUpBMFR1bncaGEVnWnpaV0Z5WTJnNEFYb0JNTGdCQUE9PVo-0J_QviDQvtC20LjgpLbgpYHgpKrgpKTgpL_gpLDgpKrgpL_lrZDogIzmmYLgrrjgr43grrHgr4Dgrqngrr8%3D&gl=US&hl=en")
end end
end end
@ -40,7 +43,7 @@ describe "Helper" do
describe "#extract_channel_playlists_cursor" do describe "#extract_channel_playlists_cursor" do
it "correctly extracts a playlists cursor from the given URL" do it "correctly extracts a playlists cursor from the given URL" do
extract_channel_playlists_cursor("/browse_ajax?continuation=4qmFsgLRARIYVUNDajk1NklGNjJGYlQ3R291c3phajl3GrQBRWdsd2JHRjViR2x6ZEhNWUF5QUJNQUk0QVdBQmFnQjZabEZWYkZCaE1XczFVbFpHZDJGV09XNWxWelI0V0RGR2VWSnVWbUZOV0Vwc1ZHcG5lRmd3TVU1aVZXdDRWMWN4YzFGdFNuTmtlbWh4VGpCd1NWTllVa1pTYTJNeFlVUmtlRmt3Y0ZWVWJWRXdWbnBzTkU1V1JqRmhNVGxFVm14dmQwMXFhRzVXZDdnQkFBJTNEJTNE&gl=US&hl=en", false).should eq("AIOkY9EQpi_gyn1_QrFuZ1reN81_MMmI1YmlBblw8j7JHItEFG5h7qcJTNd4W9x5Quk_CVZ028gW") extract_channel_playlists_cursor("4qmFsgLRARIYVUNDajk1NklGNjJGYlQ3R291c3phajl3GrQBRWdsd2JHRjViR2x6ZEhNWUF5QUJNQUk0QVdBQmFnQjZabEZWYkZCaE1XczFVbFpHZDJGV09XNWxWelI0V0RGR2VWSnVWbUZOV0Vwc1ZHcG5lRmd3TVU1aVZXdDRWMWN4YzFGdFNuTmtlbWh4VGpCd1NWTllVa1pTYTJNeFlVUmtlRmt3Y0ZWVWJWRXdWbnBzTkU1V1JqRmhNVGxFVm14dmQwMXFhRzVXZDdnQkFBJTNEJTNE", false).should eq("AIOkY9EQpi_gyn1_QrFuZ1reN81_MMmI1YmlBblw8j7JHItEFG5h7qcJTNd4W9x5Quk_CVZ028gW")
end end
end end
@ -124,6 +127,15 @@ describe "Helper" do
end end
end end
describe "#extract_plid" do
it "correctly extracts playlist ID from trending URL" do
extract_plid("/feed/trending?bp=4gIuCggvbS8wNHJsZhIiUExGZ3F1TG5MNTlhbVBud2pLbmNhZUp3MDYzZlU1M3Q0cA%3D%3D").should eq("PLFgquLnL59amPnwjKncaeJw063fU53t4p")
extract_plid("/feed/trending?bp=4gIvCgkvbS8wYnp2bTISIlBMaUN2Vkp6QnVwS2tDaFNnUDdGWFhDclo2aEp4NmtlTm0%3D").should eq("PLiCvVJzBupKkChSgP7FXXCrZ6hJx6keNm")
extract_plid("/feed/trending?bp=4gIuCggvbS8wNWpoZxIiUEwzWlE1Q3BOdWxRbUtPUDNJekdsYWN0V1c4dklYX0hFUA%3D%3D").should eq("PL3ZQ5CpNulQmKOP3IzGlactWW8vIX_HEP")
extract_plid("/feed/trending?bp=4gIuCggvbS8wMnZ4bhIiUEx6akZiYUZ6c21NUnFhdEJnVTdPeGNGTkZhQ2hqTkVERA%3D%3D").should eq("PLzjFbaFzsmMRqatBgU7OxcFNFaChjNEDD")
end
end
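
Not part of the commit: a minimal sketch of what these opaque `bp`/continuation parameters are, using the same Protodec calls that extract_channel_playlists_cursor applies further down in this diff. The helper name decode_protobuf_param is hypothetical, and the require path is assumed from the protodec shard's layout.

require "base64"
require "uri"
require "protodec/utils" # require path assumed from the protodec shard

# Decodes a URL-safe base64 protobuf blob (like the `bp` values in the specs above)
# into a nested structure keyed roughly by field number and wire type.
def decode_protobuf_param(param : String)
  bytes = Base64.decode(URI.decode_www_form(param))
  Protodec::Any.parse(IO::Memory.new(bytes))
end
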
describe "#sign_token" do describe "#sign_token" do
it "correctly signs a given hash" do it "correctly signs a given hash" do
token = { token = {

File diff suppressed because it is too large.

View File

@ -1,22 +1,35 @@
struct InvidiousChannel struct InvidiousChannel
db_mapping({ include DB::Serializable
id: String,
author: String, property id : String
updated: Time, property author : String
deleted: Bool, property updated : Time
subscribed: Time?, property deleted : Bool
}) property subscribed : Time?
end end
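
Not from the diff itself — a hedged illustration of what switching from db_mapping to DB::Serializable enables: crystal-db can materialize these structs straight from a result set. The connection string is a placeholder and the table/column names are assumed to match the existing schema.

require "db"
require "pg"

# Hypothetical call site; the real queries live elsewhere in Invidious.
PG_DB = DB.open("postgres://postgres:password@localhost:5432/invidious")

channel = PG_DB.query_one?("SELECT * FROM channels WHERE id = $1",
  "UCXuqSBlHAE6Xw-yeJA0Tunw", as: InvidiousChannel)
puts channel.author if channel
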
struct ChannelVideo struct ChannelVideo
def to_json(locale, config, kemal_config, json : JSON::Builder) include DB::Serializable
property id : String
property title : String
property published : Time
property updated : Time
property ucid : String
property author : String
property length_seconds : Int32 = 0
property live_now : Bool = false
property premiere_timestamp : Time? = nil
property views : Int64? = nil
def to_json(locale, json : JSON::Builder)
json.object do json.object do
json.field "type", "shortVideo" json.field "type", "shortVideo"
json.field "title", self.title json.field "title", self.title
json.field "videoId", self.id json.field "videoId", self.id
json.field "videoThumbnails" do json.field "videoThumbnails" do
generate_thumbnails(json, self.id, config, Kemal.config) generate_thumbnails(json, self.id)
end end
json.field "lengthSeconds", self.length_seconds json.field "lengthSeconds", self.length_seconds
@ -31,17 +44,17 @@ struct ChannelVideo
end end
end end
def to_json(locale, config, kemal_config, json : JSON::Builder | Nil = nil) def to_json(locale, json : JSON::Builder | Nil = nil)
if json if json
to_json(locale, config, kemal_config, json) to_json(locale, json)
else else
JSON.build do |json| JSON.build do |json|
to_json(locale, config, kemal_config, json) to_json(locale, json)
end end
end end
end end
def to_xml(locale, host_url, query_params, xml : XML::Builder) def to_xml(locale, query_params, xml : XML::Builder)
query_params["v"] = self.id query_params["v"] = self.id
xml.element("entry") do xml.element("entry") do
@ -49,17 +62,17 @@ struct ChannelVideo
xml.element("yt:videoId") { xml.text self.id } xml.element("yt:videoId") { xml.text self.id }
xml.element("yt:channelId") { xml.text self.ucid } xml.element("yt:channelId") { xml.text self.ucid }
xml.element("title") { xml.text self.title } xml.element("title") { xml.text self.title }
xml.element("link", rel: "alternate", href: "#{host_url}/watch?#{query_params}") xml.element("link", rel: "alternate", href: "#{HOST_URL}/watch?#{query_params}")
xml.element("author") do xml.element("author") do
xml.element("name") { xml.text self.author } xml.element("name") { xml.text self.author }
xml.element("uri") { xml.text "#{host_url}/channel/#{self.ucid}" } xml.element("uri") { xml.text "#{HOST_URL}/channel/#{self.ucid}" }
end end
xml.element("content", type: "xhtml") do xml.element("content", type: "xhtml") do
xml.element("div", xmlns: "http://www.w3.org/1999/xhtml") do xml.element("div", xmlns: "http://www.w3.org/1999/xhtml") do
xml.element("a", href: "#{host_url}/watch?#{query_params}") do xml.element("a", href: "#{HOST_URL}/watch?#{query_params}") do
xml.element("img", src: "#{host_url}/vi/#{self.id}/mqdefault.jpg") xml.element("img", src: "#{HOST_URL}/vi/#{self.id}/mqdefault.jpg")
end end
end end
end end
@ -69,64 +82,59 @@ struct ChannelVideo
xml.element("media:group") do xml.element("media:group") do
xml.element("media:title") { xml.text self.title } xml.element("media:title") { xml.text self.title }
xml.element("media:thumbnail", url: "#{host_url}/vi/#{self.id}/mqdefault.jpg", xml.element("media:thumbnail", url: "#{HOST_URL}/vi/#{self.id}/mqdefault.jpg",
width: "320", height: "180") width: "320", height: "180")
end end
end end
end end
def to_xml(locale, config, kemal_config, xml : XML::Builder | Nil = nil) def to_xml(locale, xml : XML::Builder | Nil = nil)
if xml if xml
to_xml(locale, config, kemal_config, xml) to_xml(locale, xml)
else else
XML.build do |xml| XML.build do |xml|
to_xml(locale, config, kemal_config, xml) to_xml(locale, xml)
end end
end end
end end
db_mapping({ def to_tuple
id: String, {% begin %}
title: String, {
published: Time, {{*@type.instance_vars.map { |var| var.name }}}
updated: Time, }
ucid: String, {% end %}
author: String, end
length_seconds: {type: Int32, default: 0},
live_now: {type: Bool, default: false},
premiere_timestamp: {type: Time?, default: nil},
views: {type: Int64?, default: nil},
})
end end
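
An aside, not in the diff: for ChannelVideo the to_tuple macro above expands, roughly, to a tuple of every getter in declaration order, which is what allows the *video.to_tuple splat into the numbered INSERT placeholders used later in this file.

# Approximate expansion of the macro for ChannelVideo (sketch, not generated output):
def to_tuple
  {id, title, published, updated, ucid, author,
   length_seconds, live_now, premiere_timestamp, views}
end

# So an upsert can pass the whole record positionally, e.g.:
# db.exec("INSERT INTO channel_videos VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) ...", *video.to_tuple)
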
struct AboutRelatedChannel struct AboutRelatedChannel
db_mapping({ include DB::Serializable
ucid: String,
author: String, property ucid : String
author_url: String, property author : String
author_thumbnail: String, property author_url : String
}) property author_thumbnail : String
end end
# TODO: Refactor into either SearchChannel or InvidiousChannel # TODO: Refactor into either SearchChannel or InvidiousChannel
struct AboutChannel struct AboutChannel
db_mapping({ include DB::Serializable
ucid: String,
author: String, property ucid : String
auto_generated: Bool, property author : String
author_url: String, property auto_generated : Bool
author_thumbnail: String, property author_url : String
banner: String?, property author_thumbnail : String
description_html: String, property banner : String?
paid: Bool, property description_html : String
total_views: Int64, property paid : Bool
sub_count: Int32, property total_views : Int64
joined: Time, property sub_count : Int32
is_family_friendly: Bool, property joined : Time
allowed_regions: Array(String), property is_family_friendly : Bool
related_channels: Array(AboutRelatedChannel), property allowed_regions : Array(String)
tabs: Array(String), property related_channels : Array(AboutRelatedChannel)
}) property tabs : Array(String)
end end
class ChannelRedirect < Exception class ChannelRedirect < Exception
@ -213,33 +221,20 @@ def fetch_channel(ucid, db, pull_all_videos = true, locale = nil)
page = 1 page = 1
url = produce_channel_videos_url(ucid, page, auto_generated: auto_generated) response = get_channel_videos_response(ucid, page, auto_generated: auto_generated)
response = YT_POOL.client &.get(url)
videos = [] of SearchVideo
begin begin
json = JSON.parse(response.body) initial_data = JSON.parse(response.body).as_a.find &.["response"]?
raise "Could not extract JSON" if !initial_data
videos = extract_videos(initial_data.as_h, author, ucid)
rescue ex rescue ex
if response.body.includes?("To continue with your YouTube experience, please fill out the form below.") || if response.body.includes?("To continue with your YouTube experience, please fill out the form below.") ||
response.body.includes?("https://www.google.com/sorry/index") response.body.includes?("https://www.google.com/sorry/index")
raise "Could not extract channel info. Instance is likely blocked." raise "Could not extract channel info. Instance is likely blocked."
end end
raise "Could not extract JSON"
end end
if json["content_html"]? && !json["content_html"].as_s.empty?
document = XML.parse_html(json["content_html"].as_s)
nodeset = document.xpath_nodes(%q(//li[contains(@class, "feed-item-container")]))
if auto_generated
videos = extract_videos(nodeset)
else
videos = extract_videos(nodeset, ucid, author)
end
end
videos ||= [] of ChannelVideo
rss.xpath_nodes("//feed/entry").each do |entry| rss.xpath_nodes("//feed/entry").each do |entry|
video_id = entry.xpath_node("videoid").not_nil!.content video_id = entry.xpath_node("videoid").not_nil!.content
title = entry.xpath_node("title").not_nil!.content title = entry.xpath_node("title").not_nil!.content
@ -260,41 +255,28 @@ def fetch_channel(ucid, db, pull_all_videos = true, locale = nil)
premiere_timestamp = channel_video.try &.premiere_timestamp premiere_timestamp = channel_video.try &.premiere_timestamp
video = ChannelVideo.new( video = ChannelVideo.new({
id: video_id, id: video_id,
title: title, title: title,
published: published, published: published,
updated: Time.utc, updated: Time.utc,
ucid: ucid, ucid: ucid,
author: author, author: author,
length_seconds: length_seconds, length_seconds: length_seconds,
live_now: live_now, live_now: live_now,
premiere_timestamp: premiere_timestamp, premiere_timestamp: premiere_timestamp,
views: views, views: views,
) })
emails = db.query_all("UPDATE users SET notifications = notifications || $1 \
WHERE updated < $2 AND $3 = ANY(subscriptions) AND $1 <> ALL(notifications) RETURNING email",
video.id, video.published, ucid, as: String)
video_array = video.to_a
args = arg_array(video_array)
# We don't include the 'premiere_timestamp' here because channel pages don't include them, # We don't include the 'premiere_timestamp' here because channel pages don't include them,
# meaning the above timestamp is always null # meaning the above timestamp is always null
db.exec("INSERT INTO channel_videos VALUES (#{args}) \ was_insert = db.query_one("INSERT INTO channel_videos VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) \
ON CONFLICT (id) DO UPDATE SET title = $2, published = $3, \ ON CONFLICT (id) DO UPDATE SET title = $2, published = $3, \
updated = $4, ucid = $5, author = $6, length_seconds = $7, \ updated = $4, ucid = $5, author = $6, length_seconds = $7, \
live_now = $8, views = $10", args: video_array) live_now = $8, views = $10 returning (xmax=0) as was_insert", *video.to_tuple, as: Bool)
# Update all users affected by insert db.exec("UPDATE users SET notifications = array_append(notifications, $1), \
if emails.empty? feed_needs_update = true WHERE $2 = ANY(subscriptions)", video.id, video.ucid) if was_insert
values = "'{}'"
else
values = "VALUES #{emails.map { |email| %((E'#{email.gsub({'\'' => "\\'", '\\' => "\\\\"})}')) }.join(",")}"
end
db.exec("UPDATE users SET feed_needs_update = true WHERE email = ANY(#{values})")
end end
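
A note on the replacement query above (not part of the diff): `RETURNING (xmax = 0) AS was_insert` is a common PostgreSQL upsert idiom — in practice `xmax` is zero for a row the statement freshly inserted and non-zero when the `ON CONFLICT ... DO UPDATE` branch touched an existing row, so the single `query_one(..., as: Bool)` call can distinguish inserts from updates without a second round trip, and notifications are only fanned out for genuinely new videos.
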
if pull_all_videos if pull_all_videos
@ -303,38 +285,24 @@ def fetch_channel(ucid, db, pull_all_videos = true, locale = nil)
ids = [] of String ids = [] of String
loop do loop do
url = produce_channel_videos_url(ucid, page, auto_generated: auto_generated) response = get_channel_videos_response(ucid, page, auto_generated: auto_generated)
response = YT_POOL.client &.get(url) initial_data = JSON.parse(response.body).as_a.find &.["response"]?
json = JSON.parse(response.body) raise "Could not extract JSON" if !initial_data
videos = extract_videos(initial_data.as_h, author, ucid)
if json["content_html"]? && !json["content_html"].as_s.empty? count = videos.size
document = XML.parse_html(json["content_html"].as_s) videos = videos.map { |video| ChannelVideo.new({
nodeset = document.xpath_nodes(%q(//li[contains(@class, "feed-item-container")])) id: video.id,
else title: video.title,
break published: video.published,
end updated: Time.utc,
ucid: video.ucid,
nodeset = nodeset.not_nil! author: video.author,
length_seconds: video.length_seconds,
if auto_generated live_now: video.live_now,
videos = extract_videos(nodeset)
else
videos = extract_videos(nodeset, ucid, author)
end
count = nodeset.size
videos = videos.map { |video| ChannelVideo.new(
id: video.id,
title: video.title,
published: video.published,
updated: Time.utc,
ucid: video.ucid,
author: video.author,
length_seconds: video.length_seconds,
live_now: video.live_now,
premiere_timestamp: video.premiere_timestamp, premiere_timestamp: video.premiere_timestamp,
views: video.views views: video.views,
) } }) }
videos.each do |video| videos.each do |video|
ids << video.id ids << video.id
@ -342,42 +310,28 @@ def fetch_channel(ucid, db, pull_all_videos = true, locale = nil)
# We are notified of Red videos elsewhere (PubSub), which includes a correct published date, # We are notified of Red videos elsewhere (PubSub), which includes a correct published date,
# so since they don't provide a published date here we can safely ignore them. # so since they don't provide a published date here we can safely ignore them.
if Time.utc - video.published > 1.minute if Time.utc - video.published > 1.minute
emails = db.query_all("UPDATE users SET notifications = notifications || $1 \ was_insert = db.query_one("INSERT INTO channel_videos VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10) \
WHERE updated < $2 AND $3 = ANY(subscriptions) AND $1 <> ALL(notifications) RETURNING email",
video.id, video.published, video.ucid, as: String)
video_array = video.to_a
args = arg_array(video_array)
# We don't update the 'premire_timestamp' here because channel pages don't include them
db.exec("INSERT INTO channel_videos VALUES (#{args}) \
ON CONFLICT (id) DO UPDATE SET title = $2, published = $3, \ ON CONFLICT (id) DO UPDATE SET title = $2, published = $3, \
updated = $4, ucid = $5, author = $6, length_seconds = $7, \ updated = $4, ucid = $5, author = $6, length_seconds = $7, \
live_now = $8, views = $10", args: video_array) live_now = $8, views = $10 returning (xmax=0) as was_insert", *video.to_tuple, as: Bool)
# Update all users affected by insert db.exec("UPDATE users SET notifications = array_append(notifications, $1), \
if emails.empty? feed_needs_update = true WHERE $2 = ANY(subscriptions)", video.id, video.ucid) if was_insert
values = "'{}'"
else
values = "VALUES #{emails.map { |email| %((E'#{email.gsub({'\'' => "\\'", '\\' => "\\\\"})}')) }.join(",")}"
end
db.exec("UPDATE users SET feed_needs_update = true WHERE email = ANY(#{values})")
end end
end end
if count < 25 break if count < 25
break
end
page += 1 page += 1
end end
# When a video is deleted from a channel, we find and remove it here
db.exec("DELETE FROM channel_videos * WHERE NOT id = ANY ('{#{ids.map { |id| %("#{id}") }.join(",")}}') AND ucid = $1", ucid)
end end
channel = InvidiousChannel.new(ucid, author, Time.utc, false, nil) channel = InvidiousChannel.new({
id: ucid,
author: author,
updated: Time.utc,
deleted: false,
subscribed: nil,
})
return channel return channel
end end
@ -387,23 +341,11 @@ def fetch_channel_playlists(ucid, author, auto_generated, continuation, sort_by)
url = produce_channel_playlists_url(ucid, continuation, sort_by, auto_generated) url = produce_channel_playlists_url(ucid, continuation, sort_by, auto_generated)
response = YT_POOL.client &.get(url) response = YT_POOL.client &.get(url)
json = JSON.parse(response.body)
if json["load_more_widget_html"].as_s.empty? continuation = response.body.match(/"continuation":"(?<continuation>[^"]+)"/).try &.["continuation"]?
continuation = nil initial_data = JSON.parse(response.body).as_a.find(&.["response"]?).try &.as_h
else
continuation = XML.parse_html(json["load_more_widget_html"].as_s)
continuation = continuation.xpath_node(%q(//button[@data-uix-load-more-href]))
if continuation
continuation = extract_channel_playlists_cursor(continuation["data-uix-load-more-href"], auto_generated)
end
end
html = XML.parse_html(json["content_html"].as_s)
nodeset = html.xpath_nodes(%q(//li[contains(@class, "feed-item-container")]))
else else
url = "/channel/#{ucid}/playlists?disable_polymer=1&flow=list&view=1" url = "/channel/#{ucid}/playlists?flow=list&view=1"
case sort_by case sort_by
when "last", "last_added" when "last", "last_added"
@ -412,55 +354,58 @@ def fetch_channel_playlists(ucid, author, auto_generated, continuation, sort_by)
url += "&sort=da" url += "&sort=da"
when "newest", "newest_created" when "newest", "newest_created"
url += "&sort=dd" url += "&sort=dd"
else nil # Ignore
end end
response = YT_POOL.client &.get(url) response = YT_POOL.client &.get(url)
html = XML.parse_html(response.body) continuation = response.body.match(/"continuation":"(?<continuation>[^"]+)"/).try &.["continuation"]?
initial_data = extract_initial_data(response.body)
continuation = html.xpath_node(%q(//button[@data-uix-load-more-href]))
if continuation
continuation = extract_channel_playlists_cursor(continuation["data-uix-load-more-href"], auto_generated)
end
nodeset = html.xpath_nodes(%q(//ul[@id="browse-items-primary"]/li[contains(@class, "feed-item-container")]))
end end
if auto_generated return [] of SearchItem, nil if !initial_data
items = extract_shelf_items(nodeset, ucid, author) items = extract_items(initial_data)
else continuation = extract_channel_playlists_cursor(continuation, auto_generated) if continuation
items = extract_items(nodeset, ucid, author)
end
return items, continuation return items, continuation
end end
def produce_channel_videos_url(ucid, page = 1, auto_generated = nil, sort_by = "newest") def produce_channel_videos_url(ucid, page = 1, auto_generated = nil, sort_by = "newest", v2 = false)
object = { object = {
"80226972:embedded" => { "80226972:embedded" => {
"2:string" => ucid, "2:string" => ucid,
"3:base64" => { "3:base64" => {
"2:string" => "videos", "2:string" => "videos",
"6:varint": 2_i64, "6:varint" => 2_i64,
"7:varint": 1_i64, "7:varint" => 1_i64,
"12:varint": 1_i64, "12:varint" => 1_i64,
"13:string": "", "13:string" => "",
"23:varint": 0_i64, "23:varint" => 0_i64,
}, },
}, },
} }
if auto_generated if !v2
seed = Time.unix(1525757349) if auto_generated
until seed >= Time.utc seed = Time.unix(1525757349)
seed += 1.month until seed >= Time.utc
end seed += 1.month
timestamp = seed - (page - 1).months end
timestamp = seed - (page - 1).months
object["80226972:embedded"]["3:base64"].as(Hash)["4:varint"] = 0x36_i64 object["80226972:embedded"]["3:base64"].as(Hash)["4:varint"] = 0x36_i64
object["80226972:embedded"]["3:base64"].as(Hash)["15:string"] = "#{timestamp.to_unix}" object["80226972:embedded"]["3:base64"].as(Hash)["15:string"] = "#{timestamp.to_unix}"
else
object["80226972:embedded"]["3:base64"].as(Hash)["4:varint"] = 0_i64
object["80226972:embedded"]["3:base64"].as(Hash)["15:string"] = "#{page}"
end
else else
object["80226972:embedded"]["3:base64"].as(Hash)["4:varint"] = 0_i64 object["80226972:embedded"]["3:base64"].as(Hash)["4:varint"] = 0_i64
object["80226972:embedded"]["3:base64"].as(Hash)["15:string"] = "#{page}"
object["80226972:embedded"]["3:base64"].as(Hash)["61:string"] = Base64.urlsafe_encode(Protodec::Any.from_json(Protodec::Any.cast_json({
"1:string" => Base64.urlsafe_encode(Protodec::Any.from_json(Protodec::Any.cast_json({
"1:varint" => 30_i64 * (page - 1),
}))),
})))
end end
case sort_by case sort_by
@ -469,6 +414,7 @@ def produce_channel_videos_url(ucid, page = 1, auto_generated = nil, sort_by = "
object["80226972:embedded"]["3:base64"].as(Hash)["3:varint"] = 0x01_i64 object["80226972:embedded"]["3:base64"].as(Hash)["3:varint"] = 0x01_i64
when "oldest" when "oldest"
object["80226972:embedded"]["3:base64"].as(Hash)["3:varint"] = 0x02_i64 object["80226972:embedded"]["3:base64"].as(Hash)["3:varint"] = 0x02_i64
else nil # Ignore
end end
object["80226972:embedded"]["3:string"] = Base64.urlsafe_encode(Protodec::Any.from_json(Protodec::Any.cast_json(object["80226972:embedded"]["3:base64"]))) object["80226972:embedded"]["3:string"] = Base64.urlsafe_encode(Protodec::Any.from_json(Protodec::Any.cast_json(object["80226972:embedded"]["3:base64"])))
@ -487,12 +433,12 @@ def produce_channel_playlists_url(ucid, cursor, sort = "newest", auto_generated
"80226972:embedded" => { "80226972:embedded" => {
"2:string" => ucid, "2:string" => ucid,
"3:base64" => { "3:base64" => {
"2:string" => "playlists", "2:string" => "playlists",
"6:varint": 2_i64, "6:varint" => 2_i64,
"7:varint": 1_i64, "7:varint" => 1_i64,
"12:varint": 1_i64, "12:varint" => 1_i64,
"13:string": "", "13:string" => "",
"23:varint": 0_i64, "23:varint" => 0_i64,
}, },
}, },
} }
@ -513,6 +459,7 @@ def produce_channel_playlists_url(ucid, cursor, sort = "newest", auto_generated
object["80226972:embedded"]["3:base64"].as(Hash)["3:varint"] = 3_i64 object["80226972:embedded"]["3:base64"].as(Hash)["3:varint"] = 3_i64
when "last", "last_added" when "last", "last_added"
object["80226972:embedded"]["3:base64"].as(Hash)["3:varint"] = 4_i64 object["80226972:embedded"]["3:base64"].as(Hash)["3:varint"] = 4_i64
else nil # Ignore
end end
end end
@ -527,9 +474,8 @@ def produce_channel_playlists_url(ucid, cursor, sort = "newest", auto_generated
return "/browse_ajax?continuation=#{continuation}&gl=US&hl=en" return "/browse_ajax?continuation=#{continuation}&gl=US&hl=en"
end end
def extract_channel_playlists_cursor(url, auto_generated) def extract_channel_playlists_cursor(cursor, auto_generated)
cursor = URI.parse(url).query_params cursor = URI.decode_www_form(cursor)
.try { |i| URI.decode_www_form(i["continuation"]) }
.try { |i| Base64.decode(i) } .try { |i| Base64.decode(i) }
.try { |i| IO::Memory.new(i) } .try { |i| IO::Memory.new(i) }
.try { |i| Protodec::Any.parse(i) } .try { |i| Protodec::Any.parse(i) }
@ -554,13 +500,13 @@ def extract_channel_playlists_cursor(url, auto_generated)
end end
# TODO: Add "sort_by" # TODO: Add "sort_by"
def fetch_channel_community(ucid, continuation, locale, config, kemal_config, format, thin_mode) def fetch_channel_community(ucid, continuation, locale, format, thin_mode)
response = YT_POOL.client &.get("/channel/#{ucid}/community?gl=US&hl=en") response = YT_POOL.client &.get("/channel/#{ucid}/community?gl=US&hl=en")
if response.status_code == 404 if response.status_code != 200
response = YT_POOL.client &.get("/user/#{ucid}/community?gl=US&hl=en") response = YT_POOL.client &.get("/user/#{ucid}/community?gl=US&hl=en")
end end
if response.status_code == 404 if response.status_code != 200
error_message = translate(locale, "This channel does not exist.") error_message = translate(locale, "This channel does not exist.")
raise error_message raise error_message
end end
@ -581,16 +527,8 @@ def fetch_channel_community(ucid, continuation, locale, config, kemal_config, fo
headers = HTTP::Headers.new headers = HTTP::Headers.new
headers["cookie"] = response.cookies.add_request_headers(headers)["cookie"] headers["cookie"] = response.cookies.add_request_headers(headers)["cookie"]
headers["content-type"] = "application/x-www-form-urlencoded"
headers["x-client-data"] = "CIi2yQEIpbbJAQipncoBCNedygEIqKPKAQ==" session_token = response.body.match(/"XSRF_TOKEN":"(?<session_token>[^"]+)"/).try &.["session_token"]? || ""
headers["x-spf-previous"] = ""
headers["x-spf-referer"] = ""
headers["x-youtube-client-name"] = "1"
headers["x-youtube-client-version"] = "2.20180719"
session_token = response.body.match(/"XSRF_TOKEN":"(?<session_token>[A-Za-z0-9\_\-\=]+)"/).try &.["session_token"]? || ""
post_req = { post_req = {
session_token: session_token, session_token: session_token,
} }
@ -628,17 +566,9 @@ def fetch_channel_community(ucid, continuation, locale, config, kemal_config, fo
post = post["backstagePostThreadRenderer"]?.try &.["post"]["backstagePostRenderer"]? || post = post["backstagePostThreadRenderer"]?.try &.["post"]["backstagePostRenderer"]? ||
post["commentThreadRenderer"]?.try &.["comment"]["commentRenderer"]? post["commentThreadRenderer"]?.try &.["comment"]["commentRenderer"]?
if !post next if !post
next
end
if !post["contentText"]?
content_html = ""
else
content_html = post["contentText"]["simpleText"]?.try &.as_s.rchop('\ufeff').try { |block| HTML.escape(block) }.to_s ||
content_to_comment_html(post["contentText"]["runs"].as_a).try &.to_s || ""
end
content_html = post["contentText"]?.try { |t| parse_content(t) } || ""
author = post["authorText"]?.try &.["simpleText"]? || "" author = post["authorText"]?.try &.["simpleText"]? || ""
json.object do json.object do
@ -707,7 +637,7 @@ def fetch_channel_community(ucid, continuation, locale, config, kemal_config, fo
json.field "title", attachment["title"]["simpleText"].as_s json.field "title", attachment["title"]["simpleText"].as_s
json.field "videoId", video_id json.field "videoId", video_id
json.field "videoThumbnails" do json.field "videoThumbnails" do
generate_thumbnails(json, video_id, config, kemal_config) generate_thumbnails(json, video_id)
end end
json.field "lengthSeconds", decode_length_seconds(attachment["lengthText"]["simpleText"].as_s) json.field "lengthSeconds", decode_length_seconds(attachment["lengthText"]["simpleText"].as_s)
@ -845,16 +775,34 @@ def extract_channel_community_cursor(continuation)
cursor cursor
end end
INITDATA_PREQUERY = "window[\"ytInitialData\"] = {"
def get_about_info(ucid, locale) def get_about_info(ucid, locale)
about = YT_POOL.client &.get("/channel/#{ucid}/about?disable_polymer=1&gl=US&hl=en") about = YT_POOL.client &.get("/channel/#{ucid}/about?gl=US&hl=en")
if about.status_code == 404 if about.status_code != 200
about = YT_POOL.client &.get("/user/#{ucid}/about?disable_polymer=1&gl=US&hl=en") about = YT_POOL.client &.get("/user/#{ucid}/about?gl=US&hl=en")
end end
if md = about.headers["location"]?.try &.match(/\/channel\/(?<ucid>UC[a-zA-Z0-9_-]{22})/) if md = about.headers["location"]?.try &.match(/\/channel\/(?<ucid>UC[a-zA-Z0-9_-]{22})/)
raise ChannelRedirect.new(channel_id: md["ucid"]) raise ChannelRedirect.new(channel_id: md["ucid"])
end end
if about.status_code != 200
error_message = translate(locale, "This channel does not exist.")
raise error_message
end
initdata_pre = about.body.index(INITDATA_PREQUERY)
initdata_post = initdata_pre.nil? ? nil : about.body.index("};", initdata_pre)
if initdata_post.nil?
about = XML.parse_html(about.body)
error_message = about.xpath_node(%q(//div[@class="yt-alert-content"])).try &.content.strip
error_message ||= translate(locale, "Could not get channel info.")
raise error_message
end
initdata_pre = initdata_pre.not_nil! + INITDATA_PREQUERY.size - 1
initdata = JSON.parse(about.body[initdata_pre, initdata_post - initdata_pre + 1])
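
(Not part of the diff — offset note: `initdata_pre + INITDATA_PREQUERY.size - 1` moves the start index onto the opening `{` that ends the prequery string, and `initdata_post` points at the matching `}` found by `index("};", ...)`, so the slice of length `initdata_post - initdata_pre + 1` covers exactly the ytInitialData JSON object.)
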
about = XML.parse_html(about.body) about = XML.parse_html(about.body)
if about.xpath_node(%q(//div[contains(@class, "channel-empty-message")])) if about.xpath_node(%q(//div[contains(@class, "channel-empty-message")]))
@ -862,136 +810,138 @@ def get_about_info(ucid, locale)
raise error_message raise error_message
end end
if about.xpath_node(%q(//span[contains(@class,"qualified-channel-title-text")]/a)).try &.content.empty? author = about.xpath_node(%q(//meta[@name="title"])).not_nil!["content"]
error_message = about.xpath_node(%q(//div[@class="yt-alert-content"])).try &.content.strip author_url = about.xpath_node(%q(//link[@rel="canonical"])).not_nil!["href"]
error_message ||= translate(locale, "Could not get channel info.") author_thumbnail = about.xpath_node(%q(//link[@rel="image_src"])).not_nil!["href"]
raise error_message
end
author = about.xpath_node(%q(//span[contains(@class,"qualified-channel-title-text")]/a)).not_nil!.content
author_url = about.xpath_node(%q(//span[contains(@class,"qualified-channel-title-text")]/a)).not_nil!["href"]
author_thumbnail = about.xpath_node(%q(//img[@class="channel-header-profile-image"])).not_nil!["src"]
ucid = about.xpath_node(%q(//meta[@itemprop="channelId"])).not_nil!["content"] ucid = about.xpath_node(%q(//meta[@itemprop="channelId"])).not_nil!["content"]
banner = about.xpath_node(%q(//div[@id="gh-banner"]/style)).not_nil!.content # Raises a KeyError on failure.
banner = "https:" + banner.match(/background-image: url\((?<url>[^)]+)\)/).not_nil!["url"] banners = initdata["header"]["c4TabbedHeaderRenderer"]?.try &.["banner"]?.try &.["thumbnails"]?
banner = banners.try &.[-1]?.try &.["url"].as_s?
if banner.includes? "channels/c4/default_banner" # if banner.includes? "channels/c4/default_banner"
banner = nil # banner = nil
end # end
description_html = about.xpath_node(%q(//div[contains(@class,"about-description")])).try &.to_s || description = initdata["metadata"]["channelMetadataRenderer"]?.try &.["description"]?.try &.as_s? || ""
%(<div class="about-description branded-page-box-padding"><pre></pre></div>) description_html = HTML.escape(description).gsub("\n", "<br>")
paid = about.xpath_node(%q(//meta[@itemprop="paid"])).not_nil!["content"] == "True" paid = about.xpath_node(%q(//meta[@itemprop="paid"])).not_nil!["content"] == "True"
is_family_friendly = about.xpath_node(%q(//meta[@itemprop="isFamilyFriendly"])).not_nil!["content"] == "True" is_family_friendly = about.xpath_node(%q(//meta[@itemprop="isFamilyFriendly"])).not_nil!["content"] == "True"
allowed_regions = about.xpath_node(%q(//meta[@itemprop="regionsAllowed"])).not_nil!["content"].split(",") allowed_regions = about.xpath_node(%q(//meta[@itemprop="regionsAllowed"])).not_nil!["content"].split(",")
related_channels = about.xpath_nodes(%q(//div[contains(@class, "branded-page-related-channels")]/ul/li)) related_channels = initdata["contents"]["twoColumnBrowseResultsRenderer"]
related_channels = related_channels.map do |node| .["secondaryContents"]?.try &.["browseSecondaryContentsRenderer"]["contents"][0]?
related_id = node["data-external-id"]? .try &.["verticalChannelSectionRenderer"]?.try &.["items"]?.try &.as_a.map do |node|
related_id ||= "" renderer = node["miniChannelRenderer"]?
related_id = renderer.try &.["channelId"]?.try &.as_s?
related_id ||= ""
anchor = node.xpath_node(%q(.//h3[contains(@class, "yt-lockup-title")]/a)) related_title = renderer.try &.["title"]?.try &.["simpleText"]?.try &.as_s?
related_title = anchor.try &.["title"] related_title ||= ""
related_title ||= ""
related_author_url = anchor.try &.["href"] related_author_url = renderer.try &.["navigationEndpoint"]?.try &.["commandMetadata"]?.try &.["webCommandMetadata"]?
related_author_url ||= "" .try &.["url"]?.try &.as_s?
related_author_url ||= ""
related_author_thumbnail = node.xpath_node(%q(.//img)).try &.["data-thumb"] related_author_thumbnails = renderer.try &.["thumbnail"]?.try &.["thumbnails"]?.try &.as_a?
related_author_thumbnail ||= "" related_author_thumbnails ||= [] of JSON::Any
AboutRelatedChannel.new( related_author_thumbnail = ""
ucid: related_id, if related_author_thumbnails.size > 0
author: related_title, related_author_thumbnail = related_author_thumbnails[-1]["url"]?.try &.as_s?
author_url: related_author_url, related_author_thumbnail ||= ""
author_thumbnail: related_author_thumbnail, end
)
end
joined = about.xpath_node(%q(//span[contains(., "Joined")])) AboutRelatedChannel.new({
.try &.content.try { |text| Time.parse(text, "Joined %b %-d, %Y", Time::Location.local) } || Time.unix(0) ucid: related_id,
author: related_title,
author_url: related_author_url,
author_thumbnail: related_author_thumbnail,
})
end
related_channels ||= [] of AboutRelatedChannel
total_views = about.xpath_node(%q(//span[contains(., "views")]/b)) total_views = 0_i64
.try &.content.try &.gsub(/\D/, "").to_i64? || 0_i64 joined = Time.unix(0)
tabs = [] of String
sub_count = about.xpath_node(%q(.//span[contains(@class, "subscriber-count")]))
.try &.["title"].try { |text| short_text_to_number(text) } || 0
# Auto-generated channels
# https://support.google.com/youtube/answer/2579942
auto_generated = false auto_generated = false
if about.xpath_node(%q(//ul[@class="about-custom-links"]/li/a[@title="Auto-generated by YouTube"])) ||
about.xpath_node(%q(//span[@class="qualified-channel-title-badge"]/span[@title="Auto-generated by YouTube"])) tabs_json = initdata["contents"]["twoColumnBrowseResultsRenderer"]["tabs"]?.try &.as_a?
auto_generated = true if !tabs_json.nil?
# Retrieve information from the tabs array. The index we are looking for varies between channels.
tabs_json.each do |node|
# Try to find the about section which is located in only one of the tabs.
channel_about_meta = node["tabRenderer"]?.try &.["content"]?.try &.["sectionListRenderer"]?
.try &.["contents"]?.try &.[0]?.try &.["itemSectionRenderer"]?.try &.["contents"]?
.try &.[0]?.try &.["channelAboutFullMetadataRenderer"]?
if !channel_about_meta.nil?
total_views = channel_about_meta["viewCountText"]?.try &.["simpleText"]?.try &.as_s.gsub(/\D/, "").to_i64? || 0_i64
# The joined text is split to several sub strings. The reduce joins those strings before parsing the date.
joined = channel_about_meta["joinedDateText"]?.try &.["runs"]?.try &.as_a.reduce("") { |acc, node| acc + node["text"].as_s }
.try { |text| Time.parse(text, "Joined %b %-d, %Y", Time::Location.local) } || Time.unix(0)
# Auto-generated channels
# https://support.google.com/youtube/answer/2579942
# For auto-generated channels, channel_about_meta only has ["description"]["simpleText"] and ["primaryLinks"][0]["title"]["simpleText"]
if (channel_about_meta["primaryLinks"]?.try &.size || 0) == 1 && (channel_about_meta["primaryLinks"][0]?) &&
(channel_about_meta["primaryLinks"][0]["title"]?.try &.["simpleText"]?.try &.as_s? || "") == "Auto-generated by YouTube"
auto_generated = true
end
end
end
tabs = tabs_json.reject { |node| node["tabRenderer"]?.nil? }.map { |node| node["tabRenderer"]["title"].as_s.downcase }
end end
tabs = about.xpath_nodes(%q(//ul[@id="channel-navigation-menu"]/li/a/span)).map { |node| node.content.downcase } sub_count = initdata["header"]["c4TabbedHeaderRenderer"]?.try &.["subscriberCountText"]?.try &.["simpleText"]?.try &.as_s?
.try { |text| short_text_to_number(text.split(" ")[0]) } || 0
AboutChannel.new( AboutChannel.new({
ucid: ucid, ucid: ucid,
author: author, author: author,
auto_generated: auto_generated, auto_generated: auto_generated,
author_url: author_url, author_url: author_url,
author_thumbnail: author_thumbnail, author_thumbnail: author_thumbnail,
banner: banner, banner: banner,
description_html: description_html, description_html: description_html,
paid: paid, paid: paid,
total_views: total_views, total_views: total_views,
sub_count: sub_count, sub_count: sub_count,
joined: joined, joined: joined,
is_family_friendly: is_family_friendly, is_family_friendly: is_family_friendly,
allowed_regions: allowed_regions, allowed_regions: allowed_regions,
related_channels: related_channels, related_channels: related_channels,
tabs: tabs tabs: tabs,
) })
end
def get_channel_videos_response(ucid, page = 1, auto_generated = nil, sort_by = "newest")
url = produce_channel_videos_url(ucid, page, auto_generated: auto_generated, sort_by: sort_by, v2: true)
return YT_POOL.client &.get(url)
end end
def get_60_videos(ucid, author, page, auto_generated, sort_by = "newest") def get_60_videos(ucid, author, page, auto_generated, sort_by = "newest")
count = 0
videos = [] of SearchVideo videos = [] of SearchVideo
2.times do |i| 2.times do |i|
url = produce_channel_videos_url(ucid, page * 2 + (i - 1), auto_generated: auto_generated, sort_by: sort_by) response = get_channel_videos_response(ucid, page * 2 + (i - 1), auto_generated: auto_generated, sort_by: sort_by)
response = YT_POOL.client &.get(url) initial_data = JSON.parse(response.body).as_a.find &.["response"]?
json = JSON.parse(response.body) break if !initial_data
videos.concat extract_videos(initial_data.as_h, author, ucid)
if json["content_html"]? && !json["content_html"].as_s.empty?
document = XML.parse_html(json["content_html"].as_s)
nodeset = document.xpath_nodes(%q(//li[contains(@class, "feed-item-container")]))
if !json["load_more_widget_html"]?.try &.as_s.empty?
count += 30
end
if auto_generated
videos += extract_videos(nodeset)
else
videos += extract_videos(nodeset, ucid, author)
end
else
break
end
end end
return videos, count return videos.size, videos
end end
def get_latest_videos(ucid) def get_latest_videos(ucid)
videos = [] of SearchVideo response = get_channel_videos_response(ucid, 1)
initial_data = JSON.parse(response.body).as_a.find &.["response"]?
return [] of SearchVideo if !initial_data
author = initial_data["response"]?.try &.["metadata"]?.try &.["channelMetadataRenderer"]?.try &.["title"]?.try &.as_s
items = extract_videos(initial_data.as_h, author, ucid)
url = produce_channel_videos_url(ucid, 0) return items
response = YT_POOL.client &.get(url)
json = JSON.parse(response.body)
if json["content_html"]? && !json["content_html"].as_s.empty?
document = XML.parse_html(json["content_html"].as_s)
nodeset = document.xpath_nodes(%q(//li[contains(@class, "feed-item-container")]))
videos = extract_videos(nodeset, ucid)
end
return videos
end end
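
The channel helpers above now read YouTube's JSON (polymer) responses instead of scraping HTML, and get_60_videos returns {count, videos} rather than {videos, count}. A rough usage sketch, assuming the surrounding Invidious code is loaded; the channel ID and name are only placeholders:

count, videos = get_60_videos("UCXuqSBlHAE6Xw-yeJA0Tunw", "Example Channel", 1, false, sort_by: "newest")
puts "#{count} videos fetched from the first two pages"

get_latest_videos("UCXuqSBlHAE6Xw-yeJA0Tunw").first?.try do |video|
  puts "latest upload: #{video.title} (#{video.published})"
end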

View File

@ -1,11 +1,23 @@
class RedditThing class RedditThing
JSON.mapping({ include JSON::Serializable
kind: String,
data: RedditComment | RedditLink | RedditMore | RedditListing, property kind : String
}) property data : RedditComment | RedditLink | RedditMore | RedditListing
end end
class RedditComment class RedditComment
include JSON::Serializable
property author : String
property body_html : String
property replies : RedditThing | String
property score : Int32
property depth : Int32
property permalink : String
@[JSON::Field(converter: RedditComment::TimeConverter)]
property created_utc : Time
module TimeConverter module TimeConverter
def self.from_json(value : JSON::PullParser) : Time def self.from_json(value : JSON::PullParser) : Time
Time.unix(value.read_float.to_i) Time.unix(value.read_float.to_i)
@ -15,51 +27,38 @@ class RedditComment
json.number(value.to_unix) json.number(value.to_unix)
end end
end end
JSON.mapping({
author: String,
body_html: String,
replies: RedditThing | String,
score: Int32,
depth: Int32,
permalink: String,
created_utc: {
type: Time,
converter: RedditComment::TimeConverter,
},
})
end end
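
Because RedditComment is now JSON::Serializable, a comment deserializes directly from the API payload; a minimal sketch with invented field values, relying on the TimeConverter above to turn Reddit's float created_utc into a Time:

comment = RedditComment.from_json(%({
  "author": "someone",
  "body_html": "<p>hi</p>",
  "replies": "",
  "score": 1,
  "depth": 0,
  "permalink": "/r/videos/comments/placeholder/",
  "created_utc": 1577836800.0
}))
puts comment.created_utc # 2020-01-01 00:00:00 UTC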
struct RedditLink struct RedditLink
JSON.mapping({ include JSON::Serializable
author: String,
score: Int32, property author : String
subreddit: String, property score : Int32
num_comments: Int32, property subreddit : String
id: String, property num_comments : Int32
permalink: String, property id : String
title: String, property permalink : String
}) property title : String
end end
struct RedditMore struct RedditMore
JSON.mapping({ include JSON::Serializable
children: Array(String),
count: Int32, property children : Array(String)
depth: Int32, property count : Int32
}) property depth : Int32
end end
class RedditListing class RedditListing
JSON.mapping({ include JSON::Serializable
children: Array(RedditThing),
modhash: String, property children : Array(RedditThing)
}) property modhash : String
end end
def fetch_youtube_comments(id, db, cursor, format, locale, thin_mode, region, sort_by = "top") def fetch_youtube_comments(id, db, cursor, format, locale, thin_mode, region, sort_by = "top")
video = get_video(id, db, region: region) video = get_video(id, db, region: region)
session_token = video.info["session_token"]? session_token = video.session_token
case cursor case cursor
when nil, "" when nil, ""
@ -85,17 +84,9 @@ def fetch_youtube_comments(id, db, cursor, format, locale, thin_mode, region, so
session_token: session_token, session_token: session_token,
} }
headers = HTTP::Headers.new headers = HTTP::Headers{
"cookie" => video.cookie,
headers["content-type"] = "application/x-www-form-urlencoded" }
headers["cookie"] = video.info["cookie"]
headers["x-client-data"] = "CIi2yQEIpbbJAQipncoBCNedygEIqKPKAQ=="
headers["x-spf-previous"] = "https://www.youtube.com/watch?v=#{id}&gl=US&hl=en&disable_polymer=1&has_verified=1&bpctr=9999999999"
headers["x-spf-referer"] = "https://www.youtube.com/watch?v=#{id}&gl=US&hl=en&disable_polymer=1&has_verified=1&bpctr=9999999999"
headers["x-youtube-client-name"] = "1"
headers["x-youtube-client-version"] = "2.20180719"
response = YT_POOL.client(region, &.post("/comment_service_ajax?action_get_comments=1&hl=en&gl=US", headers, form: post_req)) response = YT_POOL.client(region, &.post("/comment_service_ajax?action_get_comments=1&hl=en&gl=US", headers, form: post_req))
response = JSON.parse(response.body) response = JSON.parse(response.body)
@ -150,8 +141,7 @@ def fetch_youtube_comments(id, db, cursor, format, locale, thin_mode, region, so
node_comment = node["commentRenderer"] node_comment = node["commentRenderer"]
end end
content_html = node_comment["contentText"]["simpleText"]?.try &.as_s.rchop('\ufeff').try { |block| HTML.escape(block) }.to_s || content_html = node_comment["contentText"]?.try { |t| parse_content(t) } || ""
content_to_comment_html(node_comment["contentText"]["runs"].as_a).try &.to_s || ""
author = node_comment["authorText"]?.try &.["simpleText"]? || "" author = node_comment["authorText"]?.try &.["simpleText"]? || ""
json.field "author", author json.field "author", author
@ -294,7 +284,7 @@ def template_youtube_comments(comments, locale, thin_mode)
<div class="pure-u-23-24"> <div class="pure-u-23-24">
<p> <p>
<a href="javascript:void(0)" data-continuation="#{child["replies"]["continuation"]}" <a href="javascript:void(0)" data-continuation="#{child["replies"]["continuation"]}"
onclick="get_youtube_replies(this)">#{translate(locale, "View `x` replies", number_with_separator(child["replies"]["replyCount"]))}</a> data-onclick="get_youtube_replies">#{translate(locale, "View `x` replies", number_with_separator(child["replies"]["replyCount"]))}</a>
</p> </p>
</div> </div>
</div> </div>
@ -347,7 +337,7 @@ def template_youtube_comments(comments, locale, thin_mode)
END_HTML END_HTML
else else
html << <<-END_HTML html << <<-END_HTML
<iframe id='ivplayer' type='text/html' style='position:absolute;width:100%;height:100%;left:0;top:0' src='/embed/#{attachment["videoId"]?}?autoplay=0' frameborder='0'></iframe> <iframe id='ivplayer' style='position:absolute;width:100%;height:100%;left:0;top:0' src='/embed/#{attachment["videoId"]?}?autoplay=0' style='border:none;'></iframe>
END_HTML END_HTML
end end
@ -356,6 +346,7 @@ def template_youtube_comments(comments, locale, thin_mode)
</div> </div>
</div> </div>
END_HTML END_HTML
else nil # Ignore
end end
end end
@ -413,7 +404,7 @@ def template_youtube_comments(comments, locale, thin_mode)
<div class="pure-u-1"> <div class="pure-u-1">
<p> <p>
<a href="javascript:void(0)" data-continuation="#{comments["continuation"]}" <a href="javascript:void(0)" data-continuation="#{comments["continuation"]}"
onclick="get_youtube_replies(this, true)">#{translate(locale, "Load more")}</a> data-onclick="get_youtube_replies" data-load-more>#{translate(locale, "Load more")}</a>
</p> </p>
</div> </div>
</div> </div>
@ -451,7 +442,7 @@ def template_reddit_comments(root, locale)
html << <<-END_HTML html << <<-END_HTML
<p> <p>
<a href="javascript:void(0)" onclick="toggle_parent(this)">[ - ]</a> <a href="javascript:void(0)" data-onclick="toggle_parent">[ - ]</a>
<b><a href="https://www.reddit.com/user/#{child.author}">#{child.author}</a></b> <b><a href="https://www.reddit.com/user/#{child.author}">#{child.author}</a></b>
#{translate(locale, "`x` points", number_with_separator(child.score))} #{translate(locale, "`x` points", number_with_separator(child.score))}
<span title="#{child.created_utc.to_s(translate(locale, "%a %B %-d %T %Y UTC"))}">#{translate(locale, "`x` ago", recode_date(child.created_utc, locale))}</span> <span title="#{child.created_utc.to_s(translate(locale, "%a %B %-d %T %Y UTC"))}">#{translate(locale, "`x` ago", recode_date(child.created_utc, locale))}</span>
@ -522,6 +513,11 @@ def fill_links(html, scheme, host)
return html.to_xml(options: XML::SaveOptions::NO_DECL) return html.to_xml(options: XML::SaveOptions::NO_DECL)
end end
def parse_content(content : JSON::Any) : String
content["simpleText"]?.try &.as_s.rchop('\ufeff').try { |b| HTML.escape(b) }.to_s ||
content["runs"]?.try &.as_a.try { |r| content_to_comment_html(r).try &.to_s } || ""
end
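
A quick check of the new helper with synthetic inputs: both shapes YouTube uses for text ("simpleText" and "runs") collapse to a single escaped HTML string.

simple = JSON.parse(%({"simpleText": "Nice video"}))
runs   = JSON.parse(%({"runs": [{"text": "Nice "}, {"text": "video"}]}))

puts parse_content(simple) # "Nice video"
puts parse_content(runs)   # same text, assembled by content_to_comment_html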
def content_to_comment_html(content) def content_to_comment_html(content)
comment_html = content.map do |run| comment_html = content.map do |run|
text = HTML.escape(run["text"].as_s) text = HTML.escape(run["text"].as_s)
@ -556,7 +552,7 @@ def content_to_comment_html(content)
video_id = watch_endpoint["videoId"].as_s video_id = watch_endpoint["videoId"].as_s
if length_seconds if length_seconds
text = %(<a href="javascript:void(0)" onclick="player.currentTime(#{length_seconds})">#{text}</a>) text = %(<a href="javascript:void(0)" data-onclick="jump_to_time" data-jump-time="#{length_seconds}">#{text}</a>)
else else
text = %(<a href="/watch?v=#{video_id}">#{text}</a>) text = %(<a href="/watch?v=#{video_id}">#{text}</a>)
end end
@ -609,6 +605,8 @@ def produce_comment_continuation(video_id, cursor = "", sort_by = "top")
object["6:embedded"].as(Hash)["4:embedded"].as(Hash)["6:varint"] = 0_i64 object["6:embedded"].as(Hash)["4:embedded"].as(Hash)["6:varint"] = 0_i64
when "new", "newest" when "new", "newest"
object["6:embedded"].as(Hash)["4:embedded"].as(Hash)["6:varint"] = 1_i64 object["6:embedded"].as(Hash)["4:embedded"].as(Hash)["6:varint"] = 1_i64
else # top
object["6:embedded"].as(Hash)["4:embedded"].as(Hash)["6:varint"] = 0_i64
end end
continuation = object.try { |i| Protodec::Any.cast_json(object) } continuation = object.try { |i| Protodec::Any.cast_json(object) }
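
The added else branch makes the sort handling explicit: anything other than "new"/"newest" now falls back to top (varint 0). Hypothetical usage with an arbitrary video ID:

top_token    = produce_comment_continuation("jNQXAC9IVRw", sort_by: "top")
newest_token = produce_comment_continuation("jNQXAC9IVRw", sort_by: "new")
# Tokens like these drive the paging and sort order of the comments request.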

View File

@ -61,7 +61,7 @@ class Kemal::ExceptionHandler
end end
class FilteredCompressHandler < Kemal::Handler class FilteredCompressHandler < Kemal::Handler
exclude ["/videoplayback", "/videoplayback/*", "/vi/*", "/ggpht/*", "/api/v1/auth/notifications"] exclude ["/videoplayback", "/videoplayback/*", "/vi/*", "/sb/*", "/ggpht/*", "/api/v1/auth/notifications"]
exclude ["/api/v1/auth/notifications", "/data_control"], "POST" exclude ["/api/v1/auth/notifications", "/data_control"], "POST"
def call(env) def call(env)
@ -74,10 +74,10 @@ class FilteredCompressHandler < Kemal::Handler
if request_headers.includes_word?("Accept-Encoding", "gzip") if request_headers.includes_word?("Accept-Encoding", "gzip")
env.response.headers["Content-Encoding"] = "gzip" env.response.headers["Content-Encoding"] = "gzip"
env.response.output = Gzip::Writer.new(env.response.output, sync_close: true) env.response.output = Compress::Gzip::Writer.new(env.response.output, sync_close: true)
elsif request_headers.includes_word?("Accept-Encoding", "deflate") elsif request_headers.includes_word?("Accept-Encoding", "deflate")
env.response.headers["Content-Encoding"] = "deflate" env.response.headers["Content-Encoding"] = "deflate"
env.response.output = Flate::Writer.new(env.response.output, sync_close: true) env.response.output = Compress::Deflate::Writer.new(env.response.output, sync_close: true)
end end
call_next env call_next env
@ -212,29 +212,3 @@ class DenyFrame < Kemal::Handler
call_next env call_next env
end end
end end
# Temp fixes for https://github.com/crystal-lang/crystal/issues/7383
class HTTP::UnknownLengthContent
def read_byte
ensure_send_continue
if @io.is_a?(OpenSSL::SSL::Socket::Client)
return if @io.as(OpenSSL::SSL::Socket::Client).@in_buffer_rem.empty?
end
@io.read_byte
end
end
class HTTP::Client
private def handle_response(response)
if @socket.is_a?(OpenSSL::SSL::Socket::Client) && @host.ends_with?("googlevideo.com")
close unless response.keep_alive? || @socket.as(OpenSSL::SSL::Socket::Client).@in_buffer_rem.empty?
if @socket.as(OpenSSL::SSL::Socket::Client).@in_buffer_rem.empty?
@socket = nil
end
else
close unless response.keep_alive?
end
response
end
end
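
Earlier in this file the compression handler moved to the Compress::Gzip / Compress::Deflate namespaces introduced around Crystal 0.35. A minimal standalone check of that API, independent of Invidious:

require "compress/gzip"

io = IO::Memory.new
Compress::Gzip::Writer.open(io) { |gz| gz << "hello" }
puts io.size # a few dozen bytes of gzipped output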

View File

@ -1,217 +1,100 @@
require "./macros" require "./macros"
struct Nonce struct Nonce
db_mapping({ include DB::Serializable
nonce: String,
expire: Time, property nonce : String
}) property expire : Time
end end
struct SessionId struct SessionId
db_mapping({ include DB::Serializable
id: String,
email: String, property id : String
issued: String, property email : String
}) property issued : String
end end
struct Annotation struct Annotation
db_mapping({ include DB::Serializable
id: String,
annotations: String, property id : String
}) property annotations : String
end end
struct ConfigPreferences struct ConfigPreferences
module StringToArray include YAML::Serializable
def self.to_json(value : Array(String), json : JSON::Builder)
json.array do
value.each do |element|
json.string element
end
end
end
def self.from_json(value : JSON::PullParser) : Array(String) property annotations : Bool = false
begin property annotations_subscribed : Bool = false
result = [] of String property autoplay : Bool = false
value.read_array do property captions : Array(String) = ["", "", ""]
result << HTML.escape(value.read_string[0, 100]) property comments : Array(String) = ["youtube", ""]
end property continue : Bool = false
rescue ex property continue_autoplay : Bool = true
result = [HTML.escape(value.read_string[0, 100]), ""] property dark_mode : String = ""
end property latest_only : Bool = false
property listen : Bool = false
property local : Bool = false
property locale : String = "en-US"
property max_results : Int32 = 40
property notifications_only : Bool = false
property player_style : String = "invidious"
property quality : String = "hd720"
property default_home : String = "Popular"
property feed_menu : Array(String) = ["Popular", "Trending", "Subscriptions", "Playlists"]
property related_videos : Bool = true
property sort : String = "published"
property speed : Float32 = 1.0_f32
property thin_mode : Bool = false
property unseen_only : Bool = false
property video_loop : Bool = false
property volume : Int32 = 100
result def to_tuple
end {% begin %}
{
def self.to_yaml(value : Array(String), yaml : YAML::Nodes::Builder) {{*@type.instance_vars.map { |var| "#{var.name}: #{var.name}".id }}}
yaml.sequence do }
value.each do |element| {% end %}
yaml.scalar element
end
end
end
def self.from_yaml(ctx : YAML::ParseContext, node : YAML::Nodes::Node) : Array(String)
begin
unless node.is_a?(YAML::Nodes::Sequence)
node.raise "Expected sequence, not #{node.class}"
end
result = [] of String
node.nodes.each do |item|
unless item.is_a?(YAML::Nodes::Scalar)
node.raise "Expected scalar, not #{item.class}"
end
result << HTML.escape(item.value[0, 100])
end
rescue ex
if node.is_a?(YAML::Nodes::Scalar)
result = [HTML.escape(node.value[0, 100]), ""]
else
result = ["", ""]
end
end
result
end
end end
module BoolToString
def self.to_json(value : String, json : JSON::Builder)
json.string value
end
def self.from_json(value : JSON::PullParser) : String
begin
result = value.read_string
if result.empty?
CONFIG.default_user_preferences.dark_mode
else
result
end
rescue ex
if value.read_bool
"dark"
else
"light"
end
end
end
def self.to_yaml(value : String, yaml : YAML::Nodes::Builder)
yaml.scalar value
end
def self.from_yaml(ctx : YAML::ParseContext, node : YAML::Nodes::Node) : String
unless node.is_a?(YAML::Nodes::Scalar)
node.raise "Expected scalar, not #{node.class}"
end
case node.value
when "true"
"dark"
when "false"
"light"
when ""
CONFIG.default_user_preferences.dark_mode
else
node.value
end
end
end
yaml_mapping({
annotations: {type: Bool, default: false},
annotations_subscribed: {type: Bool, default: false},
autoplay: {type: Bool, default: false},
captions: {type: Array(String), default: ["", "", ""], converter: StringToArray},
comments: {type: Array(String), default: ["youtube", ""], converter: StringToArray},
continue: {type: Bool, default: false},
continue_autoplay: {type: Bool, default: true},
dark_mode: {type: String, default: "", converter: BoolToString},
latest_only: {type: Bool, default: false},
listen: {type: Bool, default: false},
local: {type: Bool, default: false},
locale: {type: String, default: "en-US"},
max_results: {type: Int32, default: 40},
notifications_only: {type: Bool, default: false},
player_style: {type: String, default: "invidious"},
quality: {type: String, default: "hd720"},
default_home: {type: String, default: "Popular"},
feed_menu: {type: Array(String), default: ["Popular", "Trending", "Subscriptions", "Playlists"]},
related_videos: {type: Bool, default: true},
sort: {type: String, default: "published"},
speed: {type: Float32, default: 1.0_f32},
thin_mode: {type: Bool, default: false},
unseen_only: {type: Bool, default: false},
video_loop: {type: Bool, default: false},
volume: {type: Int32, default: 100},
})
end end
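
With ConfigPreferences now YAML::Serializable and every field carrying a default, a partial document parses cleanly; a small sketch (the YAML string is invented):

prefs = ConfigPreferences.from_yaml("dark_mode: dark\nquality: hd720")
puts prefs.dark_mode         # "dark"
puts prefs.autoplay          # false (default)
puts prefs.to_tuple[:locale] # "en-US" (default), via the to_tuple macro above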
struct Config struct Config
module ConfigPreferencesConverter include YAML::Serializable
def self.to_yaml(value : Preferences, yaml : YAML::Nodes::Builder)
value.to_yaml(yaml)
end
def self.from_yaml(ctx : YAML::ParseContext, node : YAML::Nodes::Node) : Preferences property channel_threads : Int32 # Number of threads to use for crawling videos from channels (for updating subscriptions)
Preferences.new(*ConfigPreferences.new(ctx, node).to_tuple) property feed_threads : Int32 # Number of threads to use for updating feeds
end property db : DBConfig # Database configuration
end property full_refresh : Bool # Used for crawling channels: threads should check all videos uploaded by a channel
property https_only : Bool? # Used to tell Invidious it is behind a proxy, so links to resources should be https://
property hmac_key : String? # HMAC signing key for CSRF tokens and verifying pubsub subscriptions
property domain : String? # Domain to be used for links to resources on the site where an absolute URL is required
property use_pubsub_feeds : Bool | Int32 = false # Subscribe to channels using PubSubHubbub (requires domain, hmac_key)
property captcha_enabled : Bool = true
property login_enabled : Bool = true
property registration_enabled : Bool = true
property statistics_enabled : Bool = false
property admins : Array(String) = [] of String
property external_port : Int32? = nil
property default_user_preferences : ConfigPreferences = ConfigPreferences.from_yaml("")
property dmca_content : Array(String) = [] of String # For compliance with DMCA, disables download widget using list of video IDs
property check_tables : Bool = false # Check table integrity, automatically try to add any missing columns, create tables, etc.
property cache_annotations : Bool = false # Cache annotations requested from IA, will not cache empty annotations or annotations that only contain cards
property banner : String? = nil # Optional banner to be displayed along top of page for announcements, etc.
property hsts : Bool? = true # Enables 'Strict-Transport-Security'. Ensure that `domain` and all subdomains are served securely
property disable_proxy : Bool? | Array(String)? = false # Disable proxying server-wide: options: 'dash', 'livestreams', 'downloads', 'local'
module FamilyConverter @[YAML::Field(converter: Preferences::FamilyConverter)]
def self.to_yaml(value : Socket::Family, yaml : YAML::Nodes::Builder) property force_resolve : Socket::Family = Socket::Family::UNSPEC # Connect to YouTube over 'ipv6', 'ipv4'. Will sometimes fix issues with rate-limiting (see https://github.com/ytdl-org/youtube-dl/issues/21729)
case value property port : Int32 = 3000 # Port to listen for connections (overridden by command line argument)
when Socket::Family::UNSPEC property host_binding : String = "0.0.0.0" # Host to bind (overridden by command line argument)
yaml.scalar nil property pool_size : Int32 = 100 # Pool size for HTTP requests to youtube.com and ytimg.com (each domain has a separate pool of `pool_size`)
when Socket::Family::INET property admin_email : String = "omarroth@protonmail.com" # Email for bug reports
yaml.scalar "ipv4"
when Socket::Family::INET6
yaml.scalar "ipv6"
end
end
def self.from_yaml(ctx : YAML::ParseContext, node : YAML::Nodes::Node) : Socket::Family @[YAML::Field(converter: Preferences::StringToCookies)]
if node.is_a?(YAML::Nodes::Scalar) property cookies : HTTP::Cookies = HTTP::Cookies.new # Saved cookies in "name1=value1; name2=value2..." format
case node.value.downcase property captcha_key : String? = nil # Key for Anti-Captcha
when "ipv4"
Socket::Family::INET
when "ipv6"
Socket::Family::INET6
else
Socket::Family::UNSPEC
end
else
node.raise "Expected scalar, not #{node.class}"
end
end
end
module StringToCookies
def self.to_yaml(value : HTTP::Cookies, yaml : YAML::Nodes::Builder)
(value.map { |c| "#{c.name}=#{c.value}" }).join("; ").to_yaml(yaml)
end
def self.from_yaml(ctx : YAML::ParseContext, node : YAML::Nodes::Node) : HTTP::Cookies
unless node.is_a?(YAML::Nodes::Scalar)
node.raise "Expected scalar, not #{node.class}"
end
cookies = HTTP::Cookies.new
node.value.split(";").each do |cookie|
next if cookie.strip.empty?
name, value = cookie.split("=", 2)
cookies << HTTP::Cookie.new(name.strip, value.strip)
end
cookies
end
end
def disabled?(option) def disabled?(option)
case disabled = CONFIG.disable_proxy case disabled = CONFIG.disable_proxy
@ -223,77 +106,20 @@ struct Config
else else
return false return false
end end
else
return false
end end
end end
YAML.mapping({
channel_threads: Int32, # Number of threads to use for crawling videos from channels (for updating subscriptions)
feed_threads: Int32, # Number of threads to use for updating feeds
db: DBConfig, # Database configuration
full_refresh: Bool, # Used for crawling channels: threads should check all videos uploaded by a channel
https_only: Bool?, # Used to tell Invidious it is behind a proxy, so links to resources should be https://
hmac_key: String?, # HMAC signing key for CSRF tokens and verifying pubsub subscriptions
domain: String?, # Domain to be used for links to resources on the site where an absolute URL is required
use_pubsub_feeds: {type: Bool | Int32, default: false}, # Subscribe to channels using PubSubHubbub (requires domain, hmac_key)
top_enabled: {type: Bool, default: true},
captcha_enabled: {type: Bool, default: true},
login_enabled: {type: Bool, default: true},
registration_enabled: {type: Bool, default: true},
statistics_enabled: {type: Bool, default: false},
admins: {type: Array(String), default: [] of String},
external_port: {type: Int32?, default: nil},
default_user_preferences: {type: Preferences,
default: Preferences.new(*ConfigPreferences.from_yaml("").to_tuple),
converter: ConfigPreferencesConverter,
},
dmca_content: {type: Array(String), default: [] of String}, # For compliance with DMCA, disables download widget using list of video IDs
check_tables: {type: Bool, default: false}, # Check table integrity, automatically try to add any missing columns, create tables, etc.
cache_annotations: {type: Bool, default: false}, # Cache annotations requested from IA, will not cache empty annotations or annotations that only contain cards
banner: {type: String?, default: nil}, # Optional banner to be displayed along top of page for announcements, etc.
hsts: {type: Bool?, default: true}, # Enables 'Strict-Transport-Security'. Ensure that `domain` and all subdomains are served securely
disable_proxy: {type: Bool? | Array(String)?, default: false}, # Disable proxying server-wide: options: 'dash', 'livestreams', 'downloads', 'local'
force_resolve: {type: Socket::Family, default: Socket::Family::UNSPEC, converter: FamilyConverter}, # Connect to YouTube over 'ipv6', 'ipv4'. Will sometimes resolve fix issues with rate-limiting (see https://github.com/ytdl-org/youtube-dl/issues/21729)
port: {type: Int32, default: 3000}, # Port to listen for connections (overrided by command line argument)
host_binding: {type: String, default: "0.0.0.0"}, # Host to bind (overrided by command line argument)
pool_size: {type: Int32, default: 100}, # Pool size for HTTP requests to youtube.com and ytimg.com (each domain has a separate pool of `pool_size`)
admin_email: {type: String, default: "omarroth@protonmail.com"}, # Email for bug reports
cookies: {type: HTTP::Cookies, default: HTTP::Cookies.new, converter: StringToCookies}, # Saved cookies in "name1=value1; name2=value2..." format
captcha_key: {type: String?, default: nil}, # Key for Anti-Captcha
})
end end
struct DBConfig struct DBConfig
yaml_mapping({ include YAML::Serializable
user: String,
password: String,
host: String,
port: Int32,
dbname: String,
})
end
def rank_videos(db, n) property user : String
top = [] of {Float64, String} property password : String
property host : String
db.query("SELECT id, wilson_score, published FROM videos WHERE views > 5000 ORDER BY published DESC LIMIT 1000") do |rs| property port : Int32
rs.each do property dbname : String
id = rs.read(String)
wilson_score = rs.read(Float64)
published = rs.read(Time)
# Exponential decay, older videos tend to rank lower
temperature = wilson_score * Math.exp(-0.000005*((Time.utc - published).total_minutes))
top << {temperature, id}
end
end
top.sort!
# Make hottest come first
top.reverse!
top = top.map { |a, b| b }
return top[0..n - 1]
end end
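
A minimal configuration that satisfies the new YAML::Serializable Config and DBConfig declarations; every omitted key falls back to the defaults above (all values here are placeholders):

config = Config.from_yaml(<<-YAML)
  channel_threads: 1
  feed_threads: 1
  full_refresh: false
  db:
    user: invidious
    password: secret
    host: localhost
    port: 5432
    dbname: invidious
  YAML
puts config.port         # 3000 (default)
puts config.host_binding # "0.0.0.0" (default)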
def login_req(f_req) def login_req(f_req)
@ -334,293 +160,179 @@ def html_to_content(description_html : String)
return description return description
end end
def extract_videos(nodeset, ucid = nil, author_name = nil) def extract_videos(initial_data : Hash(String, JSON::Any), author_fallback : String? = nil, author_id_fallback : String? = nil)
videos = extract_items(nodeset, ucid, author_name) extract_items(initial_data, author_fallback, author_id_fallback).select(&.is_a?(SearchVideo)).map(&.as(SearchVideo))
videos.select { |item| item.is_a?(SearchVideo) }.map { |video| video.as(SearchVideo) }
end end
def extract_items(nodeset, ucid = nil, author_name = nil) def extract_item(item : JSON::Any, author_fallback : String? = nil, author_id_fallback : String? = nil)
# TODO: Make this a 'common', so it makes more sense to be used here if i = (item["videoRenderer"]? || item["gridVideoRenderer"]?)
video_id = i["videoId"].as_s
title = i["title"].try { |t| t["simpleText"]?.try &.as_s || t["runs"]?.try &.as_a.map(&.["text"].as_s).join("") } || ""
author_info = i["ownerText"]?.try &.["runs"].as_a[0]?
author = author_info.try &.["text"].as_s || author_fallback || ""
author_id = author_info.try &.["navigationEndpoint"]?.try &.["browseEndpoint"]["browseId"].as_s || author_id_fallback || ""
published = i["publishedTimeText"]?.try &.["simpleText"]?.try { |t| decode_date(t.as_s) } || Time.local
view_count = i["viewCountText"]?.try &.["simpleText"]?.try &.as_s.gsub(/\D+/, "").to_i64? || 0_i64
description_html = i["descriptionSnippet"]?.try { |t| parse_content(t) } || ""
length_seconds = i["lengthText"]?.try &.["simpleText"]?.try &.as_s.try { |t| decode_length_seconds(t) } ||
i["thumbnailOverlays"]?.try &.as_a.find(&.["thumbnailOverlayTimeStatusRenderer"]?).try &.["thumbnailOverlayTimeStatusRenderer"]?
.try &.["text"]?.try &.["simpleText"]?.try &.as_s.try { |t| decode_length_seconds(t) } || 0
live_now = false
paid = false
premium = false
premiere_timestamp = i["upcomingEventData"]?.try &.["startTime"]?.try { |t| Time.unix(t.as_s.to_i64) }
i["badges"]?.try &.as_a.each do |badge|
b = badge["metadataBadgeRenderer"]
case b["label"].as_s
when "LIVE NOW"
live_now = true
when "New", "4K", "CC"
# TODO
when "Premium"
paid = true
# TODO: Potentially available as i["topStandaloneBadge"]["metadataBadgeRenderer"]
premium = true
else nil # Ignore
end
end
SearchVideo.new({
title: title,
id: video_id,
author: author,
ucid: author_id,
published: published,
views: view_count,
description_html: description_html,
length_seconds: length_seconds,
live_now: live_now,
paid: paid,
premium: premium,
premiere_timestamp: premiere_timestamp,
})
elsif i = item["channelRenderer"]?
author = i["title"]["simpleText"]?.try &.as_s || author_fallback || ""
author_id = i["channelId"]?.try &.as_s || author_id_fallback || ""
author_thumbnail = i["thumbnail"]["thumbnails"]?.try &.as_a[0]?.try { |u| "https:#{u["url"]}" } || ""
subscriber_count = i["subscriberCountText"]?.try &.["simpleText"]?.try &.as_s.try { |s| short_text_to_number(s.split(" ")[0]) } || 0
auto_generated = false
auto_generated = true if !i["videoCountText"]?
video_count = i["videoCountText"]?.try &.["runs"].as_a[0]?.try &.["text"].as_s.gsub(/\D/, "").to_i || 0
description_html = i["descriptionSnippet"]?.try { |t| parse_content(t) } || ""
SearchChannel.new({
author: author,
ucid: author_id,
author_thumbnail: author_thumbnail,
subscriber_count: subscriber_count,
video_count: video_count,
description_html: description_html,
auto_generated: auto_generated,
})
elsif i = item["gridPlaylistRenderer"]?
title = i["title"]["runs"].as_a[0]?.try &.["text"].as_s || ""
plid = i["playlistId"]?.try &.as_s || ""
video_count = i["videoCountText"]["runs"].as_a[0]?.try &.["text"].as_s.gsub(/\D/, "").to_i || 0
playlist_thumbnail = i["thumbnail"]["thumbnails"][0]?.try &.["url"]?.try &.as_s || ""
SearchPlaylist.new({
title: title,
id: plid,
author: author_fallback || "",
ucid: author_id_fallback || "",
video_count: video_count,
videos: [] of SearchPlaylistVideo,
thumbnail: playlist_thumbnail,
})
elsif i = item["playlistRenderer"]?
title = i["title"]["simpleText"]?.try &.as_s || ""
plid = i["playlistId"]?.try &.as_s || ""
video_count = i["videoCount"]?.try &.as_s.to_i || 0
playlist_thumbnail = i["thumbnails"].as_a[0]?.try &.["thumbnails"]?.try &.as_a[0]?.try &.["url"].as_s || ""
author_info = i["shortBylineText"]?.try &.["runs"].as_a[0]?
author = author_info.try &.["text"].as_s || author_fallback || ""
author_id = author_info.try &.["navigationEndpoint"]?.try &.["browseEndpoint"]["browseId"].as_s || author_id_fallback || ""
videos = i["videos"]?.try &.as_a.map do |v|
v = v["childVideoRenderer"]
v_title = v["title"]["simpleText"]?.try &.as_s || ""
v_id = v["videoId"]?.try &.as_s || ""
v_length_seconds = v["lengthText"]?.try &.["simpleText"]?.try { |t| decode_length_seconds(t.as_s) } || 0
SearchPlaylistVideo.new({
title: v_title,
id: v_id,
length_seconds: v_length_seconds,
})
end || [] of SearchPlaylistVideo
# TODO: i["publishedTimeText"]?
SearchPlaylist.new({
title: title,
id: plid,
author: author,
ucid: author_id,
video_count: video_count,
videos: videos,
thumbnail: playlist_thumbnail,
})
elsif i = item["radioRenderer"]? # Mix
# TODO
elsif i = item["showRenderer"]? # Show
# TODO
elsif i = item["shelfRenderer"]?
elsif i = item["horizontalCardListRenderer"]?
elsif i = item["searchPyvRenderer"]? # Ad
end
end
def extract_items(initial_data : Hash(String, JSON::Any), author_fallback : String? = nil, author_id_fallback : String? = nil)
items = [] of SearchItem items = [] of SearchItem
nodeset.each do |node| channel_v2_response = initial_data
anchor = node.xpath_node(%q(.//h3[contains(@class, "yt-lockup-title")]/a)) .try &.["response"]?
if !anchor .try &.["continuationContents"]?
next .try &.["gridContinuation"]?
end .try &.["items"]?
title = anchor.content.strip
id = anchor["href"]
if anchor["href"].starts_with? "https://www.googleadservices.com" if channel_v2_response
next channel_v2_response.try &.as_a.each { |item|
end extract_item(item, author_fallback, author_id_fallback)
.try { |t| items << t }
author_id = node.xpath_node(%q(.//div[contains(@class, "yt-lockup-byline")]/a)).try &.["href"].split("/")[-1] || ucid || "" }
author = node.xpath_node(%q(.//div[contains(@class, "yt-lockup-byline")]/a)).try &.content.strip || author_name || "" else
description_html = node.xpath_node(%q(.//div[contains(@class, "yt-lockup-description")])).try &.to_s || "" initial_data.try { |t| t["contents"]? || t["response"]? }
.try { |t| t["twoColumnBrowseResultsRenderer"]?.try &.["tabs"].as_a.select(&.["tabRenderer"]?.try &.["selected"].as_bool)[0]?.try &.["tabRenderer"]["content"] ||
tile = node.xpath_node(%q(.//div[contains(@class, "yt-lockup-tile")])) t["twoColumnSearchResultsRenderer"]?.try &.["primaryContents"] ||
if !tile t["continuationContents"]? }
next .try { |t| t["sectionListRenderer"]? || t["sectionListContinuation"]? }
end .try &.["contents"].as_a
.each { |c| c.try &.["itemSectionRenderer"]?.try &.["contents"].as_a
case tile["class"] .try { |t| t[0]?.try &.["shelfRenderer"]?.try &.["content"]["expandedShelfContentsRenderer"]?.try &.["items"].as_a ||
when .includes? "yt-lockup-playlist" t[0]?.try &.["gridRenderer"]?.try &.["items"].as_a || t }
plid = HTTP::Params.parse(URI.parse(id).query.not_nil!)["list"] .each { |item|
extract_item(item, author_fallback, author_id_fallback)
anchor = node.xpath_node(%q(.//div[contains(@class, "yt-lockup-meta")]/a)) .try { |t| items << t }
} }
if !anchor
anchor = node.xpath_node(%q(.//ul[@class="yt-lockup-meta-info"]/li/a))
end
video_count = node.xpath_node(%q(.//span[@class="formatted-video-count-label"]/b)) ||
node.xpath_node(%q(.//span[@class="formatted-video-count-label"]))
if video_count
video_count = video_count.content
if video_count == "50+"
author = "YouTube"
author_id = "UC-9-kyTW8ZkZNDHQJ6FgpwQ"
end
video_count = video_count.gsub(/\D/, "").to_i?
end
video_count ||= 0
videos = [] of SearchPlaylistVideo
node.xpath_nodes(%q(.//*[contains(@class, "yt-lockup-playlist-items")]/li)).each do |video|
anchor = video.xpath_node(%q(.//a))
if anchor
video_title = anchor.content.strip
id = HTTP::Params.parse(URI.parse(anchor["href"]).query.not_nil!)["v"]
end
video_title ||= ""
id ||= ""
anchor = video.xpath_node(%q(.//span/span))
if anchor
length_seconds = decode_length_seconds(anchor.content)
end
length_seconds ||= 0
videos << SearchPlaylistVideo.new(
video_title,
id,
length_seconds
)
end
playlist_thumbnail = node.xpath_node(%q(.//span/img)).try &.["data-thumb"]?
playlist_thumbnail ||= node.xpath_node(%q(.//span/img)).try &.["src"]
items << SearchPlaylist.new(
title: title,
id: plid,
author: author,
ucid: author_id,
video_count: video_count,
videos: videos,
thumbnail: playlist_thumbnail
)
when .includes? "yt-lockup-channel"
author = title.strip
ucid = node.xpath_node(%q(.//button[contains(@class, "yt-uix-subscription-button")])).try &.["data-channel-external-id"]?
ucid ||= id.split("/")[-1]
author_thumbnail = node.xpath_node(%q(.//div/span/img)).try &.["data-thumb"]?
author_thumbnail ||= node.xpath_node(%q(.//div/span/img)).try &.["src"]
if author_thumbnail
author_thumbnail = URI.parse(author_thumbnail)
author_thumbnail.scheme = "https"
author_thumbnail = author_thumbnail.to_s
end
author_thumbnail ||= ""
subscriber_count = node.xpath_node(%q(.//span[contains(@class, "subscriber-count")]))
.try &.["title"].try { |text| short_text_to_number(text) } || 0
video_count = node.xpath_node(%q(.//ul[@class="yt-lockup-meta-info"]/li)).try &.content.split(" ")[0].gsub(/\D/, "").to_i?
items << SearchChannel.new(
author: author,
ucid: ucid,
author_thumbnail: author_thumbnail,
subscriber_count: subscriber_count,
video_count: video_count || 0,
description_html: description_html,
auto_generated: video_count ? false : true,
)
else
id = id.lchop("/watch?v=")
metadata = node.xpath_node(%q(.//div[contains(@class,"yt-lockup-meta")]/ul))
published = metadata.try &.xpath_node(%q(.//li[contains(text(), " ago")])).try { |node| decode_date(node.content.sub(/^[a-zA-Z]+ /, "")) }
published ||= metadata.try &.xpath_node(%q(.//span[@data-timestamp])).try { |node| Time.unix(node["data-timestamp"].to_i64) }
published ||= Time.utc
view_count = metadata.try &.xpath_node(%q(.//li[contains(text(), " views")])).try &.content.gsub(/\D/, "").to_i64?
view_count ||= 0_i64
length_seconds = node.xpath_node(%q(.//span[@class="video-time"])).try { |node| decode_length_seconds(node.content) }
length_seconds ||= -1
live_now = node.xpath_node(%q(.//span[contains(@class, "yt-badge-live")])) ? true : false
premium = node.xpath_node(%q(.//span[text()="Premium"])) ? true : false
if !premium || node.xpath_node(%q(.//span[contains(text(), "Free episode")]))
paid = false
else
paid = true
end
premiere_timestamp = node.xpath_node(%q(.//ul[@class="yt-lockup-meta-info"]/li/span[@class="localized-date"])).try &.["data-timestamp"]?.try &.to_i64
if premiere_timestamp
premiere_timestamp = Time.unix(premiere_timestamp)
end
items << SearchVideo.new(
title: title,
id: id,
author: author,
ucid: author_id,
published: published,
views: view_count,
description_html: description_html,
length_seconds: length_seconds,
live_now: live_now,
paid: paid,
premium: premium,
premiere_timestamp: premiere_timestamp
)
end
end end
return items items
end
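
Rough end-to-end sketch of the JSON-based extraction above (the request path, channel ID and fallback names are illustrative; assumes the Invidious environment, e.g. YT_POOL, is set up):

response = YT_POOL.client &.get("/channel/UCXuqSBlHAE6Xw-yeJA0Tunw/videos?gl=US&hl=en")
initial_data = extract_initial_data(response.body)
videos = extract_videos(initial_data, author_fallback: "Example Channel", author_id_fallback: "UCXuqSBlHAE6Xw-yeJA0Tunw")
videos.each { |video| puts "#{video.id}  #{video.title}" }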
def extract_shelf_items(nodeset, ucid = nil, author_name = nil)
items = [] of SearchPlaylist
nodeset.each do |shelf|
shelf_anchor = shelf.xpath_node(%q(.//h2[contains(@class, "branded-page-module-title")]))
next if !shelf_anchor
title = shelf_anchor.xpath_node(%q(.//span[contains(@class, "branded-page-module-title-text")])).try &.content.strip
title ||= ""
id = shelf_anchor.xpath_node(%q(.//a)).try &.["href"]
next if !id
shelf_is_playlist = false
videos = [] of SearchPlaylistVideo
shelf.xpath_nodes(%q(.//ul[contains(@class, "yt-uix-shelfslider-list") or contains(@class, "expanded-shelf-content-list")]/li)).each do |child_node|
type = child_node.xpath_node(%q(./div))
if !type
next
end
case type["class"]
when .includes? "yt-lockup-video"
shelf_is_playlist = true
anchor = child_node.xpath_node(%q(.//h3[contains(@class, "yt-lockup-title")]/a))
if anchor
video_title = anchor.content.strip
video_id = HTTP::Params.parse(URI.parse(anchor["href"]).query.not_nil!)["v"]
end
video_title ||= ""
video_id ||= ""
anchor = child_node.xpath_node(%q(.//span[@class="video-time"]))
if anchor
length_seconds = decode_length_seconds(anchor.content)
end
length_seconds ||= 0
videos << SearchPlaylistVideo.new(
video_title,
video_id,
length_seconds
)
when .includes? "yt-lockup-playlist"
anchor = child_node.xpath_node(%q(.//h3[contains(@class, "yt-lockup-title")]/a))
if anchor
playlist_title = anchor.content.strip
params = HTTP::Params.parse(URI.parse(anchor["href"]).query.not_nil!)
plid = params["list"]
end
playlist_title ||= ""
plid ||= ""
playlist_thumbnail = child_node.xpath_node(%q(.//span/img)).try &.["data-thumb"]?
playlist_thumbnail ||= child_node.xpath_node(%q(.//span/img)).try &.["src"]
video_count = child_node.xpath_node(%q(.//span[@class="formatted-video-count-label"]/b)) ||
child_node.xpath_node(%q(.//span[@class="formatted-video-count-label"]))
if video_count
video_count = video_count.content.gsub(/\D/, "").to_i?
end
video_count ||= 50
videos = [] of SearchPlaylistVideo
child_node.xpath_nodes(%q(.//*[contains(@class, "yt-lockup-playlist-items")]/li)).each do |video|
anchor = video.xpath_node(%q(.//a))
if anchor
video_title = anchor.content.strip
id = HTTP::Params.parse(URI.parse(anchor["href"]).query.not_nil!)["v"]
end
video_title ||= ""
id ||= ""
anchor = video.xpath_node(%q(.//span/span))
if anchor
length_seconds = decode_length_seconds(anchor.content)
end
length_seconds ||= 0
videos << SearchPlaylistVideo.new(
video_title,
id,
length_seconds
)
end
items << SearchPlaylist.new(
title: playlist_title,
id: plid,
author: author_name,
ucid: ucid,
video_count: video_count,
videos: videos,
thumbnail: playlist_thumbnail
)
end
end
if shelf_is_playlist
plid = HTTP::Params.parse(URI.parse(id).query.not_nil!)["list"]
items << SearchPlaylist.new(
title: title,
id: plid,
author: author_name,
ucid: ucid,
video_count: videos.size,
videos: videos,
thumbnail: "https://i.ytimg.com/vi/#{videos[0].id}/mqdefault.jpg"
)
end
end
return items
end end
def check_enum(db, logger, enum_name, struct_type = nil) def check_enum(db, logger, enum_name, struct_type = nil)
return # TODO
if !db.query_one?("SELECT true FROM pg_type WHERE typname = $1", enum_name, as: Bool) if !db.query_one?("SELECT true FROM pg_type WHERE typname = $1", enum_name, as: Bool)
logger.puts("CREATE TYPE #{enum_name}") logger.puts("CREATE TYPE #{enum_name}")
@ -642,18 +354,14 @@ def check_table(db, logger, table_name, struct_type = nil)
end end
end end
if !struct_type return if !struct_type
return
end
struct_array = struct_type.to_type_tuple struct_array = struct_type.type_array
column_array = get_column_array(db, table_name) column_array = get_column_array(db, table_name)
column_types = File.read("config/sql/#{table_name}.sql").match(/CREATE TABLE public\.#{table_name}\n\((?<types>[\d\D]*?)\);/) column_types = File.read("config/sql/#{table_name}.sql").match(/CREATE TABLE public\.#{table_name}\n\((?<types>[\d\D]*?)\);/)
.try &.["types"].split(",").map { |line| line.strip } .try &.["types"].split(",").map { |line| line.strip }.reject &.starts_with?("CONSTRAINT")
if !column_types return if !column_types
return
end
struct_array.each_with_index do |name, i| struct_array.each_with_index do |name, i|
if name != column_array[i]? if name != column_array[i]?
@ -704,6 +412,15 @@ def check_table(db, logger, table_name, struct_type = nil)
end end
end end
end end
return if column_array.size <= struct_array.size
column_array.each do |column|
if !struct_array.includes? column
logger.puts("ALTER TABLE #{table_name} DROP COLUMN #{column} CASCADE")
db.exec("ALTER TABLE #{table_name} DROP COLUMN #{column} CASCADE")
end
end
end end
class PG::ResultSet class PG::ResultSet
@ -732,9 +449,7 @@ def cache_annotation(db, id, annotations)
body = XML.parse(annotations) body = XML.parse(annotations)
nodeset = body.xpath_nodes(%q(/document/annotations/annotation)) nodeset = body.xpath_nodes(%q(/document/annotations/annotation))
if nodeset == 0 return if nodeset == 0
return
end
has_legacy_annotations = false has_legacy_annotations = false
nodeset.each do |node| nodeset.each do |node|
@ -744,13 +459,10 @@ def cache_annotation(db, id, annotations)
end end
end end
if has_legacy_annotations db.exec("INSERT INTO annotations VALUES ($1, $2) ON CONFLICT DO NOTHING", id, annotations) if has_legacy_annotations
# TODO: Update on conflict?
db.exec("INSERT INTO annotations VALUES ($1, $2) ON CONFLICT DO NOTHING", id, annotations)
end
end end
def create_notification_stream(env, config, kemal_config, decrypt_function, topics, connection_channel) def create_notification_stream(env, topics, connection_channel)
connection = Channel(PQ::Notification).new(8) connection = Channel(PQ::Notification).new(8)
connection_channel.send({true, connection}) connection_channel.send({true, connection})
@ -765,12 +477,12 @@ def create_notification_stream(env, config, kemal_config, decrypt_function, topi
loop do loop do
time_span = [0, 0, 0, 0] time_span = [0, 0, 0, 0]
time_span[rand(4)] = rand(30) + 5 time_span[rand(4)] = rand(30) + 5
published = Time.utc - Time::Span.new(time_span[0], time_span[1], time_span[2], time_span[3]) published = Time.utc - Time::Span.new(days: time_span[0], hours: time_span[1], minutes: time_span[2], seconds: time_span[3])
video_id = TEST_IDS[rand(TEST_IDS.size)] video_id = TEST_IDS[rand(TEST_IDS.size)]
video = get_video(video_id, PG_DB) video = get_video(video_id, PG_DB)
video.published = published video.published = published
response = JSON.parse(video.to_json(locale, config, kemal_config, decrypt_function)) response = JSON.parse(video.to_json(locale))
if fields_text = env.params.query["fields"]? if fields_text = env.params.query["fields"]?
begin begin
@ -804,7 +516,7 @@ def create_notification_stream(env, config, kemal_config, decrypt_function, topi
when .match(/UC[A-Za-z0-9_-]{22}/) when .match(/UC[A-Za-z0-9_-]{22}/)
PG_DB.query_all("SELECT * FROM channel_videos WHERE ucid = $1 AND published > $2 ORDER BY published DESC LIMIT 15", PG_DB.query_all("SELECT * FROM channel_videos WHERE ucid = $1 AND published > $2 ORDER BY published DESC LIMIT 15",
topic, Time.unix(since.not_nil!), as: ChannelVideo).each do |video| topic, Time.unix(since.not_nil!), as: ChannelVideo).each do |video|
response = JSON.parse(video.to_json(locale, config, Kemal.config)) response = JSON.parse(video.to_json(locale))
if fields_text = env.params.query["fields"]? if fields_text = env.params.query["fields"]?
begin begin
@ -846,7 +558,7 @@ def create_notification_stream(env, config, kemal_config, decrypt_function, topi
video = get_video(video_id, PG_DB) video = get_video(video_id, PG_DB)
video.published = Time.unix(published) video.published = Time.unix(published)
response = JSON.parse(video.to_json(locale, config, Kemal.config, decrypt_function)) response = JSON.parse(video.to_json(locale))
if fields_text = env.params.query["fields"]? if fields_text = env.params.query["fields"]?
begin begin
@ -884,26 +596,46 @@ def create_notification_stream(env, config, kemal_config, decrypt_function, topi
end end
end end
def extract_initial_data(body) def extract_initial_data(body) : Hash(String, JSON::Any)
initial_data = body.match(/window\["ytInitialData"\] = (?<info>.*?);\n/).try &.["info"] || "{}" initial_data = body.match(/(window\["ytInitialData"\]|var\s+ytInitialData)\s*=\s*(?<info>.*?);+\s*\n/).try &.["info"] || "{}"
if initial_data.starts_with?("JSON.parse(\"") if initial_data.starts_with?("JSON.parse(\"")
return JSON.parse(JSON.parse(%({"initial_data":"#{initial_data[12..-3]}"}))["initial_data"].as_s) return JSON.parse(JSON.parse(%({"initial_data":"#{initial_data[12..-3]}"}))["initial_data"].as_s).as_h
else else
return JSON.parse(initial_data) return JSON.parse(initial_data).as_h
end end
end end
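
The widened regex now copes with both markup variants YouTube serves; a synthetic check:

old_style = %(window["ytInitialData"] = {"ok": true};\n)
new_style = %(var ytInitialData = {"ok": true};\n)
puts extract_initial_data(old_style)["ok"] # true
puts extract_initial_data(new_style)["ok"] # true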
def proxy_file(response, env) def proxy_file(response, env)
if response.headers.includes_word?("Content-Encoding", "gzip") if response.headers.includes_word?("Content-Encoding", "gzip")
Gzip::Writer.open(env.response) do |deflate| Compress::Gzip::Writer.open(env.response) do |deflate|
response.pipe(deflate) IO.copy response.body_io, deflate
end end
elsif response.headers.includes_word?("Content-Encoding", "deflate") elsif response.headers.includes_word?("Content-Encoding", "deflate")
Flate::Writer.open(env.response) do |deflate| Compress::Deflate::Writer.open(env.response) do |deflate|
response.pipe(deflate) IO.copy response.body_io, deflate
end end
else else
response.pipe(env.response) IO.copy response.body_io, env.response
end
end
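
proxy_file now copies the streamed body with IO.copy rather than response.pipe. A standalone streaming sketch with a placeholder URL and output path:

require "http/client"

HTTP::Client.get("https://i.ytimg.com/vi/jNQXAC9IVRw/mqdefault.jpg") do |response|
  File.open("/tmp/thumb.jpg", "w") do |file|
    IO.copy(response.body_io, file)
  end
end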
# See https://github.com/kemalcr/kemal/pull/576
class HTTP::Server::Response::Output
def close
return if closed?
unless response.wrote_headers?
response.content_length = @out_count
end
ensure_headers_written
super
if @chunked
@io << "0\r\n\r\n"
@io.flush
end
end end
end end

View File

@ -24,6 +24,8 @@ def translate(locale : Hash(String, JSON::Any) | Nil, translation : String, text
if !locale[translation].as_s.empty? if !locale[translation].as_s.empty?
translation = locale[translation].as_s translation = locale[translation].as_s
end end
else
raise "Invalid translation #{translation}"
end end
end end

View File

@ -1,370 +0,0 @@
def refresh_channels(db, logger, config)
max_channel = Channel(Int32).new
spawn do
max_threads = max_channel.receive
active_threads = 0
active_channel = Channel(Bool).new
loop do
db.query("SELECT id FROM channels ORDER BY updated") do |rs|
rs.each do
id = rs.read(String)
if active_threads >= max_threads
if active_channel.receive
active_threads -= 1
end
end
active_threads += 1
spawn do
begin
channel = fetch_channel(id, db, config.full_refresh)
db.exec("UPDATE channels SET updated = $1, author = $2, deleted = false WHERE id = $3", Time.utc, channel.author, id)
rescue ex
if ex.message == "Deleted or invalid channel"
db.exec("UPDATE channels SET updated = $1, deleted = true WHERE id = $2", Time.utc, id)
end
logger.puts("#{id} : #{ex.message}")
end
active_channel.send(true)
end
end
end
sleep 1.minute
Fiber.yield
end
end
max_channel.send(config.channel_threads)
end
def refresh_feeds(db, logger, config)
max_channel = Channel(Int32).new
spawn do
max_threads = max_channel.receive
active_threads = 0
active_channel = Channel(Bool).new
loop do
db.query("SELECT email FROM users WHERE feed_needs_update = true OR feed_needs_update IS NULL") do |rs|
rs.each do
email = rs.read(String)
view_name = "subscriptions_#{sha256(email)}"
if active_threads >= max_threads
if active_channel.receive
active_threads -= 1
end
end
active_threads += 1
spawn do
begin
# Drop outdated views
column_array = get_column_array(db, view_name)
ChannelVideo.to_type_tuple.each_with_index do |name, i|
if name != column_array[i]?
logger.puts("DROP MATERIALIZED VIEW #{view_name}")
db.exec("DROP MATERIALIZED VIEW #{view_name}")
raise "view does not exist"
end
end
if !db.query_one("SELECT pg_get_viewdef('#{view_name}')", as: String).includes? "WHERE ((cv.ucid = ANY (u.subscriptions))"
logger.puts("Materialized view #{view_name} is out-of-date, recreating...")
db.exec("DROP MATERIALIZED VIEW #{view_name}")
end
db.exec("REFRESH MATERIALIZED VIEW #{view_name}")
db.exec("UPDATE users SET feed_needs_update = false WHERE email = $1", email)
rescue ex
# Rename old views
begin
legacy_view_name = "subscriptions_#{sha256(email)[0..7]}"
db.exec("SELECT * FROM #{legacy_view_name} LIMIT 0")
logger.puts("RENAME MATERIALIZED VIEW #{legacy_view_name}")
db.exec("ALTER MATERIALIZED VIEW #{legacy_view_name} RENAME TO #{view_name}")
rescue ex
begin
# While iterating through, we may have an email stored from a deleted account
if db.query_one?("SELECT true FROM users WHERE email = $1", email, as: Bool)
logger.puts("CREATE #{view_name}")
db.exec("CREATE MATERIALIZED VIEW #{view_name} AS #{MATERIALIZED_VIEW_SQL.call(email)}")
db.exec("UPDATE users SET feed_needs_update = false WHERE email = $1", email)
end
rescue ex
logger.puts("REFRESH #{email} : #{ex.message}")
end
end
end
active_channel.send(true)
end
end
end
sleep 5.seconds
Fiber.yield
end
end
max_channel.send(config.feed_threads)
end
def subscribe_to_feeds(db, logger, key, config)
if config.use_pubsub_feeds
case config.use_pubsub_feeds
when Bool
max_threads = config.use_pubsub_feeds.as(Bool).to_unsafe
when Int32
max_threads = config.use_pubsub_feeds.as(Int32)
end
max_channel = Channel(Int32).new
spawn do
max_threads = max_channel.receive
active_threads = 0
active_channel = Channel(Bool).new
loop do
db.query_all("SELECT id FROM channels WHERE CURRENT_TIMESTAMP - subscribed > interval '4 days' OR subscribed IS NULL") do |rs|
rs.each do
ucid = rs.read(String)
if active_threads >= max_threads.as(Int32)
if active_channel.receive
active_threads -= 1
end
end
active_threads += 1
spawn do
begin
response = subscribe_pubsub(ucid, key, config)
if response.status_code >= 400
logger.puts("#{ucid} : #{response.body}")
end
rescue ex
logger.puts("#{ucid} : #{ex.message}")
end
active_channel.send(true)
end
end
end
sleep 1.minute
Fiber.yield
end
end
max_channel.send(max_threads.as(Int32))
end
end
def pull_top_videos(config, db)
loop do
begin
top = rank_videos(db, 40)
rescue ex
sleep 1.minute
Fiber.yield
next
end
if top.size == 0
sleep 1.minute
Fiber.yield
next
end
videos = [] of Video
top.each do |id|
begin
videos << get_video(id, db)
rescue ex
next
end
end
yield videos
sleep 1.minute
Fiber.yield
end
end
def pull_popular_videos(db)
loop do
videos = db.query_all("SELECT DISTINCT ON (ucid) * FROM channel_videos WHERE ucid IN \
(SELECT channel FROM (SELECT UNNEST(subscriptions) AS channel FROM users) AS d \
GROUP BY channel ORDER BY COUNT(channel) DESC LIMIT 40) \
ORDER BY ucid, published DESC", as: ChannelVideo).sort_by { |video| video.published }.reverse
yield videos
sleep 1.minute
Fiber.yield
end
end
def update_decrypt_function
loop do
begin
decrypt_function = fetch_decrypt_function
yield decrypt_function
rescue ex
next
ensure
sleep 1.minute
Fiber.yield
end
end
end
def bypass_captcha(captcha_key, logger)
loop do
begin
response = YT_POOL.client &.get("/watch?v=CvFH_6DNRCY&gl=US&hl=en&disable_polymer=1&has_verified=1&bpctr=9999999999")
if response.body.includes?("To continue with your YouTube experience, please fill out the form below.")
html = XML.parse_html(response.body)
form = html.xpath_node(%(//form[@action="/das_captcha"])).not_nil!
site_key = form.xpath_node(%(.//div[@class="g-recaptcha"])).try &.["data-sitekey"]
inputs = {} of String => String
form.xpath_nodes(%(.//input[@name])).map do |node|
inputs[node["name"]] = node["value"]
end
headers = response.cookies.add_request_headers(HTTP::Headers.new)
response = JSON.parse(HTTP::Client.post("https://api.anti-captcha.com/createTask", body: {
"clientKey" => CONFIG.captcha_key,
"task" => {
"type" => "NoCaptchaTaskProxyless",
# "type" => "NoCaptchaTask",
"websiteURL" => "https://www.youtube.com/watch?v=CvFH_6DNRCY&gl=US&hl=en&disable_polymer=1&has_verified=1&bpctr=9999999999",
"websiteKey" => site_key,
# "proxyType" => "http",
# "proxyAddress" => CONFIG.proxy_address,
# "proxyPort" => CONFIG.proxy_port,
# "proxyLogin" => CONFIG.proxy_user,
# "proxyPassword" => CONFIG.proxy_pass,
# "userAgent" => "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36",
},
}.to_json).body)
if response["error"]?
raise response["error"].as_s
end
task_id = response["taskId"].as_i
loop do
sleep 10.seconds
response = JSON.parse(HTTP::Client.post("https://api.anti-captcha.com/getTaskResult", body: {
"clientKey" => CONFIG.captcha_key,
"taskId" => task_id,
}.to_json).body)
if response["status"]?.try &.== "ready"
break
elsif response["errorId"]?.try &.as_i != 0
raise response["errorDescription"].as_s
end
end
inputs["g-recaptcha-response"] = response["solution"]["gRecaptchaResponse"].as_s
response = YT_POOL.client &.post("/das_captcha", headers, form: inputs)
yield response.cookies.select { |cookie| cookie.name != "PREF" }
elsif response.headers["Location"]?.try &.includes?("/sorry/index")
location = response.headers["Location"].try { |u| URI.parse(u) }
client = QUIC::Client.new(location.host.not_nil!)
response = client.get(location.full_path)
html = XML.parse_html(response.body)
form = html.xpath_node(%(//form[@action="index"])).not_nil!
site_key = form.xpath_node(%(.//div[@class="g-recaptcha"])).try &.["data-sitekey"]
inputs = {} of String => String
form.xpath_nodes(%(.//input[@name])).map do |node|
inputs[node["name"]] = node["value"]
end
response = JSON.parse(HTTP::Client.post("https://api.anti-captcha.com/createTask", body: {
"clientKey" => CONFIG.captcha_key,
"task" => {
"type" => "NoCaptchaTaskProxyless",
"websiteURL" => location.to_s,
"websiteKey" => site_key,
},
}.to_json).body)
if response["error"]?
raise response["error"].as_s
end
task_id = response["taskId"].as_i
loop do
sleep 10.seconds
response = JSON.parse(HTTP::Client.post("https://api.anti-captcha.com/getTaskResult", body: {
"clientKey" => CONFIG.captcha_key,
"taskId" => task_id,
}.to_json).body)
if response["status"]?.try &.== "ready"
break
elsif response["errorId"]?.try &.as_i != 0
raise response["errorDescription"].as_s
end
end
inputs["g-recaptcha-response"] = response["solution"]["gRecaptchaResponse"].as_s
client.close
client = QUIC::Client.new("www.google.com")
response = client.post(location.full_path, form: inputs)
headers = HTTP::Headers{
"Cookie" => URI.parse(response.headers["location"]).query_params["google_abuse"].split(";")[0],
}
cookies = HTTP::Cookies.from_headers(headers)
yield cookies
end
rescue ex
logger.puts("Exception: #{ex.message}")
ensure
sleep 1.minute
Fiber.yield
end
end
end
def find_working_proxies(regions)
loop do
regions.each do |region|
proxies = get_proxies(region).first(20)
proxies = proxies.map { |proxy| {ip: proxy[:ip], port: proxy[:port]} }
# proxies = filter_proxies(proxies)
yield region, proxies
end
sleep 1.minute
Fiber.yield
end
end

View File

@ -1,43 +1,51 @@
macro db_mapping(mapping) module DB::Serializable
def initialize({{*mapping.keys.map { |id| "@#{id}".id }}}) macro included
end {% verbatim do %}
macro finished
def self.type_array
\{{ @type.instance_vars
.reject { |var| var.annotation(::DB::Field) && var.annotation(::DB::Field)[:ignore] }
.map { |name| name.stringify }
}}
end
def to_a def initialize(tuple)
return [ {{*mapping.keys.map { |id| "@#{id}".id }}} ] \{% for var in @type.instance_vars %}
end \{% ann = var.annotation(::DB::Field) %}
\{% if ann && ann[:ignore] %}
\{% else %}
@\{{var.name}} = tuple[:\{{var.name.id}}]
\{% end %}
\{% end %}
end
def self.to_type_tuple def to_a
return { {{*mapping.keys.map { |id| "#{id}" }}} } \{{ @type.instance_vars
.reject { |var| var.annotation(::DB::Field) && var.annotation(::DB::Field)[:ignore] }
.map { |name| name }
}}
end
end
{% end %}
end end
DB.mapping( {{mapping}} )
end end
macro json_mapping(mapping) module JSON::Serializable
def initialize({{*mapping.keys.map { |id| "@#{id}".id }}}) macro included
{% verbatim do %}
macro finished
def initialize(tuple)
\{% for var in @type.instance_vars %}
\{% ann = var.annotation(::JSON::Field) %}
\{% if ann && ann[:ignore] %}
\{% else %}
@\{{var.name}} = tuple[:\{{var.name.id}}]
\{% end %}
\{% end %}
end
end
{% end %}
end end
def to_a
return [ {{*mapping.keys.map { |id| "@#{id}".id }}} ]
end
patched_json_mapping( {{mapping}} )
YAML.mapping( {{mapping}} )
end
macro yaml_mapping(mapping)
def initialize({{*mapping.keys.map { |id| "@#{id}".id }}})
end
def to_a
return [ {{*mapping.keys.map { |id| "@#{id}".id }}} ]
end
def to_tuple
return { {{*mapping.keys.map { |id| "@#{id}".id }}} }
end
YAML.mapping({{mapping}})
end end
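
A hypothetical struct leaning on the reworked DB::Serializable hook above: the macro finished block gives it a NamedTuple-based constructor plus type_array / to_a, which the schema checks in check_table rely on (assumes the crystal-db shard and this macros file are loaded):

struct ExampleVideo
  include DB::Serializable

  property id : String
  property title : String
  property published : Time
end

video = ExampleVideo.new({id: "jNQXAC9IVRw", title: "Me at the zoo", published: Time.utc(2005, 4, 24)})
pp ExampleVideo.type_array # ["id", "title", "published"]
pp video.to_a.size         # 3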
macro templated(filename, template = "template") macro templated(filename, template = "template")

View File

@ -1,166 +0,0 @@
# Overloads https://github.com/crystal-lang/crystal/blob/0.28.0/src/json/from_json.cr#L24
def Object.from_json(string_or_io, default) : self
parser = JSON::PullParser.new(string_or_io)
new parser, default
end
# Adds configurable 'default'
macro patched_json_mapping(_properties_, strict = false)
{% for key, value in _properties_ %}
{% _properties_[key] = {type: value} unless value.is_a?(HashLiteral) || value.is_a?(NamedTupleLiteral) %}
{% end %}
{% for key, value in _properties_ %}
{% _properties_[key][:key_id] = key.id.gsub(/\?$/, "") %}
{% end %}
{% for key, value in _properties_ %}
@{{value[:key_id]}} : {{value[:type]}}{{ (value[:nilable] ? "?" : "").id }}
{% if value[:setter] == nil ? true : value[:setter] %}
def {{value[:key_id]}}=(_{{value[:key_id]}} : {{value[:type]}}{{ (value[:nilable] ? "?" : "").id }})
@{{value[:key_id]}} = _{{value[:key_id]}}
end
{% end %}
{% if value[:getter] == nil ? true : value[:getter] %}
def {{key.id}} : {{value[:type]}}{{ (value[:nilable] ? "?" : "").id }}
@{{value[:key_id]}}
end
{% end %}
{% if value[:presence] %}
@{{value[:key_id]}}_present : Bool = false
def {{value[:key_id]}}_present?
@{{value[:key_id]}}_present
end
{% end %}
{% end %}
def initialize(%pull : ::JSON::PullParser, default = nil)
{% for key, value in _properties_ %}
%var{key.id} = nil
%found{key.id} = false
{% end %}
%location = %pull.location
begin
%pull.read_begin_object
rescue exc : ::JSON::ParseException
raise ::JSON::MappingError.new(exc.message, self.class.to_s, nil, *%location, exc)
end
until %pull.kind.end_object?
%key_location = %pull.location
key = %pull.read_object_key
case key
{% for key, value in _properties_ %}
when {{value[:key] || value[:key_id].stringify}}
%found{key.id} = true
begin
%var{key.id} =
{% if value[:nilable] || value[:default] != nil %} %pull.read_null_or { {% end %}
{% if value[:root] %}
%pull.on_key!({{value[:root]}}) do
{% end %}
{% if value[:converter] %}
{{value[:converter]}}.from_json(%pull)
{% elsif value[:type].is_a?(Path) || value[:type].is_a?(Generic) %}
{{value[:type]}}.new(%pull)
{% else %}
::Union({{value[:type]}}).new(%pull)
{% end %}
{% if value[:root] %}
end
{% end %}
{% if value[:nilable] || value[:default] != nil %} } {% end %}
rescue exc : ::JSON::ParseException
raise ::JSON::MappingError.new(exc.message, self.class.to_s, {{value[:key] || value[:key_id].stringify}}, *%key_location, exc)
end
{% end %}
else
{% if strict %}
raise ::JSON::MappingError.new("Unknown JSON attribute: #{key}", self.class.to_s, nil, *%key_location, nil)
{% else %}
%pull.skip
{% end %}
end
end
%pull.read_next
{% for key, value in _properties_ %}
{% unless value[:nilable] || value[:default] != nil %}
if %var{key.id}.nil? && !%found{key.id} && !::Union({{value[:type]}}).nilable?
raise ::JSON::MappingError.new("Missing JSON attribute: {{(value[:key] || value[:key_id]).id}}", self.class.to_s, nil, *%location, nil)
end
{% end %}
{% if value[:nilable] %}
{% if value[:default] != nil %}
@{{value[:key_id]}} = %found{key.id} ? %var{key.id} : (default.responds_to?(:{{value[:key_id]}}) ? default.{{value[:key_id]}} : {{value[:default]}})
{% else %}
@{{value[:key_id]}} = %var{key.id}
{% end %}
{% elsif value[:default] != nil %}
@{{value[:key_id]}} = %var{key.id}.nil? ? (default.responds_to?(:{{value[:key_id]}}) ? default.{{value[:key_id]}} : {{value[:default]}}) : %var{key.id}
{% else %}
@{{value[:key_id]}} = (%var{key.id}).as({{value[:type]}})
{% end %}
{% if value[:presence] %}
@{{value[:key_id]}}_present = %found{key.id}
{% end %}
{% end %}
end
def to_json(json : ::JSON::Builder)
json.object do
{% for key, value in _properties_ %}
_{{value[:key_id]}} = @{{value[:key_id]}}
{% unless value[:emit_null] %}
unless _{{value[:key_id]}}.nil?
{% end %}
json.field({{value[:key] || value[:key_id].stringify}}) do
{% if value[:root] %}
{% if value[:emit_null] %}
if _{{value[:key_id]}}.nil?
nil.to_json(json)
else
{% end %}
json.object do
json.field({{value[:root]}}) do
{% end %}
{% if value[:converter] %}
if _{{value[:key_id]}}
{{ value[:converter] }}.to_json(_{{value[:key_id]}}, json)
else
nil.to_json(json)
end
{% else %}
_{{value[:key_id]}}.to_json(json)
{% end %}
{% if value[:root] %}
{% if value[:emit_null] %}
end
{% end %}
end
end
{% end %}
end
{% unless value[:emit_null] %}
end
{% end %}
{% end %}
end
end
end

View File

@ -1,69 +1,53 @@
+alias SigProc = Proc(Array(String), Int32, Array(String))
+
 def fetch_decrypt_function(id = "CvFH_6DNRCY")
-  document = YT_POOL.client &.get("/watch?v=#{id}&gl=US&hl=en&disable_polymer=1").body
-  url = document.match(/src="(?<url>\/yts\/jsbin\/player_ias-.{9}\/en_US\/base.js)"/).not_nil!["url"]
+  document = YT_POOL.client &.get("/watch?v=#{id}&gl=US&hl=en").body
+  url = document.match(/src="(?<url>\/s\/player\/[^\/]+\/player_ias[^\/]+\/en_US\/base.js)"/).not_nil!["url"]
   player = YT_POOL.client &.get(url).body

-  function_name = player.match(/^(?<name>[^=]+)=function\(a\){a=a\.split\(""\)/m).not_nil!["name"]
-  function_body = player.match(/^#{Regex.escape(function_name)}=function\(a\){(?<body>[^}]+)}/m).not_nil!["body"]
+  function_name = player.match(/^(?<name>[^=]+)=function\(\w\){\w=\w\.split\(""\);[^\. ]+\.[^( ]+/m).not_nil!["name"]
+  function_body = player.match(/^#{Regex.escape(function_name)}=function\(\w\){(?<body>[^}]+)}/m).not_nil!["body"]
   function_body = function_body.split(";")[1..-2]

   var_name = function_body[0][0, 2]
   var_body = player.delete("\n").match(/var #{Regex.escape(var_name)}={(?<body>(.*?))};/).not_nil!["body"]

-  operations = {} of String => String
+  operations = {} of String => SigProc
   var_body.split("},").each do |operation|
     op_name = operation.match(/^[^:]+/).not_nil![0]
     op_body = operation.match(/\{[^}]+/).not_nil![0]

     case op_body
     when "{a.reverse()"
-      operations[op_name] = "a"
+      operations[op_name] = ->(a : Array(String), b : Int32) { a.reverse }
     when "{a.splice(0,b)"
-      operations[op_name] = "b"
+      operations[op_name] = ->(a : Array(String), b : Int32) { a.delete_at(0..(b - 1)); a }
     else
-      operations[op_name] = "c"
+      operations[op_name] = ->(a : Array(String), b : Int32) { c = a[0]; a[0] = a[b % a.size]; a[b % a.size] = c; a }
     end
   end

-  decrypt_function = [] of {name: String, value: Int32}
+  decrypt_function = [] of {SigProc, Int32}
   function_body.each do |function|
     function = function.lchop(var_name).delete("[].")

     op_name = function.match(/[^\(]+/).not_nil![0]
-    value = function.match(/\(a,(?<value>[\d]+)\)/).not_nil!["value"].to_i
+    value = function.match(/\(\w,(?<value>[\d]+)\)/).not_nil!["value"].to_i

-    decrypt_function << {name: operations[op_name], value: value}
+    decrypt_function << {operations[op_name], value}
   end

   return decrypt_function
 end

-def decrypt_signature(fmt, code)
-  if !fmt["s"]?
-    return ""
-  end
-
-  a = fmt["s"]
-  a = a.split("")
-  code.each do |item|
-    case item[:name]
-    when "a"
-      a.reverse!
-    when "b"
-      a.delete_at(0..(item[:value] - 1))
-    when "c"
-      a = splice(a, item[:value])
-    end
-  end
-
-  signature = a.join("")
-  return "&#{fmt["sp"]?}=#{signature}"
-end
-
-def splice(a, b)
-  c = a[0]
-  a[0] = a[b % a.size]
-  a[b % a.size] = c
-  return a
-end
+def decrypt_signature(fmt : Hash(String, JSON::Any))
+  return "" if !fmt["s"]? || !fmt["sp"]?
+
+  sp = fmt["sp"].as_s
+  sig = fmt["s"].as_s.split("")
+  DECRYPT_FUNCTION.each do |proc, value|
+    sig = proc.call(sig, value)
+  end
+
+  return "&#{sp}=#{sig.join("")}"
+end
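To make the new representation concrete, here is a rough sketch of applying a {SigProc, Int32} list the way decrypt_signature does; the two operations and the scrambled string are invented for illustration and reuse the SigProc alias defined above.

# Hypothetical extracted operations: a swap keyed by its argument, and a reverse.
swap    = ->(a : Array(String), b : Int32) { c = a[0]; a[0] = a[b % a.size]; a[b % a.size] = c; a }
reverse = ->(a : Array(String), b : Int32) { a.reverse }

decrypt_function = [{swap, 3}, {reverse, 0}] of {SigProc, Int32}

sig = "abcdef".split("")
decrypt_function.each do |proc, value|
  sig = proc.call(sig, value)
end
puts sig.join("") # => "feacbd" (swap indices 0 and 3, then reverse)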

View File

@ -81,12 +81,12 @@ def send_file(env : HTTP::Server::Context, file_path : String, data : Slice(UInt
   condition = config.is_a?(Hash) && config["gzip"]? == true && filesize > minsize && Kemal::Utils.zip_types(file_path)
   if condition && request_headers.includes_word?("Accept-Encoding", "gzip")
     env.response.headers["Content-Encoding"] = "gzip"
-    Gzip::Writer.open(env.response) do |deflate|
+    Compress::Gzip::Writer.open(env.response) do |deflate|
       IO.copy(file, deflate)
     end
   elsif condition && request_headers.includes_word?("Accept-Encoding", "deflate")
     env.response.headers["Content-Encoding"] = "deflate"
-    Flate::Writer.open(env.response) do |deflate|
+    Compress::Deflate::Writer.open(env.response) do |deflate|
       IO.copy(file, deflate)
     end
   else

View File

@ -1,3 +1,5 @@
require "crypto/subtle"
def generate_token(email, scopes, expire, key, db) def generate_token(email, scopes, expire, key, db)
session = "v1:#{Base64.urlsafe_encode(Random::Secure.random_bytes(32))}" session = "v1:#{Base64.urlsafe_encode(Random::Secure.random_bytes(32))}"
PG_DB.exec("INSERT INTO session_ids VALUES ($1, $2, $3)", session, email, Time.utc) PG_DB.exec("INSERT INTO session_ids VALUES ($1, $2, $3)", session, email, Time.utc)
@ -41,15 +43,10 @@ def sign_token(key, hash)
   string_to_sign = [] of String

   hash.each do |key, value|
-    if key == "signature"
-      next
-    end
+    next if key == "signature"

-    if value.is_a?(JSON::Any)
-      case value
-      when .as_a?
-        value = value.as_a.map { |item| item.as_s }
-      end
+    if value.is_a?(JSON::Any) && value.as_a?
+      value = value.as_a.map { |i| i.as_s }
     end

     case value
@ -76,14 +73,25 @@ def validate_request(token, session, request, key, db, locale = nil)
     raise translate(locale, "Hidden field \"token\" is a required field")
   end

-  if token["signature"] != sign_token(key, token)
-    raise translate(locale, "Invalid signature")
+  expire = token["expire"]?.try &.as_i
+  if expire.try &.< Time.utc.to_unix
+    raise translate(locale, "Token is expired, please try again")
   end

   if token["session"] != session
     raise translate(locale, "Erroneous token")
   end

+  scopes = token["scopes"].as_a.map { |v| v.as_s }
+  scope = "#{request.method}:#{request.path.lchop("/api/v1/auth/").lstrip("/")}"
+  if !scopes_include_scope(scopes, scope)
+    raise translate(locale, "Invalid scope")
+  end
+
+  if !Crypto::Subtle.constant_time_compare(token["signature"].to_s, sign_token(key, token))
+    raise translate(locale, "Invalid signature")
+  end
+
   if token["nonce"]? && (nonce = db.query_one?("SELECT * FROM nonces WHERE nonce = $1", token["nonce"], as: {String, Time}))
     if nonce[1] > Time.utc
       db.exec("UPDATE nonces SET expire = $1 WHERE nonce = $2", Time.utc(1990, 1, 1), nonce[0])
@ -92,18 +100,6 @@ def validate_request(token, session, request, key, db, locale = nil)
     end
   end

-  scopes = token["scopes"].as_a.map { |v| v.as_s }
-  scope = "#{request.method}:#{request.path.lchop("/api/v1/auth/").lstrip("/")}"
-  if !scopes_include_scope(scopes, scope)
-    raise translate(locale, "Invalid scope")
-  end
-
-  expire = token["expire"]?.try &.as_i
-  if expire.try &.< Time.utc.to_unix
-    raise translate(locale, "Token is expired, please try again")
-  end
-
   return {scopes, expire, token["signature"].as_s}
 end

View File

@ -2,13 +2,16 @@ require "lsquic"
require "pool/connection" require "pool/connection"
def add_yt_headers(request) def add_yt_headers(request)
request.headers["x-youtube-client-name"] ||= "1"
request.headers["x-youtube-client-version"] ||= "1.20180719"
request.headers["user-agent"] ||= "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36" request.headers["user-agent"] ||= "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.97 Safari/537.36"
request.headers["accept-charset"] ||= "ISO-8859-1,utf-8;q=0.7,*;q=0.7" request.headers["accept-charset"] ||= "ISO-8859-1,utf-8;q=0.7,*;q=0.7"
request.headers["accept"] ||= "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" request.headers["accept"] ||= "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
request.headers["accept-language"] ||= "en-us,en;q=0.5" request.headers["accept-language"] ||= "en-us,en;q=0.5"
request.headers["cookie"] = "#{(CONFIG.cookies.map { |c| "#{c.name}=#{c.value}" }).join("; ")}; #{request.headers["cookie"]?}" return if request.resource.starts_with? "/sorry/index"
request.headers["x-youtube-client-name"] ||= "1"
request.headers["x-youtube-client-version"] ||= "2.20200609"
if !CONFIG.cookies.empty?
request.headers["cookie"] = "#{(CONFIG.cookies.map { |c| "#{c.name}=#{c.value}" }).join("; ")}; #{request.headers["cookie"]?}"
end
end end
struct QUICPool struct QUICPool
@ -77,7 +80,8 @@ def elapsed_text(elapsed)
 end

 def make_client(url : URI, region = nil)
-  client = HTTPClient.new(url)
+  # TODO: Migrate any applicable endpoints to QUIC
+  client = HTTPClient.new(url, OpenSSL::SSL::Context::Client.insecure)
   client.family = (url.host == "www.youtube.com") ? CONFIG.force_resolve : Socket::Family::UNSPEC
   client.read_timeout = 10.seconds
   client.connect_timeout = 10.seconds
@ -99,7 +103,7 @@ end
 def decode_length_seconds(string)
   length_seconds = string.gsub(/[^0-9:]/, "").split(":").map &.to_i
   length_seconds = [0] * (3 - length_seconds.size) + length_seconds
-  length_seconds = Time::Span.new(length_seconds[0], length_seconds[1], length_seconds[2])
+  length_seconds = Time::Span.new hours: length_seconds[0], minutes: length_seconds[1], seconds: length_seconds[2]
   length_seconds = length_seconds.total_seconds.to_i

   return length_seconds
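The switch to keyword arguments is behaviour-preserving; for reference, example calls against the function above:

decode_length_seconds("1:23:45") # => 5025 (1 h 23 min 45 s)
decode_length_seconds("4:05")    # => 245  (left-padded to 0:4:5)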
@ -161,6 +165,7 @@ def decode_date(string : String)
     return Time.utc
   when "yesterday"
     return Time.utc - 1.day
+  else nil # Continue
   end

   # String matches format "20 hours ago", "4 months ago"...
@ -315,7 +320,7 @@ def get_referer(env, fallback = "/", unroll = true)
   end

   referer = referer.full_path
-  referer = "/" + referer.lstrip("\/\\")
+  referer = "/" + referer.gsub(/[^\/?@&%=\-_.0-9a-zA-Z]/, "").lstrip("/\\")

   if referer == env.request.path
     referer = fallback
@ -324,47 +329,10 @@ def get_referer(env, fallback = "/", unroll = true)
return referer return referer
end end
struct VarInt
def self.from_io(io : IO, format = IO::ByteFormat::NetworkEndian) : Int32
result = 0_u32
num_read = 0
loop do
byte = io.read_byte
raise "Invalid VarInt" if !byte
value = byte & 0x7f
result |= value.to_u32 << (7 * num_read)
num_read += 1
break if byte & 0x80 == 0
raise "Invalid VarInt" if num_read > 5
end
result.to_i32
end
def self.to_io(io : IO, value : Int32)
io.write_byte 0x00 if value == 0x00
value = value.to_u32
while value != 0
byte = (value & 0x7f).to_u8
value >>= 7
if value != 0
byte |= 0x80
end
io.write_byte byte
end
end
end
 def sha256(text)
   digest = OpenSSL::Digest.new("SHA256")
   digest << text
-  return digest.hexdigest
+  return digest.final.hexstring
 end

 def subscribe_pubsub(topic, key, config)
@ -383,10 +351,8 @@ def subscribe_pubsub(topic, key, config)
   nonce = Random::Secure.hex(4)
   signature = "#{time}:#{nonce}"

-  host_url = make_host_url(config, Kemal.config)
-
   body = {
-    "hub.callback" => "#{host_url}/feed/webhook/v1:#{time}:#{nonce}:#{OpenSSL::HMAC.hexdigest(:sha1, key, signature)}",
+    "hub.callback" => "#{HOST_URL}/feed/webhook/v1:#{time}:#{nonce}:#{OpenSSL::HMAC.hexdigest(:sha1, key, signature)}",
     "hub.topic" => "https://www.youtube.com/xml/feeds/videos.xml?#{topic}",
     "hub.verify" => "async",
     "hub.mode" => "subscribe",

src/invidious/jobs.cr (new file, 13 lines)
View File

@ -0,0 +1,13 @@
module Invidious::Jobs
JOBS = [] of BaseJob
def self.register(job : BaseJob)
JOBS << job
end
def self.start_all
JOBS.each do |job|
spawn { job.begin }
end
end
end
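The registrations themselves live in src/invidious.cr, which is outside this hunk; an illustrative wiring, using constructors that do appear later in this diff, would look like:

# Illustrative only: the real call sites are in src/invidious.cr, and PG_DB,
# logger and config are assumed to come from that file.
Invidious::Jobs.register Invidious::Jobs::RefreshChannelsJob.new(PG_DB, logger, config)
Invidious::Jobs.register Invidious::Jobs::PullPopularVideosJob.new(PG_DB)
Invidious::Jobs.start_all # spawns one fiber per registered job and calls its #begin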

View File

@ -0,0 +1,3 @@
abstract class Invidious::Jobs::BaseJob
abstract def begin
end

View File

@ -0,0 +1,131 @@
class Invidious::Jobs::BypassCaptchaJob < Invidious::Jobs::BaseJob
private getter logger : Invidious::LogHandler
private getter config : Config
def initialize(@logger, @config)
end
def begin
loop do
begin
{"/watch?v=jNQXAC9IVRw&gl=US&hl=en&has_verified=1&bpctr=9999999999", produce_channel_videos_url(ucid: "UC4QobU6STFB0P71PMvOGN5A")}.each do |path|
response = YT_POOL.client &.get(path)
if response.body.includes?("To continue with your YouTube experience, please fill out the form below.")
html = XML.parse_html(response.body)
form = html.xpath_node(%(//form[@action="/das_captcha"])).not_nil!
site_key = form.xpath_node(%(.//div[@id="recaptcha"])).try &.["data-sitekey"]
s_value = form.xpath_node(%(.//div[@id="recaptcha"])).try &.["data-s"]
inputs = {} of String => String
form.xpath_nodes(%(.//input[@name])).map do |node|
inputs[node["name"]] = node["value"]
end
headers = response.cookies.add_request_headers(HTTP::Headers.new)
response = JSON.parse(HTTP::Client.post("https://api.anti-captcha.com/createTask", body: {
"clientKey" => config.captcha_key,
"task" => {
"type" => "NoCaptchaTaskProxyless",
"websiteURL" => "https://www.youtube.com#{path}",
"websiteKey" => site_key,
"recaptchaDataSValue" => s_value,
},
}.to_json).body)
raise response["error"].as_s if response["error"]?
task_id = response["taskId"].as_i
loop do
sleep 10.seconds
response = JSON.parse(HTTP::Client.post("https://api.anti-captcha.com/getTaskResult", body: {
"clientKey" => config.captcha_key,
"taskId" => task_id,
}.to_json).body)
if response["status"]?.try &.== "ready"
break
elsif response["errorId"]?.try &.as_i != 0
raise response["errorDescription"].as_s
end
end
inputs["g-recaptcha-response"] = response["solution"]["gRecaptchaResponse"].as_s
headers["Cookies"] = response["solution"]["cookies"].as_h?.try &.map { |k, v| "#{k}=#{v}" }.join("; ") || ""
response = YT_POOL.client &.post("/das_captcha", headers, form: inputs)
response.cookies
.select { |cookie| cookie.name != "PREF" }
.each { |cookie| config.cookies << cookie }
# Persist cookies between runs
File.write("config/config.yml", config.to_yaml)
elsif response.headers["Location"]?.try &.includes?("/sorry/index")
location = response.headers["Location"].try { |u| URI.parse(u) }
headers = HTTP::Headers{":authority" => location.host.not_nil!}
response = YT_POOL.client &.get(location.full_path, headers)
html = XML.parse_html(response.body)
form = html.xpath_node(%(//form[@action="index"])).not_nil!
site_key = form.xpath_node(%(.//div[@id="recaptcha"])).try &.["data-sitekey"]
s_value = form.xpath_node(%(.//div[@id="recaptcha"])).try &.["data-s"]
inputs = {} of String => String
form.xpath_nodes(%(.//input[@name])).map do |node|
inputs[node["name"]] = node["value"]
end
captcha_client = HTTPClient.new(URI.parse("https://api.anti-captcha.com"))
captcha_client.family = config.force_resolve || Socket::Family::INET
response = JSON.parse(captcha_client.post("/createTask", body: {
"clientKey" => config.captcha_key,
"task" => {
"type" => "NoCaptchaTaskProxyless",
"websiteURL" => location.to_s,
"websiteKey" => site_key,
"recaptchaDataSValue" => s_value,
},
}.to_json).body)
raise response["error"].as_s if response["error"]?
task_id = response["taskId"].as_i
loop do
sleep 10.seconds
response = JSON.parse(captcha_client.post("/getTaskResult", body: {
"clientKey" => config.captcha_key,
"taskId" => task_id,
}.to_json).body)
if response["status"]?.try &.== "ready"
break
elsif response["errorId"]?.try &.as_i != 0
raise response["errorDescription"].as_s
end
end
inputs["g-recaptcha-response"] = response["solution"]["gRecaptchaResponse"].as_s
headers["Cookies"] = response["solution"]["cookies"].as_h?.try &.map { |k, v| "#{k}=#{v}" }.join("; ") || ""
response = YT_POOL.client &.post("/sorry/index", headers: headers, form: inputs)
headers = HTTP::Headers{
"Cookie" => URI.parse(response.headers["location"]).query_params["google_abuse"].split(";")[0],
}
cookies = HTTP::Cookies.from_headers(headers)
cookies.each { |cookie| config.cookies << cookie }
# Persist cookies between runs
File.write("config/config.yml", config.to_yaml)
end
end
rescue ex
logger.puts("Exception: #{ex.message}")
ensure
sleep 1.minute
Fiber.yield
end
end
end
end

View File

@ -0,0 +1,24 @@
class Invidious::Jobs::NotificationJob < Invidious::Jobs::BaseJob
private getter connection_channel : Channel({Bool, Channel(PQ::Notification)})
private getter pg_url : URI
def initialize(@connection_channel, @pg_url)
end
def begin
connections = [] of Channel(PQ::Notification)
PG.connect_listen(pg_url, "notifications") { |event| connections.each(&.send(event)) }
loop do
action, connection = connection_channel.receive
case action
when true
connections << connection
when false
connections.delete(connection)
end
end
end
end

View File

@ -0,0 +1,27 @@
class Invidious::Jobs::PullPopularVideosJob < Invidious::Jobs::BaseJob
QUERY = <<-SQL
SELECT DISTINCT ON (ucid) *
FROM channel_videos
WHERE ucid IN (SELECT channel FROM (SELECT UNNEST(subscriptions) AS channel FROM users) AS d
GROUP BY channel ORDER BY COUNT(channel) DESC LIMIT 40)
ORDER BY ucid, published DESC
SQL
POPULAR_VIDEOS = Atomic.new([] of ChannelVideo)
private getter db : DB::Database
def initialize(@db)
end
def begin
loop do
videos = db.query_all(QUERY, as: ChannelVideo)
.sort_by(&.published)
.reverse
POPULAR_VIDEOS.set(videos)
sleep 1.minute
Fiber.yield
end
end
end
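Readers take a lock-free snapshot of the refreshed list through the Atomic; the popular_videos helper in Invidious::Routes::Home later in this diff does exactly this. A sketch, assuming the ChannelVideo fields defined in src/invidious/channels.cr (not shown in this diff):

# Snapshot the current list without locking the job's fiber.
videos = Invidious::Jobs::PullPopularVideosJob::POPULAR_VIDEOS.get
videos.first(40).each { |video| puts "#{video.author}: #{video.title}" }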

View File

@ -0,0 +1,59 @@
class Invidious::Jobs::RefreshChannelsJob < Invidious::Jobs::BaseJob
private getter db : DB::Database
private getter logger : Invidious::LogHandler
private getter config : Config
def initialize(@db, @logger, @config)
end
def begin
max_threads = config.channel_threads
lim_threads = max_threads
active_threads = 0
active_channel = Channel(Bool).new
backoff = 1.seconds
loop do
db.query("SELECT id FROM channels ORDER BY updated") do |rs|
rs.each do
id = rs.read(String)
if active_threads >= lim_threads
if active_channel.receive
active_threads -= 1
end
end
active_threads += 1
spawn do
begin
channel = fetch_channel(id, db, config.full_refresh)
lim_threads = max_threads
db.exec("UPDATE channels SET updated = $1, author = $2, deleted = false WHERE id = $3", Time.utc, channel.author, id)
rescue ex
logger.puts("#{id} : #{ex.message}")
if ex.message == "Deleted or invalid channel"
db.exec("UPDATE channels SET updated = $1, deleted = true WHERE id = $2", Time.utc, id)
else
lim_threads = 1
logger.puts("#{id} : backing off for #{backoff}s")
sleep backoff
if backoff < 1.days
backoff += backoff
else
backoff = 1.days
end
end
end
active_channel.send(true)
end
end
end
sleep 1.minute
Fiber.yield
end
end
end

View File

@ -0,0 +1,77 @@
class Invidious::Jobs::RefreshFeedsJob < Invidious::Jobs::BaseJob
private getter db : DB::Database
private getter logger : Invidious::LogHandler
private getter config : Config
def initialize(@db, @logger, @config)
end
def begin
max_threads = config.feed_threads
active_threads = 0
active_channel = Channel(Bool).new
loop do
db.query("SELECT email FROM users WHERE feed_needs_update = true OR feed_needs_update IS NULL") do |rs|
rs.each do
email = rs.read(String)
view_name = "subscriptions_#{sha256(email)}"
if active_threads >= max_threads
if active_channel.receive
active_threads -= 1
end
end
active_threads += 1
spawn do
begin
# Drop outdated views
column_array = get_column_array(db, view_name)
ChannelVideo.type_array.each_with_index do |name, i|
if name != column_array[i]?
logger.puts("DROP MATERIALIZED VIEW #{view_name}")
db.exec("DROP MATERIALIZED VIEW #{view_name}")
raise "view does not exist"
end
end
if !db.query_one("SELECT pg_get_viewdef('#{view_name}')", as: String).includes? "WHERE ((cv.ucid = ANY (u.subscriptions))"
logger.puts("Materialized view #{view_name} is out-of-date, recreating...")
db.exec("DROP MATERIALIZED VIEW #{view_name}")
end
db.exec("REFRESH MATERIALIZED VIEW #{view_name}")
db.exec("UPDATE users SET feed_needs_update = false WHERE email = $1", email)
rescue ex
# Rename old views
begin
legacy_view_name = "subscriptions_#{sha256(email)[0..7]}"
db.exec("SELECT * FROM #{legacy_view_name} LIMIT 0")
logger.puts("RENAME MATERIALIZED VIEW #{legacy_view_name}")
db.exec("ALTER MATERIALIZED VIEW #{legacy_view_name} RENAME TO #{view_name}")
rescue ex
begin
# While iterating through, we may have an email stored from a deleted account
if db.query_one?("SELECT true FROM users WHERE email = $1", email, as: Bool)
logger.puts("CREATE #{view_name}")
db.exec("CREATE MATERIALIZED VIEW #{view_name} AS #{MATERIALIZED_VIEW_SQL.call(email)}")
db.exec("UPDATE users SET feed_needs_update = false WHERE email = $1", email)
end
rescue ex
logger.puts("REFRESH #{email} : #{ex.message}")
end
end
end
active_channel.send(true)
end
end
end
sleep 5.seconds
Fiber.yield
end
end
end

View File

@ -0,0 +1,59 @@
class Invidious::Jobs::StatisticsRefreshJob < Invidious::Jobs::BaseJob
STATISTICS = {
"version" => "2.0",
"software" => {
"name" => "invidious",
"version" => "",
"branch" => "",
},
"openRegistrations" => true,
"usage" => {
"users" => {
"total" => 0_i64,
"activeHalfyear" => 0_i64,
"activeMonth" => 0_i64,
},
},
"metadata" => {
"updatedAt" => Time.utc.to_unix,
"lastChannelRefreshedAt" => 0_i64,
},
}
private getter db : DB::Database
private getter config : Config
def initialize(@db, @config, @software_config : Hash(String, String))
end
def begin
load_initial_stats
loop do
refresh_stats
sleep 1.minute
Fiber.yield
end
end
# should only be called once at the very beginning
private def load_initial_stats
STATISTICS["software"] = {
"name" => @software_config["name"],
"version" => @software_config["version"],
"branch" => @software_config["branch"],
}
STATISTICS["openRegistration"] = config.registration_enabled
end
private def refresh_stats
users = STATISTICS.dig("usage", "users").as(Hash(String, Int64))
users["total"] = db.query_one("SELECT count(*) FROM users", as: Int64)
users["activeHalfyear"] = db.query_one("SELECT count(*) FROM users WHERE CURRENT_TIMESTAMP - updated < '6 months'", as: Int64)
users["activeMonth"] = db.query_one("SELECT count(*) FROM users WHERE CURRENT_TIMESTAMP - updated < '1 month'", as: Int64)
STATISTICS["metadata"] = {
"updatedAt" => Time.utc.to_unix,
"lastChannelRefreshedAt" => db.query_one?("SELECT updated FROM channels ORDER BY updated DESC LIMIT 1", as: Time).try &.to_unix || 0_i64,
}
end
end

View File

@ -0,0 +1,52 @@
class Invidious::Jobs::SubscribeToFeedsJob < Invidious::Jobs::BaseJob
private getter db : DB::Database
private getter logger : Invidious::LogHandler
private getter hmac_key : String
private getter config : Config
def initialize(@db, @logger, @config, @hmac_key)
end
def begin
max_threads = 1
if config.use_pubsub_feeds.is_a?(Int32)
max_threads = config.use_pubsub_feeds.as(Int32)
end
active_threads = 0
active_channel = Channel(Bool).new
loop do
db.query_all("SELECT id FROM channels WHERE CURRENT_TIMESTAMP - subscribed > interval '4 days' OR subscribed IS NULL") do |rs|
rs.each do
ucid = rs.read(String)
if active_threads >= max_threads.as(Int32)
if active_channel.receive
active_threads -= 1
end
end
active_threads += 1
spawn do
begin
response = subscribe_pubsub(ucid, hmac_key, config)
if response.status_code >= 400
logger.puts("#{ucid} : #{response.body}")
end
rescue ex
logger.puts("#{ucid} : #{ex.message}")
end
active_channel.send(true)
end
end
end
sleep 1.minute
Fiber.yield
end
end
end

View File

@ -0,0 +1,19 @@
class Invidious::Jobs::UpdateDecryptFunctionJob < Invidious::Jobs::BaseJob
DECRYPT_FUNCTION = [] of {SigProc, Int32}
def begin
loop do
begin
decrypt_function = fetch_decrypt_function
DECRYPT_FUNCTION.clear
decrypt_function.each { |df| DECRYPT_FUNCTION << df }
rescue ex
# TODO: Log error
next
ensure
sleep 1.minute
Fiber.yield
end
end
end
end

View File

@ -1,32 +1,32 @@
 struct MixVideo
-  db_mapping({
-    title: String,
-    id: String,
-    author: String,
-    ucid: String,
-    length_seconds: Int32,
-    index: Int32,
-    rdid: String,
-  })
+  include DB::Serializable
+
+  property title : String
+  property id : String
+  property author : String
+  property ucid : String
+  property length_seconds : Int32
+  property index : Int32
+  property rdid : String
 end

 struct Mix
-  db_mapping({
-    title: String,
-    id: String,
-    videos: Array(MixVideo),
-  })
+  include DB::Serializable
+
+  property title : String
+  property id : String
+  property videos : Array(MixVideo)
 end

 def fetch_mix(rdid, video_id, cookies = nil, locale = nil)
   headers = HTTP::Headers.new
-  headers["User-Agent"] = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36"
   if cookies
     headers = cookies.add_request_headers(headers)
   end
-  response = YT_POOL.client &.get("/watch?v=#{video_id}&list=#{rdid}&gl=US&hl=en&has_verified=1&bpctr=9999999999", headers)
+
+  video_id = "CvFH_6DNRCY" if rdid.starts_with? "OLAK5uy_"
+  response = YT_POOL.client &.get("/watch?v=#{video_id}&list=#{rdid}&gl=US&hl=en", headers)
+
   initial_data = extract_initial_data(response.body)

   if !initial_data["contents"]["twoColumnWatchNextResults"]["playlist"]?
@ -49,23 +49,22 @@ def fetch_mix(rdid, video_id, cookies = nil, locale = nil)
id = item["videoId"].as_s id = item["videoId"].as_s
title = item["title"]?.try &.["simpleText"].as_s title = item["title"]?.try &.["simpleText"].as_s
if !title next if !title
next
end
author = item["longBylineText"]["runs"][0]["text"].as_s author = item["longBylineText"]["runs"][0]["text"].as_s
ucid = item["longBylineText"]["runs"][0]["navigationEndpoint"]["browseEndpoint"]["browseId"].as_s ucid = item["longBylineText"]["runs"][0]["navigationEndpoint"]["browseEndpoint"]["browseId"].as_s
length_seconds = decode_length_seconds(item["lengthText"]["simpleText"].as_s) length_seconds = decode_length_seconds(item["lengthText"]["simpleText"].as_s)
index = item["navigationEndpoint"]["watchEndpoint"]["index"].as_i index = item["navigationEndpoint"]["watchEndpoint"]["index"].as_i
videos << MixVideo.new( videos << MixVideo.new({
title, title: title,
id, id: id,
author, author: author,
ucid, ucid: ucid,
length_seconds, length_seconds: length_seconds,
index, index: index,
rdid rdid: rdid,
) })
end end
if !cookies if !cookies
@ -75,7 +74,11 @@ def fetch_mix(rdid, video_id, cookies = nil, locale = nil)
videos.uniq! { |video| video.id } videos.uniq! { |video| video.id }
videos = videos.first(50) videos = videos.first(50)
return Mix.new(mix_title, rdid, videos) return Mix.new({
title: mix_title,
id: rdid,
videos: videos,
})
end end
def template_mix(mix) def template_mix(mix)

View File

@ -1,26 +1,38 @@
struct PlaylistVideo struct PlaylistVideo
def to_xml(host_url, auto_generated, xml : XML::Builder) include DB::Serializable
property title : String
property id : String
property author : String
property ucid : String
property length_seconds : Int32
property published : Time
property plid : String
property index : Int64
property live_now : Bool
def to_xml(auto_generated, xml : XML::Builder)
xml.element("entry") do xml.element("entry") do
xml.element("id") { xml.text "yt:video:#{self.id}" } xml.element("id") { xml.text "yt:video:#{self.id}" }
xml.element("yt:videoId") { xml.text self.id } xml.element("yt:videoId") { xml.text self.id }
xml.element("yt:channelId") { xml.text self.ucid } xml.element("yt:channelId") { xml.text self.ucid }
xml.element("title") { xml.text self.title } xml.element("title") { xml.text self.title }
xml.element("link", rel: "alternate", href: "#{host_url}/watch?v=#{self.id}") xml.element("link", rel: "alternate", href: "#{HOST_URL}/watch?v=#{self.id}")
xml.element("author") do xml.element("author") do
if auto_generated if auto_generated
xml.element("name") { xml.text self.author } xml.element("name") { xml.text self.author }
xml.element("uri") { xml.text "#{host_url}/channel/#{self.ucid}" } xml.element("uri") { xml.text "#{HOST_URL}/channel/#{self.ucid}" }
else else
xml.element("name") { xml.text author } xml.element("name") { xml.text author }
xml.element("uri") { xml.text "#{host_url}/channel/#{ucid}" } xml.element("uri") { xml.text "#{HOST_URL}/channel/#{ucid}" }
end end
end end
xml.element("content", type: "xhtml") do xml.element("content", type: "xhtml") do
xml.element("div", xmlns: "http://www.w3.org/1999/xhtml") do xml.element("div", xmlns: "http://www.w3.org/1999/xhtml") do
xml.element("a", href: "#{host_url}/watch?v=#{self.id}") do xml.element("a", href: "#{HOST_URL}/watch?v=#{self.id}") do
xml.element("img", src: "#{host_url}/vi/#{self.id}/mqdefault.jpg") xml.element("img", src: "#{HOST_URL}/vi/#{self.id}/mqdefault.jpg")
end end
end end
end end
@ -29,23 +41,23 @@ struct PlaylistVideo
xml.element("media:group") do xml.element("media:group") do
xml.element("media:title") { xml.text self.title } xml.element("media:title") { xml.text self.title }
xml.element("media:thumbnail", url: "#{host_url}/vi/#{self.id}/mqdefault.jpg", xml.element("media:thumbnail", url: "#{HOST_URL}/vi/#{self.id}/mqdefault.jpg",
width: "320", height: "180") width: "320", height: "180")
end end
end end
end end
def to_xml(host_url, auto_generated, xml : XML::Builder? = nil) def to_xml(auto_generated, xml : XML::Builder? = nil)
if xml if xml
to_xml(host_url, auto_generated, xml) to_xml(auto_generated, xml)
else else
XML.build do |json| XML.build do |json|
to_xml(host_url, auto_generated, xml) to_xml(auto_generated, xml)
end end
end end
end end
def to_json(locale, config, kemal_config, json : JSON::Builder, index : Int32?) def to_json(locale, json : JSON::Builder, index : Int32?)
json.object do json.object do
json.field "title", self.title json.field "title", self.title
json.field "videoId", self.id json.field "videoId", self.id
@ -55,7 +67,7 @@ struct PlaylistVideo
json.field "authorUrl", "/channel/#{self.ucid}" json.field "authorUrl", "/channel/#{self.ucid}"
json.field "videoThumbnails" do json.field "videoThumbnails" do
generate_thumbnails(json, self.id, config, kemal_config) generate_thumbnails(json, self.id)
end end
if index if index
@ -69,31 +81,32 @@ struct PlaylistVideo
end end
end end
def to_json(locale, config, kemal_config, json : JSON::Builder? = nil, index : Int32? = nil) def to_json(locale, json : JSON::Builder? = nil, index : Int32? = nil)
if json if json
to_json(locale, config, kemal_config, json, index: index) to_json(locale, json, index: index)
else else
JSON.build do |json| JSON.build do |json|
to_json(locale, config, kemal_config, json, index: index) to_json(locale, json, index: index)
end end
end end
end end
db_mapping({
title: String,
id: String,
author: String,
ucid: String,
length_seconds: Int32,
published: Time,
plid: String,
index: Int64,
live_now: Bool,
})
end end
struct Playlist struct Playlist
def to_json(offset, locale, config, kemal_config, json : JSON::Builder, continuation : String? = nil) include DB::Serializable
property title : String
property id : String
property author : String
property author_thumbnail : String
property ucid : String
property description : String
property video_count : Int32
property views : Int64
property updated : Time
property thumbnail : String?
def to_json(offset, locale, json : JSON::Builder, continuation : String? = nil)
json.object do json.object do
json.field "type", "playlist" json.field "type", "playlist"
json.field "title", self.title json.field "title", self.title
@ -118,7 +131,7 @@ struct Playlist
end end
end end
json.field "description", html_to_content(self.description_html) json.field "description", self.description
json.field "descriptionHtml", self.description_html json.field "descriptionHtml", self.description_html
json.field "videoCount", self.video_count json.field "videoCount", self.video_count
@ -130,39 +143,30 @@ struct Playlist
json.array do json.array do
videos = get_playlist_videos(PG_DB, self, offset: offset, locale: locale, continuation: continuation) videos = get_playlist_videos(PG_DB, self, offset: offset, locale: locale, continuation: continuation)
videos.each_with_index do |video, index| videos.each_with_index do |video, index|
video.to_json(locale, config, Kemal.config, json) video.to_json(locale, json)
end end
end end
end end
end end
end end
def to_json(offset, locale, config, kemal_config, json : JSON::Builder? = nil, continuation : String? = nil) def to_json(offset, locale, json : JSON::Builder? = nil, continuation : String? = nil)
if json if json
to_json(offset, locale, config, kemal_config, json, continuation: continuation) to_json(offset, locale, json, continuation: continuation)
else else
JSON.build do |json| JSON.build do |json|
to_json(offset, locale, config, kemal_config, json, continuation: continuation) to_json(offset, locale, json, continuation: continuation)
end end
end end
end end
db_mapping({
title: String,
id: String,
author: String,
author_thumbnail: String,
ucid: String,
description_html: String,
video_count: Int32,
views: Int64,
updated: Time,
thumbnail: String?,
})
def privacy def privacy
PlaylistPrivacy::Public PlaylistPrivacy::Public
end end
def description_html
HTML.escape(self.description).gsub("\n", "<br>")
end
end end
enum PlaylistPrivacy enum PlaylistPrivacy
@ -172,7 +176,30 @@ enum PlaylistPrivacy
end end
struct InvidiousPlaylist struct InvidiousPlaylist
def to_json(offset, locale, config, kemal_config, json : JSON::Builder, continuation : String? = nil) include DB::Serializable
property title : String
property id : String
property author : String
property description : String = ""
property video_count : Int32
property created : Time
property updated : Time
@[DB::Field(converter: InvidiousPlaylist::PlaylistPrivacyConverter)]
property privacy : PlaylistPrivacy = PlaylistPrivacy::Private
property index : Array(Int64)
@[DB::Field(ignore: true)]
property thumbnail_id : String?
module PlaylistPrivacyConverter
def self.from_rs(rs)
return PlaylistPrivacy.parse(String.new(rs.read(Slice(UInt8))))
end
end
def to_json(offset, locale, json : JSON::Builder, continuation : String? = nil)
json.object do json.object do
json.field "type", "invidiousPlaylist" json.field "type", "invidiousPlaylist"
json.field "title", self.title json.field "title", self.title
@ -195,43 +222,23 @@ struct InvidiousPlaylist
json.array do json.array do
videos = get_playlist_videos(PG_DB, self, offset: offset, locale: locale, continuation: continuation) videos = get_playlist_videos(PG_DB, self, offset: offset, locale: locale, continuation: continuation)
videos.each_with_index do |video, index| videos.each_with_index do |video, index|
video.to_json(locale, config, Kemal.config, json, offset + index) video.to_json(locale, json, offset + index)
end end
end end
end end
end end
end end
def to_json(offset, locale, config, kemal_config, json : JSON::Builder? = nil, continuation : String? = nil) def to_json(offset, locale, json : JSON::Builder? = nil, continuation : String? = nil)
if json if json
to_json(offset, locale, config, kemal_config, json, continuation: continuation) to_json(offset, locale, json, continuation: continuation)
else else
JSON.build do |json| JSON.build do |json|
to_json(offset, locale, config, kemal_config, json, continuation: continuation) to_json(offset, locale, json, continuation: continuation)
end end
end end
end end
property thumbnail_id
module PlaylistPrivacyConverter
def self.from_rs(rs)
return PlaylistPrivacy.parse(String.new(rs.read(Slice(UInt8))))
end
end
db_mapping({
title: String,
id: String,
author: String,
description: {type: String, default: ""},
video_count: Int32,
created: Time,
updated: Time,
privacy: {type: PlaylistPrivacy, default: PlaylistPrivacy::Private, converter: PlaylistPrivacyConverter},
index: Array(Int64),
})
def thumbnail def thumbnail
@thumbnail_id ||= PG_DB.query_one?("SELECT id FROM playlist_videos WHERE plid = $1 ORDER BY array_position($2, index) LIMIT 1", self.id, self.index, as: String) || "-----------" @thumbnail_id ||= PG_DB.query_one?("SELECT id FROM playlist_videos WHERE plid = $1 ORDER BY array_position($2, index) LIMIT 1", self.id, self.index, as: String) || "-----------"
"/vi/#{@thumbnail_id}/mqdefault.jpg" "/vi/#{@thumbnail_id}/mqdefault.jpg"
@ -257,17 +264,17 @@ end
def create_playlist(db, title, privacy, user) def create_playlist(db, title, privacy, user)
plid = "IVPL#{Random::Secure.urlsafe_base64(24)[0, 31]}" plid = "IVPL#{Random::Secure.urlsafe_base64(24)[0, 31]}"
playlist = InvidiousPlaylist.new( playlist = InvidiousPlaylist.new({
title: title.byte_slice(0, 150), title: title.byte_slice(0, 150),
id: plid, id: plid,
author: user.email, author: user.email,
description: "", # Max 5000 characters description: "", # Max 5000 characters
video_count: 0, video_count: 0,
created: Time.utc, created: Time.utc,
updated: Time.utc, updated: Time.utc,
privacy: privacy, privacy: privacy,
index: [] of Int64, index: [] of Int64,
) })
playlist_array = playlist.to_a playlist_array = playlist.to_a
args = arg_array(playlist_array) args = arg_array(playlist_array)
@ -277,50 +284,25 @@ def create_playlist(db, title, privacy, user)
return playlist return playlist
end end
def extract_playlist(plid, nodeset, index) def subscribe_playlist(db, user, playlist)
videos = [] of PlaylistVideo playlist = InvidiousPlaylist.new({
title: playlist.title.byte_slice(0, 150),
id: playlist.id,
author: user.email,
description: "", # Max 5000 characters
video_count: playlist.video_count,
created: Time.utc,
updated: playlist.updated,
privacy: PlaylistPrivacy::Private,
index: [] of Int64,
})
nodeset.each_with_index do |video, offset| playlist_array = playlist.to_a
anchor = video.xpath_node(%q(.//td[@class="pl-video-title"])) args = arg_array(playlist_array)
if !anchor
next
end
title = anchor.xpath_node(%q(.//a)).not_nil!.content.strip(" \n") db.exec("INSERT INTO playlists VALUES (#{args})", args: playlist_array)
id = anchor.xpath_node(%q(.//a)).not_nil!["href"].lchop("/watch?v=")[0, 11]
anchor = anchor.xpath_node(%q(.//div[@class="pl-video-owner"]/a)) return playlist
if anchor
author = anchor.content
ucid = anchor["href"].split("/")[2]
else
author = ""
ucid = ""
end
anchor = video.xpath_node(%q(.//td[@class="pl-video-time"]/div/div[1]))
if anchor && !anchor.content.empty?
length_seconds = decode_length_seconds(anchor.content)
live_now = false
else
length_seconds = 0
live_now = true
end
videos << PlaylistVideo.new(
title: title,
id: id,
author: author,
ucid: ucid,
length_seconds: length_seconds,
published: Time.utc,
plid: plid,
index: (index + offset).to_i64,
live_now: live_now
)
end
return videos
end end
def produce_playlist_url(id, index) def produce_playlist_url(id, index)
@ -368,58 +350,64 @@ def fetch_playlist(plid, locale)
plid = "UU#{plid.lchop("UC")}" plid = "UU#{plid.lchop("UC")}"
end end
response = YT_POOL.client &.get("/playlist?list=#{plid}&hl=en&disable_polymer=1") response = YT_POOL.client &.get("/playlist?list=#{plid}&hl=en")
if response.status_code != 200 if response.status_code != 200
raise translate(locale, "Not a playlist.") if response.headers["location"]?.try &.includes? "/sorry/index"
raise "Could not extract playlist info. Instance is likely blocked."
else
raise translate(locale, "Not a playlist.")
end
end end
body = response.body.gsub(/<button[^>]+><span[^>]+>\s*less\s*<img[^>]+>\n<\/span><\/button>/, "") initial_data = extract_initial_data(response.body)
document = XML.parse_html(body) playlist_info = initial_data["sidebar"]?.try &.["playlistSidebarRenderer"]?.try &.["items"]?.try &.[0]["playlistSidebarPrimaryInfoRenderer"]?
title = document.xpath_node(%q(//h1[@class="pl-header-title"])) raise "Could not extract playlist info" if !playlist_info
if !title title = playlist_info["title"]?.try &.["runs"][0]?.try &.["text"]?.try &.as_s || ""
raise translate(locale, "Playlist does not exist.")
desc_item = playlist_info["description"]?
description = desc_item.try &.["runs"]?.try &.as_a.map(&.["text"].as_s).join("") || desc_item.try &.["simpleText"]?.try &.as_s || ""
thumbnail = playlist_info["thumbnailRenderer"]?.try &.["playlistVideoThumbnailRenderer"]?
.try &.["thumbnail"]["thumbnails"][0]["url"]?.try &.as_s
views = 0_i64
updated = Time.utc
video_count = 0
playlist_info["stats"]?.try &.as_a.each do |stat|
text = stat["runs"]?.try &.as_a.map(&.["text"].as_s).join("") || stat["simpleText"]?.try &.as_s
next if !text
if text.includes? "video"
video_count = text.gsub(/\D/, "").to_i? || 0
elsif text.includes? "view"
views = text.gsub(/\D/, "").to_i64? || 0_i64
else
updated = decode_date(text.lchop("Last updated on ").lchop("Updated "))
end
end end
title = title.content.strip(" \n")
description_html = document.xpath_node(%q(//span[@class="pl-header-description-text"]/div/div[1])).try &.to_s || author_info = initial_data["sidebar"]?.try &.["playlistSidebarRenderer"]?.try &.["items"]?.try &.[1]["playlistSidebarSecondaryInfoRenderer"]?
document.xpath_node(%q(//span[@class="pl-header-description-text"])).try &.to_s || "" .try &.["videoOwner"]["videoOwnerRenderer"]?
playlist_thumbnail = document.xpath_node(%q(//div[@class="pl-header-thumb"]/img)).try &.["data-thumb"]? || raise "Could not extract author info" if !author_info
document.xpath_node(%q(//div[@class="pl-header-thumb"]/img)).try &.["src"]
# YouTube allows anonymous playlists, so most of this can be empty or optional author_thumbnail = author_info["thumbnail"]["thumbnails"][0]["url"]?.try &.as_s || ""
anchor = document.xpath_node(%q(//ul[@class="pl-header-details"])) author = author_info["title"]["runs"][0]["text"]?.try &.as_s || ""
author = anchor.try &.xpath_node(%q(.//li[1]/a)).try &.content ucid = author_info["title"]["runs"][0]["navigationEndpoint"]["browseEndpoint"]["browseId"]?.try &.as_s || ""
author ||= ""
author_thumbnail = document.xpath_node(%q(//img[@class="channel-header-profile-image"])).try &.["src"]
author_thumbnail ||= ""
ucid = anchor.try &.xpath_node(%q(.//li[1]/a)).try &.["href"].split("/")[-1]
ucid ||= ""
video_count = anchor.try &.xpath_node(%q(.//li[2])).try &.content.gsub(/\D/, "").to_i? return Playlist.new({
video_count ||= 0 title: title,
id: plid,
views = anchor.try &.xpath_node(%q(.//li[3])).try &.content.gsub(/\D/, "").to_i64? author: author,
views ||= 0_i64
updated = anchor.try &.xpath_node(%q(.//li[4])).try &.content.lchop("Last updated on ").lchop("Updated ").try { |date| decode_date(date) }
updated ||= Time.utc
playlist = Playlist.new(
title: title,
id: plid,
author: author,
author_thumbnail: author_thumbnail, author_thumbnail: author_thumbnail,
ucid: ucid, ucid: ucid,
description_html: description_html, description: description,
video_count: video_count, video_count: video_count,
views: views, views: views,
updated: updated, updated: updated,
thumbnail: playlist_thumbnail, thumbnail: thumbnail,
) })
return playlist
end end
def get_playlist_videos(db, playlist, offset, locale = nil, continuation = nil) def get_playlist_videos(db, playlist, offset, locale = nil, continuation = nil)
@ -437,35 +425,26 @@ end
def fetch_playlist_videos(plid, video_count, offset = 0, locale = nil, continuation = nil) def fetch_playlist_videos(plid, video_count, offset = 0, locale = nil, continuation = nil)
if continuation if continuation
html = YT_POOL.client &.get("/watch?v=#{continuation}&list=#{plid}&gl=US&hl=en&disable_polymer=1&has_verified=1&bpctr=9999999999") response = YT_POOL.client &.get("/watch?v=#{continuation}&list=#{plid}&gl=US&hl=en")
html = XML.parse_html(html.body) initial_data = extract_initial_data(response.body)
offset = initial_data["currentVideoEndpoint"]?.try &.["watchEndpoint"]?.try &.["index"]?.try &.as_i64 || offset
index = html.xpath_node(%q(//span[@id="playlist-current-index"])).try &.content.to_i?.try &.- 1
offset = index || offset
end end
if video_count > 100 if video_count > 100
url = produce_playlist_url(plid, offset) url = produce_playlist_url(plid, offset)
response = YT_POOL.client &.get(url) response = YT_POOL.client &.get(url)
response = JSON.parse(response.body) initial_data = JSON.parse(response.body).as_a.find(&.as_h.["response"]?).try &.as_h
if !response["content_html"]? || response["content_html"].as_s.empty?
raise translate(locale, "Empty playlist")
end
document = XML.parse_html(response["content_html"].as_s)
nodeset = document.xpath_nodes(%q(.//tr[contains(@class, "pl-video")]))
videos = extract_playlist(plid, nodeset, offset)
elsif offset > 100 elsif offset > 100
return [] of PlaylistVideo return [] of PlaylistVideo
else # Extract first page of videos else # Extract first page of videos
response = YT_POOL.client &.get("/playlist?list=#{plid}&gl=US&hl=en&disable_polymer=1") response = YT_POOL.client &.get("/playlist?list=#{plid}&gl=US&hl=en")
document = XML.parse_html(response.body) initial_data = extract_initial_data(response.body)
nodeset = document.xpath_nodes(%q(.//tr[contains(@class, "pl-video")]))
videos = extract_playlist(plid, nodeset, 0)
end end
return [] of PlaylistVideo if !initial_data
videos = extract_playlist_videos(initial_data)
until videos.empty? || videos[0].index == offset until videos.empty? || videos[0].index == offset
videos.shift videos.shift
end end
@ -473,6 +452,45 @@ def fetch_playlist_videos(plid, video_count, offset = 0, locale = nil, continuat
return videos return videos
end end
def extract_playlist_videos(initial_data : Hash(String, JSON::Any))
videos = [] of PlaylistVideo
(initial_data["contents"]?.try &.["twoColumnBrowseResultsRenderer"]["tabs"].as_a.select(&.["tabRenderer"]["selected"]?.try &.as_bool)[0]["tabRenderer"]["content"]["sectionListRenderer"]["contents"][0]["itemSectionRenderer"]["contents"][0]["playlistVideoListRenderer"]["contents"].as_a ||
initial_data["response"]?.try &.["continuationContents"]["playlistVideoListContinuation"]["contents"].as_a).try &.each do |item|
if i = item["playlistVideoRenderer"]?
video_id = i["navigationEndpoint"]["watchEndpoint"]["videoId"].as_s
plid = i["navigationEndpoint"]["watchEndpoint"]["playlistId"].as_s
index = i["navigationEndpoint"]["watchEndpoint"]["index"].as_i64
thumbnail = i["thumbnail"]["thumbnails"][0]["url"].as_s
title = i["title"].try { |t| t["simpleText"]? || t["runs"]?.try &.[0]["text"]? }.try &.as_s || ""
author = i["shortBylineText"]?.try &.["runs"][0]["text"].as_s || ""
ucid = i["shortBylineText"]?.try &.["runs"][0]["navigationEndpoint"]["browseEndpoint"]["browseId"].as_s || ""
length_seconds = i["lengthSeconds"]?.try &.as_s.to_i
live = false
if !length_seconds
live = true
length_seconds = 0
end
videos << PlaylistVideo.new({
title: title,
id: video_id,
author: author,
ucid: ucid,
length_seconds: length_seconds,
published: Time.utc,
plid: plid,
live_now: live,
index: index,
})
end
end
return videos
end
def template_playlist(playlist) def template_playlist(playlist)
html = <<-END_HTML html = <<-END_HTML
<h3> <h3>

View File

@ -0,0 +1,9 @@
abstract class Invidious::Routes::BaseRoute
private getter config : Config
private getter logger : Invidious::LogHandler
def initialize(@config, @logger)
end
abstract def handle(env)
end

View File

@ -0,0 +1,27 @@
class Invidious::Routes::Embed::Index < Invidious::Routes::BaseRoute
def handle(env)
locale = LOCALES[env.get("preferences").as(Preferences).locale]?
if plid = env.params.query["list"]?.try &.gsub(/[^a-zA-Z0-9_-]/, "")
begin
playlist = get_playlist(PG_DB, plid, locale: locale)
offset = env.params.query["index"]?.try &.to_i? || 0
videos = get_playlist_videos(PG_DB, playlist, offset: offset, locale: locale)
rescue ex
error_message = ex.message
env.response.status_code = 500
return templated "error"
end
url = "/embed/#{videos[0].id}?#{env.params.query}"
if env.params.query.size > 0
url += "?#{env.params.query}"
end
else
url = "/"
end
env.redirect url
end
end

View File

@ -0,0 +1,174 @@
class Invidious::Routes::Embed::Show < Invidious::Routes::BaseRoute
def handle(env)
locale = LOCALES[env.get("preferences").as(Preferences).locale]?
id = env.params.url["id"]
plid = env.params.query["list"]?.try &.gsub(/[^a-zA-Z0-9_-]/, "")
continuation = process_continuation(PG_DB, env.params.query, plid, id)
if md = env.params.query["playlist"]?
.try &.match(/[a-zA-Z0-9_-]{11}(,[a-zA-Z0-9_-]{11})*/)
video_series = md[0].split(",")
env.params.query.delete("playlist")
end
preferences = env.get("preferences").as(Preferences)
if id.includes?("%20") || id.includes?("+") || env.params.query.to_s.includes?("%20") || env.params.query.to_s.includes?("+")
id = env.params.url["id"].gsub("%20", "").delete("+")
url = "/embed/#{id}"
if env.params.query.size > 0
url += "?#{env.params.query.to_s.gsub("%20", "").delete("+")}"
end
return env.redirect url
end
# YouTube embed supports `videoseries` with either `list=PLID`
# or `playlist=VIDEO_ID,VIDEO_ID`
case id
when "videoseries"
url = ""
if plid
begin
playlist = get_playlist(PG_DB, plid, locale: locale)
offset = env.params.query["index"]?.try &.to_i? || 0
videos = get_playlist_videos(PG_DB, playlist, offset: offset, locale: locale)
rescue ex
error_message = ex.message
env.response.status_code = 500
return templated "error"
end
url = "/embed/#{videos[0].id}"
elsif video_series
url = "/embed/#{video_series.shift}"
env.params.query["playlist"] = video_series.join(",")
else
return env.redirect "/"
end
if env.params.query.size > 0
url += "?#{env.params.query}"
end
return env.redirect url
when "live_stream"
response = YT_POOL.client &.get("/embed/live_stream?channel=#{env.params.query["channel"]? || ""}")
video_id = response.body.match(/"video_id":"(?<video_id>[a-zA-Z0-9_-]{11})"/).try &.["video_id"]
env.params.query.delete_all("channel")
if !video_id || video_id == "live_stream"
error_message = "Video is unavailable."
return templated "error"
end
url = "/embed/#{video_id}"
if env.params.query.size > 0
url += "?#{env.params.query}"
end
return env.redirect url
when id.size > 11
url = "/embed/#{id[0, 11]}"
if env.params.query.size > 0
url += "?#{env.params.query}"
end
return env.redirect url
else nil # Continue
end
params = process_video_params(env.params.query, preferences)
user = env.get?("user").try &.as(User)
if user
subscriptions = user.subscriptions
watched = user.watched
notifications = user.notifications
end
subscriptions ||= [] of String
begin
video = get_video(id, PG_DB, region: params.region)
rescue ex : VideoRedirect
return env.redirect env.request.resource.gsub(id, ex.video_id)
rescue ex
error_message = ex.message
env.response.status_code = 500
return templated "error"
end
if preferences.annotations_subscribed &&
subscriptions.includes?(video.ucid) &&
(env.params.query["iv_load_policy"]? || "1") == "1"
params.annotations = true
end
# if watched && !watched.includes? id
# PG_DB.exec("UPDATE users SET watched = array_append(watched, $1) WHERE email = $2", id, user.as(User).email)
# end
if notifications && notifications.includes? id
PG_DB.exec("UPDATE users SET notifications = array_remove(notifications, $1) WHERE email = $2", id, user.as(User).email)
env.get("user").as(User).notifications.delete(id)
notifications.delete(id)
end
fmt_stream = video.fmt_stream
adaptive_fmts = video.adaptive_fmts
if params.local
fmt_stream.each { |fmt| fmt["url"] = JSON::Any.new(URI.parse(fmt["url"].as_s).full_path) }
adaptive_fmts.each { |fmt| fmt["url"] = JSON::Any.new(URI.parse(fmt["url"].as_s).full_path) }
end
video_streams = video.video_streams
audio_streams = video.audio_streams
if audio_streams.empty? && !video.live_now
if params.quality == "dash"
env.params.query.delete_all("quality")
return env.redirect "/embed/#{id}?#{env.params.query}"
elsif params.listen
env.params.query.delete_all("listen")
env.params.query["listen"] = "0"
return env.redirect "/embed/#{id}?#{env.params.query}"
end
end
captions = video.captions
preferred_captions = captions.select { |caption|
params.preferred_captions.includes?(caption.name.simpleText) ||
params.preferred_captions.includes?(caption.languageCode.split("-")[0])
}
preferred_captions.sort_by! { |caption|
(params.preferred_captions.index(caption.name.simpleText) ||
params.preferred_captions.index(caption.languageCode.split("-")[0])).not_nil!
}
captions = captions - preferred_captions
aspect_ratio = nil
thumbnail = "/vi/#{video.id}/maxres.jpg"
if params.raw
url = fmt_stream[0]["url"].as_s
fmt_stream.each do |fmt|
url = fmt["url"].as_s if fmt["quality"].as_s == params.quality
end
return env.redirect url
end
rendered "embed"
end
end

View File

@ -0,0 +1,34 @@
class Invidious::Routes::Home < Invidious::Routes::BaseRoute
def handle(env)
preferences = env.get("preferences").as(Preferences)
locale = LOCALES[preferences.locale]?
user = env.get? "user"
case preferences.default_home
when ""
templated "empty"
when "Popular"
templated "popular"
when "Trending"
env.redirect "/feed/trending"
when "Subscriptions"
if user
env.redirect "/feed/subscriptions"
else
templated "popular"
end
when "Playlists"
if user
env.redirect "/view_all_playlists"
else
templated "popular"
end
else
templated "empty"
end
end
private def popular_videos
Jobs::PullPopularVideosJob::POPULAR_VIDEOS.get
end
end

View File

@ -0,0 +1,6 @@
class Invidious::Routes::Licenses < Invidious::Routes::BaseRoute
def handle(env)
locale = LOCALES[env.get("preferences").as(Preferences).locale]?
rendered "licenses"
end
end

View File

@ -0,0 +1,6 @@
class Invidious::Routes::Privacy < Invidious::Routes::BaseRoute
def handle(env)
locale = LOCALES[env.get("preferences").as(Preferences).locale]?
templated "privacy"
end
end

View File

@ -0,0 +1,186 @@
class Invidious::Routes::Watch < Invidious::Routes::BaseRoute
def handle(env)
locale = LOCALES[env.get("preferences").as(Preferences).locale]?
region = env.params.query["region"]?
if env.params.query.to_s.includes?("%20") || env.params.query.to_s.includes?("+")
url = "/watch?" + env.params.query.to_s.gsub("%20", "").delete("+")
return env.redirect url
end
if env.params.query["v"]?
id = env.params.query["v"]
if env.params.query["v"].empty?
error_message = "Invalid parameters."
env.response.status_code = 400
return templated "error"
end
if id.size > 11
url = "/watch?v=#{id[0, 11]}"
env.params.query.delete_all("v")
if env.params.query.size > 0
url += "&#{env.params.query}"
end
return env.redirect url
end
else
return env.redirect "/"
end
plid = env.params.query["list"]?.try &.gsub(/[^a-zA-Z0-9_-]/, "")
continuation = process_continuation(PG_DB, env.params.query, plid, id)
nojs = env.params.query["nojs"]?
nojs ||= "0"
nojs = nojs == "1"
preferences = env.get("preferences").as(Preferences)
user = env.get?("user").try &.as(User)
if user
subscriptions = user.subscriptions
watched = user.watched
notifications = user.notifications
end
subscriptions ||= [] of String
params = process_video_params(env.params.query, preferences)
env.params.query.delete_all("listen")
begin
video = get_video(id, PG_DB, region: params.region)
rescue ex : VideoRedirect
return env.redirect env.request.resource.gsub(id, ex.video_id)
rescue ex
error_message = ex.message
env.response.status_code = 500
logger.puts("#{id} : #{ex.message}")
return templated "error"
end
if preferences.annotations_subscribed &&
subscriptions.includes?(video.ucid) &&
(env.params.query["iv_load_policy"]? || "1") == "1"
params.annotations = true
end
env.params.query.delete_all("iv_load_policy")
if watched && !watched.includes? id
PG_DB.exec("UPDATE users SET watched = array_append(watched, $1) WHERE email = $2", id, user.as(User).email)
end
if notifications && notifications.includes? id
PG_DB.exec("UPDATE users SET notifications = array_remove(notifications, $1) WHERE email = $2", id, user.as(User).email)
env.get("user").as(User).notifications.delete(id)
notifications.delete(id)
end
if nojs
if preferences
source = preferences.comments[0]
if source.empty?
source = preferences.comments[1]
end
if source == "youtube"
begin
comment_html = JSON.parse(fetch_youtube_comments(id, PG_DB, nil, "html", locale, preferences.thin_mode, region))["contentHtml"]
rescue ex
if preferences.comments[1] == "reddit"
comments, reddit_thread = fetch_reddit_comments(id)
comment_html = template_reddit_comments(comments, locale)
comment_html = fill_links(comment_html, "https", "www.reddit.com")
comment_html = replace_links(comment_html)
end
end
elsif source == "reddit"
begin
comments, reddit_thread = fetch_reddit_comments(id)
comment_html = template_reddit_comments(comments, locale)
comment_html = fill_links(comment_html, "https", "www.reddit.com")
comment_html = replace_links(comment_html)
rescue ex
if preferences.comments[1] == "youtube"
comment_html = JSON.parse(fetch_youtube_comments(id, PG_DB, nil, "html", locale, preferences.thin_mode, region))["contentHtml"]
end
end
end
else
comment_html = JSON.parse(fetch_youtube_comments(id, PG_DB, nil, "html", locale, preferences.thin_mode, region))["contentHtml"]
end
comment_html ||= ""
end
fmt_stream = video.fmt_stream
adaptive_fmts = video.adaptive_fmts
if params.local
fmt_stream.each { |fmt| fmt["url"] = JSON::Any.new(URI.parse(fmt["url"].as_s).full_path) }
adaptive_fmts.each { |fmt| fmt["url"] = JSON::Any.new(URI.parse(fmt["url"].as_s).full_path) }
end
video_streams = video.video_streams
audio_streams = video.audio_streams
# Older videos may not have audio sources available.
# We redirect here so they're not unplayable
if audio_streams.empty? && !video.live_now
if params.quality == "dash"
env.params.query.delete_all("quality")
env.params.query["quality"] = "medium"
return env.redirect "/watch?#{env.params.query}"
elsif params.listen
env.params.query.delete_all("listen")
env.params.query["listen"] = "0"
return env.redirect "/watch?#{env.params.query}"
end
end
captions = video.captions
preferred_captions = captions.select { |caption|
params.preferred_captions.includes?(caption.name.simpleText) ||
params.preferred_captions.includes?(caption.languageCode.split("-")[0])
}
preferred_captions.sort_by! { |caption|
(params.preferred_captions.index(caption.name.simpleText) ||
params.preferred_captions.index(caption.languageCode.split("-")[0])).not_nil!
}
captions = captions - preferred_captions
aspect_ratio = "16:9"
thumbnail = "/vi/#{video.id}/maxres.jpg"
if params.raw
if params.listen
url = audio_streams[0]["url"].as_s
audio_streams.each do |fmt|
if fmt["bitrate"].as_i == params.quality.rchop("k").to_i
url = fmt["url"].as_s
end
end
else
url = fmt_stream[0]["url"].as_s
fmt_stream.each do |fmt|
if fmt["quality"].as_s == params.quality
url = fmt["url"].as_s
end
end
end
return env.redirect url
end
templated "watch"
end
end

src/invidious/routing.cr Normal file
View File

@ -0,0 +1,8 @@
module Invidious::Routing
macro get(path, controller)
get {{ path }} do |env|
controller_instance = {{ controller }}.new(config, logger)
controller_instance.handle(env)
end
end
end
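
The macro above expands each registration into a Kemal get block that builds the given controller with config and logger, then delegates the request to its handle method. A minimal sketch of how routes might be wired through it (the real call sites are outside this excerpt, so the paths below are assumptions based on what each handler does):

# Illustrative only: config and logger are assumed to be in scope,
# as the macro body expects.
Invidious::Routing.get "/", Invidious::Routes::Home
Invidious::Routing.get "/watch", Invidious::Routes::Watch
Invidious::Routing.get "/licenses", Invidious::Routes::Licenses
Invidious::Routing.get "/privacy", Invidious::Routes::Privacy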

View File

@ -1,5 +1,20 @@
struct SearchVideo struct SearchVideo
def to_xml(host_url, auto_generated, query_params, xml : XML::Builder) include DB::Serializable
property title : String
property id : String
property author : String
property ucid : String
property published : Time
property views : Int64
property description_html : String
property length_seconds : Int32
property live_now : Bool
property paid : Bool
property premium : Bool
property premiere_timestamp : Time?
def to_xml(auto_generated, query_params, xml : XML::Builder)
query_params["v"] = self.id query_params["v"] = self.id
xml.element("entry") do xml.element("entry") do
@ -7,22 +22,22 @@ struct SearchVideo
xml.element("yt:videoId") { xml.text self.id } xml.element("yt:videoId") { xml.text self.id }
xml.element("yt:channelId") { xml.text self.ucid } xml.element("yt:channelId") { xml.text self.ucid }
xml.element("title") { xml.text self.title } xml.element("title") { xml.text self.title }
xml.element("link", rel: "alternate", href: "#{host_url}/watch?#{query_params}") xml.element("link", rel: "alternate", href: "#{HOST_URL}/watch?#{query_params}")
xml.element("author") do xml.element("author") do
if auto_generated if auto_generated
xml.element("name") { xml.text self.author } xml.element("name") { xml.text self.author }
xml.element("uri") { xml.text "#{host_url}/channel/#{self.ucid}" } xml.element("uri") { xml.text "#{HOST_URL}/channel/#{self.ucid}" }
else else
xml.element("name") { xml.text author } xml.element("name") { xml.text author }
xml.element("uri") { xml.text "#{host_url}/channel/#{ucid}" } xml.element("uri") { xml.text "#{HOST_URL}/channel/#{ucid}" }
end end
end end
xml.element("content", type: "xhtml") do xml.element("content", type: "xhtml") do
xml.element("div", xmlns: "http://www.w3.org/1999/xhtml") do xml.element("div", xmlns: "http://www.w3.org/1999/xhtml") do
xml.element("a", href: "#{host_url}/watch?#{query_params}") do xml.element("a", href: "#{HOST_URL}/watch?#{query_params}") do
xml.element("img", src: "#{host_url}/vi/#{self.id}/mqdefault.jpg") xml.element("img", src: "#{HOST_URL}/vi/#{self.id}/mqdefault.jpg")
end end
xml.element("p", style: "word-break:break-word;white-space:pre-wrap") { xml.text html_to_content(self.description_html) } xml.element("p", style: "word-break:break-word;white-space:pre-wrap") { xml.text html_to_content(self.description_html) }
@ -33,7 +48,7 @@ struct SearchVideo
xml.element("media:group") do xml.element("media:group") do
xml.element("media:title") { xml.text self.title } xml.element("media:title") { xml.text self.title }
xml.element("media:thumbnail", url: "#{host_url}/vi/#{self.id}/mqdefault.jpg", xml.element("media:thumbnail", url: "#{HOST_URL}/vi/#{self.id}/mqdefault.jpg",
width: "320", height: "180") width: "320", height: "180")
xml.element("media:description") { xml.text html_to_content(self.description_html) } xml.element("media:description") { xml.text html_to_content(self.description_html) }
end end
@ -44,17 +59,17 @@ struct SearchVideo
end end
end end
def to_xml(host_url, auto_generated, query_params, xml : XML::Builder | Nil = nil) def to_xml(auto_generated, query_params, xml : XML::Builder | Nil = nil)
if xml if xml
to_xml(host_url, auto_generated, query_params, xml) to_xml(HOST_URL, auto_generated, query_params, xml)
else else
XML.build do |json| XML.build do |json|
to_xml(host_url, auto_generated, query_params, xml) to_xml(HOST_URL, auto_generated, query_params, xml)
end end
end end
end end
def to_json(locale, config, kemal_config, json : JSON::Builder) def to_json(locale, json : JSON::Builder)
json.object do json.object do
json.field "type", "video" json.field "type", "video"
json.field "title", self.title json.field "title", self.title
@ -65,7 +80,7 @@ struct SearchVideo
json.field "authorUrl", "/channel/#{self.ucid}" json.field "authorUrl", "/channel/#{self.ucid}"
json.field "videoThumbnails" do json.field "videoThumbnails" do
generate_thumbnails(json, self.id, config, kemal_config) generate_thumbnails(json, self.id)
end end
json.field "description", html_to_content(self.description_html) json.field "description", html_to_content(self.description_html)
@ -78,45 +93,49 @@ struct SearchVideo
json.field "liveNow", self.live_now json.field "liveNow", self.live_now
json.field "paid", self.paid json.field "paid", self.paid
json.field "premium", self.premium json.field "premium", self.premium
end json.field "isUpcoming", self.is_upcoming
end
def to_json(locale, config, kemal_config, json : JSON::Builder | Nil = nil) if self.premiere_timestamp
if json json.field "premiereTimestamp", self.premiere_timestamp.try &.to_unix
to_json(locale, config, kemal_config, json)
else
JSON.build do |json|
to_json(locale, config, kemal_config, json)
end end
end end
end end
db_mapping({ def to_json(locale, json : JSON::Builder | Nil = nil)
title: String, if json
id: String, to_json(locale, json)
author: String, else
ucid: String, JSON.build do |json|
published: Time, to_json(locale, json)
views: Int64, end
description_html: String, end
length_seconds: Int32, end
live_now: Bool,
paid: Bool, def is_upcoming
premium: Bool, premiere_timestamp ? true : false
premiere_timestamp: Time?, end
})
end end
struct SearchPlaylistVideo struct SearchPlaylistVideo
db_mapping({ include DB::Serializable
title: String,
id: String, property title : String
length_seconds: Int32, property id : String
}) property length_seconds : Int32
end end
struct SearchPlaylist struct SearchPlaylist
def to_json(locale, config, kemal_config, json : JSON::Builder) include DB::Serializable
property title : String
property id : String
property author : String
property ucid : String
property video_count : Int32
property videos : Array(SearchPlaylistVideo)
property thumbnail : String?
def to_json(locale, json : JSON::Builder)
json.object do json.object do
json.field "type", "playlist" json.field "type", "playlist"
json.field "title", self.title json.field "title", self.title
@ -137,7 +156,7 @@ struct SearchPlaylist
json.field "lengthSeconds", video.length_seconds json.field "lengthSeconds", video.length_seconds
json.field "videoThumbnails" do json.field "videoThumbnails" do
generate_thumbnails(json, video.id, config, Kemal.config) generate_thumbnails(json, video.id)
end end
end end
end end
@ -146,29 +165,29 @@ struct SearchPlaylist
end end
end end
def to_json(locale, config, kemal_config, json : JSON::Builder | Nil = nil) def to_json(locale, json : JSON::Builder | Nil = nil)
if json if json
to_json(locale, config, kemal_config, json) to_json(locale, json)
else else
JSON.build do |json| JSON.build do |json|
to_json(locale, config, kemal_config, json) to_json(locale, json)
end end
end end
end end
db_mapping({
title: String,
id: String,
author: String,
ucid: String,
video_count: Int32,
videos: Array(SearchPlaylistVideo),
thumbnail: String?,
})
end end
struct SearchChannel struct SearchChannel
def to_json(locale, config, kemal_config, json : JSON::Builder) include DB::Serializable
property author : String
property ucid : String
property author_thumbnail : String
property subscriber_count : Int32
property video_count : Int32
property description_html : String
property auto_generated : Bool
def to_json(locale, json : JSON::Builder)
json.object do json.object do
json.field "type", "channel" json.field "type", "channel"
json.field "author", self.author json.field "author", self.author
@ -198,85 +217,50 @@ struct SearchChannel
end end
end end
def to_json(locale, config, kemal_config, json : JSON::Builder | Nil = nil) def to_json(locale, json : JSON::Builder | Nil = nil)
if json if json
to_json(locale, config, kemal_config, json) to_json(locale, json)
else else
JSON.build do |json| JSON.build do |json|
to_json(locale, config, kemal_config, json) to_json(locale, json)
end end
end end
end end
db_mapping({
author: String,
ucid: String,
author_thumbnail: String,
subscriber_count: Int32,
video_count: Int32,
description_html: String,
auto_generated: Bool,
})
end end
alias SearchItem = SearchVideo | SearchChannel | SearchPlaylist alias SearchItem = SearchVideo | SearchChannel | SearchPlaylist
def channel_search(query, page, channel) def channel_search(query, page, channel)
response = YT_POOL.client &.get("/channel/#{channel}?disable_polymer=1&hl=en&gl=US") response = YT_POOL.client &.get("/channel/#{channel}?hl=en&gl=US")
document = XML.parse_html(response.body) response = YT_POOL.client &.get("/user/#{channel}?hl=en&gl=US") if response.headers["location"]?
canonical = document.xpath_node(%q(//link[@rel="canonical"])) response = YT_POOL.client &.get("/c/#{channel}?hl=en&gl=US") if response.headers["location"]?
if !canonical ucid = response.body.match(/\\"channelId\\":\\"(?<ucid>[^\\]+)\\"/).try &.["ucid"]?
response = YT_POOL.client &.get("/c/#{channel}?disable_polymer=1&hl=en&gl=US")
document = XML.parse_html(response.body)
canonical = document.xpath_node(%q(//link[@rel="canonical"]))
end
if !canonical return 0, [] of SearchItem if !ucid
response = YT_POOL.client &.get("/user/#{channel}?disable_polymer=1&hl=en&gl=US")
document = XML.parse_html(response.body)
canonical = document.xpath_node(%q(//link[@rel="canonical"]))
end
if !canonical
return 0, [] of SearchItem
end
ucid = canonical["href"].split("/")[-1]
url = produce_channel_search_url(ucid, query, page) url = produce_channel_search_url(ucid, query, page)
response = YT_POOL.client &.get(url) response = YT_POOL.client &.get(url)
json = JSON.parse(response.body) initial_data = JSON.parse(response.body).as_a.find &.["response"]?
return 0, [] of SearchItem if !initial_data
author = initial_data["response"]?.try &.["metadata"]?.try &.["channelMetadataRenderer"]?.try &.["title"]?.try &.as_s
items = extract_items(initial_data.as_h, author, ucid)
if json["content_html"]? && !json["content_html"].as_s.empty? return items.size, items
document = XML.parse_html(json["content_html"].as_s)
nodeset = document.xpath_nodes(%q(//li[contains(@class, "feed-item-container")]))
count = nodeset.size
items = extract_items(nodeset)
else
count = 0
items = [] of SearchItem
end
return count, items
end end
def search(query, page = 1, search_params = produce_search_params(content_type: "all"), region = nil) def search(query, page = 1, search_params = produce_search_params(content_type: "all"), region = nil)
if query.empty? return 0, [] of SearchItem if query.empty?
return {0, [] of SearchItem}
end
html = YT_POOL.client(region, &.get("/results?q=#{URI.encode_www_form(query)}&page=#{page}&sp=#{search_params}&hl=en&disable_polymer=1").body) body = YT_POOL.client(region, &.get("/results?q=#{URI.encode_www_form(query)}&page=#{page}&sp=#{search_params}&hl=en").body)
if html.empty? return 0, [] of SearchItem if body.empty?
return {0, [] of SearchItem}
end
html = XML.parse_html(html) initial_data = extract_initial_data(body)
nodeset = html.xpath_nodes(%q(//ol[@class="item-section"]/li)) items = extract_items(initial_data)
items = extract_items(nodeset)
return {nodeset.size, items} # initial_data["estimatedResults"]?.try &.as_s.to_i64
return items.size, items
end end
def produce_search_params(sort : String = "relevance", date : String = "", content_type : String = "", def produce_search_params(sort : String = "relevance", date : String = "", content_type : String = "",
@ -310,6 +294,7 @@ def produce_search_params(sort : String = "relevance", date : String = "", conte
object["2:embedded"].as(Hash)["1:varint"] = 4_i64 object["2:embedded"].as(Hash)["1:varint"] = 4_i64
when "year" when "year"
object["2:embedded"].as(Hash)["1:varint"] = 5_i64 object["2:embedded"].as(Hash)["1:varint"] = 5_i64
else nil # Ignore
end end
case content_type case content_type
@ -334,6 +319,7 @@ def produce_search_params(sort : String = "relevance", date : String = "", conte
object["2:embedded"].as(Hash)["3:varint"] = 1_i64 object["2:embedded"].as(Hash)["3:varint"] = 1_i64
when "long" when "long"
object["2:embedded"].as(Hash)["3:varint"] = 2_i64 object["2:embedded"].as(Hash)["3:varint"] = 2_i64
else nil # Ignore
end end
features.each do |feature| features.each do |feature|
@ -358,6 +344,7 @@ def produce_search_params(sort : String = "relevance", date : String = "", conte
object["2:embedded"].as(Hash)["23:varint"] = 1_i64 object["2:embedded"].as(Hash)["23:varint"] = 1_i64
when "hdr" when "hdr"
object["2:embedded"].as(Hash)["25:varint"] = 1_i64 object["2:embedded"].as(Hash)["25:varint"] = 1_i64
else nil # Ignore
end end
end end
@ -379,12 +366,9 @@ def produce_channel_search_url(ucid, query, page)
"2:string" => ucid, "2:string" => ucid,
"3:base64" => { "3:base64" => {
"2:string" => "search", "2:string" => "search",
"6:varint" => 2_i64,
"7:varint" => 1_i64, "7:varint" => 1_i64,
"12:varint" => 1_i64,
"13:string" => "",
"23:varint" => 0_i64,
"15:string" => "#{page}", "15:string" => "#{page}",
"23:varint" => 0_i64,
}, },
"11:string" => query, "11:string" => query,
}, },

View File

@ -1,7 +1,4 @@
def fetch_trending(trending_type, region, locale) def fetch_trending(trending_type, region, locale)
headers = HTTP::Headers.new
headers["User-Agent"] = "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36"
region ||= "US" region ||= "US"
region = region.upcase region = region.upcase
@ -11,7 +8,7 @@ def fetch_trending(trending_type, region, locale)
if trending_type && trending_type != "Default" if trending_type && trending_type != "Default"
trending_type = trending_type.downcase.capitalize trending_type = trending_type.downcase.capitalize
response = YT_POOL.client &.get("/feed/trending?gl=#{region}&hl=en", headers).body response = YT_POOL.client &.get("/feed/trending?gl=#{region}&hl=en").body
initial_data = extract_initial_data(response) initial_data = extract_initial_data(response)
@ -21,51 +18,28 @@ def fetch_trending(trending_type, region, locale)
if url if url
url["channelListSubMenuAvatarRenderer"]["navigationEndpoint"]["commandMetadata"]["webCommandMetadata"]["url"] url["channelListSubMenuAvatarRenderer"]["navigationEndpoint"]["commandMetadata"]["webCommandMetadata"]["url"]
url = url["channelListSubMenuAvatarRenderer"]["navigationEndpoint"]["commandMetadata"]["webCommandMetadata"]["url"].as_s url = url["channelListSubMenuAvatarRenderer"]["navigationEndpoint"]["commandMetadata"]["webCommandMetadata"]["url"].as_s
url += "&disable_polymer=1&gl=#{region}&hl=en" url = "#{url}&gl=#{region}&hl=en"
trending = YT_POOL.client &.get(url).body trending = YT_POOL.client &.get(url).body
plid = extract_plid(url) plid = extract_plid(url)
else else
trending = YT_POOL.client &.get("/feed/trending?gl=#{region}&hl=en&disable_polymer=1").body trending = YT_POOL.client &.get("/feed/trending?gl=#{region}&hl=en").body
end end
else else
trending = YT_POOL.client &.get("/feed/trending?gl=#{region}&hl=en&disable_polymer=1").body trending = YT_POOL.client &.get("/feed/trending?gl=#{region}&hl=en").body
end end
trending = XML.parse_html(trending) initial_data = extract_initial_data(trending)
nodeset = trending.xpath_nodes(%q(//ul/li[@class="expanded-shelf-content-item-wrapper"])) trending = extract_videos(initial_data)
trending = extract_videos(nodeset)
return {trending, plid} return {trending, plid}
end end
def extract_plid(url) def extract_plid(url)
wrapper = HTTP::Params.parse(URI.parse(url).query.not_nil!)["bp"] return url.try { |i| URI.parse(i).query }
.try { |i| HTTP::Params.parse(i)["bp"] }
wrapper = URI.decode_www_form(wrapper) .try { |i| URI.decode_www_form(i) }
wrapper = Base64.decode(wrapper) .try { |i| Base64.decode(i) }
.try { |i| IO::Memory.new(i) }
# 0xe2 0x02 0x2e .try { |i| Protodec::Any.parse(i) }
wrapper += 3 .try &.["44:0:embedded"]?.try &.["2:1:string"]?.try &.as_s
# 0x0a
wrapper += 1
# Looks like "/m/[a-z0-9]{5}", not sure what it does here
item_size = wrapper[0]
wrapper += 1
item = wrapper[0, item_size]
wrapper += item.size
# 0x12
wrapper += 1
plid_size = wrapper[0]
wrapper += 1
plid = wrapper[0, plid_size]
wrapper += plid.size
plid = String.new(plid)
return plid
end end

View File

@ -4,6 +4,20 @@ require "crypto/bcrypt/password"
MATERIALIZED_VIEW_SQL = ->(email : String) { "SELECT cv.* FROM channel_videos cv WHERE EXISTS (SELECT subscriptions FROM users u WHERE cv.ucid = ANY (u.subscriptions) AND u.email = E'#{email.gsub({'\'' => "\\'", '\\' => "\\\\"})}') ORDER BY published DESC" } MATERIALIZED_VIEW_SQL = ->(email : String) { "SELECT cv.* FROM channel_videos cv WHERE EXISTS (SELECT subscriptions FROM users u WHERE cv.ucid = ANY (u.subscriptions) AND u.email = E'#{email.gsub({'\'' => "\\'", '\\' => "\\\\"})}') ORDER BY published DESC" }
struct User struct User
include DB::Serializable
property updated : Time
property notifications : Array(String)
property subscriptions : Array(String)
property email : String
@[DB::Field(converter: User::PreferencesConverter)]
property preferences : Preferences
property password : String?
property token : String
property watched : Array(String)
property feed_needs_update : Bool?
module PreferencesConverter module PreferencesConverter
def self.from_rs(rs) def self.from_rs(rs)
begin begin
@ -13,31 +27,78 @@ struct User
end end
end end
end end
db_mapping({
updated: Time,
notifications: Array(String),
subscriptions: Array(String),
email: String,
preferences: {
type: Preferences,
converter: PreferencesConverter,
},
password: String?,
token: String,
watched: Array(String),
feed_needs_update: Bool?,
})
end end
struct Preferences struct Preferences
module ProcessString include JSON::Serializable
include YAML::Serializable
property annotations : Bool = CONFIG.default_user_preferences.annotations
property annotations_subscribed : Bool = CONFIG.default_user_preferences.annotations_subscribed
property autoplay : Bool = CONFIG.default_user_preferences.autoplay
@[JSON::Field(converter: Preferences::StringToArray)]
@[YAML::Field(converter: Preferences::StringToArray)]
property captions : Array(String) = CONFIG.default_user_preferences.captions
@[JSON::Field(converter: Preferences::StringToArray)]
@[YAML::Field(converter: Preferences::StringToArray)]
property comments : Array(String) = CONFIG.default_user_preferences.comments
property continue : Bool = CONFIG.default_user_preferences.continue
property continue_autoplay : Bool = CONFIG.default_user_preferences.continue_autoplay
@[JSON::Field(converter: Preferences::BoolToString)]
@[YAML::Field(converter: Preferences::BoolToString)]
property dark_mode : String = CONFIG.default_user_preferences.dark_mode
property latest_only : Bool = CONFIG.default_user_preferences.latest_only
property listen : Bool = CONFIG.default_user_preferences.listen
property local : Bool = CONFIG.default_user_preferences.local
@[JSON::Field(converter: Preferences::ProcessString)]
property locale : String = CONFIG.default_user_preferences.locale
@[JSON::Field(converter: Preferences::ClampInt)]
property max_results : Int32 = CONFIG.default_user_preferences.max_results
property notifications_only : Bool = CONFIG.default_user_preferences.notifications_only
@[JSON::Field(converter: Preferences::ProcessString)]
property player_style : String = CONFIG.default_user_preferences.player_style
@[JSON::Field(converter: Preferences::ProcessString)]
property quality : String = CONFIG.default_user_preferences.quality
property default_home : String = CONFIG.default_user_preferences.default_home
property feed_menu : Array(String) = CONFIG.default_user_preferences.feed_menu
property related_videos : Bool = CONFIG.default_user_preferences.related_videos
@[JSON::Field(converter: Preferences::ProcessString)]
property sort : String = CONFIG.default_user_preferences.sort
property speed : Float32 = CONFIG.default_user_preferences.speed
property thin_mode : Bool = CONFIG.default_user_preferences.thin_mode
property unseen_only : Bool = CONFIG.default_user_preferences.unseen_only
property video_loop : Bool = CONFIG.default_user_preferences.video_loop
property volume : Int32 = CONFIG.default_user_preferences.volume
module BoolToString
def self.to_json(value : String, json : JSON::Builder) def self.to_json(value : String, json : JSON::Builder)
json.string value json.string value
end end
def self.from_json(value : JSON::PullParser) : String def self.from_json(value : JSON::PullParser) : String
HTML.escape(value.read_string[0, 100]) begin
result = value.read_string
if result.empty?
CONFIG.default_user_preferences.dark_mode
else
result
end
rescue ex
if value.read_bool
"dark"
else
"light"
end
end
end end
def self.to_yaml(value : String, yaml : YAML::Nodes::Builder) def self.to_yaml(value : String, yaml : YAML::Nodes::Builder)
@ -45,7 +106,20 @@ struct Preferences
end end
def self.from_yaml(ctx : YAML::ParseContext, node : YAML::Nodes::Node) : String def self.from_yaml(ctx : YAML::ParseContext, node : YAML::Nodes::Node) : String
HTML.escape(node.value[0, 100]) unless node.is_a?(YAML::Nodes::Scalar)
node.raise "Expected scalar, not #{node.class}"
end
case node.value
when "true"
"dark"
when "false"
"light"
when ""
CONFIG.default_user_preferences.dark_mode
else
node.value
end
end end
end end
@ -67,33 +141,130 @@ struct Preferences
end end
end end
json_mapping({ module FamilyConverter
annotations: {type: Bool, default: CONFIG.default_user_preferences.annotations}, def self.to_yaml(value : Socket::Family, yaml : YAML::Nodes::Builder)
annotations_subscribed: {type: Bool, default: CONFIG.default_user_preferences.annotations_subscribed}, case value
autoplay: {type: Bool, default: CONFIG.default_user_preferences.autoplay}, when Socket::Family::UNSPEC
captions: {type: Array(String), default: CONFIG.default_user_preferences.captions, converter: ConfigPreferences::StringToArray}, yaml.scalar nil
comments: {type: Array(String), default: CONFIG.default_user_preferences.comments, converter: ConfigPreferences::StringToArray}, when Socket::Family::INET
continue: {type: Bool, default: CONFIG.default_user_preferences.continue}, yaml.scalar "ipv4"
continue_autoplay: {type: Bool, default: CONFIG.default_user_preferences.continue_autoplay}, when Socket::Family::INET6
dark_mode: {type: String, default: CONFIG.default_user_preferences.dark_mode, converter: ConfigPreferences::BoolToString}, yaml.scalar "ipv6"
latest_only: {type: Bool, default: CONFIG.default_user_preferences.latest_only}, when Socket::Family::UNIX
listen: {type: Bool, default: CONFIG.default_user_preferences.listen}, raise "Invalid socket family #{value}"
local: {type: Bool, default: CONFIG.default_user_preferences.local}, end
locale: {type: String, default: CONFIG.default_user_preferences.locale, converter: ProcessString}, end
max_results: {type: Int32, default: CONFIG.default_user_preferences.max_results, converter: ClampInt},
notifications_only: {type: Bool, default: CONFIG.default_user_preferences.notifications_only}, def self.from_yaml(ctx : YAML::ParseContext, node : YAML::Nodes::Node) : Socket::Family
player_style: {type: String, default: CONFIG.default_user_preferences.player_style, converter: ProcessString}, if node.is_a?(YAML::Nodes::Scalar)
quality: {type: String, default: CONFIG.default_user_preferences.quality, converter: ProcessString}, case node.value.downcase
default_home: {type: String, default: CONFIG.default_user_preferences.default_home}, when "ipv4"
feed_menu: {type: Array(String), default: CONFIG.default_user_preferences.feed_menu}, Socket::Family::INET
related_videos: {type: Bool, default: CONFIG.default_user_preferences.related_videos}, when "ipv6"
sort: {type: String, default: CONFIG.default_user_preferences.sort, converter: ProcessString}, Socket::Family::INET6
speed: {type: Float32, default: CONFIG.default_user_preferences.speed}, else
thin_mode: {type: Bool, default: CONFIG.default_user_preferences.thin_mode}, Socket::Family::UNSPEC
unseen_only: {type: Bool, default: CONFIG.default_user_preferences.unseen_only}, end
video_loop: {type: Bool, default: CONFIG.default_user_preferences.video_loop}, else
volume: {type: Int32, default: CONFIG.default_user_preferences.volume}, node.raise "Expected scalar, not #{node.class}"
}) end
end
end
module ProcessString
def self.to_json(value : String, json : JSON::Builder)
json.string value
end
def self.from_json(value : JSON::PullParser) : String
HTML.escape(value.read_string[0, 100])
end
def self.to_yaml(value : String, yaml : YAML::Nodes::Builder)
yaml.scalar value
end
def self.from_yaml(ctx : YAML::ParseContext, node : YAML::Nodes::Node) : String
HTML.escape(node.value[0, 100])
end
end
module StringToArray
def self.to_json(value : Array(String), json : JSON::Builder)
json.array do
value.each do |element|
json.string element
end
end
end
def self.from_json(value : JSON::PullParser) : Array(String)
begin
result = [] of String
value.read_array do
result << HTML.escape(value.read_string[0, 100])
end
rescue ex
result = [HTML.escape(value.read_string[0, 100]), ""]
end
result
end
def self.to_yaml(value : Array(String), yaml : YAML::Nodes::Builder)
yaml.sequence do
value.each do |element|
yaml.scalar element
end
end
end
def self.from_yaml(ctx : YAML::ParseContext, node : YAML::Nodes::Node) : Array(String)
begin
unless node.is_a?(YAML::Nodes::Sequence)
node.raise "Expected sequence, not #{node.class}"
end
result = [] of String
node.nodes.each do |item|
unless item.is_a?(YAML::Nodes::Scalar)
node.raise "Expected scalar, not #{item.class}"
end
result << HTML.escape(item.value[0, 100])
end
rescue ex
if node.is_a?(YAML::Nodes::Scalar)
result = [HTML.escape(node.value[0, 100]), ""]
else
result = ["", ""]
end
end
result
end
end
module StringToCookies
def self.to_yaml(value : HTTP::Cookies, yaml : YAML::Nodes::Builder)
(value.map { |c| "#{c.name}=#{c.value}" }).join("; ").to_yaml(yaml)
end
def self.from_yaml(ctx : YAML::ParseContext, node : YAML::Nodes::Node) : HTTP::Cookies
unless node.is_a?(YAML::Nodes::Scalar)
node.raise "Expected scalar, not #{node.class}"
end
cookies = HTTP::Cookies.new
node.value.split(";").each do |cookie|
next if cookie.strip.empty?
name, value = cookie.split("=", 2)
cookies << HTTP::Cookie.new(name.strip, value.strip)
end
cookies
end
end
end end
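
The converters above keep previously stored preferences readable after the schema change. A minimal sketch, assuming it runs inside the application where CONFIG and the default preferences are defined, of how legacy scalar values would be upgraded on load:

# Hypothetical legacy input: a boolean dark_mode and a bare caption string
# rather than an array.
legacy_yaml = <<-YAML
  dark_mode: true
  captions: English
  YAML

prefs = Preferences.from_yaml(legacy_yaml)
prefs.dark_mode # => "dark"           (BoolToString maps true/false to dark/light)
prefs.captions  # => ["English", ""]  (StringToArray wraps a lone scalar)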
def get_user(sid, headers, db, refresh = true) def get_user(sid, headers, db, refresh = true)
@ -103,8 +274,7 @@ def get_user(sid, headers, db, refresh = true)
if refresh && Time.utc - user.updated > 1.minute if refresh && Time.utc - user.updated > 1.minute
user, sid = fetch_user(sid, headers, db) user, sid = fetch_user(sid, headers, db)
user_array = user.to_a user_array = user.to_a
user_array[4] = user_array[4].to_json # User preferences
user_array[4] = user_array[4].to_json
args = arg_array(user_array) args = arg_array(user_array)
db.exec("INSERT INTO users VALUES (#{args}) \ db.exec("INSERT INTO users VALUES (#{args}) \
@ -122,8 +292,7 @@ def get_user(sid, headers, db, refresh = true)
else else
user, sid = fetch_user(sid, headers, db) user, sid = fetch_user(sid, headers, db)
user_array = user.to_a user_array = user.to_a
user_array[4] = user_array[4].to_json # User preferences
user_array[4] = user_array[4].to_json
args = arg_array(user.to_a) args = arg_array(user.to_a)
db.exec("INSERT INTO users VALUES (#{args}) \ db.exec("INSERT INTO users VALUES (#{args}) \
@ -166,7 +335,17 @@ def fetch_user(sid, headers, db)
token = Base64.urlsafe_encode(Random::Secure.random_bytes(32)) token = Base64.urlsafe_encode(Random::Secure.random_bytes(32))
user = User.new(Time.utc, [] of String, channels, email, CONFIG.default_user_preferences, nil, token, [] of String, true) user = User.new({
updated: Time.utc,
notifications: [] of String,
subscriptions: channels,
email: email,
preferences: Preferences.new(CONFIG.default_user_preferences.to_tuple),
password: nil,
token: token,
watched: [] of String,
feed_needs_update: true,
})
return user, sid return user, sid
end end
@ -174,7 +353,17 @@ def create_user(sid, email, password)
password = Crypto::Bcrypt::Password.create(password, cost: 10) password = Crypto::Bcrypt::Password.create(password, cost: 10)
token = Base64.urlsafe_encode(Random::Secure.random_bytes(32)) token = Base64.urlsafe_encode(Random::Secure.random_bytes(32))
user = User.new(Time.utc, [] of String, [] of String, email, CONFIG.default_user_preferences, password.to_s, token, [] of String, true) user = User.new({
updated: Time.utc,
notifications: [] of String,
subscriptions: [] of String,
email: email,
preferences: Preferences.new(CONFIG.default_user_preferences.to_tuple),
password: password.to_s,
token: token,
watched: [] of String,
feed_needs_update: true,
})
return user, sid return user, sid
end end
@ -267,7 +456,7 @@ def subscribe_ajax(channel_id, action, env_headers)
end end
headers = cookies.add_request_headers(headers) headers = cookies.add_request_headers(headers)
if match = html.body.match(/'XSRF_TOKEN': "(?<session_token>[A-Za-z0-9\_\-\=]+)"/) if match = html.body.match(/'XSRF_TOKEN': "(?<session_token>[^"]+)"/)
session_token = match["session_token"] session_token = match["session_token"]
headers["content-type"] = "application/x-www-form-urlencoded" headers["content-type"] = "application/x-www-form-urlencoded"
@ -281,48 +470,6 @@ def subscribe_ajax(channel_id, action, env_headers)
end end
end end
# TODO: Playlist stub, sync with YouTube for Google accounts
# def playlist_ajax(video_ids, source_playlist_id, name, privacy, action, env_headers)
# headers = HTTP::Headers.new
# headers["Cookie"] = env_headers["Cookie"]
#
# html = YT_POOL.client &.get("/view_all_playlists?disable_polymer=1", headers)
#
# cookies = HTTP::Cookies.from_headers(headers)
# html.cookies.each do |cookie|
# if {"VISITOR_INFO1_LIVE", "YSC", "SIDCC"}.includes? cookie.name
# if cookies[cookie.name]?
# cookies[cookie.name] = cookie
# else
# cookies << cookie
# end
# end
# end
# headers = cookies.add_request_headers(headers)
#
# if match = html.body.match(/'XSRF_TOKEN': "(?<session_token>[A-Za-z0-9\_\-\=]+)"/)
# session_token = match["session_token"]
#
# headers["content-type"] = "application/x-www-form-urlencoded"
#
# post_req = {
# video_ids: [] of String,
# source_playlist_id: "",
# n: name,
# p: privacy,
# session_token: session_token,
# }
# post_url = "/playlist_ajax?#{action}=1"
#
# response = client.post(post_url, headers, form: post_req)
# if response.status_code == 200
# return JSON.parse(response.body)["result"]["playlistId"].as_s
# else
# return nil
# end
# end
# end
def get_subscription_feed(db, user, max_results = 40, page = 1) def get_subscription_feed(db, user, max_results = 40, page = 1)
limit = max_results.clamp(0, MAX_ITEMS_PER_PAGE) limit = max_results.clamp(0, MAX_ITEMS_PER_PAGE)
offset = (page - 1) * limit offset = (page - 1) * limit
@ -350,6 +497,7 @@ def get_subscription_feed(db, user, max_results = 40, page = 1)
notifications.sort_by! { |video| video.author } notifications.sort_by! { |video| video.author }
when "channel name - reverse" when "channel name - reverse"
notifications.sort_by! { |video| video.author }.reverse! notifications.sort_by! { |video| video.author }.reverse!
else nil # Ignore
end end
else else
if user.preferences.latest_only if user.preferences.latest_only
@ -398,6 +546,7 @@ def get_subscription_feed(db, user, max_results = 40, page = 1)
videos.sort_by! { |video| video.author } videos.sort_by! { |video| video.author }
when "channel name - reverse" when "channel name - reverse"
videos.sort_by! { |video| video.author }.reverse! videos.sort_by! { |video| video.author }.reverse!
else nil # Ignore
end end
notifications = PG_DB.query_one("SELECT notifications FROM users WHERE email = $1", user.email, as: Array(String)) notifications = PG_DB.query_one("SELECT notifications FROM users WHERE email = $1", user.email, as: Array(String))

File diff suppressed because it is too large

View File

@ -20,12 +20,14 @@
<div class="pure-u-1 pure-u-lg-1-5"></div> <div class="pure-u-1 pure-u-lg-1-5"></div>
</div> </div>
<script> <script id="playlist_data" type="application/json">
var playlist_data = { <%=
csrf_token: '<%= URI.encode_www_form(env.get?("csrf_token").try &.as(String) || "") %>', {
} "csrf_token" => URI.encode_www_form(env.get?("csrf_token").try &.as(String) || "")
}.to_pretty_json
%>
</script> </script>
<script src="/js/playlist_widget.js"></script> <script src="/js/playlist_widget.js?v=<%= ASSET_COMMIT %>"></script>
<div class="pure-g"> <div class="pure-g">
<% videos.each_slice(4) do |slice| %> <% videos.each_slice(4) do |slice| %>

View File

@ -28,7 +28,7 @@
</div> </div>
<div class="h-box"> <div class="h-box">
<p><span style="white-space:pre-wrap"><%= XML.parse_html(channel.description_html).xpath_node(%q(.//pre)).try &.content %></span></p> <p><span style="white-space:pre-wrap"><%= channel.description_html %></span></p>
</div> </div>
<div class="h-box"> <div class="h-box">
@ -92,7 +92,7 @@
<div class="pure-g h-box"> <div class="pure-g h-box">
<div class="pure-u-1 pure-u-lg-1-5"> <div class="pure-u-1 pure-u-lg-1-5">
<% if page > 1 %> <% if page > 1 %>
<a href="/channel/<%= channel.ucid %>?page=<%= page - 1 %><% if sort_by != "newest" %>&sort_by=<%= sort_by %><% end %>"> <a href="/channel/<%= channel.ucid %>?page=<%= page - 1 %><% if sort_by != "newest" %>&sort_by=<%= HTML.escape(sort_by) %><% end %>">
<%= translate(locale, "Previous page") %> <%= translate(locale, "Previous page") %>
</a> </a>
<% end %> <% end %>
@ -100,7 +100,7 @@
<div class="pure-u-1 pure-u-lg-3-5"></div> <div class="pure-u-1 pure-u-lg-3-5"></div>
<div class="pure-u-1 pure-u-lg-1-5" style="text-align:right"> <div class="pure-u-1 pure-u-lg-1-5" style="text-align:right">
<% if count == 60 %> <% if count == 60 %>
<a href="/channel/<%= channel.ucid %>?page=<%= page + 1 %><% if sort_by != "newest" %>&sort_by=<%= sort_by %><% end %>"> <a href="/channel/<%= channel.ucid %>?page=<%= page + 1 %><% if sort_by != "newest" %>&sort_by=<%= HTML.escape(sort_by) %><% end %>">
<%= translate(locale, "Next page") %> <%= translate(locale, "Next page") %>
</a> </a>
<% end %> <% end %>

View File

@ -71,14 +71,16 @@
</div> </div>
<% end %> <% end %>
<script> <script id="community_data" type="application/json">
var community_data = { <%=
ucid: '<%= channel.ucid %>', {
youtube_comments_text: '<%= HTML.escape(translate(locale, "View YouTube comments")) %>', "ucid" => channel.ucid,
comments_text: '<%= HTML.escape(translate(locale, "View `x` comments", "{commentCount}")) %>', "youtube_comments_text" => HTML.escape(translate(locale, "View YouTube comments")),
hide_replies_text: '<%= HTML.escape(translate(locale, "Hide replies")) %>', "comments_text" => HTML.escape(translate(locale, "View `x` comments", "{commentCount}")),
show_replies_text: '<%= HTML.escape(translate(locale, "Show replies")) %>', "hide_replies_text" => HTML.escape(translate(locale, "Hide replies")),
preferences: <%= env.get("preferences").as(Preferences).to_json %>, "show_replies_text" => HTML.escape(translate(locale, "Show replies")),
} "preferences" => env.get("preferences").as(Preferences)
}.to_pretty_json
%>
</script> </script>
<script src="/js/community.js?v=<%= ASSET_COMMIT %>"></script> <script src="/js/community.js?v=<%= ASSET_COMMIT %>"></script>

View File

@ -1,19 +1,11 @@
<div class="h-box pure-g"> <div class="feed-menu">
<div class="pure-u-1 pure-u-md-1-4"></div> <% feed_menu = env.get("preferences").as(Preferences).feed_menu.dup %>
<div class="pure-u-1 pure-u-md-1-2"> <% if !env.get?("user") %>
<div class="pure-g"> <% feed_menu.reject! {|item| {"Subscriptions", "Playlists"}.includes? item} %>
<% feed_menu = env.get("preferences").as(Preferences).feed_menu.dup %> <% end %>
<% if !env.get?("user") %> <% feed_menu.each do |feed| %>
<% feed_menu.reject! {|item| {"Subscriptions", "Playlists"}.includes? item} %> <a href="/feed/<%= feed.downcase %>" class="feed-menu-item pure-menu-heading">
<% end %> <%= translate(locale, feed) %>
<% feed_menu.each do |feed| %> </a>
<div class="pure-u-1-2 pure-u-md-1-<%= feed_menu.size %>"> <% end %>
<a href="/feed/<%= feed.downcase %>" class="pure-menu-heading" style="text-align:center">
<%= translate(locale, feed) %>
</a>
</div>
<% end %>
</div>
</div>
<div class="pure-u-1 pure-u-md-1-4"></div>
</div> </div>

Some files were not shown because too many files have changed in this diff