Old extractors left behind:
VLivePlaylistIE
YoutubeSearchURLIE
YoutubeShowIE
YoutubeFavouritesIE
If removing old extractors, make corresponding changes in:
docs/supportedsites.md
youtube_dlc/extractor/extractors.py
Not merged:
.github/ISSUE_TEMPLATE/1_broken_site.md
.github/ISSUE_TEMPLATE/2_site_support_request.md
.github/ISSUE_TEMPLATE/3_site_feature_request.md
.github/ISSUE_TEMPLATE/4_bug_report.md
.github/ISSUE_TEMPLATE/5_feature_request.md
test/test_all_urls.py
youtube_dlc/version.py
Changelog
Deobfuscates the video URL using a reverse-engineered version of the
KVS player's algorithm. This was tested against versions 4.0.4, 5.0.1,
5.1.1.4 and 5.2.0.4 of the player, and a warning will be issued if the
major version changes.
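A hedged sketch of the core idea (the swap-index formula below is an
illustrative assumption, not the player's exact reverse-engineered
constants): the obfuscated URL embeds a hash whose characters have been
transposed, and the swap positions can be recomputed from a numeric
license token taken from the page, letting us undo the shuffle.

    def deobfuscate_hash(obfuscated, license_token):
        # Walk the hash from the end, swapping each character with a
        # partner position derived from the license digits (assumed
        # scheme, for illustration only).
        chars = list(obfuscated)
        for o in range(len(chars) - 1, -1, -1):
            swap = (o + sum(int(d) for d in license_token[o:])) % len(chars)
            chars[o], chars[swap] = chars[swap], chars[o]
        return ''.join(chars)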
We already had a few copies of Polymer-style pagination-handling logic
for certain circumstances, but now we're forced to use it for all
playlists, since we can no longer disable Polymer. Refactor the logic
into the parent class for all entry lists (including e.g. search
results, feeds, and lists of playlists), and generify it a bit to cover
the child classes' use cases.
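The shared shape looks roughly like the sketch below (names are
hypothetical, not the actual extractor API): the parent class owns the
continuation-token loop, while each child class only supplies how to
parse one page.

    def _paginate(self, first_page):
        page = first_page
        while page:
            for entry in self._extract_entries(page):  # child-class hook
                yield entry
            token = self._extract_continuation_token(page)  # child hook
            if not token:
                break
            page = self._download_continuation(token)  # fetch next page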
Not my changes; these are from @franhp and didn't get merged into
yt-dl in time. It supports BTCC's new page schema from 2019 onward
(/articles/ instead of /races/).
live_chat_continuation['continuations'][0]['liveChatReplayContinuationData']['continuation'] may not exist,
so catch the KeyError.
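A sketch of the catch (simplified from
youtube_dlc/downloader/youtube_live_chat.py): stop paginating instead
of crashing when the replay continuation data is absent. The original
failure is shown in the traceback below.

    try:
        continuation_id = live_chat_continuation['continuations'][0][
            'liveChatReplayContinuationData']['continuation']
    except KeyError:
        continuation_id = None  # no more chat replay pages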
Traceback:
$ tubeup 'https://youtube.com/watch?v=JyE9OF03cao'
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dlc version 2020.10.25
[debug] Python version 3.7.3 (CPython) - Linux-5.8.0-0.bpo.2-amd64-x86_64-with-debian-10.6
[debug] exe versions: ffmpeg 3.3.9, ffprobe 3.3.9
[debug] Proxy map: {}
There are no annotations to write.
[download] 452.59KiB at 615.35KiB/s (00:01)ERROR: 'liveChatReplayContinuationData'
Traceback (most recent call last):
File "/mnt/data2/Backup/Wiki/.local/lib/python3.7/site-packages/youtube_dlc/YoutubeDL.py", line 846, in extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/mnt/data2/Backup/Wiki/.local/lib/python3.7/site-packages/youtube_dlc/YoutubeDL.py", line 901, in process_ie_result
return self.process_video_result(ie_result, download=download)
File "/mnt/data2/Backup/Wiki/.local/lib/python3.7/site-packages/youtube_dlc/YoutubeDL.py", line 1696, in process_video_result
self.process_info(new_info)
File "/mnt/data2/Backup/Wiki/.local/lib/python3.7/site-packages/youtube_dlc/YoutubeDL.py", line 1894, in process_info
dl(sub_filename, sub_info, subtitle=True)
File "/mnt/data2/Backup/Wiki/.local/lib/python3.7/site-packages/youtube_dlc/YoutubeDL.py", line 1866, in dl
return fd.download(name, info, subtitle)
File "/mnt/data2/Backup/Wiki/.local/lib/python3.7/site-packages/youtube_dlc/downloader/common.py", line 375, in download
return self.real_download(filename, info_dict)
File "/mnt/data2/Backup/Wiki/.local/lib/python3.7/site-packages/youtube_dlc/downloader/youtube_live_chat.py", line 85, in real_download
continuation_id = live_chat_continuation['continuations'][0]['liveChatReplayContinuationData']['continuation']
KeyError: 'liveChatReplayContinuationData'
This can happen when other software uses yt-dlc's API (e.g. tubeup).
The stack trace would be:
$ tubeup 'https://youtube.com/watch?v=JyE9OF03cao'
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] youtube-dlc version 2020.10.25
[debug] Python version 3.7.3 (CPython) - Linux-5.8.0-0.bpo.2-amd64-x86_64-with-debian-10.6
[debug] exe versions: ffmpeg 3.3.9, ffprobe 3.3.9
[debug] Proxy map: {}
There are no annotations to write.
ERROR: '>' not supported between instances of 'NoneType' and 'int'
Traceback (most recent call last):
File "/mnt/data2/Backup/Wiki/.local/lib/python3.7/site-packages/youtube_dlc/YoutubeDL.py", line 846, in extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/mnt/data2/Backup/Wiki/.local/lib/python3.7/site-packages/youtube_dlc/YoutubeDL.py", line 901, in process_ie_result
return self.process_video_result(ie_result, download=download)
File "/mnt/data2/Backup/Wiki/.local/lib/python3.7/site-packages/youtube_dlc/YoutubeDL.py", line 1696, in process_video_result
self.process_info(new_info)
File "/mnt/data2/Backup/Wiki/.local/lib/python3.7/site-packages/youtube_dlc/YoutubeDL.py", line 1894, in process_info
dl(sub_filename, sub_info, subtitle=True)
File "/mnt/data2/Backup/Wiki/.local/lib/python3.7/site-packages/youtube_dlc/YoutubeDL.py", line 1866, in dl
return fd.download(name, info, subtitle)
File "/mnt/data2/Backup/Wiki/.local/lib/python3.7/site-packages/youtube_dlc/downloader/common.py", line 367, in download
if self.params.get('sleep_interval_subtitles') > 0:
TypeError: '>' not supported between instances of 'NoneType' and 'int'
This fix had been proposed on yt-dl for a lengthy period of time but was never merged. It has been thoroughly tested by a large section of the community.
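A sketch of the guard (simplified; the exact fix may differ): treat a
missing or None 'sleep_interval_subtitles' as 0 so the comparison in
downloader/common.py never sees NoneType.

    sleep_interval_sub = self.params.get('sleep_interval_subtitles') or 0
    if sleep_interval_sub > 0:
        self.to_screen(
            '[download] Sleeping %s seconds...' % sleep_interval_sub)
        time.sleep(sleep_interval_sub)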
In the event that there are no sources available because they are all DRM controlled, return a DRM error instead of trying anyway.
#28 reports an ExtractorError: "No video formats found". This is true, because the formats list is empty; however, it is empty because all the sources are locked by DRM. This change provides a more informative message to the end user.
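A minimal sketch of the check (hypothetical names, not the extractor's
exact code): when every available source is DRM protected, raise a
clear, expected error instead of the generic one.

    if not formats and sources and all(s.get('drm') for s in sources):
        raise ExtractorError('This video is DRM protected', expected=True)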
# TESTING
Tried the URL provided in #28 and confirmed a DRM message is returned.
Fix a problem introduced in 320724f964, where the ID was extracted from the URL with self._match_id. The problem is that the ID is not always present in the URL that is passed, so the title should be extracted instead, as proposed by this fix (and as is done in _real_extract; see line 337).
This doesn't result in an elegant, perfectly balanced search tree,
but it's absolutely good enough. This commit completely mitigates
the worst-case scenario where the archive file is sorted.
Signed-off-by: Jody Bruchon <jody@jodybruchon.com>
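One way to picture the mitigation (a hedged illustration; the commit's
exact insertion scheme may differ): feed the archive lines to the tree
(sketched after the next commit message) middle-first, so an
already-sorted file still produces a roughly balanced tree instead of
a degenerate linked list.

    def balanced_order(lines):
        # Yield lines middle-first over successively halved ranges,
        # so inserting them in this order keeps the tree shallow.
        stack = [(0, len(lines))]
        while stack:
            lo, hi = stack.pop()
            if lo >= hi:
                continue
            mid = (lo + hi) // 2
            yield lines[mid]
            stack.append((lo, mid))
            stack.append((mid + 1, hi))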
The old behavior was to open and scan the entire archive file for
every single video download. This resulted in horrible performance
for archives of any remotely large size, especially since all new
video IDs are appended to the end of the archive. For anyone who
uses the archive feature to maintain archives of entire video
playlists or channels, this meant that all such lists with newer
downloads would have to scan close to the end of the archive file
before the potential download was rejected. For archives with tens
of thousands of lines, this easily resulted in millions of line
reads and checks over the course of scanning a single channel or
playlist that had been seen previously.
The new behavior in this commit is to preload the archive file
into a binary search tree and scan the tree instead of rescanning
the file on disk for every candidate download. When a new download is
appended to the archive file, it is also added to this tree. The
performance is massively better using this strategy over the more
"naive" line-by-line archive file parsing strategy.
The only negative consequence of this change is that the archive
in memory will not be synchronized with the archive file on disk.
Running multiple instances of the program at the same time that
all use the same archive file may result in duplicate archive
entries or duplicated downloads. This is unlikely to be a serious
issue for the vast majority of users. If the instances are not
likely to try to download identical video IDs then this should
not be a problem anyway; for example, having two instances pull
two completely different YouTube channels at once should be fine.
Signed-off-by: Jody Bruchon <jody@jodybruchon.com>
With this change, the merge operator may join any number of media streams,
video or audio. The streams are downloaded in the order specified.
Also, fix the metadata post-processor so that it doesn't leave out
any streams.
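Roughly, the merge now maps every requested format in order when
building the ffmpeg arguments, rather than assuming exactly one video
plus one audio stream (a simplified sketch of the approach in
FFmpegMergerPP; details may differ):

    args = ['-c', 'copy']
    for i, fmt in enumerate(info['requested_formats']):
        # Map each input file's audio and/or video stream, preserving
        # the order in which the formats were requested.
        if fmt.get('acodec') != 'none':
            args.extend(['-map', '%u:a:0' % i])
        if fmt.get('vcodec') != 'none':
            args.extend(['-map', '%u:v:0' % i])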