Compare commits


39 Commits

Author SHA1 Message Date
github-actions
84a251e1f5 [version] update
Created by: pukkandan

:ci skip all :ci run dl
2022-06-29 01:41:48 +00:00
pukkandan
9d339c41e2
Release 2022.06.29
2022-06-29 07:09:51 +05:30
pukkandan
ae61d108dd
[cleanup] Misc cleanup
2022-06-29 06:43:27 +05:30
pukkandan
47046464fa
[extractor] Fix empty BaseURL in MPD
Closes #4113
2022-06-29 06:43:26 +05:30
pukkandan
b1f94422cc
[update] Ability to set a maximum version for specific variants
2022-06-29 06:43:24 +05:30
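A sketch of the idea behind this feature, with assumed names and data shapes (the actual mechanism is the `_update_spec` file introduced by the build changes further down in this compare):

```python
def resolve_update_target(variant: str, latest: str, max_versions: dict) -> str:
    """Cap the update target for variants that cannot run newer builds.

    e.g. max_versions = {'macos_legacy': '2022.06.29'} would keep the
    legacy macOS build from updating past its last supported release.
    """
    parse = lambda v: tuple(map(int, v.split('.')))
    capped = max_versions.get(variant)
    return capped if capped and parse(capped) < parse(latest) else latest
```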
pukkandan
c2c8921b41
[build] Draft release until complete
Related: #4133

:ci skip
2022-06-29 05:45:02 +05:30
nomevi
844086505f
[extractor/livestreamfails] Add extractor (#4204)
Authored by: nomevi
2022-06-29 05:41:38 +05:30
Stefan Lobbenmeier
63da2d0911
Fix bug in 6d916fe709 (#4219)
Update only to legacy version on old MacOS

Authored by: StefanLobbenmeier
2022-06-29 05:39:32 +05:30
FestplattenSchnitzel
1db1461272
[extractor/ViMP] Add playlist extractor (#4147)
Authored by: FestplattenSchnitzel
2022-06-29 05:36:25 +05:30
HobbyistDev
5fb450a64c
[extractor/steam] Add broadcast extractor (#4137)
Closes #4083

Authored by: HobbyistDev
2022-06-28 18:21:18 +05:30
Stefan Lobbenmeier
6d916fe709
[build] Standalone x64 builds for MacOS 10.9 (#4106)
Authored by: StefanLobbenmeier
2022-06-28 18:06:30 +05:30
Abubukker Chaudhary
2c60eae899
[extractor/Scrolller] Add extractor (#4010)
Closes #3635
Authored by: LunarFang416
2022-06-28 17:40:43 +05:30
crazymoose77756
962ffcf89c
[cleanup] Fix some typos (#4194)
Authored by: crazymoose77756
2022-06-26 17:50:06 -07:00
MMM
8a40bffaf9
[extractor/lbry] Use HEAD request for redirect URL (#4181)
and misc cleanup 

Authored by: flashdagger
2022-06-26 17:33:31 -07:00
pukkandan
e08f72e675
[extractor/mediaset] Improve _VALID_URL
Fixes https://github.com/yt-dlp/yt-dlp/issues/4141#issuecomment-1166521057
2022-06-26 18:49:34 +05:30
pukkandan
1685d46007
[extractor/ertflix] Improve _VALID_URL
Closes #4180
2022-06-26 17:30:01 +05:30
ischmidt20
8d214c484c
[extractor/CWTV] Extract thumbnail (#4185)
Authored by: ischmidt20
2022-06-25 17:37:36 -07:00
pukkandan
9eef7c4e55
Sanitize chapters
Closes #4182
2022-06-26 04:49:33 +05:30
pukkandan
bbae437723
[hls] Warn user when trying to download live HLS
We do not automatically switch to ffmpeg because the detection is not 100% accurate
2022-06-26 04:48:41 +05:30
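The note above lends itself to a small illustration. A minimal sketch of the described behaviour, with assumed names (not yt-dlp's actual downloader-selection code):

```python
def choose_hls_downloader(manifest: str, report_warning) -> str:
    # A finished (VOD) playlist contains '#EXT-X-ENDLIST'; its absence
    # suggests, but does not prove, a live stream -- hence warn instead
    # of silently switching downloaders
    if '#EXT-X-ENDLIST' not in manifest:
        report_warning(
            'Live HLS detected: downloading with the native downloader may be '
            'unreliable; consider using "--downloader ffmpeg"')
    return 'native'  # per the message above, no automatic switch
```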
HobbyistDev
30d22d775b
[extractor/premiershiprugby] Add extractor (#4129)
Closes #2980
Authored by: HobbyistDev
2022-06-25 07:43:32 -07:00
pukkandan
c043c24625
[extractor] Fix _create_request when headers is None
Closes #4164
2022-06-25 19:41:22 +05:30
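A hypothetical, minimal illustration of the class of bug named here (the real `_create_request` lives in yt-dlp's extractor code; the names below are assumptions):

```python
import urllib.request

def create_request(url, data=None, headers=None):
    # `headers or {}` guards against callers passing headers=None,
    # which urllib.request.Request cannot iterate over
    return urllib.request.Request(url, data=data, headers=headers or {})
```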
FestplattenSchnitzel
74900105be
[extractor/ViMP] Add thumbnail and support more sites (#4147)
Authored by: FestplattenSchnitzel
2022-06-25 19:06:24 +05:30
HobbyistDev
d1bf2e199c
[extractor/fuyin] Add extractor (#4151)
Closes #2871

Authored by: HobbyistDev
2022-06-25 06:14:58 -07:00
pukkandan
c800598cd1
[options] Fix parse_known_args for --
Closes #4167
2022-06-25 08:38:52 +05:30
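For context, the `--` convention this fix concerns: everything after a bare `--` is treated as a positional argument (for yt-dlp, a URL), even if it starts with a dash. A sketch of the convention using `argparse` (yt-dlp's own parser is optparse-based; this only demonstrates the behaviour being fixed):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-f', '--format')
parser.add_argument('urls', nargs='*')

# '-weird-id' would normally be rejected as an unknown option;
# after '--' it parses as a positional URL
args = parser.parse_args(['-f', 'mp4', '--', '-weird-id'])
assert args.format == 'mp4' and args.urls == ['-weird-id']
```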
pukkandan
14f25df2b6
[compat] Remove deprecated functions from core code
2022-06-25 00:14:12 +05:30
pukkandan
54007a45f1
[cleanup] Consistent style for file heads
2022-06-25 00:08:58 +05:30
pukkandan
ac66811112
[compat] Remove more functions
Removing any more will require changes to a large number of extractors
2022-06-25 00:08:55 +05:30
pukkandan
3c5386cd71
[compat] Fix compat.WINDOWS_VT_MODE
2022-06-25 00:08:52 +05:30
pukkandan
bc40160883
Fix section_end of clips
Closes #4165
2022-06-25 00:08:49 +05:30
coletdev
379a4f161d
[utils] Fix inconsistent default handling between HTTP and HTTPS requests (#4158)
Default headers such as `Content-Type` were only being added for HTTPS requests, among other inconsistent handling.

Fixes bug in be4a824d74

Authored-by: coletdjnz
2022-06-24 03:29:28 +00:00
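A schematic of the inconsistency being fixed, with assumed names (the real logic lives in yt-dlp's urllib handler setup): defaults should be merged into every request, regardless of URL scheme.

```python
DEFAULT_HEADERS = {'User-Agent': 'yt-dlp', 'Accept': '*/*'}

def prepare_headers(request_headers: dict, url: str) -> dict:
    # Buggy shape: defaults merged only under one scheme, e.g.
    #     if url.startswith('https:'): headers = {**DEFAULT_HEADERS, ...}
    # Fixed shape: merge unconditionally, with per-request values
    # overriding the defaults
    return {**DEFAULT_HEADERS, **request_headers}
```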
Brett824
06cc8f103b
[extractor/youtube] Mark videos as fully watched (#4146)
* Also fixes videos appearing as shorts in watch history

Closes #2555
Authored by: Brett824
2022-06-23 16:30:17 -07:00
Jelle Besseling
34baaced11
[extractor/dropout] Support cookies and login only as needed (#4075)
Closes #4035
Authored by: pingiun, pukkandan
2022-06-23 16:21:03 -07:00
pukkandan
9809740ba5
[extractor, cleanup] Reduce direct use of _downloader
2022-06-23 09:57:26 +05:30
pukkandan
f67baae17e
[ffmpeg] Write full output to debug on error
Bug in f0c9fb9682
2022-06-23 09:17:34 +05:30
zenerdi0de
37e40d693b
[extractor/tennistv] Rewrite extractor (#2324)
Closes #2177
Authored by: zenerdi0de, pukkandan
2022-06-22 19:01:34 -07:00
pukkandan
0c36dc00d7
[extractor/npr] Implement e50c3500b4 differently
Closes #4141
2022-06-23 01:46:49 +05:30
pukkandan
28163422a6
Fix --downloader native
Bug in 7b2c3f47c6
2022-06-22 19:14:36 +05:30
pukkandan
1ac4fd80c8
Fix playlist error handling
Bug in 7e88d7d78f
2022-06-22 08:39:14 +05:30
pukkandan
885fe351fb
[build] Fix release tag commit
bug in b5899f4f19
2022-06-22 07:50:46 +05:30
153 changed files with 1687 additions and 1091 deletions

View File

@@ -11,7 +11,7 @@ body:
   options:
     - label: I'm reporting a broken site
       required: true
-    - label: I've verified that I'm running yt-dlp version **2022.06.22.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+    - label: I've verified that I'm running yt-dlp version **2022.06.29** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
       required: true
     - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
       required: true
@@ -51,12 +51,12 @@ body:
         [debug] Portable config file: yt-dlp.conf
         [debug] Portable config: ['-i']
         [debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
-        [debug] yt-dlp version 2022.06.22.1 (exe)
+        [debug] yt-dlp version 2022.06.29 (exe)
         [debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
         [debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
         [debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
         [debug] Proxy map: {}
-        yt-dlp is up to date (2022.06.22.1)
+        yt-dlp is up to date (2022.06.29)
         <more lines>
       render: shell
     validations:

View File

@@ -11,7 +11,7 @@ body:
   options:
     - label: I'm reporting a new site support request
      required: true
-    - label: I've verified that I'm running yt-dlp version **2022.06.22.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+    - label: I've verified that I'm running yt-dlp version **2022.06.29** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
       required: true
     - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
       required: true
@@ -62,12 +62,12 @@ body:
         [debug] Portable config file: yt-dlp.conf
         [debug] Portable config: ['-i']
         [debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
-        [debug] yt-dlp version 2022.06.22.1 (exe)
+        [debug] yt-dlp version 2022.06.29 (exe)
         [debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
         [debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
         [debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
         [debug] Proxy map: {}
-        yt-dlp is up to date (2022.06.22.1)
+        yt-dlp is up to date (2022.06.29)
         <more lines>
       render: shell
     validations:

View File

@@ -11,7 +11,7 @@ body:
   options:
     - label: I'm requesting a site-specific feature
       required: true
-    - label: I've verified that I'm running yt-dlp version **2022.06.22.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+    - label: I've verified that I'm running yt-dlp version **2022.06.29** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
       required: true
     - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
       required: true
@@ -60,12 +60,12 @@ body:
         [debug] Portable config file: yt-dlp.conf
         [debug] Portable config: ['-i']
         [debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
-        [debug] yt-dlp version 2022.06.22.1 (exe)
+        [debug] yt-dlp version 2022.06.29 (exe)
         [debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
         [debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
         [debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
         [debug] Proxy map: {}
-        yt-dlp is up to date (2022.06.22.1)
+        yt-dlp is up to date (2022.06.29)
         <more lines>
       render: shell
     validations:

View File

@@ -11,7 +11,7 @@ body:
   options:
     - label: I'm reporting a bug unrelated to a specific site
       required: true
-    - label: I've verified that I'm running yt-dlp version **2022.06.22.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+    - label: I've verified that I'm running yt-dlp version **2022.06.29** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
       required: true
     - label: I've checked that all provided URLs are playable in a browser with the same IP and same login details
       required: true
@@ -45,12 +45,12 @@ body:
         [debug] Portable config file: yt-dlp.conf
         [debug] Portable config: ['-i']
         [debug] Encodings: locale cp1252, fs utf-8, stdout utf-8, stderr utf-8, pref cp1252
-        [debug] yt-dlp version 2022.06.22.1 (exe)
+        [debug] yt-dlp version 2022.06.29 (exe)
         [debug] Python version 3.8.8 (CPython 64bit) - Windows-10-10.0.19041-SP0
         [debug] exe versions: ffmpeg 3.0.1, ffprobe 3.0.1
         [debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
         [debug] Proxy map: {}
-        yt-dlp is up to date (2022.06.22.1)
+        yt-dlp is up to date (2022.06.29)
         <more lines>
       render: shell
     validations:

View File

@@ -13,7 +13,7 @@ body:
       required: true
     - label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
       required: true
-    - label: I've verified that I'm running yt-dlp version **2022.06.22.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+    - label: I've verified that I'm running yt-dlp version **2022.06.29** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
       required: true
     - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
       required: true

View File

@@ -13,7 +13,7 @@ body:
       required: true
     - label: I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
       required: true
-    - label: I've verified that I'm running yt-dlp version **2022.06.22.1** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
+    - label: I've verified that I'm running yt-dlp version **2022.06.29** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
       required: true
     - label: I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions including closed ones. DO NOT post duplicates
       required: true

View File

@@ -8,6 +8,7 @@ jobs:
       version_suffix: ${{ steps.version_suffix.outputs.version_suffix }}
       ytdlp_version: ${{ steps.bump_version.outputs.ytdlp_version }}
       upload_url: ${{ steps.create_release.outputs.upload_url }}
+      release_id: ${{ steps.create_release.outputs.id }}
     steps:
       - uses: actions/checkout@v2
         with:
@@ -29,6 +30,7 @@ jobs:
          make issuetemplates

       - name: Push to release
+        id: push_release
        run: |
           git config --global user.name github-actions
           git config --global user.email github-actions@example.com
@@ -57,15 +59,19 @@ jobs:
           tag_name: ${{ steps.bump_version.outputs.ytdlp_version }}
           release_name: yt-dlp ${{ steps.bump_version.outputs.ytdlp_version }}
           commitish: ${{ steps.push_release.outputs.head_sha }}
+          draft: true
+          prerelease: false
           body: |
             #### [A description of the various files]((https://github.com/yt-dlp/yt-dlp#release-files)) are in the README
             ---
-            ### Changelog:
+            <details open><summary><h3>Changelog</summary>
+            <p>
+
             ${{ env.changelog }}
-          draft: false
-          prerelease: false
+
+            </p>
+            </details>

   build_unix:
@@ -235,6 +241,52 @@ jobs:
           asset_content_type: application/zip

+  build_macos_legacy:
+    runs-on: macos-latest
+    needs: create_release
+
+    steps:
+      - uses: actions/checkout@v2
+      - name: Install Python
+        # We need the official Python, because the GA ones only support newer macOS versions
+        env:
+          PYTHON_VERSION: 3.10.5
+          MACOSX_DEPLOYMENT_TARGET: 10.9  # Used up by the Python build tools
+        run: |
+          # Hack to get the latest patch version. Uncomment if needed
+          #brew install python@3.10
+          #export PYTHON_VERSION=$( $(brew --prefix)/opt/python@3.10/bin/python3 --version | cut -d ' ' -f 2 )
+          curl https://www.python.org/ftp/python/${PYTHON_VERSION}/python-${PYTHON_VERSION}-macos11.pkg -o "python.pkg"
+          sudo installer -pkg python.pkg -target /
+          python3 --version
+      - name: Install Requirements
+        run: |
+          brew install coreutils
+          python3 -m pip install -U --user pip Pyinstaller -r requirements.txt
+      - name: Prepare
+        run: |
+          python3 devscripts/update-version.py ${{ needs.create_release.outputs.version_suffix }}
+          python3 devscripts/make_lazy_extractors.py
+      - name: Build
+        run: |
+          python3 pyinst.py
+      - name: Get SHA2-SUMS
+        id: get_sha
+        run: |
+          echo "::set-output name=sha256_macos_legacy::$(sha256sum dist/yt-dlp_macos | awk '{print $1}')"
+          echo "::set-output name=sha512_macos_legacy::$(sha512sum dist/yt-dlp_macos | awk '{print $1}')"
+      - name: Upload standalone binary
+        uses: actions/upload-release-asset@v1
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        with:
+          upload_url: ${{ needs.create_release.outputs.upload_url }}
+          asset_path: ./dist/yt-dlp_macos
+          asset_name: yt-dlp_macos_legacy
+          asset_content_type: application/octet-stream
+
   build_windows:
     runs-on: windows-latest
     needs: create_release
@@ -350,7 +402,7 @@ jobs:
   finish:
     runs-on: ubuntu-latest
-    needs: [create_release, build_unix, build_windows, build_windows32, build_macos]
+    needs: [create_release, build_unix, build_windows, build_windows32, build_macos, build_macos_legacy]

     steps:
       - name: Make SHA2-SUMS files
@@ -365,6 +417,7 @@ jobs:
           echo "${{ needs.build_windows.outputs.sha256_win_zip }} yt-dlp_win.zip" >> SHA2-256SUMS
           echo "${{ needs.build_macos.outputs.sha256_macos }} yt-dlp_macos" >> SHA2-256SUMS
           echo "${{ needs.build_macos.outputs.sha256_macos_zip }} yt-dlp_macos.zip" >> SHA2-256SUMS
+          echo "${{ needs.build_macos_legacy.outputs.sha256_macos_legacy }} yt-dlp_macos_legacy" >> SHA2-256SUMS
           echo "${{ needs.build_unix.outputs.sha512_bin }} yt-dlp" >> SHA2-512SUMS
           echo "${{ needs.build_unix.outputs.sha512_tar }} yt-dlp.tar.gz" >> SHA2-512SUMS
           echo "${{ needs.build_unix.outputs.sha512_linux }} yt-dlp_linux" >> SHA2-512SUMS
@@ -375,6 +428,7 @@ jobs:
           echo "${{ needs.build_windows.outputs.sha512_win_zip }} yt-dlp_win.zip" >> SHA2-512SUMS
           echo "${{ needs.build_macos.outputs.sha512_macos }} yt-dlp_macos" >> SHA2-512SUMS
           echo "${{ needs.build_macos.outputs.sha512_macos_zip }} yt-dlp_macos.zip" >> SHA2-512SUMS
+          echo "${{ needs.build_macos_legacy.outputs.sha512_macos_legacy }} yt-dlp_macos_legacy" >> SHA2-512SUMS

       - name: Upload SHA2-256SUMS file
         uses: actions/upload-release-asset@v1
@@ -394,3 +448,24 @@ jobs:
           asset_path: ./SHA2-512SUMS
           asset_name: SHA2-512SUMS
           asset_content_type: text/plain
+
+      - name: Make Update spec
+        run: |
+          echo "# This file is used for regulating self-update" >> _update_spec
+      - name: Upload update spec
+        uses: actions/upload-release-asset@v1
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        with:
+          upload_url: ${{ needs.create_release.outputs.upload_url }}
+          asset_path: ./_update_spec
+          asset_name: _update_spec
+          asset_content_type: text/plain
+
+      - name: Finalize release
+        env:
+          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+        run: |
+          gh api -X PATCH -H "Accept: application/vnd.github.v3+json" \
+            /repos/${{ github.repository }}/releases/${{ needs.create_release.outputs.release_id }} \
+            -F draft=false

View File

@@ -457,7 +457,7 @@ title = self._search_regex( # incorrect
     webpage, 'title', group='title')
 ```
-Here the presence or absence of other attributes including `style` is irrelevent for the data we need, and so the regex must not depend on it
+Here the presence or absence of other attributes including `style` is irrelevant for the data we need, and so the regex must not depend on it

 #### Keep the regular expressions as simple as possible, but no simpler
@@ -501,7 +501,7 @@ There is a soft limit to keep lines of code under 100 characters long. This mean
 For example, you should **never** split long string literals like URLs or some other often copied entities over multiple lines to fit this limit:

-Conversely, don't unecessarily split small lines further. As a rule of thumb, if removing the line split keeps the code under 80 characters, it should be a single line.
+Conversely, don't unnecessarily split small lines further. As a rule of thumb, if removing the line split keeps the code under 80 characters, it should be a single line.

 ##### Examples
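To make the guideline above concrete, an illustrative pair of regexes over hypothetical HTML (not taken from the yt-dlp docs themselves): the brittle pattern anchors on the irrelevant `style` attribute, the robust one does not.

```python
import re

html = '<span style="position: absolute;" class="title">Video Title</span>'

brittle = r'<span style="position: absolute;.*?class="title".*?>(?P<title>.*?)</span>'
robust = r'<span[^>]+class="title"[^>]*>(?P<title>[^<]+)'

assert re.search(robust, html).group('title') == 'Video Title'
# The brittle pattern breaks as soon as the styling changes:
assert re.search(brittle, html.replace('absolute', 'relative')) is None
```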

View File

@@ -267,3 +267,8 @@ sqrtNOT
 bubbleguuum
 darkxex
 miseran
+StefanLobbenmeier
+crazymoose77756
+nomevi
+Brett824
+pingiun

View File

@@ -11,6 +11,45 @@
 -->

+### 2022.06.29
+
+* Fix `--downloader native`
+* Fix `section_end` of clips
+* Fix playlist error handling
+* Sanitize `chapters`
+* [extractor] Fix `_create_request` when headers is None
+* [extractor] Fix empty `BaseURL` in MPD
+* [ffmpeg] Write full output to debug on error
+* [hls] Warn user when trying to download live HLS
+* [options] Fix `parse_known_args` for `--`
+* [utils] Fix inconsistent default handling between HTTP and HTTPS requests by [coletdjnz](https://github.com/coletdjnz)
+* [build] Draft release until complete
+* [build] Fix release tag commit
+* [build] Standalone x64 builds for MacOS 10.9 by [StefanLobbenmeier](https://github.com/StefanLobbenmeier)
+* [update] Ability to set a maximum version for specific variants
+* [compat] Fix `compat.WINDOWS_VT_MODE`
+* [compat] Remove deprecated functions from core code
+* [compat] Remove more functions
+* [cleanup, extractor] Reduce direct use of `_downloader`
+* [cleanup] Consistent style for file heads
+* [cleanup] Fix some typos by [crazymoose77756](https://github.com/crazymoose77756)
+* [cleanup] Misc fixes and cleanup
+* [extractor/Scrolller] Add extractor by [LunarFang416](https://github.com/LunarFang416)
+* [extractor/ViMP] Add playlist extractor by [FestplattenSchnitzel](https://github.com/FestplattenSchnitzel)
+* [extractor/fuyin] Add extractor by [HobbyistDev](https://github.com/HobbyistDev)
+* [extractor/livestreamfails] Add extractor by [nomevi](https://github.com/nomevi)
+* [extractor/premiershiprugby] Add extractor by [HobbyistDev](https://github.com/HobbyistDev)
+* [extractor/steam] Add broadcast extractor by [HobbyistDev](https://github.com/HobbyistDev)
+* [extractor/youtube] Mark videos as fully watched by [Brett824](https://github.com/Brett824)
+* [extractor/CWTV] Extract thumbnail by [ischmidt20](https://github.com/ischmidt20)
+* [extractor/ViMP] Add thumbnail and support more sites by [FestplattenSchnitzel](https://github.com/FestplattenSchnitzel)
+* [extractor/dropout] Support cookies and login only as needed by [pingiun](https://github.com/pingiun), [pukkandan](https://github.com/pukkandan)
+* [extractor/ertflix] Improve `_VALID_URL`
+* [extractor/lbry] Use HEAD request for redirect URL by [flashdagger](https://github.com/flashdagger)
+* [extractor/mediaset] Improve `_VALID_URL`
+* [extractor/npr] Implement [e50c350](https://github.com/yt-dlp/yt-dlp/commit/e50c3500b43d80e4492569c4b4523c4379c6fbb2) differently
+* [extractor/tennistv] Rewrite extractor by [pukkandan](https://github.com/pukkandan), [zenerdi0de](https://github.com/zenerdi0de)
+
 ### 2022.06.22.1
 * [build] Fix updating homebrew formula
@@ -544,7 +583,7 @@
 * [downloader/ffmpeg] Handle unknown formats better
 * [outtmpl] Handle `-o ""` better
 * [outtmpl] Handle hard-coded file extension better
-* [extractor] Add convinience function `_yes_playlist`
+* [extractor] Add convenience function `_yes_playlist`
 * [extractor] Allow non-fatal `title` extraction
 * [extractor] Extract video inside `Article` json_ld
 * [generic] Allow further processing of json_ld URL
@@ -1678,7 +1717,7 @@
 * [utils] Generalize `traverse_dict` to `traverse_obj`
 * [downloader/ffmpeg] Hide FFmpeg banner unless in verbose mode by [fstirlitz](https://github.com/fstirlitz)
 * [build] Release `yt-dlp.tar.gz`
-* [build,update] Add GNU-style SHA512 and prepare updater for simlar SHA256 by [nihil-admirari](https://github.com/nihil-admirari)
+* [build,update] Add GNU-style SHA512 and prepare updater for similar SHA256 by [nihil-admirari](https://github.com/nihil-admirari)
 * [pyinst] Show Python version in exe metadata by [nihil-admirari](https://github.com/nihil-admirari)
 * [docs] Improve documentation of dependencies
 * [cleanup] Mark unused files

View File

@@ -71,7 +71,7 @@ yt-dlp is a [youtube-dl](https://github.com/ytdl-org/youtube-dl) fork based on t
 # NEW FEATURES

-* Based on **youtube-dl 2021.12.17 [commit/8a158a9](https://github.com/ytdl-org/youtube-dl/commit/8a158a936c8b002ef536e9e2b778ded02c09c0fa)**<!--([exceptions](https://github.com/yt-dlp/yt-dlp/issues/21))--> and **youtube-dlc 2020.11.11-3 [commit/f9401f2](https://github.com/blackjack4494/yt-dlc/commit/f9401f2a91987068139c5f757b12fc711d4c0cee)**: You get all the features and patches of [youtube-dlc](https://github.com/blackjack4494/yt-dlc) in addition to the latest [youtube-dl](https://github.com/ytdl-org/youtube-dl)
+* Merged with **youtube-dl v2021.12.17+ [commit/a03b977](https://github.com/ytdl-org/youtube-dl/commit/a03b9775d544b06a5b4f2aa630214c7c22fc2229)**<!--([exceptions](https://github.com/yt-dlp/yt-dlp/issues/21))--> and **youtube-dlc v2020.11.11-3+ [commit/f9401f2](https://github.com/blackjack4494/yt-dlc/commit/f9401f2a91987068139c5f757b12fc711d4c0cee)**: You get all the features and patches of [youtube-dlc](https://github.com/blackjack4494/yt-dlc) in addition to the latest [youtube-dl](https://github.com/ytdl-org/youtube-dl)

 * **[SponsorBlock Integration](#sponsorblock-options)**: You can mark/remove sponsor sections in youtube videos by utilizing the [SponsorBlock](https://sponsor.ajay.app) API
@@ -79,18 +79,13 @@ yt-dlp is a [youtube-dl](https://github.com/ytdl-org/youtube-dl) fork based on t
 * **Merged with animelover1984/youtube-dl**: You get most of the features and improvements from [animelover1984/youtube-dl](https://github.com/animelover1984/youtube-dl) including `--write-comments`, `BiliBiliSearch`, `BilibiliChannel`, Embedding thumbnail in mp4/ogg/opus, playlist infojson etc. Note that the NicoNico livestreams are not available. See [#31](https://github.com/yt-dlp/yt-dlp/pull/31) for details.

-* **Youtube improvements**:
-    * All Feeds (`:ytfav`, `:ytwatchlater`, `:ytsubs`, `:ythistory`, `:ytrec`, `:ytnotif`) and private playlists supports downloading multiple pages of content
-    * Search (`ytsearch:`, `ytsearchdate:`), search URLs and in-channel search works
-    * Mixes supports downloading multiple pages of content
-    * Some (but not all) age-gated content can be downloaded without cookies
-    * Fix for [n-sig based throttling](https://github.com/ytdl-org/youtube-dl/issues/29326)
+* **YouTube improvements**:
+    * Supports Clips, Stories (`ytstories:<channel UCID>`), Search (including filters)**\***, YouTube Music Search, Channel-specific search, Search prefixes (`ytsearch:`, `ytsearchdate:`)**\***, Mixes, YouTube Music Albums/Channels ([except self-uploaded music](https://github.com/yt-dlp/yt-dlp/issues/723)), and Feeds (`:ytfav`, `:ytwatchlater`, `:ytsubs`, `:ythistory`, `:ytrec`, `:ytnotif`)
+    * Fix for [n-sig based throttling](https://github.com/ytdl-org/youtube-dl/issues/29326) **\***
+    * Supports some (but not all) age-gated content without cookies
+    * Download livestreams from the start using `--live-from-start` (*experimental*)
+    * `255kbps` audio is extracted (if available) from YouTube Music when premium cookies are given
     * Redirect channel's home URL automatically to `/video` to preserve the old behaviour
-    * `255kbps` audio is extracted (if available) from youtube music when premium cookies are given
-    * Youtube music Albums, channels etc can be downloaded ([except self-uploaded music](https://github.com/yt-dlp/yt-dlp/issues/723))
-    * Download livestreams from the start using `--live-from-start` (experimental)
-    * Support for downloading stories (`ytstories:<channel UCID>`)
-    * Support for downloading clips

 * **Cookies from browser**: Cookies can be automatically extracted from all major web browsers using `--cookies-from-browser BROWSER[+KEYRING][:PROFILE]`
@@ -124,6 +119,8 @@ yt-dlp is a [youtube-dl](https://github.com/ytdl-org/youtube-dl) fork based on t
 See [changelog](Changelog.md) or [commits](https://github.com/yt-dlp/yt-dlp/commits) for the full list of changes

+Features marked with a **\*** have been back-ported to youtube-dl
+
 ### Differences in default behavior

 Some of yt-dlp's default options are different from that of youtube-dl and youtube-dlc:
@@ -150,7 +147,7 @@ Some of yt-dlp's default options are different from that of youtube-dl and youtu
 * Some private fields such as filenames are removed by default from the infojson. Use `--no-clean-infojson` or `--compat-options no-clean-infojson` to revert this
 * When `--embed-subs` and `--write-subs` are used together, the subtitles are written to disk and also embedded in the media file. You can use just `--embed-subs` to embed the subs and automatically delete the separate file. See [#630 (comment)](https://github.com/yt-dlp/yt-dlp/issues/630#issuecomment-893659460) for more info. `--compat-options no-keep-subs` can be used to revert this
 * `certifi` will be used for SSL root certificates, if installed. If you want to use only system certificates, use `--compat-options no-certifi`
-* youtube-dl tries to remove some superfluous punctuations from filenames. While this can sometimes be helpfull, it is often undesirable. So yt-dlp tries to keep the fields in the filenames as close to their original values as possible. You can use `--compat-options filename-sanitization` to revert to youtube-dl's behavior
+* youtube-dl tries to remove some superfluous punctuations from filenames. While this can sometimes be helpful, it is often undesirable. So yt-dlp tries to keep the fields in the filenames as close to their original values as possible. You can use `--compat-options filename-sanitization` to revert to youtube-dl's behavior

 For ease of use, a few more compat options are available:
@@ -239,7 +236,7 @@ If you [installed using Homebrew](#with-homebrew), run `brew upgrade yt-dlp/taps
 File|Description
 :---|:---
-[yt-dlp](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp)|Platform-independant [zipimport](https://docs.python.org/3/library/zipimport.html) binary. Needs Python (recommended for **Linux/BSD**)
+[yt-dlp](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp)|Platform-independent [zipimport](https://docs.python.org/3/library/zipimport.html) binary. Needs Python (recommended for **Linux/BSD**)
 [yt-dlp.exe](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp.exe)|Windows (Win7 SP1+) standalone x64 binary (recommended for **Windows**)
 [yt-dlp_macos](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_macos)|MacOS (10.15+) standalone executable (recommended for **MacOS**)
@@ -253,6 +250,7 @@ File|Description
 [yt-dlp_linux.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_linux.zip)|Unpackaged Unix executable (no auto-update)
 [yt-dlp_win.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_win.zip)|Unpackaged Windows executable (no auto-update)
 [yt-dlp_macos.zip](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_macos.zip)|Unpackaged MacOS (10.15+) executable (no auto-update)
+[yt-dlp_macos_legacy](https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp_macos_legacy)|MacOS (10.9+) standalone x64 executable

 #### Misc
@@ -433,7 +431,7 @@ You can also fork the project on github and run your fork's [build workflow](.gi
                                     "-S=aext:ARG0,abr -x --audio-format ARG0".
                                     All defined aliases are listed in the --help
                                     output. Alias options can trigger more
-                                    aliases; so be carefull to avoid defining
+                                    aliases; so be careful to avoid defining
                                     recursive options. As a safety measure, each
                                     alias may be triggered a maximum of 100
                                     times. This option can be used multiple times
@@ -466,7 +464,7 @@ You can also fork the project on github and run your fork's [build workflow](.gi
                                     explicitly provided IP block in CIDR notation

 ## Video Selection:
-    -I, --playlist-items ITEM_SPEC  Comma seperated playlist_index of the videos
+    -I, --playlist-items ITEM_SPEC  Comma separated playlist_index of the videos
                                     to download. You can specify a range using
                                     "[START]:[STOP][:STEP]". For backward
                                     compatibility, START-STOP is also supported.

View File

@@ -1,9 +1,12 @@
 #!/usr/bin/env python3
+
+# Allow direct execution
 import os
 import sys

 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

 import yt_dlp

 BASH_COMPLETION_FILE = "completions/bash/yt-dlp"

View File

@@ -13,9 +13,11 @@ import sys

 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

-from test.helper import gettestcases
-from yt_dlp.utils import compat_urllib_parse_urlparse, compat_urllib_request
+import urllib.parse
+import urllib.request
+
+from test.helper import gettestcases

 if len(sys.argv) > 1:
     METHOD = 'LIST'
@@ -26,7 +28,7 @@ else:
 for test in gettestcases():
     if METHOD == 'EURISTIC':
         try:
-            webpage = compat_urllib_request.urlopen(test['url'], timeout=10).read()
+            webpage = urllib.request.urlopen(test['url'], timeout=10).read()
         except Exception:
             print('\nFail: {}'.format(test['name']))
             continue
@@ -36,7 +38,7 @@ for test in gettestcases():
         RESULT = 'porn' in webpage.lower()

     elif METHOD == 'LIST':
-        domain = compat_urllib_parse_urlparse(test['url']).netloc
+        domain = urllib.parse.urlparse(test['url']).netloc
         if not domain:
             print('\nFail: {}'.format(test['name']))
             continue

View File

@@ -1,10 +1,14 @@
 #!/usr/bin/env python3
-import optparse
+
+# Allow direct execution
 import os
 import sys

 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

+import optparse
+
 import yt_dlp
 from yt_dlp.utils import shell_quote

View File

@@ -1,11 +1,15 @@
 #!/usr/bin/env python3
-import codecs
+
+# Allow direct execution
 import os
-import subprocess
 import sys

 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

+import codecs
+import subprocess
+
 from yt_dlp.aes import aes_encrypt, key_expansion
 from yt_dlp.utils import intlist_to_bytes

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env python3
+
 import optparse
 import re

View File

@@ -1,4 +1,12 @@
 #!/usr/bin/env python3
+
+# Allow direct execution
+import os
+import sys
+
+sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
 import optparse
@@ -7,7 +15,7 @@ def read(fname):
         return f.read()

-# Get the version from yt_dlp/version.py without importing the package
+# Get the version without importing the package
 def read_version(fname):
     exec(compile(read(fname), fname, 'exec'))
     return locals()['__version__']

View File

@@ -1,12 +1,15 @@
 #!/usr/bin/env python3
-import optparse
+
+# Allow direct execution
 import os
 import sys
-from inspect import getsource

 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

+import optparse
+from inspect import getsource
+
 NO_ATTR = object()
 STATIC_CLASS_PROPERTIES = ['IE_NAME', 'IE_DESC', 'SEARCH_KEY', '_WORKING', '_NETRC_MACHINE', 'age_limit']
 CLASS_METHODS = [

View File

@@ -1,7 +1,11 @@
 #!/usr/bin/env python3
-# yt-dlp --help | make_readme.py
-# This must be run in a console of correct width
+
+"""
+yt-dlp --help | make_readme.py
+
+This must be run in a console of correct width
+"""
 import functools
 import re
 import sys

View File

@@ -1,10 +1,14 @@
 #!/usr/bin/env python3
-import optparse
+
+# Allow direct execution
 import os
 import sys

 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

+import optparse
+
 from yt_dlp.extractor import list_extractor_classes

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env python3
+
 import optparse
 import os.path
 import re
@@ -23,7 +24,7 @@ yt\-dlp \- A youtube-dl fork with additional features and patches

 def main():
     parser = optparse.OptionParser(usage='%prog OUTFILE.md')
-    options, args = parser.parse_args()
+    _, args = parser.parse_args()
     if len(args) != 1:
         parser.error('Expected an output filename')

View File

@@ -1,12 +1,15 @@
 #!/usr/bin/env python3
-import json
+
+# Allow direct execution
 import os
-import re
 import sys

 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

-from yt_dlp.compat import compat_urllib_request
+import json
+import re
+import urllib.request

 # usage: python3 ./devscripts/update-formulae.py <path-to-formulae-rb> <version>
 # version can be either 0-aligned (yt-dlp version) or normalized (PyPl version)
@@ -15,7 +18,7 @@ filename, version = sys.argv[1:]

 normalized_version = '.'.join(str(int(x)) for x in version.split('.'))

-pypi_release = json.loads(compat_urllib_request.urlopen(
+pypi_release = json.loads(urllib.request.urlopen(
     'https://pypi.org/pypi/yt-dlp/%s/json' % normalized_version
 ).read().decode())

View File

@@ -1,4 +1,12 @@
 #!/usr/bin/env python3
+
+# Allow direct execution
+import os
+import sys
+
+sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
+
 import subprocess
 import sys
 from datetime import datetime

View File

@@ -1,9 +1,12 @@
 #!/usr/bin/env python3
+
+# Allow direct execution
 import os
 import sys

 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

 import yt_dlp

 ZSH_COMPLETION_FILE = "completions/zsh/_yt-dlp"

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env python3
+
 import os
 import platform
 import sys
@@ -43,7 +44,7 @@ def main():

 def parse_options():
-    # Compatability with older arguments
+    # Compatibility with older arguments
     opts = sys.argv[1:]
     if opts[0:1] in (['32'], ['64']):
         if ARCH != opts[0]:

View File

@@ -37,3 +37,5 @@ line_length = 80
 reverse_relative = true
 ensure_newline_before_comments = true
 include_trailing_comma = true
+known_first_party =
+    test
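For context on this setting: with `test` declared first-party, isort groups `from test.helper import ...` with the project's own modules instead of with third-party imports, which is why the devscripts and test diffs in this compare regroup their imports. A schematic of the resulting order (simplified):

```python
import os      # stdlib block
import shutil

from test.helper import FakeYDL   # first-party block: `test` now sorts
from yt_dlp.cache import Cache    # alongside the project's own modules
```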

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env python3
+
 import os.path
 import sys
 import warnings

View File

@@ -418,6 +418,7 @@
  - **Funk**
  - **Fusion**
  - **Fux**
+ - **FuyinTV**
  - **Gab**
  - **GabTV**
  - **Gaia**: [<abbr title="netrc machine"><em>gaia</em></abbr>]
@@ -618,6 +619,7 @@
  - **LiveJournal**
  - **livestream**
  - **livestream:original**
+ - **Livestreamfails**
  - **Lnk**
  - **LnkGo**
  - **loc**: Library of Congress
@@ -982,6 +984,7 @@
  - **PornoVoisines**
  - **PornoXO**
  - **PornTube**
+ - **PremiershipRugby**
  - **PressTV**
  - **ProjectVeritas**
  - **prosiebensat1**: ProSiebenSat.1 Digital
@@ -1113,6 +1116,7 @@
  - **ScreencastOMatic**
  - **ScrippsNetworks**
  - **scrippsnetworks:watch**
+ - **Scrolller**
  - **SCTE**: [<abbr title="netrc machine"><em>scte</em></abbr>]
  - **SCTECourse**: [<abbr title="netrc machine"><em>scte</em></abbr>]
  - **Seeker**
@@ -1189,6 +1193,7 @@
  - **stanfordoc**: Stanford Open ClassRoom
  - **startv**
  - **Steam**
+ - **SteamCommunityBroadcast**
  - **Stitcher**
  - **StitcherShow**
  - **StoryFire**
@@ -1427,7 +1432,8 @@
  - **vimeo:watchlater**: [<abbr title="netrc machine"><em>vimeo</em></abbr>] Vimeo watch later list, ":vimeowatchlater" keyword (requires authentication)
  - **Vimm:recording**
  - **Vimm:stream**
- - **Vimp**
+ - **ViMP**
+ - **ViMP:Playlist**
  - **Vimple**: Vimple - one-click video hosting
  - **Vine**
  - **vine:user**

View File

@@ -9,7 +9,7 @@ import types

 import yt_dlp.extractor
 from yt_dlp import YoutubeDL
-from yt_dlp.compat import compat_os_name, compat_str
+from yt_dlp.compat import compat_os_name
 from yt_dlp.utils import preferredencoding, write_string

 if 'pytest' in sys.modules:
@@ -96,29 +96,29 @@ md5 = lambda s: hashlib.md5(s.encode()).hexdigest()

 def expect_value(self, got, expected, field):
-    if isinstance(expected, compat_str) and expected.startswith('re:'):
+    if isinstance(expected, str) and expected.startswith('re:'):
         match_str = expected[len('re:'):]
         match_rex = re.compile(match_str)

         self.assertTrue(
-            isinstance(got, compat_str),
-            f'Expected a {compat_str.__name__} object, but got {type(got).__name__} for field {field}')
+            isinstance(got, str),
+            f'Expected a {str.__name__} object, but got {type(got).__name__} for field {field}')
         self.assertTrue(
             match_rex.match(got),
             f'field {field} (value: {got!r}) should match {match_str!r}')
-    elif isinstance(expected, compat_str) and expected.startswith('startswith:'):
+    elif isinstance(expected, str) and expected.startswith('startswith:'):
         start_str = expected[len('startswith:'):]
         self.assertTrue(
-            isinstance(got, compat_str),
-            f'Expected a {compat_str.__name__} object, but got {type(got).__name__} for field {field}')
+            isinstance(got, str),
+            f'Expected a {str.__name__} object, but got {type(got).__name__} for field {field}')
         self.assertTrue(
             got.startswith(start_str),
             f'field {field} (value: {got!r}) should start with {start_str!r}')
-    elif isinstance(expected, compat_str) and expected.startswith('contains:'):
+    elif isinstance(expected, str) and expected.startswith('contains:'):
         contains_str = expected[len('contains:'):]
         self.assertTrue(
-            isinstance(got, compat_str),
-            f'Expected a {compat_str.__name__} object, but got {type(got).__name__} for field {field}')
+            isinstance(got, str),
+            f'Expected a {str.__name__} object, but got {type(got).__name__} for field {field}')
         self.assertTrue(
             contains_str in got,
             f'field {field} (value: {got!r}) should contain {contains_str!r}')
@@ -142,12 +142,12 @@ def expect_value(self, got, expected, field):
                     index, field, type_expected, type_got))
             expect_value(self, item_got, item_expected, field)
     else:
-        if isinstance(expected, compat_str) and expected.startswith('md5:'):
+        if isinstance(expected, str) and expected.startswith('md5:'):
             self.assertTrue(
-                isinstance(got, compat_str),
+                isinstance(got, str),
                 f'Expected field {field} to be a unicode object, but got value {got!r} of type {type(got)!r}')
             got = 'md5:' + md5(got)
-        elif isinstance(expected, compat_str) and re.match(r'^(?:min|max)?count:\d+', expected):
+        elif isinstance(expected, str) and re.match(r'^(?:min|max)?count:\d+', expected):
             self.assertTrue(
                 isinstance(got, (list, dict)),
                 f'Expected field {field} to be a list or a dict, but it is of type {type(got).__name__}')
@@ -236,7 +236,7 @@ def expect_info_dict(self, got_dict, expected_dict):
     missing_keys = set(test_info_dict.keys()) - set(expected_dict.keys())
     if missing_keys:
         def _repr(v):
-            if isinstance(v, compat_str):
+            if isinstance(v, str):
                 return "'%s'" % v.replace('\\', '\\\\').replace("'", "\\'").replace('\n', '\\n')
             elif isinstance(v, type):
                 return v.__name__
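A note on the mechanical change above: on Python 3, `yt_dlp.compat.compat_str` was essentially an alias for the built-in `str` (a leftover Python 2 shim), so the replacement is behaviour-preserving:

```python
compat_str = str  # how the shim was defined, in essence

# hence these two checks are interchangeable
assert isinstance('title', compat_str)
assert isinstance('title', str)
```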

View File

@@ -1,4 +1,5 @@
 #!/usr/bin/env python3
+
 # Allow direct execution
 import os
 import sys
@@ -6,10 +7,12 @@ import unittest

 sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

-import threading
-
-from test.helper import FakeYDL, expect_dict, expect_value, http_server_port
-from yt_dlp.compat import compat_etree_fromstring, compat_http_server
+import http.server
+import threading
+
+from test.helper import FakeYDL, expect_dict, expect_value, http_server_port
+from yt_dlp.compat import compat_etree_fromstring
 from yt_dlp.extractor import YoutubeIE, get_info_extractor
 from yt_dlp.extractor.common import InfoExtractor
 from yt_dlp.utils import (
@@ -23,7 +26,7 @@ TEAPOT_RESPONSE_STATUS = 418
 TEAPOT_RESPONSE_BODY = "<h1>418 I'm a teapot</h1>"

-class InfoExtractorTestRequestHandler(compat_http_server.BaseHTTPRequestHandler):
+class InfoExtractorTestRequestHandler(http.server.BaseHTTPRequestHandler):
     def log_message(self, format, *args):
         pass
@@ -1655,7 +1658,7 @@ jwplayer("mediaplayer").setup({"abouttext":"Visit Indie DB","aboutlink":"http:\/
         # or the underlying `_download_webpage_handle` returning no content
         # when a response matches `expected_status`.

-        httpd = compat_http_server.HTTPServer(
+        httpd = http.server.HTTPServer(
             ('127.0.0.1', 0), InfoExtractorTestRequestHandler)
         port = http_server_port(httpd)
         server_thread = threading.Thread(target=httpd.serve_forever)

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
@@ -6,17 +7,14 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import copy
import json
+import urllib.error

from test.helper import FakeYDL, assertRegexpMatches

from yt_dlp import YoutubeDL
-from yt_dlp.compat import (
-compat_os_name,
-compat_setenv,
-compat_str,
-compat_urllib_error,
-)
+from yt_dlp.compat import compat_os_name
from yt_dlp.extractor import YoutubeIE
from yt_dlp.extractor.common import InfoExtractor
from yt_dlp.postprocessor.common import PostProcessor
@@ -841,14 +839,14 @@ class TestYoutubeDL(unittest.TestCase):
# test('%(foo|)s', ('', '_')) # fixme

# Environment variable expansion for prepare_filename
-compat_setenv('__yt_dlp_var', 'expanded')
+os.environ['__yt_dlp_var'] = 'expanded'
envvar = '%__yt_dlp_var%' if compat_os_name == 'nt' else '$__yt_dlp_var'
test(envvar, (envvar, 'expanded'))
if compat_os_name == 'nt':
test('%s%', ('%s%', '%s%'))
-compat_setenv('s', 'expanded')
+os.environ['s'] = 'expanded'
test('%s%', ('%s%', 'expanded'))  # %s% should be expanded before escaping %s
-compat_setenv('(test)s', 'expanded')
+os.environ['(test)s'] = 'expanded'
test('%(test)s%', ('NA%', 'expanded'))  # Environment should take priority over template

# Path expansion and escaping
@@ -1101,7 +1099,7 @@ class TestYoutubeDL(unittest.TestCase):
def test_urlopen_no_file_protocol(self):
# see https://github.com/ytdl-org/youtube-dl/issues/8227
ydl = YDL()
-self.assertRaises(compat_urllib_error.URLError, ydl.urlopen, 'file:///etc/passwd')
+self.assertRaises(urllib.error.URLError, ydl.urlopen, 'file:///etc/passwd')

def test_do_not_override_ie_key_in_url_transparent(self):
ydl = YDL()
@@ -1187,7 +1185,7 @@ class TestYoutubeDL(unittest.TestCase):
def _entries(self):
for n in range(3):
-video_id = compat_str(n)
+video_id = str(n)
yield {
'_type': 'url_transparent',
'ie_key': VideoIE.ie_key(),
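For reference, compat_setenv was only a thin wrapper over environment assignment (its definition appears verbatim later in this diff), so the mechanical rewrite above is behaviour-preserving:

import os

def compat_setenv(key, value, env=os.environ):  # the removed wrapper, as defined in compat
    env[key] = value

compat_setenv('__yt_dlp_var', 'expanded')
assert os.environ['__yt_dlp_var'] == 'expanded'  # identical to os.environ[...] = ...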

View File

@@ -1,12 +1,16 @@
#!/usr/bin/env python3
+# Allow direct execution
import os
import sys
import unittest

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import re
import tempfile

from yt_dlp.utils import YoutubeDLCookieJar

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
@@ -6,6 +7,7 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import base64

from yt_dlp.aes import (

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
@@ -6,8 +7,8 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from test.helper import is_download_test, try_rm

from yt_dlp import YoutubeDL

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env python3
# Allow direct execution
-import collections
import os
import sys
import unittest
@@ -8,8 +8,9 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

+import collections
from test.helper import gettestcases

from yt_dlp.extractor import FacebookIE, YoutubeIE, gen_extractors

View File

@@ -1,15 +1,16 @@
#!/usr/bin/env python3
# Allow direct execution
import os
-import shutil
import sys
import unittest

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

+import shutil
from test.helper import FakeYDL

from yt_dlp.cache import Cache

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
@@ -7,16 +8,14 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

+import struct
+import urllib.parse

from yt_dlp import compat
from yt_dlp.compat import (
compat_etree_fromstring,
compat_expanduser,
-compat_getenv,
-compat_setenv,
-compat_str,
-compat_struct_unpack,
compat_urllib_parse_unquote,
-compat_urllib_parse_unquote_plus,
compat_urllib_parse_urlencode,
)
@@ -26,28 +25,19 @@ class TestCompat(unittest.TestCase):
with self.assertWarns(DeprecationWarning):
compat.compat_basestring
+with self.assertWarns(DeprecationWarning):
+compat.WINDOWS_VT_MODE

compat.asyncio.events  # Must not raise error

-def test_compat_getenv(self):
-test_str = 'тест'
-compat_setenv('yt_dlp_COMPAT_GETENV', test_str)
-self.assertEqual(compat_getenv('yt_dlp_COMPAT_GETENV'), test_str)

-def test_compat_setenv(self):
-test_var = 'yt_dlp_COMPAT_SETENV'
-test_str = 'тест'
-compat_setenv(test_var, test_str)
-compat_getenv(test_var)
-self.assertEqual(compat_getenv(test_var), test_str)

def test_compat_expanduser(self):
old_home = os.environ.get('HOME')
test_str = R'C:\Documents and Settings\тест\Application Data'
try:
-compat_setenv('HOME', test_str)
+os.environ['HOME'] = test_str
self.assertEqual(compat_expanduser('~'), test_str)
finally:
-compat_setenv('HOME', old_home or '')
+os.environ['HOME'] = old_home or ''

def test_compat_urllib_parse_unquote(self):
self.assertEqual(compat_urllib_parse_unquote('abc%20def'), 'abc def')
@@ -69,8 +59,8 @@ class TestCompat(unittest.TestCase):
'''(^◣_◢^)っ︻デ═一 ⇀ ⇀ ⇀ ⇀ ⇀ ↶%I%Break%Things%''')

def test_compat_urllib_parse_unquote_plus(self):
-self.assertEqual(compat_urllib_parse_unquote_plus('abc%20def'), 'abc def')
-self.assertEqual(compat_urllib_parse_unquote_plus('%7e/abc+def'), '~/abc def')
+self.assertEqual(urllib.parse.unquote_plus('abc%20def'), 'abc def')
+self.assertEqual(urllib.parse.unquote_plus('%7e/abc+def'), '~/abc def')

def test_compat_urllib_parse_urlencode(self):
self.assertEqual(compat_urllib_parse_urlencode({'abc': 'def'}), 'abc=def')
@@ -91,11 +81,11 @@ class TestCompat(unittest.TestCase):
</root>
'''
doc = compat_etree_fromstring(xml.encode())
-self.assertTrue(isinstance(doc.attrib['foo'], compat_str))
-self.assertTrue(isinstance(doc.attrib['spam'], compat_str))
-self.assertTrue(isinstance(doc.find('normal').text, compat_str))
-self.assertTrue(isinstance(doc.find('chinese').text, compat_str))
-self.assertTrue(isinstance(doc.find('foo/bar').text, compat_str))
+self.assertTrue(isinstance(doc.attrib['foo'], str))
+self.assertTrue(isinstance(doc.attrib['spam'], str))
+self.assertTrue(isinstance(doc.find('normal').text, str))
+self.assertTrue(isinstance(doc.find('chinese').text, str))
+self.assertTrue(isinstance(doc.find('foo/bar').text, str))

def test_compat_etree_fromstring_doctype(self):
xml = '''<?xml version="1.0"?>
@@ -104,7 +94,7 @@ class TestCompat(unittest.TestCase):
compat_etree_fromstring(xml)

def test_struct_unpack(self):
-self.assertEqual(compat_struct_unpack('!B', b'\x00'), (0,))
+self.assertEqual(struct.unpack('!B', b'\x00'), (0,))

if __name__ == '__main__':
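The assertWarns checks above rely on the compat package lazily resolving legacy names and emitting a DeprecationWarning on access — the passthrough mechanism shown later in this diff. A standalone sketch of that observable behaviour:

import warnings
from yt_dlp import compat

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    compat.compat_basestring  # resolved through the module-level passthrough
assert issubclass(caught[-1].category, DeprecationWarning)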

View File

@@ -1,14 +1,19 @@
#!/usr/bin/env python3
# Allow direct execution
-import hashlib
-import json
import os
-import socket
import sys
import unittest

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

+import hashlib
+import http.client
+import json
+import socket
+import urllib.error

from test.helper import (
assertGreaterEqual,
expect_info_dict,
@@ -20,12 +25,7 @@ from test.helper import (
try_rm,
)

-import yt_dlp.YoutubeDL
+import yt_dlp.YoutubeDL  # isort: split
-from yt_dlp.compat import (
-compat_http_client,
-compat_HTTPError,
-compat_urllib_error,
-)
from yt_dlp.extractor import get_info_extractor
from yt_dlp.utils import (
DownloadError,
@@ -167,7 +167,7 @@ def generator(test_case, tname):
force_generic_extractor=params.get('force_generic_extractor', False))
except (DownloadError, ExtractorError) as err:
# Check if the exception is not a network related one
-if not err.exc_info[0] in (compat_urllib_error.URLError, socket.timeout, UnavailableVideoError, compat_http_client.BadStatusLine) or (err.exc_info[0] == compat_HTTPError and err.exc_info[1].code == 503):
+if not err.exc_info[0] in (urllib.error.URLError, socket.timeout, UnavailableVideoError, http.client.BadStatusLine) or (err.exc_info[0] == urllib.error.HTTPError and err.exc_info[1].code == 503):
raise

if try_num == RETRIES:
@@ -273,7 +273,11 @@ def batch_generator(name, num_tests):
def test_template(self):
for i in range(num_tests):
-getattr(self, f'test_{name}_{i}' if i else f'test_{name}')()
+test_name = f'test_{name}_{i}' if i else f'test_{name}'
+try:
+getattr(self, test_name)()
+except unittest.SkipTest:
+print(f'Skipped {test_name}')
return test_template
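The change above makes batch-generated tests tolerate unittest.SkipTest raised by an individual sub-test instead of aborting the whole batch; restated as a self-contained function:

import unittest

def batch_generator(name, num_tests):
    def test_template(self):
        for i in range(num_tests):
            test_name = f'test_{name}_{i}' if i else f'test_{name}'
            try:
                getattr(self, test_name)()
            except unittest.SkipTest:
                # keep going with the remaining sub-tests in the batch
                print(f'Skipped {test_name}')
    return test_template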

View File

@@ -1,17 +1,19 @@
#!/usr/bin/env python3
# Allow direct execution
import os
-import re
import sys
import unittest

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

-import threading
-from test.helper import http_server_port, try_rm
+import http.server
+import re
+import threading
+from test.helper import http_server_port, try_rm

from yt_dlp import YoutubeDL
-from yt_dlp.compat import compat_http_server
from yt_dlp.downloader.http import HttpFD
from yt_dlp.utils import encodeFilename
@@ -21,7 +23,7 @@ TEST_DIR = os.path.dirname(os.path.abspath(__file__))
TEST_SIZE = 10 * 1024

-class HTTPTestRequestHandler(compat_http_server.BaseHTTPRequestHandler):
+class HTTPTestRequestHandler(http.server.BaseHTTPRequestHandler):
def log_message(self, format, *args):
pass
@@ -78,7 +80,7 @@ class FakeLogger:
class TestHttpFD(unittest.TestCase):
def setUp(self):
-self.httpd = compat_http_server.HTTPServer(
+self.httpd = http.server.HTTPServer(
('127.0.0.1', 0), HTTPTestRequestHandler)
self.port = http_server_port(self.httpd)
self.server_thread = threading.Thread(target=self.httpd.serve_forever)

View File

@@ -1,12 +1,16 @@
#!/usr/bin/env python3
-import contextlib
+# Allow direct execution
import os
-import subprocess
import sys
import unittest

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

+import contextlib
+import subprocess

from yt_dlp.utils import encodeArgument

rootDir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
@@ -6,17 +7,19 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

+import http.server
import ssl
import threading
+import urllib.request
from test.helper import http_server_port

from yt_dlp import YoutubeDL
-from yt_dlp.compat import compat_http_server, compat_urllib_request

TEST_DIR = os.path.dirname(os.path.abspath(__file__))

-class HTTPTestRequestHandler(compat_http_server.BaseHTTPRequestHandler):
+class HTTPTestRequestHandler(http.server.BaseHTTPRequestHandler):
def log_message(self, format, *args):
pass
@@ -53,7 +56,7 @@ class FakeLogger:
class TestHTTP(unittest.TestCase):
def setUp(self):
-self.httpd = compat_http_server.HTTPServer(
+self.httpd = http.server.HTTPServer(
('127.0.0.1', 0), HTTPTestRequestHandler)
self.port = http_server_port(self.httpd)
self.server_thread = threading.Thread(target=self.httpd.serve_forever)
@@ -64,7 +67,7 @@ class TestHTTP(unittest.TestCase):
class TestHTTPS(unittest.TestCase):
def setUp(self):
certfn = os.path.join(TEST_DIR, 'testcert.pem')
-self.httpd = compat_http_server.HTTPServer(
+self.httpd = http.server.HTTPServer(
('127.0.0.1', 0), HTTPTestRequestHandler)
sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslctx.load_cert_chain(certfn, None)
@@ -90,7 +93,7 @@ class TestClientCert(unittest.TestCase):
certfn = os.path.join(TEST_DIR, 'testcert.pem')
self.certdir = os.path.join(TEST_DIR, 'testdata', 'certificate')
cacertfn = os.path.join(self.certdir, 'ca.crt')
-self.httpd = compat_http_server.HTTPServer(('127.0.0.1', 0), HTTPTestRequestHandler)
+self.httpd = http.server.HTTPServer(('127.0.0.1', 0), HTTPTestRequestHandler)
sslctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
sslctx.verify_mode = ssl.CERT_REQUIRED
sslctx.load_verify_locations(cafile=cacertfn)
@@ -130,7 +133,7 @@ class TestClientCert(unittest.TestCase):
def _build_proxy_handler(name):
-class HTTPTestRequestHandler(compat_http_server.BaseHTTPRequestHandler):
+class HTTPTestRequestHandler(http.server.BaseHTTPRequestHandler):
proxy_name = name

def log_message(self, format, *args):
@@ -146,14 +149,14 @@ def _build_proxy_handler(name):
class TestProxy(unittest.TestCase):
def setUp(self):
-self.proxy = compat_http_server.HTTPServer(
+self.proxy = http.server.HTTPServer(
('127.0.0.1', 0), _build_proxy_handler('normal'))
self.port = http_server_port(self.proxy)
self.proxy_thread = threading.Thread(target=self.proxy.serve_forever)
self.proxy_thread.daemon = True
self.proxy_thread.start()

-self.geo_proxy = compat_http_server.HTTPServer(
+self.geo_proxy = http.server.HTTPServer(
('127.0.0.1', 0), _build_proxy_handler('geo'))
self.geo_port = http_server_port(self.geo_proxy)
self.geo_proxy_thread = threading.Thread(target=self.geo_proxy.serve_forever)
@@ -170,7 +173,7 @@ class TestProxy(unittest.TestCase):
response = ydl.urlopen(url).read().decode()
self.assertEqual(response, f'normal: {url}')

-req = compat_urllib_request.Request(url)
+req = urllib.request.Request(url)
req.add_header('Ytdl-request-proxy', geo_proxy)
response = ydl.urlopen(req).read().decode()
self.assertEqual(response, f'geo: {url}')
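These tests exercise yt-dlp's per-request proxy override: a Ytdl-request-proxy header on a urllib.request.Request is honoured by the opener that YoutubeDL builds. With the plain standard library, the closest equivalent is an opener-wide ProxyHandler; a minimal sketch (the proxy URL is illustrative):

import urllib.request

proxy = 'http://127.0.0.1:3128'  # hypothetical local proxy
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({'http': proxy}))
response = opener.open('http://example.com').read()  # routed through the proxy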

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
@@ -6,8 +7,8 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from test.helper import FakeYDL, is_download_test

from yt_dlp.extractor import IqiyiIE

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
@@ -6,6 +7,7 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from yt_dlp.jsinterp import JSInterpreter

View File

@@ -1,3 +1,6 @@
+#!/usr/bin/env python3
+# Allow direct execution
import os
import sys
import unittest

View File

@@ -1,11 +1,15 @@
#!/usr/bin/env python3
+# Allow direct execution
import os
-import subprocess
import sys
import unittest

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

+import subprocess
from test.helper import is_download_test, try_rm

root_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

View File

@@ -1,13 +1,15 @@
#!/usr/bin/env python3
+# Allow direct execution
import os
import sys
import unittest

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from test.helper import get_params, is_download_test, try_rm

-import yt_dlp.YoutubeDL
+import yt_dlp.YoutubeDL  # isort: split
from yt_dlp.utils import DownloadError

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
@@ -6,6 +7,7 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from yt_dlp import YoutubeDL
from yt_dlp.compat import compat_shlex_quote
from yt_dlp.postprocessor import (

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
@@ -6,11 +7,12 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import random
import subprocess
+import urllib.request
from test.helper import FakeYDL, get_params, is_download_test
-from yt_dlp.compat import compat_str, compat_urllib_request

@is_download_test
@@ -51,7 +53,7 @@ class TestMultipleSocks(unittest.TestCase):
if params is None:
return
ydl = FakeYDL()
-req = compat_urllib_request.Request('http://yt-dl.org/ip')
+req = urllib.request.Request('http://yt-dl.org/ip')
req.add_header('Ytdl-request-proxy', params['secondary_proxy'])
self.assertEqual(
ydl.urlopen(req).read().decode(),
@@ -62,7 +64,7 @@ class TestMultipleSocks(unittest.TestCase):
if params is None:
return
ydl = FakeYDL()
-req = compat_urllib_request.Request('https://yt-dl.org/ip')
+req = urllib.request.Request('https://yt-dl.org/ip')
req.add_header('Ytdl-request-proxy', params['secondary_proxy'])
self.assertEqual(
ydl.urlopen(req).read().decode(),
@@ -99,13 +101,13 @@ class TestSocks(unittest.TestCase):
return ydl.urlopen('http://yt-dl.org/ip').read().decode()

def test_socks4(self):
-self.assertTrue(isinstance(self._get_ip('socks4'), compat_str))
+self.assertTrue(isinstance(self._get_ip('socks4'), str))

def test_socks4a(self):
-self.assertTrue(isinstance(self._get_ip('socks4a'), compat_str))
+self.assertTrue(isinstance(self._get_ip('socks4a'), str))

def test_socks5(self):
-self.assertTrue(isinstance(self._get_ip('socks5'), compat_str))
+self.assertTrue(isinstance(self._get_ip('socks5'), str))

if __name__ == '__main__':

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
@@ -6,8 +7,8 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from test.helper import FakeYDL, is_download_test, md5

from yt_dlp.extractor import (
NPOIE,
NRKTVIE,

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env python3
# Allow direct execution
-import contextlib
import os
import sys
import unittest
@@ -8,19 +8,16 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

-# Various small unit tests
+import contextlib
import io
import itertools
import json
import xml.etree.ElementTree

from yt_dlp.compat import (
-compat_chr,
compat_etree_fromstring,
-compat_getenv,
compat_HTMLParseError,
compat_os_name,
-compat_setenv,
)
from yt_dlp.utils import (
Config,
@@ -266,20 +263,20 @@ class TestUtil(unittest.TestCase):
def env(var):
return f'%{var}%' if sys.platform == 'win32' else f'${var}'

-compat_setenv('yt_dlp_EXPATH_PATH', 'expanded')
+os.environ['yt_dlp_EXPATH_PATH'] = 'expanded'
self.assertEqual(expand_path(env('yt_dlp_EXPATH_PATH')), 'expanded')

old_home = os.environ.get('HOME')
test_str = R'C:\Documents and Settings\тест\Application Data'
try:
-compat_setenv('HOME', test_str)
-self.assertEqual(expand_path(env('HOME')), compat_getenv('HOME'))
-self.assertEqual(expand_path('~'), compat_getenv('HOME'))
+os.environ['HOME'] = test_str
+self.assertEqual(expand_path(env('HOME')), os.getenv('HOME'))
+self.assertEqual(expand_path('~'), os.getenv('HOME'))
self.assertEqual(
expand_path('~/%s' % env('yt_dlp_EXPATH_PATH')),
-'%s/expanded' % compat_getenv('HOME'))
+'%s/expanded' % os.getenv('HOME'))
finally:
-compat_setenv('HOME', old_home or '')
+os.environ['HOME'] = old_home or ''

def test_prepend_extension(self):
self.assertEqual(prepend_extension('abc.ext', 'temp'), 'abc.temp.ext')
@@ -1128,7 +1125,7 @@ class TestUtil(unittest.TestCase):
self.assertEqual(extract_attributes('<e x="décompose&#769;">'), {'x': 'décompose\u0301'})
# "Narrow" Python builds don't support unicode code points outside BMP.
try:
-compat_chr(0x10000)
+chr(0x10000)
supports_outside_bmp = True
except ValueError:
supports_outside_bmp = False
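For context, the expand_path helper exercised above performs both environment-variable and ~ expansion; os.path.expandvars plus os.path.expanduser gives roughly the same result for these simple cases (a sketch, POSIX-style variables shown):

import os

os.environ['yt_dlp_EXPATH_PATH'] = 'expanded'
path = os.path.expanduser(os.path.expandvars('~/$yt_dlp_EXPATH_PATH'))
# -> '<home>/expanded', matching expand_path('~/$yt_dlp_EXPATH_PATH') in the test above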

View File

@@ -1,11 +1,15 @@
#!/usr/bin/env python3
+# Allow direct execution
import os
-import subprocess
import sys
import unittest

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

+import subprocess

rootDir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
@@ -6,11 +7,12 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

import xml.etree.ElementTree

-from test.helper import get_params, is_download_test, try_rm
import yt_dlp.extractor
import yt_dlp.YoutubeDL
+from test.helper import get_params, is_download_test, try_rm

class YoutubeDL(yt_dlp.YoutubeDL):

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys
@@ -6,8 +7,8 @@ import unittest
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from test.helper import FakeYDL, is_download_test

from yt_dlp.extractor import YoutubeIE, YoutubeTabIE

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3
# Allow direct execution
import os
import sys

View File

@@ -1,18 +1,19 @@
#!/usr/bin/env python3
# Allow direct execution
-import contextlib
import os
import sys
import unittest

sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

+import contextlib
import re
import string
import urllib.request

from test.helper import FakeYDL, is_download_test
-from yt_dlp.compat import compat_str
from yt_dlp.extractor import YoutubeIE
from yt_dlp.jsinterp import JSInterpreter
@@ -157,7 +158,7 @@ def t_factory(name, sig_func, url_pattern):
def signature(jscode, sig_input):
func = YoutubeIE(FakeYDL())._parse_sig_js(jscode)
src_sig = (
-compat_str(string.printable[:sig_input])
+str(string.printable[:sig_input])
if isinstance(sig_input, int) else sig_input)
return func(src_sig)

View File

@@ -1,4 +1,3 @@
-#!/usr/bin/env python3
import collections
import contextlib
import datetime
@@ -11,7 +10,6 @@ import json
import locale
import operator
import os
-import platform
import random
import re
import shutil
@@ -26,15 +24,8 @@ import urllib.request
from string import ascii_letters

from .cache import Cache
-from .compat import (
-HAS_LEGACY as compat_has_legacy,
-compat_get_terminal_size,
-compat_os_name,
-compat_shlex_quote,
-compat_str,
-compat_urllib_error,
-compat_urllib_request,
-)
+from .compat import HAS_LEGACY as compat_has_legacy
+from .compat import compat_os_name, compat_shlex_quote
from .cookies import load_cookies
from .downloader import FFmpegFD, get_suitable_downloader, shorten_protocol_name
from .downloader.rtmp import rtmpdump_version
@@ -118,7 +109,6 @@ from .utils import (
number_of_digits,
orderedSet,
parse_filesize,
-platform_name,
preferredencoding,
prepend_extension,
register_socks_protocols,
@@ -134,6 +124,7 @@ from .utils import (
strftime_or_none,
subtitles_filename,
supports_terminal_sequences,
+system_identifier,
timetuple_from_msec,
to_high_limit_path,
traverse_obj,
@@ -585,7 +576,9 @@ class YoutubeDL:
MIN_SUPPORTED, MIN_RECOMMENDED = (3, 6), (3, 7)
current_version = sys.version_info[:2]
if current_version < MIN_RECOMMENDED:
-msg = 'Support for Python version %d.%d has been deprecated and will break in future versions of yt-dlp'
+msg = ('Support for Python version %d.%d has been deprecated. '
+'See https://github.com/yt-dlp/yt-dlp/issues/3764 for more details. '
+'You will recieve only one more update on this version')
if current_version < MIN_SUPPORTED:
msg = 'Python version %d.%d is no longer supported'
self.deprecation_warning(
@@ -644,7 +637,7 @@ class YoutubeDL:
try:
import pty
master, slave = pty.openpty()
-width = compat_get_terminal_size().columns
+width = shutil.get_terminal_size().columns
width_args = [] if width is None else ['-w', str(width)]
sp_kwargs = {'stdin': subprocess.PIPE, 'stdout': slave, 'stderr': self._out_files.error}
try:
@@ -799,7 +792,7 @@ class YoutubeDL:
return message

assert hasattr(self, '_output_process')
-assert isinstance(message, compat_str)
+assert isinstance(message, str)
line_count = message.count('\n') + 1
self._output_process.stdin.write((message + '\n').encode())
self._output_process.stdin.flush()
@@ -835,7 +828,7 @@ class YoutubeDL:
def to_stderr(self, message, only_once=False):
"""Print message to stderr"""
-assert isinstance(message, compat_str)
+assert isinstance(message, str)
if self.params.get('logger'):
self.params['logger'].error(message)
else:
@@ -1570,7 +1563,7 @@ class YoutubeDL:
additional_urls = (ie_result or {}).get('additional_urls')
if additional_urls:
# TODO: Improve MetadataParserPP to allow setting a list
-if isinstance(additional_urls, compat_str):
+if isinstance(additional_urls, str):
additional_urls = [additional_urls]
self.to_screen(
'[info] %s: %d additional URL(s) requested' % (ie_result['id'], len(additional_urls)))
@@ -1732,7 +1725,7 @@ class YoutubeDL:
resolved_entries.append((playlist_index, entry))

# TODO: Add auto-generated fields
-if self._match_entry(entry, incomplete=True) is not None:
+if not entry or self._match_entry(entry, incomplete=True) is not None:
continue

self.to_screen('[download] Downloading video %s of %s' % (
@@ -2363,10 +2356,10 @@ class YoutubeDL:
def sanitize_string_field(info, string_field):
field = info.get(string_field)
-if field is None or isinstance(field, compat_str):
+if field is None or isinstance(field, str):
return
report_force_conversion(string_field, 'a string', 'string')
-info[string_field] = compat_str(field)
+info[string_field] = str(field)

def sanitize_numeric_fields(info):
for numeric_field in self._NUMERIC_FIELDS:
@@ -2383,6 +2376,15 @@ class YoutubeDL:
if (info_dict.get('duration') or 0) <= 0 and info_dict.pop('duration', None):
self.report_warning('"duration" field is negative, there is an error in extractor')

+chapters = info_dict.get('chapters') or []
+dummy_chapter = {'end_time': 0, 'start_time': info_dict.get('duration')}
+for prev, current, next_ in zip(
+(dummy_chapter, *chapters), chapters, (*chapters[1:], dummy_chapter)):
+if current.get('start_time') is None:
+current['start_time'] = prev.get('end_time')
+if not current.get('end_time'):
+current['end_time'] = next_.get('start_time')

if 'playlist' not in info_dict:
# It isn't part of a playlist
info_dict['playlist'] = None
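The inserted chapter-sanitization block above (from "Sanitize chapters", #4182) fills missing bounds from the neighbours: a chapter with no start_time inherits the previous chapter's end_time, and one with no end_time takes the next chapter's start_time, with a dummy chapter supplying 0 and the total duration at the edges. A worked example with illustrative values:

chapters = [{'end_time': 10}, {}, {'start_time': 25}]
duration = 30
dummy_chapter = {'end_time': 0, 'start_time': duration}
for prev, current, next_ in zip(
        (dummy_chapter, *chapters), chapters, (*chapters[1:], dummy_chapter)):
    if current.get('start_time') is None:
        current['start_time'] = prev.get('end_time')
    if not current.get('end_time'):
        current['end_time'] = next_.get('start_time')
# chapters -> [{'start_time': 0, 'end_time': 10},
#              {'start_time': 10, 'end_time': 25},
#              {'start_time': 25, 'end_time': 30}]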
@@ -2469,7 +2471,7 @@ class YoutubeDL:
sanitize_numeric_fields(format)
format['url'] = sanitize_url(format['url'])
if not format.get('format_id'):
-format['format_id'] = compat_str(i)
+format['format_id'] = str(i)
else:
# Sanitize format_id from characters used in format selector expression
format['format_id'] = re.sub(r'[\s,/+\[\]()]', '_', format['format_id'])
@@ -2619,7 +2621,7 @@ class YoutubeDL:
if chapter or offset:
new_info.update({
'section_start': offset + chapter.get('start_time', 0),
-'section_end': offset + min(chapter.get('end_time', 0), duration),
+'section_end': offset + min(chapter.get('end_time', duration), duration),
'section_title': chapter.get('title'),
'section_number': chapter.get('index'),
})
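The one-line change above (from "Fix section_end of clips", #4165) makes a chapter with no recorded end_time default to the full duration instead of 0 before clamping; with illustrative numbers:

chapter, offset, duration = {'start_time': 10}, 60, 30
section_start = offset + chapter.get('start_time', 0)                      # 70
# old: offset + min(chapter.get('end_time', 0), duration)   -> 60 (empty section)
section_end = offset + min(chapter.get('end_time', duration), duration)   # 90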
@@ -3531,7 +3533,7 @@ class YoutubeDL:
'none', '' if f.get('vcodec') == 'none'
else self._format_out('video only', self.Styles.SUPPRESS)),
format_field(f, 'abr', '\t%dk'),
-format_field(f, 'asr', '\t%dHz'),
+format_field(f, 'asr', '\t%s', func=format_decimal_suffix),
join_nonempty(
self._format_out('UNSUPPORTED', 'light red') if f.get('ext') in ('f4f', 'f4m') else None,
format_field(f, 'language', '[%s]'),
@@ -3655,17 +3657,7 @@ class YoutubeDL:
with contextlib.suppress(Exception):
sys.exc_clear()

-def python_implementation():
-impl_name = platform.python_implementation()
-if impl_name == 'PyPy' and hasattr(sys, 'pypy_version_info'):
-return impl_name + ' version %d.%d.%d' % sys.pypy_version_info[:3]
-return impl_name

-write_debug('Python version %s (%s %s) - %s' % (
-platform.python_version(),
-python_implementation(),
-platform.architecture()[0],
-platform_name()))
+write_debug(system_identifier())

exe_versions, ffmpeg_features = FFmpegPostProcessor.get_versions_and_features(self)
ffmpeg_features = {key for key, val in ffmpeg_features.items() if val}
@@ -3724,7 +3716,7 @@ class YoutubeDL:
else:
proxies = {'http': opts_proxy, 'https': opts_proxy}
else:
-proxies = compat_urllib_request.getproxies()
+proxies = urllib.request.getproxies()
# Set HTTPS proxy to HTTP one if given (https://github.com/ytdl-org/youtube-dl/issues/805)
if 'http' in proxies and 'https' not in proxies:
proxies['https'] = proxies['http']
@@ -3740,13 +3732,13 @@ class YoutubeDL:
# default FileHandler and allows us to disable the file protocol, which
# can be used for malicious purposes (see
# https://github.com/ytdl-org/youtube-dl/issues/8227)
-file_handler = compat_urllib_request.FileHandler()
+file_handler = urllib.request.FileHandler()

def file_open(*args, **kwargs):
-raise compat_urllib_error.URLError('file:// scheme is explicitly disabled in yt-dlp for security reasons')
+raise urllib.error.URLError('file:// scheme is explicitly disabled in yt-dlp for security reasons')
file_handler.file_open = file_open

-opener = compat_urllib_request.build_opener(
+opener = urllib.request.build_opener(
proxy_handler, https_handler, cookie_processor, ydlh, redirect_handler, data_handler, file_handler)

# Delete the default user-agent header, which would otherwise apply in
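The file_handler override above is self-contained enough to demonstrate on its own: replacing FileHandler.file_open makes any file:// URL raise before a file is ever opened. A minimal sketch using only the standard library:

import urllib.error
import urllib.request

file_handler = urllib.request.FileHandler()

def file_open(*args, **kwargs):
    raise urllib.error.URLError('file:// scheme is explicitly disabled in yt-dlp for security reasons')
file_handler.file_open = file_open

opener = urllib.request.build_opener(file_handler)
# opener.open('file:///etc/passwd') now raises URLError instead of reading the file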

View File

@@ -1,15 +1,15 @@
-#!/usr/bin/env python3
f'You are using an unsupported version of Python. Only Python versions 3.6 and above are supported by yt-dlp'  # noqa: F541

__license__ = 'Public Domain'

+import getpass
import itertools
import optparse
import os
import re
import sys

-from .compat import compat_getpass, compat_shlex_quote
+from .compat import compat_shlex_quote
from .cookies import SUPPORTED_BROWSERS, SUPPORTED_KEYRINGS
from .downloader import FileDownloader
from .downloader.external import get_external_downloader
@@ -403,6 +403,8 @@ def validate_options(opts):
default_downloader = None
for proto, path in opts.external_downloader.items():
+if path == 'native':
+continue
ed = get_external_downloader(path)
if ed is None:
raise ValueError(
@@ -529,9 +531,9 @@ def validate_options(opts):
# Ask for passwords
if opts.username is not None and opts.password is None:
-opts.password = compat_getpass('Type account password and press [Return]: ')
+opts.password = getpass.getpass('Type account password and press [Return]: ')
if opts.ap_username is not None and opts.ap_password is None:
-opts.ap_password = compat_getpass('Type TV provider account password and press [Return]: ')
+opts.ap_password = getpass.getpass('Type TV provider account password and press [Return]: ')
return warnings, deprecation_warnings

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python3
# Execute with
# $ python -m yt_dlp

View File

@@ -1,6 +1,7 @@
+import base64
from math import ceil

-from .compat import compat_b64decode, compat_ord
+from .compat import compat_ord
from .dependencies import Cryptodome_AES
from .utils import bytes_to_intlist, intlist_to_bytes
@@ -264,7 +265,7 @@ def aes_decrypt_text(data, password, key_size_bytes):
"""
NONCE_LENGTH_BYTES = 8

-data = bytes_to_intlist(compat_b64decode(data))
+data = bytes_to_intlist(base64.b64decode(data))
password = bytes_to_intlist(password.encode())

key = password[:key_size_bytes] + [0] * (key_size_bytes - len(password))
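compat_b64decode was a direct alias for base64.b64decode (see the compat listing later in this diff), so the rewrite above is mechanical; bytes_to_intlist (yt-dlp's helper) then effectively turns the decoded bytes into a list of integer byte values:

import base64

data = base64.b64decode('aGVsbG8=')   # b'hello'
intlist = list(data)                  # [104, 101, 108, 108, 111] - what bytes_to_intlist yields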

View File

@@ -6,7 +6,6 @@ import re
import shutil
import traceback

-from .compat import compat_getenv
from .utils import expand_path, write_json_file
@@ -17,7 +16,7 @@ class Cache:
def _get_root_dir(self):
res = self._ydl.params.get('cachedir')
if res is None:
-cache_root = compat_getenv('XDG_CACHE_HOME', '~/.cache')
+cache_root = os.getenv('XDG_CACHE_HOME', '~/.cache')
res = os.path.join(cache_root, 'yt-dlp')
return expand_path(res)
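The cache-root lookup above follows the XDG convention: honour $XDG_CACHE_HOME if set, otherwise fall back to ~/.cache; expand_path then expands the ~ (and any environment variables). Roughly equivalent standard-library code:

import os

cache_root = os.getenv('XDG_CACHE_HOME', '~/.cache')
cache_dir = os.path.expanduser(os.path.join(cache_root, 'yt-dlp'))
# e.g. ~/.cache/yt-dlp, or $XDG_CACHE_HOME/yt-dlp when the variable is set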

View File

@@ -7,7 +7,6 @@ from . import re
from ._deprecated import *  # noqa: F401, F403
from .compat_utils import passthrough_module

-# XXX: Implement this the same way as other DeprecationWarnings without circular import
try:
passthrough_module(__name__, '._legacy', callback=lambda attr: warnings.warn(

View File

@ -1,52 +1,16 @@
"""Deprecated - New code should avoid these""" """Deprecated - New code should avoid these"""
import base64 import base64
import getpass import urllib.error
import html import urllib.parse
import html.parser
import http compat_str = str
import http.client
import http.cookiejar
import http.cookies
import http.server
import itertools
import os
import shutil
import struct
import tokenize
import urllib
compat_b64decode = base64.b64decode compat_b64decode = base64.b64decode
compat_chr = chr
compat_cookiejar = http.cookiejar
compat_cookiejar_Cookie = http.cookiejar.Cookie
compat_cookies_SimpleCookie = http.cookies.SimpleCookie
compat_get_terminal_size = shutil.get_terminal_size
compat_getenv = os.getenv
compat_getpass = getpass.getpass
compat_html_entities = html.entities
compat_html_entities_html5 = html.entities.html5
compat_HTMLParser = html.parser.HTMLParser
compat_http_client = http.client
compat_http_server = http.server
compat_HTTPError = urllib.error.HTTPError compat_HTTPError = urllib.error.HTTPError
compat_itertools_count = itertools.count compat_urlparse = urllib.parse
compat_parse_qs = urllib.parse.parse_qs compat_parse_qs = urllib.parse.parse_qs
compat_str = str
compat_struct_pack = struct.pack
compat_struct_unpack = struct.unpack
compat_tokenize_tokenize = tokenize.tokenize
compat_urllib_error = urllib.error
compat_urllib_parse_unquote = urllib.parse.unquote compat_urllib_parse_unquote = urllib.parse.unquote
compat_urllib_parse_unquote_plus = urllib.parse.unquote_plus
compat_urllib_parse_urlencode = urllib.parse.urlencode compat_urllib_parse_urlencode = urllib.parse.urlencode
compat_urllib_parse_urlparse = urllib.parse.urlparse compat_urllib_parse_urlparse = urllib.parse.urlparse
compat_urllib_request = urllib.request
compat_urlparse = compat_urllib_parse = urllib.parse
def compat_setenv(key, value, env=os.environ):
env[key] = value
__all__ = [x for x in globals() if x.startswith('compat_')]

View File

@@ -2,18 +2,27 @@
import collections
import ctypes
-import http
+import getpass
+import html.entities
+import html.parser
import http.client
import http.cookiejar
import http.cookies
import http.server
+import itertools
+import os
import shlex
+import shutil
import socket
import struct
-import urllib
+import tokenize
+import urllib.error
+import urllib.parse
+import urllib.request
import xml.etree.ElementTree as etree
from subprocess import DEVNULL

+from .compat_utils import passthrough_module  # isort: split
from .asyncio import run as compat_asyncio_run  # noqa: F401
from .re import Pattern as compat_Pattern  # noqa: F401
from .re import match as compat_Match  # noqa: F401
@@ -21,6 +30,8 @@ from ..dependencies import Cryptodome_AES as compat_pycrypto_AES  # noqa: F401
from ..dependencies import brotli as compat_brotli  # noqa: F401
from ..dependencies import websockets as compat_websockets  # noqa: F401

+passthrough_module(__name__, '...utils', ('WINDOWS_VT_MODE', 'windows_enable_vt_mode'))

# compat_ctypes_WINFUNCTYPE = ctypes.WINFUNCTYPE
# will not work since ctypes.WINFUNCTYPE does not exist in UNIX machines
@@ -28,14 +39,31 @@ def compat_ctypes_WINFUNCTYPE(*args, **kwargs):
return ctypes.WINFUNCTYPE(*args, **kwargs)

+def compat_setenv(key, value, env=os.environ):
+env[key] = value

compat_basestring = str
+compat_chr = chr
compat_collections_abc = collections.abc
+compat_cookiejar = http.cookiejar
+compat_cookiejar_Cookie = http.cookiejar.Cookie
compat_cookies = http.cookies
+compat_cookies_SimpleCookie = http.cookies.SimpleCookie
compat_etree_Element = etree.Element
compat_etree_register_namespace = etree.register_namespace
compat_filter = filter
+compat_get_terminal_size = shutil.get_terminal_size
+compat_getenv = os.getenv
+compat_getpass = getpass.getpass
+compat_html_entities = html.entities
+compat_html_entities_html5 = html.entities.html5
+compat_HTMLParser = html.parser.HTMLParser
+compat_http_client = http.client
+compat_http_server = http.server
compat_input = input
compat_integer_types = (int, )
+compat_itertools_count = itertools.count
compat_kwargs = lambda kwargs: kwargs
compat_map = map
compat_numeric_types = (int, float, complex)
@@ -43,11 +71,18 @@ compat_print = print
compat_shlex_split = shlex.split
compat_socket_create_connection = socket.create_connection
compat_Struct = struct.Struct
+compat_struct_pack = struct.pack
+compat_struct_unpack = struct.unpack
compat_subprocess_get_DEVNULL = lambda: DEVNULL
+compat_tokenize_tokenize = tokenize.tokenize
+compat_urllib_error = urllib.error
+compat_urllib_parse = urllib.parse
compat_urllib_parse_quote = urllib.parse.quote
compat_urllib_parse_quote_plus = urllib.parse.quote_plus
+compat_urllib_parse_unquote_plus = urllib.parse.unquote_plus
compat_urllib_parse_unquote_to_bytes = urllib.parse.unquote_to_bytes
compat_urllib_parse_urlunparse = urllib.parse.urlunparse
+compat_urllib_request = urllib.request
compat_urllib_request_DataHandler = urllib.request.DataHandler
compat_urllib_response = urllib.response
compat_urlretrieve = urllib.request.urlretrieve
@@ -55,10 +90,3 @@ compat_xml_parse_error = etree.ParseError
compat_xpath = lambda xpath: xpath
compat_zip = zip
workaround_optparse_bug9161 = lambda: None

-def __getattr__(name):
-if name in ('WINDOWS_VT_MODE', 'windows_enable_vt_mode'):
-from .. import utils
-return getattr(utils, name)
-raise AttributeError(name)

View File

@@ -4,7 +4,6 @@ import importlib
import sys
import types

_NO_ATTRIBUTE = object()

_Package = collections.namedtuple('Package', ('name', 'version'))
@@ -31,7 +30,7 @@ def _is_package(module):
return True

-def passthrough_module(parent, child, *, callback=lambda _: None):
+def passthrough_module(parent, child, allowed_attributes=None, *, callback=lambda _: None):
parent_module = importlib.import_module(parent)
child_module = None  # Import child module only as needed
@@ -41,22 +40,30 @@ def passthrough_module(parent, child, *, callback=lambda _: None):
with contextlib.suppress(ImportError):
return importlib.import_module(f'.{attr}', parent)

+ret = self.__from_child(attr)
+if ret is _NO_ATTRIBUTE:
+raise AttributeError(f'module {parent} has no attribute {attr}')
+callback(attr)
+return ret

+def __from_child(self, attr):
+if allowed_attributes is None:
+if attr.startswith('__') and attr.endswith('__'):
+return _NO_ATTRIBUTE
+elif attr not in allowed_attributes:
+return _NO_ATTRIBUTE

nonlocal child_module
child_module = child_module or importlib.import_module(child, parent)

-ret = _NO_ATTRIBUTE
with contextlib.suppress(AttributeError):
-ret = getattr(child_module, attr)
+return getattr(child_module, attr)

if _is_package(child_module):
with contextlib.suppress(ImportError):
-ret = importlib.import_module(f'.{attr}', child)
+return importlib.import_module(f'.{attr}', child)

-if ret is _NO_ATTRIBUTE:
-raise AttributeError(f'module {parent} has no attribute {attr}')
-callback(attr)
-return ret
+return _NO_ATTRIBUTE

# Python 3.6 does not have module level __getattr__
# https://peps.python.org/pep-0562/
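The new allowed_attributes parameter and the __from_child split make the lookup rules explicit: dunder attributes are never proxied unless explicitly allowed, and the child module is imported lazily on first access. A simplified standalone sketch of the same idea using PEP 562 module-level __getattr__ (names here are illustrative, not yt-dlp's API):

import importlib

def make_passthrough(parent, child, allowed_attributes=None, callback=lambda attr: None):
    child_module = None

    def __getattr__(attr):
        nonlocal child_module
        if allowed_attributes is None:
            if attr.startswith('__') and attr.endswith('__'):
                raise AttributeError(attr)  # never proxy dunders by default
        elif attr not in allowed_attributes:
            raise AttributeError(attr)
        child_module = child_module or importlib.import_module(child, parent)  # lazy import
        value = getattr(child_module, attr)
        callback(attr)  # e.g. emit a DeprecationWarning for legacy names
        return value

    return __getattr__  # assign to a module's __getattr__ to activate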

View File

@@ -1,5 +1,7 @@
+import base64
 import contextlib
 import ctypes
+import http.cookiejar
 import json
 import os
 import shutil
@@ -17,7 +19,6 @@ from .aes import (
     aes_gcm_decrypt_and_verify_bytes,
     unpad_pkcs7,
 )
-from .compat import compat_b64decode, compat_cookiejar_Cookie
 from .dependencies import (
     _SECRETSTORAGE_UNAVAILABLE_REASON,
     secretstorage,
@@ -142,7 +143,7 @@ def _extract_firefox_cookies(profile, logger):
         total_cookie_count = len(table)
         for i, (host, name, value, path, expiry, is_secure) in enumerate(table):
             progress_bar.print(f'Loading cookie {i: 6d}/{total_cookie_count: 6d}')
-            cookie = compat_cookiejar_Cookie(
+            cookie = http.cookiejar.Cookie(
                 version=0, name=name, value=value, port=None, port_specified=False,
                 domain=host, domain_specified=bool(host), domain_initial_dot=host.startswith('.'),
                 path=path, path_specified=bool(path), secure=is_secure, expires=expiry, discard=False,
@@ -297,7 +298,7 @@ def _process_chrome_cookie(decryptor, host_key, name, value, encrypted_value, pa
     if value is None:
         return is_encrypted, None

-    return is_encrypted, compat_cookiejar_Cookie(
+    return is_encrypted, http.cookiejar.Cookie(
         version=0, name=name, value=value, port=None, port_specified=False,
         domain=host_key, domain_specified=bool(host_key), domain_initial_dot=host_key.startswith('.'),
         path=path, path_specified=bool(path), secure=is_secure, expires=expires_utc, discard=False,
@@ -589,7 +590,7 @@ def _parse_safari_cookies_record(data, jar, logger):
     p.skip_to(record_size, 'space at the end of the record')

-    cookie = compat_cookiejar_Cookie(
+    cookie = http.cookiejar.Cookie(
         version=0, name=name, value=value, port=None, port_specified=False,
         domain=domain, domain_specified=bool(domain), domain_initial_dot=domain.startswith('.'),
         path=path, path_specified=bool(path), secure=is_secure, expires=expiration_date, discard=False,
@@ -835,7 +836,7 @@ def _get_windows_v10_key(browser_root, logger):
     except KeyError:
         logger.error('no encrypted key in Local State')
         return None
-    encrypted_key = compat_b64decode(base64_key)
+    encrypted_key = base64.b64decode(base64_key)
     prefix = b'DPAPI'
     if not encrypted_key.startswith(prefix):
         logger.error('invalid key')
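
The Cookie constructions above are mechanical swaps of compat_cookiejar_Cookie for the stdlib class. A minimal sketch of the same pattern outside yt-dlp, with all cookie values illustrative:

    import http.cookiejar

    # http.cookiejar.Cookie takes every attribute explicitly; there are no defaults
    cookie = http.cookiejar.Cookie(
        version=0, name='session', value='abc123', port=None, port_specified=False,
        domain='.example.com', domain_specified=True, domain_initial_dot=True,
        path='/', path_specified=True, secure=True, expires=None, discard=False,
        comment=None, comment_url=None, rest={})

    jar = http.cookiejar.CookieJar()
    jar.set_cookie(cookie)
    assert next(iter(jar)).name == 'session'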

View File

@@ -1,6 +1,6 @@
 # flake8: noqa: F401
 """Imports all optional dependencies for the project.
-An attribute "_yt_dlp__identifier" may be inserted into the module if it uses an ambigious namespace"""
+An attribute "_yt_dlp__identifier" may be inserted into the module if it uses an ambiguous namespace"""

 try:
     import brotlicffi as brotli

View File

@@ -59,10 +59,11 @@ PROTOCOL_MAP = {

 def shorten_protocol_name(proto, simplify=False):
     short_protocol_names = {
-        'm3u8_native': 'm3u8_n',
-        'rtmp_ffmpeg': 'rtmp_f',
+        'm3u8_native': 'm3u8',
+        'm3u8': 'm3u8F',
+        'rtmp_ffmpeg': 'rtmpF',
         'http_dash_segments': 'dash',
-        'http_dash_segments_generator': 'dash_g',
+        'http_dash_segments_generator': 'dashG',
         'niconico_dmc': 'dmc',
         'websocket_frag': 'WSfrag',
     }
@@ -70,6 +71,7 @@ def shorten_protocol_name(proto, simplify=False):
         short_protocol_names.update({
             'https': 'http',
             'ftps': 'ftp',
+            'm3u8': 'm3u8',  # Reverse above m3u8 mapping
             'm3u8_native': 'm3u8',
             'http_dash_segments_generator': 'dash',
             'rtmp_ffmpeg': 'rtmp',
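
A rough illustration of the renamed short forms, assuming the mapping above (the trailing F and G distinguish the ffmpeg- and generator-based variants):

    shorten_protocol_name('m3u8')                 # -> 'm3u8F' (ffmpeg HLS downloader)
    shorten_protocol_name('m3u8_native')          # -> 'm3u8'
    shorten_protocol_name('m3u8', simplify=True)  # -> 'm3u8' (reverse mapping applies)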

View File

@@ -6,8 +6,7 @@ import sys
 import time

 from .fragment import FragmentFD
-from ..compat import functools  # isort: split
-from ..compat import compat_setenv
+from ..compat import functools
 from ..postprocessor.ffmpeg import EXT_TO_OUT_FORMATS, FFmpegPostProcessor
 from ..utils import (
     Popen,
@@ -403,8 +402,8 @@ class FFmpegFD(ExternalFD):
             # We could switch to the following code if we are able to detect version properly
             # args += ['-http_proxy', proxy]
             env = os.environ.copy()
-            compat_setenv('HTTP_PROXY', proxy, env=env)
-            compat_setenv('http_proxy', proxy, env=env)
+            env['HTTP_PROXY'] = proxy
+            env['http_proxy'] = proxy

         protocol = info_dict.get('protocol')
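
The replacement for compat_setenv is plain dict assignment on a copied environment. A minimal sketch of passing a proxy to a child process this way; the proxy value is illustrative and the ffmpeg invocation is reduced to a placeholder:

    import os
    import subprocess

    proxy = 'http://127.0.0.1:3128'  # illustrative value

    # Copy the parent environment so the change stays local to the child
    env = os.environ.copy()
    env['HTTP_PROXY'] = proxy
    env['http_proxy'] = proxy  # both casings, since tools differ in which they read

    subprocess.run(['ffmpeg', '-version'], env=env, check=True)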

View File

@@ -1,17 +1,13 @@
+import base64
 import io
 import itertools
+import struct
 import time
+import urllib.error
+import urllib.parse

 from .fragment import FragmentFD
-from ..compat import (
-    compat_b64decode,
-    compat_etree_fromstring,
-    compat_struct_pack,
-    compat_struct_unpack,
-    compat_urllib_error,
-    compat_urllib_parse_urlparse,
-    compat_urlparse,
-)
+from ..compat import compat_etree_fromstring
 from ..utils import fix_xml_ampersands, xpath_text
@@ -35,13 +31,13 @@ class FlvReader(io.BytesIO):
     # Utility functions for reading numbers and strings
     def read_unsigned_long_long(self):
-        return compat_struct_unpack('!Q', self.read_bytes(8))[0]
+        return struct.unpack('!Q', self.read_bytes(8))[0]

     def read_unsigned_int(self):
-        return compat_struct_unpack('!I', self.read_bytes(4))[0]
+        return struct.unpack('!I', self.read_bytes(4))[0]

     def read_unsigned_char(self):
-        return compat_struct_unpack('!B', self.read_bytes(1))[0]
+        return struct.unpack('!B', self.read_bytes(1))[0]

     def read_string(self):
         res = b''
@@ -203,11 +199,11 @@ def build_fragments_list(boot_info):

 def write_unsigned_int(stream, val):
-    stream.write(compat_struct_pack('!I', val))
+    stream.write(struct.pack('!I', val))


 def write_unsigned_int_24(stream, val):
-    stream.write(compat_struct_pack('!I', val)[1:])
+    stream.write(struct.pack('!I', val)[1:])


 def write_flv_header(stream):
@@ -301,12 +297,12 @@ class F4mFD(FragmentFD):
         # 1. http://live-1-1.rutube.ru/stream/1024/HDS/SD/C2NKsS85HQNckgn5HdEmOQ/1454167650/S-s604419906/move/four/dirs/upper/1024-576p.f4m
         bootstrap_url = node.get('url')
         if bootstrap_url:
-            bootstrap_url = compat_urlparse.urljoin(
+            bootstrap_url = urllib.parse.urljoin(
                 base_url, bootstrap_url)
             boot_info = self._get_bootstrap_from_url(bootstrap_url)
         else:
             bootstrap_url = None
-            bootstrap = compat_b64decode(node.text)
+            bootstrap = base64.b64decode(node.text)
             boot_info = read_bootstrap_info(bootstrap)
         return boot_info, bootstrap_url
@@ -336,14 +332,14 @@ class F4mFD(FragmentFD):
         # Prefer baseURL for relative URLs as per 11.2 of F4M 3.0 spec.
         man_base_url = get_base_url(doc) or man_url

-        base_url = compat_urlparse.urljoin(man_base_url, media.attrib['url'])
+        base_url = urllib.parse.urljoin(man_base_url, media.attrib['url'])
         bootstrap_node = doc.find(_add_ns('bootstrapInfo'))
         boot_info, bootstrap_url = self._parse_bootstrap_node(
             bootstrap_node, man_base_url)
         live = boot_info['live']
         metadata_node = media.find(_add_ns('metadata'))
         if metadata_node is not None:
-            metadata = compat_b64decode(metadata_node.text)
+            metadata = base64.b64decode(metadata_node.text)
         else:
             metadata = None
@@ -371,7 +367,7 @@ class F4mFD(FragmentFD):
         if not live:
             write_metadata_tag(dest_stream, metadata)

-        base_url_parsed = compat_urllib_parse_urlparse(base_url)
+        base_url_parsed = urllib.parse.urlparse(base_url)

         self._start_frag_download(ctx, info_dict)
@@ -411,7 +407,7 @@ class F4mFD(FragmentFD):
                     if box_type == b'mdat':
                         self._append_fragment(ctx, box_data)
                         break
-            except compat_urllib_error.HTTPError as err:
+            except urllib.error.HTTPError as err:
                 if live and (err.code == 404 or err.code == 410):
                     # We didn't keep up with the live window. Continue
                     # with the next available fragment.
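
compat_struct_pack/compat_struct_unpack become the stdlib struct module throughout. A short sketch of the big-endian ('!') formats the FLV reader uses, with made-up input bytes:

    import struct

    data = bytes.fromhex('0000000000000001 00000002 03')

    (u64,) = struct.unpack('!Q', data[:8])    # 8-byte unsigned long long -> 1
    (u32,) = struct.unpack('!I', data[8:12])  # 4-byte unsigned int       -> 2
    (u8,)  = struct.unpack('!B', data[12:])   # 1-byte unsigned char      -> 3

    # The writing direction, mirroring write_unsigned_int_24 above:
    packed = struct.pack('!I', 0x123456)[1:]  # drop the high byte -> 3 bytes
    assert packed == b'\x12\x34\x56'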

View File

@@ -4,12 +4,14 @@ import http.client
 import json
 import math
 import os
+import struct
 import time
+import urllib.error

 from .common import FileDownloader
 from .http import HttpFD
 from ..aes import aes_cbc_decrypt_bytes, unpad_pkcs7
-from ..compat import compat_os_name, compat_struct_pack, compat_urllib_error
+from ..compat import compat_os_name
 from ..utils import (
     DownloadError,
     encodeFilename,
@@ -348,7 +350,7 @@ class FragmentFD(FileDownloader):
         decrypt_info = fragment.get('decrypt_info')
         if not decrypt_info or decrypt_info['METHOD'] != 'AES-128':
             return frag_content
-        iv = decrypt_info.get('IV') or compat_struct_pack('>8xq', fragment['media_sequence'])
+        iv = decrypt_info.get('IV') or struct.pack('>8xq', fragment['media_sequence'])
         decrypt_info['KEY'] = decrypt_info.get('KEY') or _get_key(info_dict.get('_decryption_key_url') or decrypt_info['URI'])
         # Don't decrypt the content in tests since the data is explicitly truncated and it's not to a valid block
         # size (see https://github.com/ytdl-org/youtube-dl/pull/27660). Tests only care that the correct data downloaded,
@@ -457,7 +459,7 @@ class FragmentFD(FileDownloader):
                 if self._download_fragment(ctx, fragment['url'], info_dict, headers):
                     break
                 return
-            except (compat_urllib_error.HTTPError, http.client.IncompleteRead) as err:
+            except (urllib.error.HTTPError, http.client.IncompleteRead) as err:
                 # Unavailable (possibly temporary) fragments may be served.
                 # First we try to retry then either skip or abort.
                 # See https://github.com/ytdl-org/youtube-dl/issues/10165,
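
The '>8xq' format above implements the HLS default IV: per RFC 8216, when an EXT-X-KEY carries no IV attribute, the segment's media sequence number is used as a 128-bit big-endian integer. A quick check of that equivalence, with an illustrative sequence number:

    import struct

    media_sequence = 42  # illustrative
    iv = struct.pack('>8xq', media_sequence)  # 8 zero pad bytes + big-endian int64
    assert iv == (42).to_bytes(16, 'big')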

View File

@@ -1,12 +1,12 @@
 import binascii
 import io
 import re
+import urllib.parse

 from . import get_suitable_downloader
 from .external import FFmpegFD
 from .fragment import FragmentFD
 from .. import webvtt
-from ..compat import compat_urlparse
 from ..dependencies import Cryptodome_AES
 from ..utils import bug_reports_message, parse_m3u8_attributes, update_url_query
@@ -61,12 +61,18 @@ class HlsFD(FragmentFD):
         s = urlh.read().decode('utf-8', 'ignore')

         can_download, message = self.can_download(s, info_dict, self.params.get('allow_unplayable_formats')), None
-        if can_download and not Cryptodome_AES and '#EXT-X-KEY:METHOD=AES-128' in s:
-            if FFmpegFD.available():
+        if can_download:
+            has_ffmpeg = FFmpegFD.available()
+            no_crypto = not Cryptodome_AES and '#EXT-X-KEY:METHOD=AES-128' in s
+            if no_crypto and has_ffmpeg:
                 can_download, message = False, 'The stream has AES-128 encryption and pycryptodomex is not available'
-            else:
+            elif no_crypto:
                 message = ('The stream has AES-128 encryption and neither ffmpeg nor pycryptodomex are available; '
                            'Decryption will be performed natively, but will be extremely slow')
+            elif info_dict.get('extractor_key') == 'Generic' and re.search(r'(?m)#EXT-X-MEDIA-SEQUENCE:(?!0$)', s):
+                install_ffmpeg = '' if has_ffmpeg else 'install ffmpeg and '
+                message = ('Live HLS streams are not supported by the native downloader. If this is a livestream, '
+                           f'please {install_ffmpeg}add "--downloader ffmpeg --hls-use-mpegts" to your command')
         if not can_download:
             has_drm = re.search('|'.join([
                 r'#EXT-X-FAXS-CM:',  # Adobe Flash Access
@@ -140,7 +146,7 @@ class HlsFD(FragmentFD):
         extra_query = None
         extra_param_to_segment_url = info_dict.get('extra_param_to_segment_url')
         if extra_param_to_segment_url:
-            extra_query = compat_urlparse.parse_qs(extra_param_to_segment_url)
+            extra_query = urllib.parse.parse_qs(extra_param_to_segment_url)
         i = 0
         media_sequence = 0
         decrypt_info = {'METHOD': 'NONE'}
@@ -162,7 +168,7 @@ class HlsFD(FragmentFD):
                     frag_url = (
                         line
                         if re.match(r'^https?://', line)
-                        else compat_urlparse.urljoin(man_url, line))
+                        else urllib.parse.urljoin(man_url, line))
                     if extra_query:
                         frag_url = update_url_query(frag_url, extra_query)
@@ -187,7 +193,7 @@ class HlsFD(FragmentFD):
                     frag_url = (
                         map_info.get('URI')
                         if re.match(r'^https?://', map_info.get('URI'))
-                        else compat_urlparse.urljoin(man_url, map_info.get('URI')))
+                        else urllib.parse.urljoin(man_url, map_info.get('URI')))
                     if extra_query:
                         frag_url = update_url_query(frag_url, extra_query)
@@ -215,7 +221,7 @@ class HlsFD(FragmentFD):
                         if 'IV' in decrypt_info:
                             decrypt_info['IV'] = binascii.unhexlify(decrypt_info['IV'][2:].zfill(32))
                         if not re.match(r'^https?://', decrypt_info['URI']):
-                            decrypt_info['URI'] = compat_urlparse.urljoin(
+                            decrypt_info['URI'] = urllib.parse.urljoin(
                                 man_url, decrypt_info['URI'])
                         if extra_query:
                             decrypt_info['URI'] = update_url_query(decrypt_info['URI'], extra_query)
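
The new elif only warns rather than switching downloaders automatically, since the heuristic can misfire: a playlist whose EXT-X-MEDIA-SEQUENCE is non-zero was most likely sliced out of a live window, but not always. A minimal illustration of the check used above:

    import re

    def looks_live(m3u8_doc):
        # VOD playlists normally begin at media sequence 0
        return bool(re.search(r'(?m)#EXT-X-MEDIA-SEQUENCE:(?!0$)', m3u8_doc))

    assert looks_live('#EXTM3U\n#EXT-X-MEDIA-SEQUENCE:2680\n')
    assert not looks_live('#EXTM3U\n#EXT-X-MEDIA-SEQUENCE:0\n')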

View File

@@ -1,11 +1,12 @@
+import http.client
 import os
 import random
 import socket
 import ssl
 import time
+import urllib.error

 from .common import FileDownloader
-from ..compat import compat_http_client, compat_urllib_error
 from ..utils import (
     ContentTooShortError,
     ThrottledDownload,
@@ -24,7 +25,7 @@ RESPONSE_READ_EXCEPTIONS = (
     socket.timeout,  # compat: py < 3.10
     ConnectionError,
     ssl.SSLError,
-    compat_http_client.HTTPException
+    http.client.HTTPException
 )
@@ -155,7 +156,7 @@ class HttpFD(FileDownloader):
                     ctx.resume_len = 0
                     ctx.open_mode = 'wb'
                     ctx.data_len = ctx.content_len = int_or_none(ctx.data.info().get('Content-length', None))
-            except compat_urllib_error.HTTPError as err:
+            except urllib.error.HTTPError as err:
                 if err.code == 416:
                     # Unable to resume (requested range not satisfiable)
                     try:
@@ -163,7 +164,7 @@ class HttpFD(FileDownloader):
                         ctx.data = self.ydl.urlopen(
                             sanitized_Request(url, request_data, headers))
                         content_length = ctx.data.info()['Content-Length']
-                    except compat_urllib_error.HTTPError as err:
+                    except urllib.error.HTTPError as err:
                         if err.code < 500 or err.code >= 600:
                             raise
                         else:
@@ -196,7 +197,7 @@ class HttpFD(FileDownloader):
                     # Unexpected HTTP error
                     raise
                 raise RetryDownload(err)
-            except compat_urllib_error.URLError as err:
+            except urllib.error.URLError as err:
                 if isinstance(err.reason, ssl.CertificateError):
                     raise
                 raise RetryDownload(err)
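
The 416 branch above handles servers that reject a stale Range header once the file is already complete. A minimal sketch of the retry-without-range pattern, with the URL and offset illustrative:

    import urllib.error
    import urllib.request

    url = 'https://example.com/file.bin'
    req = urllib.request.Request(url, headers={'Range': 'bytes=1024-'})
    try:
        data = urllib.request.urlopen(req).read()
    except urllib.error.HTTPError as err:
        if err.code != 416:
            raise
        # Requested range not satisfiable: fall back to an unranged request
        data = urllib.request.urlopen(url).read()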

View File

@@ -2,9 +2,9 @@ import binascii
 import io
 import struct
 import time
+import urllib.error

 from .fragment import FragmentFD
-from ..compat import compat_urllib_error

 u8 = struct.Struct('>B')
 u88 = struct.Struct('>Bx')
@@ -268,7 +268,7 @@ class IsmFD(FragmentFD):
                     extra_state['ism_track_written'] = True
                 self._append_fragment(ctx, frag_content)
                 break
-            except compat_urllib_error.HTTPError as err:
+            except urllib.error.HTTPError as err:
                 count += 1
                 if count <= fragment_retries:
                     self.report_retry_fragment(err, frag_index, count, fragment_retries)

View File

@@ -4,7 +4,6 @@ import subprocess
 import time

 from .common import FileDownloader
-from ..compat import compat_str
 from ..utils import (
     Popen,
     check_executable,
@@ -143,7 +142,7 @@ class RtmpFD(FileDownloader):
         if isinstance(conn, list):
             for entry in conn:
                 basic_args += ['--conn', entry]
-        elif isinstance(conn, compat_str):
+        elif isinstance(conn, str):
             basic_args += ['--conn', conn]
         if protocol is not None:
             basic_args += ['--protocol', protocol]

View File

@@ -1,8 +1,8 @@
 import json
 import time
+import urllib.error

 from .fragment import FragmentFD
-from ..compat import compat_urllib_error
 from ..utils import RegexNotFoundError, dict_get, int_or_none, try_get
@@ -128,7 +128,7 @@ class YoutubeLiveChatFD(FragmentFD):
             elif info_dict['protocol'] == 'youtube_live_chat':
                 continuation_id, offset, click_tracking_params = parse_actions_live(live_chat_continuation)
             return True, continuation_id, offset, click_tracking_params
-        except compat_urllib_error.HTTPError as err:
+        except urllib.error.HTTPError as err:
             count += 1
             if count <= fragment_retries:
                 self.report_retry_fragment(err, frag_index, count, fragment_retries)

View File

@@ -563,6 +563,7 @@ from .funimation import (
 )
 from .funk import FunkIE
 from .fusion import FusionIE
+from .fuyintv import FuyinTVIE
 from .gab import (
     GabTVIE,
     GabIE,
@@ -836,6 +837,7 @@ from .livestream import (
     LivestreamOriginalIE,
     LivestreamShortenerIE,
 )
+from .livestreamfails import LivestreamfailsIE
 from .lnkgo import (
     LnkGoIE,
     LnkIE,
@@ -1329,6 +1331,7 @@ from .puhutv import (
     PuhuTVIE,
     PuhuTVSerieIE,
 )
+from .premiershiprugby import PremiershipRugbyIE
 from .presstv import PressTVIE
 from .projectveritas import ProjectVeritasIE
 from .prosiebensat1 import ProSiebenSat1IE
@@ -1509,6 +1512,7 @@ from .scte import (
     SCTEIE,
     SCTECourseIE,
 )
+from .scrolller import ScrolllerIE
 from .seeker import SeekerIE
 from .senategov import SenateISVPIE, SenateGovIE
 from .sendtonews import SendtoNewsIE
@@ -1630,7 +1634,10 @@ from .srgssr import (
 from .srmediathek import SRMediathekIE
 from .stanfordoc import StanfordOpenClassroomIE
 from .startv import StarTVIE
-from .steam import SteamIE
+from .steam import (
+    SteamIE,
+    SteamCommunityBroadcastIE,
+)
 from .storyfire import (
     StoryFireIE,
     StoryFireUserIE,
@@ -1928,7 +1935,10 @@ from .vice import (
 from .vidbit import VidbitIE
 from .viddler import ViddlerIE
 from .videa import VideaIE
-from .videocampus_sachsen import VideocampusSachsenIE
+from .videocampus_sachsen import (
+    VideocampusSachsenIE,
+    ViMPPlaylistIE,
+)
 from .videodetective import VideoDetectiveIE
 from .videofyme import VideofyMeIE
 from .videomore import (

View File

@@ -7,12 +7,13 @@ import json
 import re
 import struct
 import time
+import urllib.parse
+import urllib.request
 import urllib.response
 import uuid

 from .common import InfoExtractor
 from ..aes import aes_ecb_decrypt
-from ..compat import compat_urllib_parse_urlparse, compat_urllib_request
 from ..utils import (
     ExtractorError,
     bytes_to_intlist,
@@ -33,7 +34,7 @@ def add_opener(ydl, handler):
     ''' Add a handler for opening URLs, like _download_webpage '''
     # https://github.com/python/cpython/blob/main/Lib/urllib/request.py#L426
     # https://github.com/python/cpython/blob/main/Lib/urllib/request.py#L605
-    assert isinstance(ydl._opener, compat_urllib_request.OpenerDirector)
+    assert isinstance(ydl._opener, urllib.request.OpenerDirector)
     ydl._opener.add_handler(handler)
@@ -46,7 +47,7 @@ def remove_opener(ydl, handler):
     # https://github.com/python/cpython/blob/main/Lib/urllib/request.py#L426
     # https://github.com/python/cpython/blob/main/Lib/urllib/request.py#L605
     opener = ydl._opener
-    assert isinstance(ydl._opener, compat_urllib_request.OpenerDirector)
+    assert isinstance(ydl._opener, urllib.request.OpenerDirector)
     if isinstance(handler, (type, tuple)):
         find_cp = lambda x: isinstance(x, handler)
     else:
@@ -96,20 +97,20 @@ def remove_opener(ydl, handler):
     opener.handlers[:] = [x for x in opener.handlers if not find_cp(x)]


-class AbemaLicenseHandler(compat_urllib_request.BaseHandler):
+class AbemaLicenseHandler(urllib.request.BaseHandler):
     handler_order = 499
     STRTABLE = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'
     HKEY = b'3AF0298C219469522A313570E8583005A642E73EDD58E3EA2FB7339D3DF1597E'

     def __init__(self, ie: 'AbemaTVIE'):
-        # the protcol that this should really handle is 'abematv-license://'
+        # the protocol that this should really handle is 'abematv-license://'
         # abematv_license_open is just a placeholder for development purposes
         # ref. https://github.com/python/cpython/blob/f4c03484da59049eb62a9bf7777b963e2267d187/Lib/urllib/request.py#L510
         setattr(self, 'abematv-license_open', getattr(self, 'abematv_license_open'))
         self.ie = ie

     def _get_videokey_from_ticket(self, ticket):
-        to_show = self.ie._downloader.params.get('verbose', False)
+        to_show = self.ie.get_param('verbose', False)
         media_token = self.ie._get_media_token(to_show=to_show)
         license_response = self.ie._download_json(
@@ -136,7 +137,7 @@ class AbemaLicenseHandler(urllib.request.BaseHandler):
     def abematv_license_open(self, url):
         url = request_to_url(url)
-        ticket = compat_urllib_parse_urlparse(url).netloc
+        ticket = urllib.parse.urlparse(url).netloc
         response_data = self._get_videokey_from_ticket(ticket)
         return urllib.response.addinfourl(io.BytesIO(response_data), headers={
             'Content-Length': len(response_data),
@@ -311,7 +312,7 @@ class AbemaTVIE(AbemaTVBaseIE):
     def _real_extract(self, url):
         # starting download using infojson from this extractor is undefined behavior,
-        # and never be fixed in the future; you must trigger downloads by directly specifing URL.
+        # and never be fixed in the future; you must trigger downloads by directly specifying URL.
         # (unless there's a way to hook before downloading by extractor)
         video_id, video_type = self._match_valid_url(url).group('id', 'type')
         headers = {
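
AbemaLicenseHandler above relies on urllib's handler protocol: any method named <scheme>_open on a BaseHandler is dispatched for that scheme (hyphenated schemes need the setattr workaround shown in __init__). A toy handler for a made-up echo:// scheme, following the same pattern:

    import io
    import urllib.request
    import urllib.response

    class EchoHandler(urllib.request.BaseHandler):
        def echo_open(self, req):
            # Answer echo://<text> with <text> as the response body
            body = req.full_url.partition('://')[2].encode()
            return urllib.response.addinfourl(
                io.BytesIO(body), headers={'Content-Length': len(body)},
                url=req.full_url)

    opener = urllib.request.build_opener(EchoHandler())
    assert opener.open('echo://hello').read() == b'hello'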

View File

@@ -1,3 +1,4 @@
+import getpass
 import json
 import re
 import time
@@ -5,19 +6,15 @@ import urllib.error
 import xml.etree.ElementTree as etree

 from .common import InfoExtractor
-from ..compat import (
-    compat_urlparse,
-    compat_getpass
-)
+from ..compat import compat_urlparse
 from ..utils import (
-    unescapeHTML,
-    urlencode_postdata,
-    unified_timestamp,
-    ExtractorError,
     NO_DEFAULT,
+    ExtractorError,
+    unescapeHTML,
+    unified_timestamp,
+    urlencode_postdata,
 )

 MSO_INFO = {
     'DTV': {
         'name': 'DIRECTV',
@@ -1431,7 +1428,7 @@ class AdobePassIE(InfoExtractor):
         guid = xml_text(resource, 'guid') if '<' in resource else resource
         count = 0
         while count < 2:
-            requestor_info = self._downloader.cache.load(self._MVPD_CACHE, requestor_id) or {}
+            requestor_info = self.cache.load(self._MVPD_CACHE, requestor_id) or {}
             authn_token = requestor_info.get('authn_token')
             if authn_token and is_expired(authn_token, 'simpleTokenExpires'):
                 authn_token = None
@@ -1506,7 +1503,7 @@ class AdobePassIE(InfoExtractor):
                             'send_confirm_link': False,
                             'send_token': True
                         }))
-                    philo_code = compat_getpass('Type auth code you have received [Return]: ')
+                    philo_code = getpass.getpass('Type auth code you have received [Return]: ')
                     self._download_webpage(
                         'https://idp.philo.com/auth/update/login_code', video_id, 'Submitting token', data=urlencode_postdata({
                             'token': philo_code
@@ -1726,12 +1723,12 @@ class AdobePassIE(InfoExtractor):
                         raise_mvpd_required()
                     raise
                 if '<pendingLogout' in session:
-                    self._downloader.cache.store(self._MVPD_CACHE, requestor_id, {})
+                    self.cache.store(self._MVPD_CACHE, requestor_id, {})
                     count += 1
                     continue
                 authn_token = unescapeHTML(xml_text(session, 'authnToken'))
                 requestor_info['authn_token'] = authn_token
-                self._downloader.cache.store(self._MVPD_CACHE, requestor_id, requestor_info)
+                self.cache.store(self._MVPD_CACHE, requestor_id, requestor_info)

             authz_token = requestor_info.get(guid)
             if authz_token and is_expired(authz_token, 'simpleTokenTTL'):
@@ -1747,14 +1744,14 @@ class AdobePassIE(InfoExtractor):
                     'userMeta': '1',
                 }), headers=mvpd_headers)
                 if '<pendingLogout' in authorize:
-                    self._downloader.cache.store(self._MVPD_CACHE, requestor_id, {})
+                    self.cache.store(self._MVPD_CACHE, requestor_id, {})
                     count += 1
                     continue
                 if '<error' in authorize:
                     raise ExtractorError(xml_text(authorize, 'details'), expected=True)
                 authz_token = unescapeHTML(xml_text(authorize, 'authzToken'))
                 requestor_info[guid] = authz_token
-                self._downloader.cache.store(self._MVPD_CACHE, requestor_id, requestor_info)
+                self.cache.store(self._MVPD_CACHE, requestor_id, requestor_info)

             mvpd_headers.update({
                 'ap_19': xml_text(authn_token, 'simpleSamlNameID'),
@@ -1770,7 +1767,7 @@ class AdobePassIE(InfoExtractor):
                     'hashed_guid': 'false',
                 }), headers=mvpd_headers)
                 if '<pendingLogout' in short_authorize:
-                    self._downloader.cache.store(self._MVPD_CACHE, requestor_id, {})
+                    self.cache.store(self._MVPD_CACHE, requestor_id, {})
                     count += 1
                     continue
                 return short_authorize

View File

@@ -1,36 +1,34 @@
-import re
 import json
+import re
+import urllib.parse

 from .common import InfoExtractor
-from .youtube import YoutubeIE, YoutubeBaseInfoExtractor
-from ..compat import (
-    compat_urllib_parse_unquote,
-    compat_urllib_parse_unquote_plus,
-    compat_HTTPError
-)
+from .youtube import YoutubeBaseInfoExtractor, YoutubeIE
+from ..compat import compat_HTTPError, compat_urllib_parse_unquote
 from ..utils import (
+    KNOWN_EXTENSIONS,
+    ExtractorError,
+    HEADRequest,
     bug_reports_message,
     clean_html,
     dict_get,
     extract_attributes,
-    ExtractorError,
     get_element_by_id,
-    HEADRequest,
     int_or_none,
     join_nonempty,
-    KNOWN_EXTENSIONS,
     merge_dicts,
     mimetype2ext,
     orderedSet,
     parse_duration,
     parse_qs,
-    str_to_int,
     str_or_none,
+    str_to_int,
     traverse_obj,
     try_get,
     unified_strdate,
     unified_timestamp,
+    url_or_none,
     urlhandle_detect_ext,
-    url_or_none
 )
@@ -143,7 +141,7 @@ class ArchiveOrgIE(InfoExtractor):
         return json.loads(extract_attributes(element)['value'])

     def _real_extract(self, url):
-        video_id = compat_urllib_parse_unquote_plus(self._match_id(url))
+        video_id = urllib.parse.unquote_plus(self._match_id(url))
         identifier, entry_id = (video_id.split('/', 1) + [None])[:2]

         # Archive.org metadata API doesn't clearly demarcate playlist entries

View File

@@ -1,8 +1,8 @@
 import random

 from .common import InfoExtractor
-from ..utils import ExtractorError, try_get, compat_str, str_or_none
-from ..compat import compat_urllib_parse_unquote
+from ..compat import compat_str, compat_urllib_parse_unquote
+from ..utils import ExtractorError, str_or_none, try_get


 class AudiusBaseIE(InfoExtractor):

View File

@@ -1,16 +1,12 @@
-import xml.etree.ElementTree
 import functools
 import itertools
 import json
 import re
+import urllib.error
+import xml.etree.ElementTree

 from .common import InfoExtractor
-from ..compat import (
-    compat_HTTPError,
-    compat_str,
-    compat_urllib_error,
-    compat_urlparse,
-)
+from ..compat import compat_HTTPError, compat_str, compat_urlparse
 from ..utils import (
     ExtractorError,
     OnDemandPagedList,
@@ -391,7 +387,7 @@ class BBCCoUkIE(InfoExtractor):
                         href, programme_id, ext='mp4', entry_protocol='m3u8_native',
                         m3u8_id=format_id, fatal=False)
                 except ExtractorError as e:
-                    if not (isinstance(e.exc_info[1], compat_urllib_error.HTTPError)
+                    if not (isinstance(e.exc_info[1], urllib.error.HTTPError)
                             and e.exc_info[1].code in (403, 404)):
                         raise
                     fmts = []

View File

@@ -600,9 +600,9 @@ class BrightcoveNewIE(AdobePassIE):
         account_id, player_id, embed, content_type, video_id = self._match_valid_url(url).groups()

         policy_key_id = '%s_%s' % (account_id, player_id)
-        policy_key = self._downloader.cache.load('brightcove', policy_key_id)
+        policy_key = self.cache.load('brightcove', policy_key_id)
         policy_key_extracted = False
-        store_pk = lambda x: self._downloader.cache.store('brightcove', policy_key_id, x)
+        store_pk = lambda x: self.cache.store('brightcove', policy_key_id, x)

         def extract_policy_key():
             base_url = 'http://players.brightcove.net/%s/%s_%s/' % (account_id, player_id, embed)

View File

@@ -304,13 +304,13 @@ class CBCGemIE(InfoExtractor):
     def _get_claims_token(self, email, password):
         if not self.claims_token_valid():
             self._claims_token = self._new_claims_token(email, password)
-            self._downloader.cache.store(self._NETRC_MACHINE, 'claims_token', self._claims_token)
+            self.cache.store(self._NETRC_MACHINE, 'claims_token', self._claims_token)
         return self._claims_token

     def _real_initialize(self):
         if self.claims_token_valid():
             return
-        self._claims_token = self._downloader.cache.load(self._NETRC_MACHINE, 'claims_token')
+        self._claims_token = self.cache.load(self._NETRC_MACHINE, 'claims_token')

     def _find_secret_formats(self, formats, video_id):
         """ Find a valid video url and convert it to the secret variant """

View File

@@ -1,13 +1,9 @@
 import codecs
-import re
 import json
+import re

 from .common import InfoExtractor
-from ..compat import (
-    compat_chr,
-    compat_ord,
-    compat_urllib_parse_unquote,
-)
+from ..compat import compat_ord, compat_urllib_parse_unquote
 from ..utils import (
     ExtractorError,
     float_or_none,
@@ -16,8 +12,8 @@ from ..utils import (
     multipart_encode,
     parse_duration,
     random_birthday,
-    urljoin,
     try_get,
+    urljoin,
 )
@@ -144,7 +140,7 @@ class CDAIE(InfoExtractor):
             b = []
             for c in a:
                 f = compat_ord(c)
-                b.append(compat_chr(33 + (f + 14) % 94) if 33 <= f <= 126 else compat_chr(f))
+                b.append(chr(33 + (f + 14) % 94) if 33 <= f <= 126 else chr(f))
             a = ''.join(b)
             a = a.replace('.cda.mp4', '')
             for p in ('.2cda.pl', '.3cda.pl'):

View File

@@ -1,11 +1,11 @@
 import itertools
 import json
+import urllib.parse

 from .common import InfoExtractor
-from ..compat import compat_urllib_parse_unquote_plus
 from ..utils import (
-    clean_html,
     ExtractorError,
+    clean_html,
     int_or_none,
     str_to_int,
     url_or_none,
@@ -47,8 +47,8 @@ class ChingariBaseIE(InfoExtractor):
             'id': id,
             'extractor_key': ChingariIE.ie_key(),
             'extractor': 'Chingari',
-            'title': compat_urllib_parse_unquote_plus(clean_html(post_data.get('caption'))),
-            'description': compat_urllib_parse_unquote_plus(clean_html(post_data.get('caption'))),
+            'title': urllib.parse.unquote_plus(clean_html(post_data.get('caption'))),
+            'description': urllib.parse.unquote_plus(clean_html(post_data.get('caption'))),
             'duration': media_data.get('duration'),
             'thumbnail': url_or_none(thumbnail),
             'like_count': post_data.get('likeCount'),

View File

@@ -1,6 +1,10 @@
 import base64
 import collections
+import getpass
 import hashlib
+import http.client
+import http.cookiejar
+import http.cookies
 import itertools
 import json
 import math
@@ -9,24 +13,12 @@ import os
 import random
 import sys
 import time
+import urllib.parse
+import urllib.request
 import xml.etree.ElementTree

 from ..compat import functools, re  # isort: split
-from ..compat import (
-    compat_cookiejar_Cookie,
-    compat_cookies_SimpleCookie,
-    compat_etree_fromstring,
-    compat_expanduser,
-    compat_getpass,
-    compat_http_client,
-    compat_os_name,
-    compat_str,
-    compat_urllib_error,
-    compat_urllib_parse_unquote,
-    compat_urllib_parse_urlencode,
-    compat_urllib_request,
-    compat_urlparse,
-)
+from ..compat import compat_etree_fromstring, compat_expanduser, compat_os_name
 from ..downloader import FileDownloader
 from ..downloader.f4m import get_base_url, remove_encrypted_media
 from ..utils import (
@@ -71,6 +63,7 @@ from ..utils import (
     str_to_int,
     strip_or_none,
     traverse_obj,
+    try_call,
     try_get,
     unescapeHTML,
     unified_strdate,
@@ -399,7 +392,7 @@ class InfoExtractor:
     There must be a key "entries", which is a list, an iterable, or a PagedList
     object, each element of which is a valid dictionary by this specification.

-    Additionally, playlists can have "id", "title", and any other relevent
+    Additionally, playlists can have "id", "title", and any other relevant
     attributes with the same semantics as videos (see above).

     It can also have the following optional fields:
@@ -671,7 +664,7 @@ class InfoExtractor:
             if hasattr(e, 'countries'):
                 kwargs['countries'] = e.countries
             raise type(e)(e.orig_msg, **kwargs)
-        except compat_http_client.IncompleteRead as e:
+        except http.client.IncompleteRead as e:
             raise ExtractorError('A network error has occurred.', cause=e, expected=True, video_id=self.get_temp_id(url))
         except (KeyError, StopIteration) as e:
             raise ExtractorError('An extractor error has occurred.', cause=e, video_id=self.get_temp_id(url))
@@ -695,8 +688,16 @@ class InfoExtractor:
         """Sets a YoutubeDL instance as the downloader for this IE."""
         self._downloader = downloader

+    @property
+    def cache(self):
+        return self._downloader.cache
+
+    @property
+    def cookiejar(self):
+        return self._downloader.cookiejar
+
     def _initialize_pre_login(self):
-        """ Intialization before login. Redefine in subclasses."""
+        """ Initialization before login. Redefine in subclasses."""
         pass

     def _perform_login(self, username, password):
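
The new cache and cookiejar properties let extractors write self.cache.load(...) instead of self._downloader.cache.load(...), which is what the adobepass, brightcove and cbc diffs earlier in this compare switch to. A sketch of the delegation with simplified stand-in classes:

    class Downloader:
        def __init__(self):
            self.cache = {}  # stand-in for yt-dlp's Cache object

    class Extractor:
        def __init__(self, downloader):
            self._downloader = downloader

        @property
        def cache(self):
            # Delegate to the owning downloader so call sites stay short
            return self._downloader.cache

    ie = Extractor(Downloader())
    ie.cache['brightcove'] = 'policy-key'
    assert ie._downloader.cache == {'brightcove': 'policy-key'}
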
@@ -722,7 +723,7 @@ class InfoExtractor:

     @staticmethod
     def __can_accept_status_code(err, expected_status):
-        assert isinstance(err, compat_urllib_error.HTTPError)
+        assert isinstance(err, urllib.error.HTTPError)
         if expected_status is None:
             return False
         elif callable(expected_status):
@@ -730,14 +731,14 @@ class InfoExtractor:
         else:
             return err.code in variadic(expected_status)

-    def _create_request(self, url_or_request, data=None, headers={}, query={}):
-        if isinstance(url_or_request, compat_urllib_request.Request):
+    def _create_request(self, url_or_request, data=None, headers=None, query=None):
+        if isinstance(url_or_request, urllib.request.Request):
             return update_Request(url_or_request, data=data, headers=headers, query=query)
         if query:
             url_or_request = update_url_query(url_or_request, query)
-        return sanitized_Request(url_or_request, data, headers)
+        return sanitized_Request(url_or_request, data, headers or {})

-    def _request_webpage(self, url_or_request, video_id, note=None, errnote=None, fatal=True, data=None, headers={}, query={}, expected_status=None):
+    def _request_webpage(self, url_or_request, video_id, note=None, errnote=None, fatal=True, data=None, headers=None, query=None, expected_status=None):
         """
         Return the response handle.
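
Replacing the mutable defaults headers={} / query={} with None sidesteps Python's shared-default pitfall and lets callers pass None explicitly without crashing sanitized_Request. An illustration of the pitfall with hypothetical function names:

    def bad(headers={}):       # one dict is shared across *all* calls
        headers.setdefault('X-Seen', '1')
        return headers

    def good(headers=None):    # fresh state on every call
        headers = headers or {}
        headers.setdefault('X-Seen', '1')
        return headers

    assert bad() is bad()          # same object every time
    assert good() is not good()    # independent objects
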
@@ -765,13 +766,13 @@ class InfoExtractor:
         # geo unrestricted country. We will do so once we encounter any
         # geo restriction error.
         if self._x_forwarded_for_ip:
-            if 'X-Forwarded-For' not in headers:
-                headers['X-Forwarded-For'] = self._x_forwarded_for_ip
+            headers = (headers or {}).copy()
+            headers.setdefault('X-Forwarded-For', self._x_forwarded_for_ip)

         try:
             return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query))
         except network_exceptions as err:
-            if isinstance(err, compat_urllib_error.HTTPError):
+            if isinstance(err, urllib.error.HTTPError):
                 if self.__can_accept_status_code(err, expected_status):
                     # Retain reference to error to prevent file object from
                     # being closed before it can be read. Works around the
@@ -799,7 +800,7 @@ class InfoExtractor:

         Arguments:
         url_or_request -- plain text URL as a string or
-            a compat_urllib_request.Requestobject
+            a urllib.request.Request object
         video_id -- Video/playlist/item identifier (string)

         Keyword arguments:
@@ -827,7 +828,7 @@ class InfoExtractor:
         """

         # Strip hashes from the URL (#1038)
-        if isinstance(url_or_request, (compat_str, str)):
+        if isinstance(url_or_request, str):
             url_or_request = url_or_request.partition('#')[0]

         urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query, expected_status=expected_status)
@@ -1048,7 +1049,7 @@ class InfoExtractor:
         while True:
             try:
                 return self.__download_webpage(url_or_request, video_id, note, errnote, None, fatal, *args, **kwargs)
-            except compat_http_client.IncompleteRead as e:
+            except http.client.IncompleteRead as e:
                 try_count += 1
                 if try_count >= tries:
                     raise e
@@ -1284,7 +1285,7 @@ class InfoExtractor:
         if tfa is not None:
             return tfa

-        return compat_getpass('Type %s and press [Return]: ' % note)
+        return getpass.getpass('Type %s and press [Return]: ' % note)

     # Helper functions for extracting OpenGraph info
     @staticmethod
@@ -1392,27 +1393,25 @@ class InfoExtractor:
         return self._html_search_meta('twitter:player', html,
                                       'twitter card player')

-    def _search_json_ld(self, html, video_id, expected_type=None, **kwargs):
-        json_ld_list = list(re.finditer(JSON_LD_RE, html))
-        default = kwargs.get('default', NO_DEFAULT)
-        # JSON-LD may be malformed and thus `fatal` should be respected.
-        # At the same time `default` may be passed that assumes `fatal=False`
-        # for _search_regex. Let's simulate the same behavior here as well.
-        fatal = kwargs.get('fatal', True) if default is NO_DEFAULT else False
-        json_ld = []
-        for mobj in json_ld_list:
-            json_ld_item = self._parse_json(
-                mobj.group('json_ld'), video_id, fatal=fatal)
-            if not json_ld_item:
-                continue
-            if isinstance(json_ld_item, dict):
-                json_ld.append(json_ld_item)
-            elif isinstance(json_ld_item, (list, tuple)):
-                json_ld.extend(json_ld_item)
-        if json_ld:
-            json_ld = self._json_ld(json_ld, video_id, fatal=fatal, expected_type=expected_type)
-        if json_ld:
-            return json_ld
+    def _yield_json_ld(self, html, video_id, *, fatal=True, default=NO_DEFAULT):
+        """Yield all json ld objects in the html"""
+        if default is not NO_DEFAULT:
+            fatal = False
+        for mobj in re.finditer(JSON_LD_RE, html):
+            json_ld_item = self._parse_json(mobj.group('json_ld'), video_id, fatal=fatal)
+            for json_ld in variadic(json_ld_item):
+                if isinstance(json_ld, dict):
+                    yield json_ld
+
+    def _search_json_ld(self, html, video_id, expected_type=None, *, fatal=True, default=NO_DEFAULT):
+        """Search for a video in any json ld in the html"""
+        if default is not NO_DEFAULT:
+            fatal = False
+        info = self._json_ld(
+            list(self._yield_json_ld(html, video_id, fatal=fatal, default=default)),
+            video_id, fatal=fatal, expected_type=expected_type)
+        if info:
+            return info
         if default is not NO_DEFAULT:
             return default
         elif fatal:
@@ -1422,7 +1421,7 @@ class InfoExtractor:
         return {}

     def _json_ld(self, json_ld, video_id, fatal=True, expected_type=None):
-        if isinstance(json_ld, compat_str):
+        if isinstance(json_ld, str):
             json_ld = self._parse_json(json_ld, video_id, fatal=fatal)
         if not json_ld:
             return {}
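
A rough, self-contained sketch of the generator-based scan that _yield_json_ld introduces, using a simplified regex in place of yt-dlp's JSON_LD_RE:

    import json
    import re

    JSON_LD_RE = re.compile(
        r'<script[^>]+type=(["\'])application/ld\+json\1[^>]*>(?P<json_ld>.+?)</script>',
        re.DOTALL)

    def yield_json_ld(html):
        # Yield each JSON-LD object on the page, flattening top-level arrays
        for mobj in JSON_LD_RE.finditer(html):
            item = json.loads(mobj.group('json_ld'))
            for obj in (item if isinstance(item, list) else [item]):
                if isinstance(obj, dict):
                    yield obj

    html = '<script type="application/ld+json">{"@type": "VideoObject"}</script>'
    assert next(yield_json_ld(html))['@type'] == 'VideoObject'
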
@@ -1500,7 +1499,7 @@ class InfoExtractor:
             assert is_type(e, 'VideoObject')
             author = e.get('author')
             info.update({
-                'url': traverse_obj(e, 'contentUrl', 'embedUrl', expected_type=url_or_none),
+                'url': url_or_none(e.get('contentUrl')),
                 'title': unescapeHTML(e.get('name')),
                 'description': unescapeHTML(e.get('description')),
                 'thumbnails': [{'url': url}
@@ -1512,7 +1511,7 @@ class InfoExtractor:
                 # both types can have 'name' property(inherited from 'Thing' type). [1]
                 # however some websites are using 'Text' type instead.
                 # 1. https://schema.org/VideoObject
-                'uploader': author.get('name') if isinstance(author, dict) else author if isinstance(author, compat_str) else None,
+                'uploader': author.get('name') if isinstance(author, dict) else author if isinstance(author, str) else None,
                 'filesize': int_or_none(float_or_none(e.get('contentSize'))),
                 'tbr': int_or_none(e.get('bitrate')),
                 'width': int_or_none(e.get('width')),
@@ -2161,7 +2160,7 @@ class InfoExtractor:
             ]), m3u8_doc)

         def format_url(url):
-            return url if re.match(r'^https?://', url) else compat_urlparse.urljoin(m3u8_url, url)
+            return url if re.match(r'^https?://', url) else urllib.parse.urljoin(m3u8_url, url)

         if self.get_param('hls_split_discontinuity', False):
             def _extract_m3u8_playlist_indices(manifest_url=None, m3u8_doc=None):
@@ -2534,7 +2533,7 @@ class InfoExtractor:
                     })
                     continue

-                src_url = src if src.startswith('http') else compat_urlparse.urljoin(base, src)
+                src_url = src if src.startswith('http') else urllib.parse.urljoin(base, src)
                 src_url = src_url.strip()

                 if proto == 'm3u8' or src_ext == 'm3u8':
@@ -2557,7 +2556,7 @@ class InfoExtractor:
                         'plugin': 'flowplayer-3.2.0.1',
                     }
                 f4m_url += '&' if '?' in f4m_url else '?'
-                f4m_url += compat_urllib_parse_urlencode(f4m_params)
+                f4m_url += urllib.parse.urlencode(f4m_params)
                 formats.extend(self._extract_f4m_formats(f4m_url, video_id, f4m_id='hds', fatal=False))
             elif src_ext == 'mpd':
                 formats.extend(self._extract_mpd_formats(
@@ -2822,12 +2821,12 @@ class InfoExtractor:
                 base_url = ''
                 for element in (representation, adaptation_set, period, mpd_doc):
                     base_url_e = element.find(_add_ns('BaseURL'))
-                    if base_url_e is not None:
+                    if try_call(lambda: base_url_e.text) is not None:
                         base_url = base_url_e.text + base_url
                         if re.match(r'^https?://', base_url):
                             break
                 if mpd_base_url and base_url.startswith('/'):
-                    base_url = compat_urlparse.urljoin(mpd_base_url, base_url)
+                    base_url = urllib.parse.urljoin(mpd_base_url, base_url)
                 elif mpd_base_url and not re.match(r'^https?://', base_url):
                     if not mpd_base_url.endswith('/'):
                         mpd_base_url += '/'
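
The try_call guard (imported in the first hunk of this file) handles MPDs whose <BaseURL> element is present but empty: there base_url_e.text is None and concatenating it would raise, and when the element is missing entirely, .text would raise AttributeError. A simplified model of the helper, not yt-dlp's full utils.try_call:

    def try_call(*funcs):
        # Return the first result that can be computed without raising
        for f in funcs:
            try:
                return f()
            except (AttributeError, KeyError, TypeError, IndexError):
                pass

    base_url_e = None                                 # element missing entirely
    assert try_call(lambda: base_url_e.text) is None  # AttributeError swallowed
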
@@ -3097,7 +3096,7 @@ class InfoExtractor:
                 sampling_rate = int_or_none(track.get('SamplingRate'))

                 track_url_pattern = re.sub(r'{[Bb]itrate}', track.attrib['Bitrate'], url_pattern)
-                track_url_pattern = compat_urlparse.urljoin(ism_url, track_url_pattern)
+                track_url_pattern = urllib.parse.urljoin(ism_url, track_url_pattern)

                 fragments = []
                 fragment_ctx = {
@@ -3116,7 +3115,7 @@ class InfoExtractor:
                         fragment_ctx['duration'] = (next_fragment_time - fragment_ctx['time']) / fragment_repeat
                     for _ in range(fragment_repeat):
                         fragments.append({
-                            'url': re.sub(r'{start[ _]time}', compat_str(fragment_ctx['time']), track_url_pattern),
+                            'url': re.sub(r'{start[ _]time}', str(fragment_ctx['time']), track_url_pattern),
                             'duration': fragment_ctx['duration'] / stream_timescale,
                         })
                         fragment_ctx['time'] += fragment_ctx['duration']
@@ -3209,7 +3208,7 @@ class InfoExtractor:
         entries = []
         # amp-video and amp-audio are very similar to their HTML5 counterparts
-        # so we wll include them right here (see
+        # so we will include them right here (see
         # https://www.ampproject.org/docs/reference/components/amp-video)
         # For dl8-* tags see https://delight-vr.com/documentation/dl8-video/
         _MEDIA_TAG_NAME_RE = r'(?:(?:amp|dl8(?:-live)?)-)?(video|audio)'
@@ -3360,7 +3359,7 @@ class InfoExtractor:
         return formats, subtitles

     def _extract_wowza_formats(self, url, video_id, m3u8_entry_protocol='m3u8_native', skip_protocols=[]):
-        query = compat_urlparse.urlparse(url).query
+        query = urllib.parse.urlparse(url).query
         url = re.sub(r'/(?:manifest|playlist|jwplayer)\.(?:m3u8|f4m|mpd|smil)', '', url)
         mobj = re.search(
             r'(?:(?:http|rtmp|rtsp)(?P<s>s)?:)?(?P<url>//[^?]+)', url)
@@ -3466,7 +3465,7 @@ class InfoExtractor:
                 if not isinstance(track, dict):
                     continue
                 track_kind = track.get('kind')
-                if not track_kind or not isinstance(track_kind, compat_str):
+                if not track_kind or not isinstance(track_kind, str):
                     continue
                 if track_kind.lower() not in ('captions', 'subtitles'):
                     continue
@@ -3539,7 +3538,7 @@ class InfoExtractor:
                 # Often no height is provided but there is a label in
                 # format like "1080p", "720p SD", or 1080.
                 height = int_or_none(self._search_regex(
-                    r'^(\d{3,4})[pP]?(?:\b|$)', compat_str(source.get('label') or ''),
+                    r'^(\d{3,4})[pP]?(?:\b|$)', str(source.get('label') or ''),
                     'height', default=None))
                 a_format = {
                     'url': source_url,
@ -3591,15 +3590,15 @@ class InfoExtractor:
def _set_cookie(self, domain, name, value, expire_time=None, port=None, def _set_cookie(self, domain, name, value, expire_time=None, port=None,
path='/', secure=False, discard=False, rest={}, **kwargs): path='/', secure=False, discard=False, rest={}, **kwargs):
cookie = compat_cookiejar_Cookie( cookie = http.cookiejar.Cookie(
0, name, value, port, port is not None, domain, True, 0, name, value, port, port is not None, domain, True,
domain.startswith('.'), path, True, secure, expire_time, domain.startswith('.'), path, True, secure, expire_time,
discard, None, None, rest) discard, None, None, rest)
self._downloader.cookiejar.set_cookie(cookie) self.cookiejar.set_cookie(cookie)
def _get_cookies(self, url): def _get_cookies(self, url):
""" Return a compat_cookies_SimpleCookie with the cookies for the url """ """ Return a http.cookies.SimpleCookie with the cookies for the url """
return compat_cookies_SimpleCookie(self._downloader._calc_cookies(url)) return http.cookies.SimpleCookie(self._downloader._calc_cookies(url))
def _apply_first_set_cookie_header(self, url_handle, cookie): def _apply_first_set_cookie_header(self, url_handle, cookie):
""" """
@@ -3765,10 +3764,10 @@ class InfoExtractor:
         return headers

     def _generic_id(self, url):
-        return compat_urllib_parse_unquote(os.path.splitext(url.rstrip('/').split('/')[-1])[0])
+        return urllib.parse.unquote(os.path.splitext(url.rstrip('/').split('/')[-1])[0])

     def _generic_title(self, url):
-        return compat_urllib_parse_unquote(os.path.splitext(url_basename(url))[0])
+        return urllib.parse.unquote(os.path.splitext(url_basename(url))[0])

     @staticmethod
     def _availability(is_private=None, needs_premium=None, needs_subscription=None, needs_auth=None, is_unlisted=None):
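A quick illustration of the migrated helper (a standalone sketch with a made-up URL, not part of the diff): the generic id is the percent-decoded last path segment with its extension stripped.

import os.path
import urllib.parse

url = 'https://example.com/media/My%20Video.mp4/'
# same expression as _generic_id above, with urllib.parse.unquote
video_id = urllib.parse.unquote(os.path.splitext(url.rstrip('/').split('/')[-1])[0])
print(video_id)  # -> My Video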


@@ -1,5 +1,6 @@
+import urllib.parse
+
 from .common import InfoExtractor
-from ..compat import compat_urlparse

 class RtmpIE(InfoExtractor):
@@ -23,7 +24,7 @@ class RtmpIE(InfoExtractor):
             'formats': [{
                 'url': url,
                 'ext': 'flv',
-                'format_id': compat_urlparse.urlparse(url).scheme,
+                'format_id': urllib.parse.urlparse(url).scheme,
             }],
         }
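The format_id here is simply the URL scheme; a one-line sketch of the stdlib call now used (illustrative URL):

import urllib.parse

print(urllib.parse.urlparse('rtmp://example.com/live/stream').scheme)  # -> rtmp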


@@ -1,19 +1,20 @@
 import base64
-import re
 import json
-import zlib
+import re
+import urllib.request
 import xml.etree.ElementTree
+import zlib

 from hashlib import sha1
-from math import pow, sqrt, floor
+from math import floor, pow, sqrt

 from .common import InfoExtractor
 from .vrv import VRVBaseIE
+from ..aes import aes_cbc_decrypt
 from ..compat import (
     compat_b64decode,
     compat_etree_fromstring,
     compat_str,
     compat_urllib_parse_urlencode,
-    compat_urllib_request,
     compat_urlparse,
 )
 from ..utils import (
@@ -22,8 +23,8 @@ from ..utils import (
     extract_attributes,
     float_or_none,
     format_field,
-    intlist_to_bytes,
     int_or_none,
+    intlist_to_bytes,
     join_nonempty,
     lowercase_escape,
     merge_dicts,
@@ -34,9 +35,6 @@ from ..utils import (
     try_get,
     xpath_text,
 )
-from ..aes import (
-    aes_cbc_decrypt,
-)

 class CrunchyrollBaseIE(InfoExtractor):
@@ -259,7 +257,7 @@ class CrunchyrollIE(CrunchyrollBaseIE, VRVBaseIE):
         }

     def _download_webpage(self, url_or_request, *args, **kwargs):
-        request = (url_or_request if isinstance(url_or_request, compat_urllib_request.Request)
+        request = (url_or_request if isinstance(url_or_request, urllib.request.Request)
                    else sanitized_Request(url_or_request))
         # Accept-Language must be set explicitly to accept any language to avoid issues
         # similar to https://github.com/ytdl-org/youtube-dl/issues/6797.
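The comment refers to a language/geo pitfall; a minimal standalone sketch (illustrative request, not the extractor's code) of forcing a wildcard Accept-Language header:

import urllib.request

req = urllib.request.Request('https://www.crunchyroll.com/')
# '*' tells the server any language is acceptable, so it cannot pick one
# based on geo-IP and serve unexpected localized markup
req.add_header('Accept-Language', '*')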


@@ -1,12 +1,8 @@
 import re

 from .common import InfoExtractor
-from ..utils import (
-    int_or_none,
-    urlencode_postdata,
-    compat_str,
-    ExtractorError,
-)
+from ..compat import compat_str
+from ..utils import ExtractorError, int_or_none, urlencode_postdata

 class CuriosityStreamBaseIE(InfoExtractor):
@@ -50,7 +46,7 @@ class CuriosityStreamIE(CuriosityStreamBaseIE):
     IE_NAME = 'curiositystream'
     _VALID_URL = r'https?://(?:app\.)?curiositystream\.com/video/(?P<id>\d+)'
     _TESTS = [{
-        'url': 'https://app.curiositystream.com/video/2',
+        'url': 'http://app.curiositystream.com/video/2',
         'info_dict': {
             'id': '2',
             'ext': 'mp4',


@@ -91,4 +91,5 @@ class CWTVIE(InfoExtractor):
         'timestamp': parse_iso8601(video_data.get('start_time')),
         'age_limit': parse_age_limit(video_data.get('rating')),
         'ie_key': 'ThePlatform',
+        'thumbnail': video_data.get('large_thumbnail')
     }


@@ -119,16 +119,16 @@ class DropoutIE(InfoExtractor):
     def _real_extract(self, url):
         display_id = self._match_id(url)
-        login_err, webpage = False, ''
-        try:
+        webpage = None
+        if self._get_cookies('https://www.dropout.tv').get('_session'):
+            webpage = self._download_webpage(url, display_id)
+        if not webpage or '<div id="watch-unauthorized"' in webpage:
             login_err = self._login(display_id)
             webpage = self._download_webpage(url, display_id)
-        finally:
-            if not login_err:
-                self._download_webpage('https://www.dropout.tv/logout', display_id, note='Logging out', fatal=False)
-            elif '<div id="watch-unauthorized"' in webpage:
+            if login_err and '<div id="watch-unauthorized"' in webpage:
                 if login_err is True:
-                    self.raise_login_required(method='password')
+                    self.raise_login_required(method='any')
                 raise ExtractorError(login_err, expected=True)

         embed_url = self._search_regex(r'embed_url:\s*["\'](.+?)["\']', webpage, 'embed url')


@@ -119,7 +119,7 @@ class ERTFlixCodenameIE(ERTFlixBaseIE):
 class ERTFlixIE(ERTFlixBaseIE):
     IE_NAME = 'ertflix'
     IE_DESC = 'ERTFLIX videos'
-    _VALID_URL = r'https?://www\.ertflix\.gr/(?:series|vod)/(?P<id>[a-z]{3}\.\d+)'
+    _VALID_URL = r'https?://www\.ertflix\.gr/(?:[^/]+/)?(?:series|vod)/(?P<id>[a-z]{3}\.\d+)'
     _TESTS = [{
         'url': 'https://www.ertflix.gr/vod/vod.173258-aoratoi-ergates',
         'md5': '6479d5e60fd7e520b07ba5411dcdd6e7',
@@ -171,6 +171,9 @@ class ERTFlixIE(ERTFlixBaseIE):
             'title': 'Το δίκτυο',
         },
         'playlist_mincount': 9,
+    }, {
+        'url': 'https://www.ertflix.gr/en/vod/vod.127652-ta-kalytera-mas-chronia-ep1-mia-volta-sto-feggari',
+        'only_matching': True,
     }]

     def _extract_episode(self, episode):
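A quick check (a hypothetical snippet, not part of the diff) that the widened _VALID_URL now also accepts an optional language path segment such as /en/:

import re

pattern = r'https?://www\.ertflix\.gr/(?:[^/]+/)?(?:series|vod)/(?P<id>[a-z]{3}\.\d+)'
for u in ('https://www.ertflix.gr/vod/vod.173258-aoratoi-ergates',
          'https://www.ertflix.gr/en/vod/vod.127652-ta-kalytera-mas-chronia-ep1-mia-volta-sto-feggari'):
    print(re.match(pattern, u).group('id'))  # -> vod.173258, then vod.127652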


@@ -1,10 +1,10 @@
 import base64
 import json
 import re
-import urllib
+import urllib.parse

-from .common import InfoExtractor
 from .adobepass import AdobePassIE
+from .common import InfoExtractor
 from .once import OnceIE
 from ..utils import (
     determine_ext,
@@ -197,7 +197,7 @@ class ESPNArticleIE(InfoExtractor):
     @classmethod
     def suitable(cls, url):
-        return False if (ESPNIE.suitable(url) or WatchESPNIE.suitable(url)) else super(ESPNArticleIE, cls).suitable(url)
+        return False if (ESPNIE.suitable(url) or WatchESPNIE.suitable(url)) else super().suitable(url)

     def _real_extract(self, url):
         video_id = self._match_id(url)
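The change relies on Python 3's zero-argument super(), which resolves the class and the bound cls from the enclosing scope; a minimal illustration with toy classes (not the extractor):

class Base:
    @classmethod
    def suitable(cls, url):
        return url.startswith('http')

class Child(Base):
    @classmethod
    def suitable(cls, url):
        # equivalent to super(Child, cls).suitable(url), just shorter
        return not url.endswith('.jpg') and super().suitable(url)

print(Child.suitable('https://example.com/page'))  # -> True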


@@ -1,18 +1,18 @@
 import json
 import re
+import urllib.parse

 from .common import InfoExtractor
 from ..compat import (
     compat_etree_fromstring,
     compat_str,
     compat_urllib_parse_unquote,
-    compat_urllib_parse_unquote_plus,
 )
 from ..utils import (
+    ExtractorError,
     clean_html,
     determine_ext,
     error_to_compat_str,
-    ExtractorError,
     float_or_none,
     get_element_by_id,
     get_first,
@@ -467,7 +467,7 @@ class FacebookIE(InfoExtractor):
         dash_manifest = video.get('dash_manifest')
         if dash_manifest:
             formats.extend(self._parse_mpd_formats(
-                compat_etree_fromstring(compat_urllib_parse_unquote_plus(dash_manifest))))
+                compat_etree_fromstring(urllib.parse.unquote_plus(dash_manifest))))

     def process_formats(formats):
         # Downloads with browser's User-Agent are rate limited. Working around


@@ -78,7 +78,7 @@ class FC2IE(InfoExtractor):
         webpage = None
         if not url.startswith('fc2:'):
             webpage = self._download_webpage(url, video_id)
-            self._downloader.cookiejar.clear_session_cookies()  # must clear
+            self.cookiejar.clear_session_cookies()  # must clear
         self._login()

         title, thumbnail, description = None, None, None


@@ -31,7 +31,7 @@ class FoxgayIE(InfoExtractor):
         description = get_element_by_id('inf_tit', webpage)

         # The default user-agent with foxgay cookies leads to pages without videos
-        self._downloader.cookiejar.clear('.foxgay.com')
+        self.cookiejar.clear('.foxgay.com')
         # Find the URL for the iFrame which contains the actual video.
         iframe_url = self._html_search_regex(
             r'<iframe[^>]+src=([\'"])(?P<url>[^\'"]+)\1', webpage,


@@ -0,0 +1,30 @@
+from .common import InfoExtractor
+from ..utils import traverse_obj
+
+
+class FuyinTVIE(InfoExtractor):
+    _VALID_URL = r'https?://(?:www\.)?fuyin\.tv/html/(?:\d+)/(?P<id>\d+)\.html'
+    _TESTS = [{
+        'url': 'https://www.fuyin.tv/html/2733/44129.html',
+        'info_dict': {
+            'id': '44129',
+            'ext': 'mp4',
+            'title': '第1集',
+            'description': 'md5:21a3d238dc8d49608e1308e85044b9c3',
+        }
+    }]
+
+    def _real_extract(self, url):
+        video_id = self._match_id(url)
+        json_data = self._download_json(
+            'https://www.fuyin.tv/api/api/tv.movie/url',
+            video_id, query={'urlid': f'{video_id}'})
+        webpage = self._download_webpage(url, video_id, fatal=False)
+        return {
+            'id': video_id,
+            'title': traverse_obj(json_data, ('data', 'title')),
+            'url': json_data['data']['url'],
+            'ext': 'mp4',
+            'description': self._html_search_meta('description', webpage),
+        }
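traverse_obj walks a nested structure and returns None, rather than raising, when any step is missing; a deliberately simplified stand-in (not the real yt_dlp.utils.traverse_obj, which also supports lists, callables and more) to show the behavior the new extractor relies on:

def traverse(obj, path):
    # simplified: follow a tuple of dict keys, bailing out with None on a miss
    for key in path:
        if not isinstance(obj, dict):
            return None
        obj = obj.get(key)
    return obj

data = {'data': {'title': '第1集', 'url': 'https://example.com/v.mp4'}}
print(traverse(data, ('data', 'title')))    # -> 第1集
print(traverse(data, ('data', 'missing')))  # -> None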


@@ -1,5 +1,6 @@
 import os
 import re
+import urllib.parse
 import xml.etree.ElementTree

 from .ant1newsgr import Ant1NewsGrEmbedIE
@@ -106,12 +107,7 @@ from .yapfiles import YapFilesIE
 from .youporn import YouPornIE
 from .youtube import YoutubeIE
 from .zype import ZypeIE
-from ..compat import (
-    compat_etree_fromstring,
-    compat_str,
-    compat_urllib_parse_unquote,
-    compat_urlparse,
-)
+from ..compat import compat_etree_fromstring
 from ..utils import (
     KNOWN_EXTENSIONS,
     ExtractorError,
@@ -146,7 +142,7 @@ class GenericIE(InfoExtractor):
     IE_DESC = 'Generic downloader that works on some sites'
     _VALID_URL = r'.*'
     IE_NAME = 'generic'
-    _NETRC_MACHINE = False  # Supress username warning
+    _NETRC_MACHINE = False  # Suppress username warning
     _TESTS = [
         # Direct link to a video
         {
@@ -2703,7 +2699,7 @@ class GenericIE(InfoExtractor):
         title = self._html_search_meta('DC.title', webpage, fatal=True)

-        camtasia_url = compat_urlparse.urljoin(url, camtasia_cfg)
+        camtasia_url = urllib.parse.urljoin(url, camtasia_cfg)
         camtasia_cfg = self._download_xml(
             camtasia_url, video_id,
             note='Downloading camtasia configuration',
@@ -2719,7 +2715,7 @@ class GenericIE(InfoExtractor):
             entries.append({
                 'id': os.path.splitext(url_n.text.rpartition('/')[2])[0],
                 'title': f'{title} - {n.tag}',
-                'url': compat_urlparse.urljoin(url, url_n.text),
+                'url': urllib.parse.urljoin(url, url_n.text),
                 'duration': float_or_none(n.find('./duration').text),
             })
@@ -2771,7 +2767,7 @@ class GenericIE(InfoExtractor):
         if url.startswith('//'):
             return self.url_result(self.http_scheme() + url)

-        parsed_url = compat_urlparse.urlparse(url)
+        parsed_url = urllib.parse.urlparse(url)
         if not parsed_url.scheme:
             default_search = self.get_param('default_search')
             if default_search is None:
@@ -2829,12 +2825,22 @@ class GenericIE(InfoExtractor):
                     new_url, {'force_videoid': force_videoid})
             return self.url_result(new_url)

+        def request_webpage():
+            request = sanitized_Request(url)
+            # Some webservers may serve compressed content of rather big size (e.g. gzipped flac)
+            # making it impossible to download only chunk of the file (yet we need only 512kB to
+            # test whether it's HTML or not). According to yt-dlp default Accept-Encoding
+            # that will always result in downloading the whole file that is not desirable.
+            # Therefore for extraction pass we have to override Accept-Encoding to any in order
+            # to accept raw bytes and being able to download only a chunk.
+            # It may probably better to solve this by checking Content-Type for application/octet-stream
+            # after HEAD request finishes, but not sure if we can rely on this.
+            request.add_header('Accept-Encoding', '*')
+            return self._request_webpage(request, video_id)
+
         full_response = None
         if head_response is False:
-            request = sanitized_Request(url)
-            request.add_header('Accept-Encoding', '*')
-            full_response = self._request_webpage(request, video_id)
-            head_response = full_response
+            head_response = full_response = request_webpage()

         info_dict = {
             'id': video_id,
@@ -2847,7 +2853,7 @@ class GenericIE(InfoExtractor):
         m = re.match(r'^(?P<type>audio|video|application(?=/(?:ogg$|(?:vnd\.apple\.|x-)?mpegurl)))/(?P<format_id>[^;\s]+)', content_type)
         if m:
             self.report_detected('direct video link')
-            format_id = compat_str(m.group('format_id'))
+            format_id = str(m.group('format_id'))
             subtitles = {}
             if format_id.endswith('mpegurl'):
                 formats, subtitles = self._extract_m3u8_formats_and_subtitles(url, video_id, 'mp4')
@@ -2872,19 +2878,7 @@ class GenericIE(InfoExtractor):
         self.report_warning(
             '%s on generic information extractor.' % ('Forcing' if force else 'Falling back'))

-        if not full_response:
-            request = sanitized_Request(url)
-            # Some webservers may serve compressed content of rather big size (e.g. gzipped flac)
-            # making it impossible to download only chunk of the file (yet we need only 512kB to
-            # test whether it's HTML or not). According to yt-dlp default Accept-Encoding
-            # that will always result in downloading the whole file that is not desirable.
-            # Therefore for extraction pass we have to override Accept-Encoding to any in order
-            # to accept raw bytes and being able to download only a chunk.
-            # It may probably better to solve this by checking Content-Type for application/octet-stream
-            # after HEAD request finishes, but not sure if we can rely on this.
-            request.add_header('Accept-Encoding', '*')
-            full_response = self._request_webpage(request, video_id)
+        full_response = full_response or request_webpage()

         first_bytes = full_response.read(512)

         # Is it an M3U playlist?
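The comment inside request_webpage describes the trick this refactor deduplicates; a standalone sketch of the same idea (plain urllib with a made-up URL, not the extractor's helper) that sniffs only the first 512 bytes of a response:

import urllib.request

req = urllib.request.Request('https://example.com/some-media-file')
# '*' lets the server send raw bytes, so we are not forced to download a
# fully compressed body just to inspect the first chunk
req.add_header('Accept-Encoding', '*')
with urllib.request.urlopen(req) as resp:
    first_bytes = resp.read(512)
print(b'<html' in first_bytes.lower())  # crude HTML-or-not check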
@@ -2966,7 +2960,7 @@ class GenericIE(InfoExtractor):
         # Unescaping the whole page allows to handle those cases in a generic way
         # FIXME: unescaping the whole page may break URLs, commenting out for now.
         # There probably should be a second run of generic extractor on unescaped webpage.
-        # webpage = compat_urllib_parse_unquote(webpage)
+        # webpage = urllib.parse.unquote(webpage)

         # Unescape squarespace embeds to be detected by generic extractor,
         # see https://github.com/ytdl-org/youtube-dl/issues/21294
@@ -3239,7 +3233,7 @@ class GenericIE(InfoExtractor):
             return self.url_result(mobj.group('url'))
         mobj = re.search(r'class=["\']embedly-embed["\'][^>]src=["\'][^"\']*url=(?P<url>[^&]+)', webpage)
         if mobj is not None:
-            return self.url_result(compat_urllib_parse_unquote(mobj.group('url')))
+            return self.url_result(urllib.parse.unquote(mobj.group('url')))

         # Look for funnyordie embed
         matches = re.findall(r'<iframe[^>]+?src="(https?://(?:www\.)?funnyordie\.com/embed/[^"]+)"', webpage)
@@ -3492,7 +3486,7 @@ class GenericIE(InfoExtractor):
             r'<iframe[^>]+src="(?:https?:)?(?P<url>%s)"' % UDNEmbedIE._PROTOCOL_RELATIVE_VALID_URL, webpage)
         if mobj is not None:
             return self.url_result(
-                compat_urlparse.urljoin(url, mobj.group('url')), 'UDNEmbed')
+                urllib.parse.urljoin(url, mobj.group('url')), 'UDNEmbed')

         # Look for Senate ISVP iframe
         senate_isvp_url = SenateISVPIE._search_iframe_url(webpage)
@@ -3725,7 +3719,7 @@ class GenericIE(InfoExtractor):
         if mediasite_urls:
             entries = [
                 self.url_result(smuggle_url(
-                    compat_urlparse.urljoin(url, mediasite_url),
+                    urllib.parse.urljoin(url, mediasite_url),
                     {'UrlReferrer': url}), ie=MediasiteIE.ie_key())
                 for mediasite_url in mediasite_urls]
             return self.playlist_result(entries, video_id, video_title)
@@ -3920,11 +3914,11 @@ class GenericIE(InfoExtractor):
         subtitles = {}
         for source in sources:
             src = source.get('src')
-            if not src or not isinstance(src, compat_str):
+            if not src or not isinstance(src, str):
                 continue
-            src = compat_urlparse.urljoin(url, src)
+            src = urllib.parse.urljoin(url, src)
             src_type = source.get('type')
-            if isinstance(src_type, compat_str):
+            if isinstance(src_type, str):
                 src_type = src_type.lower()
             ext = determine_ext(src).lower()
             if src_type == 'video/youtube':
@@ -3958,7 +3952,7 @@ class GenericIE(InfoExtractor):
                 if not src:
                     continue
                 subtitles.setdefault(dict_get(sub, ('language', 'srclang')) or 'und', []).append({
-                    'url': compat_urlparse.urljoin(url, src),
+                    'url': urllib.parse.urljoin(url, src),
                     'name': sub.get('label'),
                     'http_headers': {
                         'Referer': full_response.geturl(),
@@ -3985,7 +3979,7 @@ class GenericIE(InfoExtractor):
                 return True
             if RtmpIE.suitable(vurl):
                 return True
-            vpath = compat_urlparse.urlparse(vurl).path
+            vpath = urllib.parse.urlparse(vurl).path
             vext = determine_ext(vpath, None)
             return vext not in (None, 'swf', 'png', 'jpg', 'srt', 'sbv', 'sub', 'vtt', 'ttml', 'js', 'xml')
@@ -4113,7 +4107,7 @@ class GenericIE(InfoExtractor):
         if refresh_header:
             found = re.search(REDIRECT_REGEX, refresh_header)
             if found:
-                new_url = compat_urlparse.urljoin(url, unescapeHTML(found.group(1)))
+                new_url = urllib.parse.urljoin(url, unescapeHTML(found.group(1)))
                 if new_url != url:
                     self.report_following_redirect(new_url)
                     return {
@@ -4139,8 +4133,8 @@ class GenericIE(InfoExtractor):
         for video_url in orderedSet(found):
             video_url = unescapeHTML(video_url)
             video_url = video_url.replace('\\/', '/')
-            video_url = compat_urlparse.urljoin(url, video_url)
-            video_id = compat_urllib_parse_unquote(os.path.basename(video_url))
+            video_url = urllib.parse.urljoin(url, video_url)
+            video_id = urllib.parse.unquote(os.path.basename(video_url))

             # Sometimes, jwplayer extraction will result in a YouTube URL
             if YoutubeIE.suitable(video_url):


@@ -1,13 +1,8 @@
 import itertools

 from .common import InfoExtractor
-from ..utils import (
-    qualities,
-    compat_str,
-    parse_duration,
-    parse_iso8601,
-    str_to_int,
-)
+from ..compat import compat_str
+from ..utils import parse_duration, parse_iso8601, qualities, str_to_int

 class GigaIE(InfoExtractor):

Some files were not shown because too many files have changed in this diff.