This will allow us to decide much faster that we don't want an already archived video,
and will avoid having to download webpages for each video that has already been downloaded,
thus significantly speeding up the archival of channels that have no new content.
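A minimal sketch of the idea, assuming a plain-text archive file with one "<extractor> <id>" entry per downloaded video (the helper names are illustrative, not the actual YoutubeDL methods):

def load_archive(path):
    # Read the download archive into a set of "<extractor> <id>" entries.
    try:
        with open(path, encoding='utf-8') as f:
            return {line.strip() for line in f if line.strip()}
    except FileNotFoundError:
        return set()

def already_archived(archive, extractor_key, video_id):
    # Check the archive before fetching the video's webpage.
    return '%s %s' % (extractor_key.lower(), video_id) in archive

archive = load_archive('archive.txt')
if already_archived(archive, 'youtube', 'dQw4w9WgXcQ'):
    pass  # skip this video without downloading its webpage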
The problem was in the following code:
class ArteTVPlus7IE(ArteTVBaseIE):
    ...
    @classmethod
    def suitable(cls, url):
        return False if ArteTVPlaylistIE.suitable(url) else super(ArteTVPlus7IE, cls).suitable(url)
And its subclasses, like ArteTVCinemaIE.
Since in the lazy_extractors.py file ArteTVCinemaIE was not a subclass of ArteTVPlus7IE, super(ArteTVPlus7IE, cls) failed.
To fix it, we have to make it a subclass there as well. Since the order of _ALL_CLASSES is arbitrary, we must sort them so that the base classes are defined first. We also must add base classes like YoutubeBaseInfoExtractor.
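A minimal sketch of such an ordering pass, assuming bases that are not listed extractor classes can be ignored (function and variable names are illustrative, not the exact devscripts code):

def sort_ies(ies, ignored_bases):
    # Order extractor classes so every base class precedes its subclasses.
    ordered, pending = [], list(ies)
    while pending:
        progressed = False
        for cls in list(pending):
            unresolved = [b for b in cls.__bases__
                          if b not in ignored_bases and b not in ordered]
            if not unresolved:
                ordered.append(cls)
                pending.remove(cls)
                progressed = True
        if not progressed:
            raise RuntimeError('missing base class or circular dependency')
    return ordered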
+ [svt:play] Add fallback path looking for video id and fix extraction for oppetarkiv
* [svt:base] Detect geo restriction
* [svt:base] Extract series related metadata
youtube-dl expects the format items to be returned as a list,
but when there is only one item Facebook returns a dict instead;
this change wraps the dict in a list if necessary.
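A minimal sketch of the normalization (the helper name is illustrative, not the actual extractor code):

def as_list(format_items):
    # Facebook returns a dict when there is only one format item;
    # youtube-dl expects a list, so wrap the dict if necessary.
    if isinstance(format_items, dict):
        return [format_items]
    return format_items or []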
In order to add a new key to both the __objects and __functions dicts in jsinterp.py, it was
necessary to first verify whether the key was present and, if not, create it and
assign it a value.
However, this can be done in a single step using the dict.setdefault method.
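For illustration, the two-step pattern and its setdefault equivalent on a plain dict:

objects = {}

# Before: verify the key exists, create it if not, then use it
if 'key' not in objects:
    objects['key'] = {}
objects['key']['name'] = 'value'

# After: dict.setdefault does the check-and-create in a single step
objects.setdefault('key', {})['name'] = 'value'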
Removed print statement from code.
Replaced two regex searches with the correct ones.
Removed some unnecessary semicolons
Fixed title extraction
Refactored everything to use search_regex
Addressed review comments on commit 5650b0d and fixed issues reported by flake8
Improved regexes; the extractor now returns an info dict.
Added support for closertotruth interview URL
Added support for episodes page
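For illustration only, a sketch of the general shape of a search_regex-based extractor that returns an info dict; the URL pattern, regexes, and field names below are assumptions, not the real closertotruth extractor code:

from .common import InfoExtractor

class CloserToTruthSketchIE(InfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?closertotruth\.com/(?:interviews|episodes)/(?P<id>[^/?#]+)'

    def _real_extract(self, url):
        display_id = self._match_id(url)
        webpage = self._download_webpage(url, display_id)
        # Pull the fields out of the page with _search_regex instead of ad-hoc parsing.
        title = self._search_regex(
            r'<title>([^<]+)</title>', webpage, 'title')
        video_url = self._search_regex(
            r'["\'](https?://[^"\']+\.mp4)["\']', webpage, 'video URL')
        # Return a standard info dict rather than loose values.
        return {
            'id': display_id,
            'title': title,
            'url': video_url,
        }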