Sometimes when I use the full channel download code with YouTube dl I get an error for a video 'unable to extract video data' . Is there a way to continue doing the full channel download after this happens instead of doing it one by one?
Go to the paste link's download all.
could we do a private KF tracker or some shit, skipping websites
Lol, you can always email or DM it to Josh.
I have an old ThinkPad I could use for a server.
I'm hoping some big-brain volk can figure out a homebrew, as suggested a while back.
9 times out of 10 for me, adding -u and -p fixes the ones that were unable to extract. Whenever I see that "unable to extract" error, I cancel the whole run, delete everything it's downloaded so far, and run the command again with my username and password attached. Now I always add those credentials whenever I'm doing playlists. I know that doesn't really answer your question, but I'm throwing it out there in case it helps.
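For what it's worth, the direct answer to the "continue instead of doing it one by one" question is youtube-dl's -i/--ignore-errors flag, which logs a failed video and moves on instead of aborting the whole channel download. A minimal sketch combining that with the -u/-p habit above (the channel URL and credentials would be your own; the helper function is mine, not part of youtube-dl):

```python
import subprocess

def download_channel(url, username=None, password=None, run=False):
    # -i / --ignore-errors: on "unable to extract video data", log the
    # failure and continue with the next video instead of aborting
    cmd = ["youtube-dl", "-i", url]
    if username and password:
        cmd += ["-u", username, "-p", password]
    if run:
        subprocess.run(cmd, check=False)  # requires youtube-dl on PATH
    return cmd
```

The same thing from a plain command line is just `youtube-dl -i -u USER -p PASS CHANNEL_URL`.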
You can use this extension here: Easy Youtube Video Downloader Express. You will need to be logged in on YouTube, of course. (It's not available for Chrome, though.)
I'm struggling to download the following video:
Here's what I've tried (I'm on Windows). Within youtube-dl, I've tried updating/refreshing the cookie file with --cookies, logging in manually with -u and -p (with multiple accounts), and confirmed I'm running the latest version. I ran it with and without my VPN, and set the VPN to a couple of different countries to see if that made a difference (it didn't). I also tried JDownloader2 and the youtube-dlc fork, as well as a few browser-based methods; none could find the download link.
Based on the research I've done, it looks like the issue is caused by the age-gating on the video. This post on GitHub describes the issue as well as a workaround. Dropping the text of that in the spoiler below.
1. I simply used the curl command as stated above with actual parameters (replace REDACTED with your values).
2. Copied the response (html object).
3. Put that into a file (e.g. youtube_html.txt)
4. Then I searched for the age-restriction line and removed it - otherwise the age-restriction check fires and tries an alternative approach/workaround, but since we already have all the data in the HTML object, we don't want that.
<meta property="og:restrictions:age" content="18+"> <- remove that in your txt file.
5. Overwrite video_webpage (do not remove the existing line; add a new line with the same variable name):
youtube-dlc/youtube_dlc/extractor/youtube.py
Line 1800 in 2045de7
video_webpage, urlh = self._download_webpage_handle(url, video_id)
Change this by defining the variable again - keep the original line and add a new line below it:
video_webpage, urlh = self._download_webpage_handle(url, video_id)
video_webpage = open("FULL PATH TO YOUR SAVED FILE", "r", encoding="utf-8").read()
- python3 -m youtube_dlc 7takIh1nK0s -F -v
Alternative to Step 1 - Ctrl+U (show source) and copy that. It should do the same, but I haven't tested it.
Keep in mind that you cannot download any other YouTube video while this is in place. This is just a POC (proof of concept), so someone may be able to, or want to, fix the issue properly.
Another solution would be to add a feature that uses files, when declared, instead of actual web pages. That way you could feed in data from another tool. It should also get around geo-restricted videos if someone provides the HTML object, but that is just theory and would need to be tested.
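The core of steps 3-5 above boils down to "read the saved page and delete the age-restriction tag before handing it to the extractor". A minimal sketch of that piece (the function name is mine, not from youtube-dlc; the path is whatever you saved the curl/Ctrl+U output to):

```python
import re

def load_degated_webpage(path):
    # Step 3: load the page you saved from curl / Ctrl+U
    with open(path, "r", encoding="utf-8") as f:
        html = f.read()
    # Step 4: strip the og:restrictions:age meta tag so the extractor's
    # age-gate check never triggers
    return re.sub(
        r'<meta property="og:restrictions:age" content="18\+">', "", html
    )
```

The return value is what step 5 assigns to video_webpage in place of the freshly downloaded page.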
I believe this video is a special case where it requires you to be logged in to view it, and in addition it's age-restricted.
This is where I'm stuck. Actually, I got past that; leaving it here, though, so in case anyone actually looks at this they can check my work, lol.
I'm on step 5 of the fix now, trying to figure out how to replicate this step, but I literally have no idea what I'm doing. View attachment 1630941
----
"curl 'https://www.youtube.com/watch?v=ouvjY5RrzXk' -H 'cookie: VISITOR_INFO1_LIVE=REDACTED; __Secure-3PSID=REDACTED; LOGIN_INFO=REDACTED;' --compressed"
Invoke-WebRequest : Cannot bind parameter 'Headers'. Cannot convert the "cookie: VISITOR_INFO1_LIVE=REDACTED; __Secure-3PSID=REDACTED; LOGIN_INFO=REDACTED;" value of type "System.String" to type "System.Collections.IDictionary".
At line:1 char:57
+ ... RrzXk' \ -H 'cookie: VISITOR_INFO1_LIVE=REDACTED; __Secure-3PSID= ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : InvalidArgument: (:) [Invoke-WebRequest], ParameterBindingException
+ FullyQualifiedErrorId : CannotConvertArgumentNoMessage,Microsoft.PowerShell.Commands.InvokeWebRequestCommand "
(Note: I don't actually have "REDACTED" in the cookie section of that command; I copied the info from my cookies.txt file after doing a fresh export with an account I know to be 18+.)
If I enter the command as:
"curl.exe 'https://www.youtube.com/watch?v=ouvjY5RrzXk' -H 'cookie: VISITOR_INFO1_LIVE=REDACTED; __Secure-3PSID=REDACTED; LOGIN_INFO=REDACTED;' --compressed"
I get:
"curl: option --compressed: the installed libcurl version doesn't support this"
If I run it without the --compressed option, I get a goddamn novel's worth of information (162 pages in a Word doc).
Does anyone know: do I need to update something regarding the libcurl version, run the command differently, or should I just paste that whole thing into a Word doc, search for what I need to change, and figure out where the youtube.html text file I'm supposed to edit resides? Guessing that I need to save a copy of the original text to revert it after successfully downloading this, right?
I misunderstood what I was reading; now I think I just need to paste it into a document and switch some stuff around.
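Side note on the errors above: in Windows PowerShell, bare `curl` is an alias for Invoke-WebRequest, which expects a dictionary for -Headers rather than a string - that's why the first attempt failed to bind. The same logged-in fetch can also be done from Python's standard library; a sketch, with the cookie values as placeholders just like the original command:

```python
import urllib.request

def build_watch_request(video_id, cookie_header):
    # The Cookie header carries the logged-in session, like curl's -H option
    return urllib.request.Request(
        "https://www.youtube.com/watch?v=" + video_id,
        headers={"Cookie": cookie_header},
    )

# To actually fetch and save the page (needs network access):
# req = build_watch_request("ouvjY5RrzXk", "VISITOR_INFO1_LIVE=REDACTED; ...")
# with urllib.request.urlopen(req) as r:
#     open("youtube_html.txt", "w", encoding="utf-8").write(
#         r.read().decode("utf-8", errors="replace"))
```

That writes the raw HTML straight to a file, so there's no need to round-trip a 162-page response through a Word doc.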
Where do I go to overwrite that line / how do I get to it? I also think I may have an issue with the way I installed Python, so I'll look into that. But in the meantime, should anyone take pity on my retarded ass and feel like pointing me in the right direction, I'd be ever grateful.
I humbly accept the impending autistic ratings, for this is truly embarrassing. Both the fact that I haven't solved this and that I've spent this long attempting to archive a video of a death fat talking about other death fats. When I do figure it out, I'll update this post with what I did in case another farmer runs into the same problem.
I know I can screen record it from my phone, I've already done so. I just want to understand/fix this so the next time I run into an error like this, I'll know what to do. <3
Edit2:
Thanks for that, I'll definitely keep it as an option in my back pocket for future reference. I prefer not to fuck with extensions, but as a last resort it's nice to have another method. Confirmed that it does work on this video. <3 Thank you so much, @Blitzkrieger! <3
Okay, I figured out the syntax of the command and got it to execute properly and give me what I need. Also, I'm a dumbass: I could have just opened the page the video is on and hit Ctrl+U to show the source. LMAO. For some reason I insist on doing things the hard way.
I still have no idea what I'm doing, but after reading it again and again and again, I figured out what I needed to do on step 5.
Most recent edit:
Okay, so I had to download the youtube-dlc repository and manually edit the youtube.py code, which I did. This is where I'm stuck now:
python3 -m youtube_dlc ouvjY5RrzXk -F -v
C:\Users\redacted\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\python.exe: No module named youtube_dlc
Anyone know if I need to move a file around or something?
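A likely explanation (my note, not from the thread): "No module named youtube_dlc" means Python can't find the youtube_dlc package on its module search path. Running `python3 -m youtube_dlc` from inside the cloned repo root usually fixes it; installing the package with pip should also work. A sketch of what's going on (the repo path here is hypothetical):

```python
import sys

def add_repo_to_path(repo_root):
    # `python3 -m youtube_dlc` only works if the youtube_dlc package is
    # importable, i.e. the repo root is on sys.path - which is what
    # happens implicitly when you cd into the repo before running it
    if repo_root not in sys.path:
        sys.path.insert(0, repo_root)
    return sys.path[0]
```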
$ ytdl ouvjY5RrzXk
139 m4a audio only DASH audio 50k , m4a_dash container, mp4a.40.5@ 48k (22050Hz)
140 m4a audio only DASH audio 131k , m4a_dash container, mp4a.40.2@128k (44100Hz)
134 mp4 640x360 DASH video 15k , mp4_dash container, avc1.4d401e, 24fps, video only
160 mp4 256x144 DASH video 108k , mp4_dash container, avc1.4d400b, 24fps, video only
133 mp4 426x240 DASH video 242k , mp4_dash container, avc1.4d400c, 24fps, video only
137 mp4 1920x1080 DASH video 438k , mp4_dash container, avc1.640028, 24fps, video only
135 mp4 854x480 DASH video 1155k , mp4_dash container, avc1.4d4014, 24fps, video only
136 mp4 1280x720 DASH video 2310k , mp4_dash container, avc1.4d400a, 24fps, video only
18 mp4 640x360 360p 104k , avc1.42001E, 24fps, mp4a.40.2@ 96k (44100Hz), 10.95MiB
22 mp4 1280x720 720p 158k , avc1.64001F, 24fps, mp4a.40.2@192k (44100Hz) (best)
$ export YTDL_OPT1=
$ ytdl ouvjY5RrzXk
ERROR: ouvjY5RrzXk: YouTube said: Unable to extract video data
$ export YTDL_OPT1=--cookies=~/cookies.txt
$ ytdl 360 ouvjY5RrzXk
[youtube] ouvjY5RrzXk: Downloading webpage
[youtube] ouvjY5RrzXk: Downloading MPD manifest
[dashsegments] Total fragments: 165
[download] Destination: on piggy situation-ouvjY5RrzXk-240p.f133.mp4
[download] 100% of 610.05KiB in 00:57
[dashsegments] Total fragments: 89
[download] Destination: on piggy situation-ouvjY5RrzXk-NAp.f140.m4a
[download] 100% of 13.55MiB in 00:09
[ffmpeg] Merging formats into "on piggy situation-ouvjY5RrzXk-240p.mp4"
Deleting original file on piggy situation-ouvjY5RrzXk-240p.f133.mp4 (pass -k to keep)
Deleting original file on piggy situation-ouvjY5RrzXk-NAp.f140.m4a (pass -k to keep)
export YTDL_OPT1="--cookies=~/cookies.txt"
#export YTDL_OPT1=""
function ytdl
{
if [ -z "$2" ]
then
youtube-dl $YTDL_OPT1 -F $1 | grep mp4
else
case $1 in
mp3)
youtube-dl $YTDL_OPT1 --extract-audio --audio-format mp3 --audio-quality 128K -f m4a $2
;;
aac)
youtube-dl $YTDL_OPT1 --extract-audio --audio-format aac -f m4a $2
;;
m4a)
youtube-dl $YTDL_OPT1 -f m4a $2
;;
jpg)
youtube-dl $YTDL_OPT1 --write-thumbnail --skip-download $2
;;
-F)
youtube-dl $YTDL_OPT1 -F $2 | grep mp4
;;
*)
# youtube-dl $YTDL_OPT1 --embed-thumbnail -f "bestvideo[height<=?${1}][fps<?60][ext=mp4]+bestaudio[ext=m4a]/best[height<=?${1}][ext=mp4][fps<?60]" "${2}" "-o%(title)s-%(id)s-%(height)sp.%(ext)s"
youtube-dl $YTDL_OPT1 -f "bestvideo[height<=?${1}][fps<?60][ext=mp4]+bestaudio[ext=m4a]/best[height<=?${1}][ext=mp4][fps<?60]" "${2}" "-o%(title)s-%(id)s-%(height)sp.%(ext)s"
;;
esac
fi
}
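The format string in the ytdl function above caps the video height at the first argument (e.g. `ytdl 360 VIDEOID` picks the best stream no taller than 360p). The same selection idea, sketched in Python over a parsed `-F` table like the one shown earlier (this helper is mine for illustration, not part of youtube-dl):

```python
def pick_format(formats, max_height):
    # formats: list of (format_id, height) pairs, e.g. parsed from
    # `youtube-dl -F` output; returns the tallest one within the cap
    eligible = [f for f in formats if f[1] <= max_height]
    return max(eligible, key=lambda f: f[1]) if eligible else None
```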
It's the age gating. You need to be logged in or, for youtube-dl, you need to have the cookies of a valid login.
There's a handy Chrome extension which can grab cookies described here
https://kiwifarms.net/threads/archival-tools.6561/post-7135926
Here's the latest version of ytdl which uses cookies.txt
If you get an "Unable to extract video data" error, just regenerate the cookies file. It seems like Youtube rewrites them from time to time.
It doesn't work for me with a fresh cookies file, though. I tried that initially. This post on GitHub describes the exact issue, so I'm wondering whether it would work if I downgraded my youtube-dl version, since it clearly works for you. Sorry to blow this thread up with my idiocy, guys.
Thank you all for trying to help me, I really appreciate you taking the time to look at it. <3
I'm using this version:
$ youtube-dl --version
2020.07.28
BTW, your cookies file should look like this:
# Netscape HTTP Cookie File
# This file is generated by youtube-dl. Do not edit.
.youtube.com TRUE / FALSE [cookie info]
.youtube.com FALSE / FALSE [cookie info]
.youtube.com TRUE / TRUE [cookie info]
I'm on:
[debug] youtube-dl version 2020.09.20
In which case I can try that version.
PS C:\Users\redacted> .\youtube-dl-2020.07.28.exe --cookies C:\Users\redacted\cookies.txt ouvjY5RrzXk -f 18
[youtube] ouvjY5RrzXk: Downloading webpage
[youtube] ouvjY5RrzXk: Downloading MPD manifest
[download] Destination: on piggy situation-ouvjY5RrzXk.mp4
[download] 100% of 10.95MiB in 00:06
Thanks, you always have great tips.
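The cookies file shown above is the Netscape format, which Python's standard library can parse directly - handy as a quick sanity check on a freshly exported cookies.txt before handing it to youtube-dl (the helper is a sketch of mine; pass the path to your own file):

```python
import http.cookiejar

def check_cookies(path):
    # Loads a Netscape-format cookies.txt; raises LoadError if the file
    # is malformed, otherwise returns the cookie names it found
    jar = http.cookiejar.MozillaCookieJar()
    jar.load(path, ignore_discard=True, ignore_expires=True)
    return [c.name for c in jar]
```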
Idk, but the second name looks like Kathryn.
How can I un-redact names from screenshots that were redacted using a shitty markup tool in a mobile OS?
I've seen people do some image-editing fuckery to reveal the stuff that was rubbed out of screenshots like this:
View attachment 1680445
But I don't know what steps to take or what settings to manipulate in tools such as GIMP, in order to recover anything that's recoverable. In that example, the right-hand name is pretty readable without any manipulation at all, but I'm not sure if the left-hand name can be saved. Any advice on shit like this?
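One common approach (my note, not from the thread): a hard levels/contrast stretch. Marker tools often leave the rubbed-out text at a slightly different pixel value than the marker ink, and remapping that narrow value range to full black-to-white makes it readable. In GIMP this lives under Colors > Levels (drag the input sliders close together around the marker's value). The core per-pixel operation, sketched on raw grayscale values:

```python
def stretch_levels(pixels, lo, hi):
    # Map the input window [lo, hi] onto [0, 255], clamping everything
    # outside it - the digital equivalent of GIMP's Levels input sliders
    out = []
    for p in pixels:
        v = (p - lo) * 255 // max(hi - lo, 1)
        out.append(min(255, max(0, v)))
    return out
```

Whether anything is recoverable depends on the redaction: if the marker was fully opaque over the text, no amount of stretching will bring it back.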
Based on fucking around with the font used for iOS texts (SF Pro) it's Stephanie. It fits perfectly in that space, anyway.
e:
View attachment 1680553