Archival Tools - If you make a thread without archiving shit I will strangle you.

Leonard Helplessness

kiwifarms.net
You can use any image editing program that allows you to adjust levels, contrast, or color values. "Kathryn" is clearly visible while "Stephanie" only has "st-p-nie" visible.
Here are two examples:
Just adjusting levels:
[attached image]

Inverting colors before adjusting levels:
[attached image]
Thanks for the info! iOS's markup tool is amazingly shitty in that it just darkens parts of an image rather than completely blacking out the text, so it's a lot of fun to take a badly-redacted image and dig out what was improperly hidden.
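If you'd rather script these passes than click through an editor, ImageMagick can do the same thing from the command line. A minimal sketch, assuming ImageMagick is installed (filenames and percentages are hypothetical; tune them per image):

Bash:
# Stretch the levels so "darkened" text separates from true black:
convert redacted.png -level 0%,15% levels.png
# Invert first, then stretch, for text hidden near white:
convert redacted.png -negate -level 0%,15% inverted-levels.png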
 

naught

I'm going to unlock all the achievements.
kiwifarms.net
youtube-dl is down... source and happenings:
(https://twitter.com/Job4_2/status/1319734016287924226)
(https://archive.md/191Zb)
[attached screenshot]

(https://archive.vn/uJGeE)
(https://archive.md/JDFsI)
Alternatives needed..
From the guy who made pewtube..
Also tweetsave seems to be fucked..
[attached screenshot]
 

Condicional

Mostaccioli
kiwifarms.net
I don't know if anyone has noticed this already (or if it's the reason imgur is considered unsafe), but if you click an imgur link from inside the forum, you get blocked (probably because the request says you're coming from kiwifarms). If you copy the URL and open it manually, you won't be.

Code:
Access to i.imgur.com is denied 
You are not authorized to view this page.
HTTP ERROR 403

There are a couple of threads (especially ADF ones) that archive with imgur; maybe we should take care of that?
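If the block is keyed off the Referer header, it's easy to check from the command line. A hedged sketch (the image URL here is hypothetical):

Bash:
# Plain request; expect 200 if the image is up:
curl -s -o /dev/null -w "%{http_code}\n" "https://i.imgur.com/AbCdEfG.jpg"
# Same request claiming to come from the forum; expect 403 if the block is referer-based:
curl -s -o /dev/null -w "%{http_code}\n" --referer "https://kiwifarms.net/" "https://i.imgur.com/AbCdEfG.jpg"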
 

ScatmansWorld

kiwifarms.net
Very comprehensive lists of archival tools.

 
Last edited:

3119967d0c

"a brain" - @REGENDarySumanai
True & Honest Fan
kiwifarms.net
Are there any tools out there to archive soundbites outside of video sites? If so, is there any way I can transfer those soundbites into an MP3?
Not quite sure what you're referring to. Do you mean ways to download material from sites like Vocaroo, or podcast sites like SoundCloud, Anchor, etc.? If so, youtube-dl & JDownloader handle most of these. If you just want the audio from a video, you can get that in JDownloader pretty easily, or with the '-x' flag in youtube-dl when downloading a link (adding --audio-format mp3 if you really want an mp3 rather than an m4a).

If you want to take specific segments of audio from an audio or video file, it's probably easiest to download the whole thing and make clips from it in Audacity.
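For reference, here are the youtube-dl flags mentioned above, plus ffmpeg as a no-GUI alternative to Audacity for clipping. A hedged sketch (the URL and timestamps are hypothetical):

Bash:
# Download and extract the audio as mp3:
youtube-dl -x --audio-format mp3 "https://soundcloud.com/someuser/sometrack"
# Cut a clip out of the result without re-encoding:
ffmpeg -i sometrack.mp3 -ss 00:01:23 -to 00:01:45 -c copy clip.mp3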
 

Twinkie

True & Honest Fan
kiwifarms.net
Are there any tools out there to archive soundbites outside of video sites? If so, is there any way I can transfer those soundbites into an MP3?
You can download videos as mp3s with one click using this site (use an adblocker). There are some restrictions; it won't work for absolutely everything, but it's a nice, quick, simple tool.
 

Ellesse_warrior

Plz dox thumb
True & Honest Fan
kiwifarms.net
I don't know what I'm doing wrong but I can't seem to full channel download anymore.

I was using this: youtube-dl.exe -o %(upload_date)s_%(id)s_%(title)s.%(ext)s -f best[ext=mp4] channel address

and I keep getting stuff like this:

youtube-dl.exe -o %(upload_date)s_%(id)s_%(title)s.%(ext)s -f best[ext=mp4] https://www.youtube.com/c/TardHub/featured
[youtube:user] TardHub: Downloading channel page
[youtube:playlist] UUhFurcY1PXG5oCyc22S74uA: Downloading webpage
[download] Downloading playlist: UUhFurcY1PXG5oCyc22S74uA
[youtube:playlist] playlist UUhFurcY1PXG5oCyc22S74uA: Downloading 0 videos
[download] Finished downloading playlist: UUhFurcY1PXG5oCyc22S74uA

Anyone know what I've done wrong?
 

garakfan69

Conjuring up money from lazy hoes
kiwifarms.net
I don't know what I'm doing wrong but I can't seem to full channel download anymore.

I was using this: youtube-dl.exe -o %(upload_date)s_%(id)s_%(title)s.%(ext)s -f best[ext=mp4] channel address

and I keep getting stuff like this:

youtube-dl.exe -o %(upload_date)s_%(id)s_%(title)s.%(ext)s -f best[ext=mp4] https://www.youtube.com/c/TardHub/featured
[youtube:user] TardHub: Downloading channel page
[youtube:playlist] UUhFurcY1PXG5oCyc22S74uA: Downloading webpage
[download] Downloading playlist: UUhFurcY1PXG5oCyc22S74uA
[youtube:playlist] playlist UUhFurcY1PXG5oCyc22S74uA: Downloading 0 videos
[download] Finished downloading playlist: UUhFurcY1PXG5oCyc22S74uA

Anyone know what I've done wrong?
It's fucked right now: https://kiwifarms.net/threads/youtube-dl-dmcad-by-the-riaa.78123/post-7731898
 

Gustav Schuchardt

Local Moderator
True & Honest Fan
kiwifarms.net
How to archive sites that youtube-dl and youtube-dlc don't support

Consider this site

https://www.mrctv.org/videos/former-fla-secretary-state-katherine-harris-we-let-al-gore-take-36-days

youtube-dlc fails like this:

Code:
$ ytdl https://www.mrctv.org/videos/former-fla-secretary-state-katherine-harris-we-let-al-gore-take-36-days
WARNING: Falling back on generic information extractor.
ERROR: Unsupported URL: https://www.mrctv.org/videos/former-fla-secretary-state-katherine-harris-we-let-al-gore-take-36-days

Open Developer Tools in Chrome, refresh the page, then clear the log because you don't care about all the HTML/CSS/PNG bloat. Then start the video playing. At this point you play 'guess which is the video'.

[attached screenshot: Developer Tools network log showing the stream requests]


If you guessed 'the .mp4 file', congratulations.

Right-click on the mp4 file and Copy Link Address. Now do this:

Code:
$ wget https://cdn.mrctv.org/videos/46692/46692-480p.mp4
--2020-11-16 18:44:56--  https://cdn.mrctv.org/videos/46692/46692-480p.mp4
Resolving cdn.mrctv.org (cdn.mrctv.org)... 13.35.193.74, 13.35.193.100, 13.35.193.114, ...
Connecting to cdn.mrctv.org (cdn.mrctv.org)|13.35.193.74|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 207658121 (198M) [video/mp4]
Saving to: ‘46692-480p.mp4’

46692-480p.mp4                 100%[====================================================>] 198.04M  17.3MB/s    in 12s

Now you have a local copy of the mp4!

If you see a bunch of .ts files instead, you'll need to write a bash for loop to pull those and then combine them using ffmpeg, which I explained how to do here.
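The loop itself is short. A hedged sketch, assuming 200 sequentially numbered segments at a hypothetical base URL:

Bash:
# Pull each segment:
for i in $(seq 0 199); do
    wget "https://cdn.example.com/stream/segment-${i}.ts"
done
# Concatenate and remux to mp4 without re-encoding:
printf "file 'segment-%d.ts'\n" $(seq 0 199) > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy output.mp4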

One interesting side effect of this technique is that it is resistant to attempts to protect streams from youtube-dl and youtube-dlc by hosting the .ts or .mp4 files at ephemeral, pseudorandom addresses. E.g. suppose YouTube protects videos by putting them at pseudorandom URLs which change, and youtube-dl and youtube-dlc do not yet know how to reverse engineer the JavaScript on the page that handles that. With this technique it doesn't matter: let the JavaScript work out the URLs, capture them in Developer Tools, and then download the files with wget.
 
Last edited:

So-and-so

kiwifarms.net
I've been trying to automate and script together a bunch of archiving routines so that I can set and forget them with cron, and I'd like to know if anyone here has found reliable command line tools for interfacing with archival services of all kinds. Any suggestions would be appreciated. Here's what I've found so far:

WaybackPack: lets you download the entire Wayback Machine archive for a given URL.

MegaTools: command line client for mega.nz. I've found that the experimental build (1.11.0) is more reliable. There are binaries on the site, or you could compile them yourself, if you're comfortable with that.

ArchiveNow: tool / webservice to push web resources to six public web archives at once (Internet Archive, archive.md, WebCite, Archive.st, Megalodon.jp, and Perma.cc).

Plowshare: set of Bash command line tools to upload and download from several file sharing websites. (Dozens of sites are supported, but some are legacy or unmaintained. Your mileage may vary.)

Gallery-dl: download image galleries and collections from image hosting sites (see supported sites).

DiscordChatExporter: export chats, DMs, etc. to a text file. I haven't tried it yet, but there are some servers that I'm looking to archive, so I expect I'll be working with it shortly.

Pastebin Scraper: automated tool to monitor pastebin for "interesting information". While it's usually used to look for password/email combos and the like, I feel like it has potential to be used to search via regex for information on people or communities of interest. Perhaps something to keep an eye on / experiment with.

I think that this is all I have to share for now. Hopefully nothing here has already been posted. If anyone out there is familiar with scripting, and has ideas/experience for how to streamline the usage of all these tools, feel free to get in touch. I'm basically learning python/bash as I go, so this is probably gonna be a slow project. Long term, I'd like to pull from RSS feeds and archive automatically so the DFE problem can be solved, and people can set up their own automation systems to keep an eye on their favourite cows without worrying about missing anything.
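As a starting point for gluing these into cron, here's a minimal sketch: it pushes every URL in a watchlist file to the Internet Archive via ArchiveNow. The file paths are hypothetical, and the --ia flag is as described in ArchiveNow's README, so verify it before relying on this:

Bash:
# crontab entry (via crontab -e): 0 3 * * * /home/user/Scripts/archive-watchlist.sh
while read -r url; do
    archivenow --ia "$url"
    sleep 5    # be gentle with the archive's rate limits
done < /home/user/watchlist.txt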

EDIT: Didn't see your link, @ScatmansWorld. Thank ya kindly. I'll dig through it now.
 
Last edited:

BlancoMailo

True & Honest Fan
kiwifarms.net
What's the best Youtube video archiver currently and how do I archive them to the farms directly?
I'll just quote my post here, since we get this question often.

How to quickly and easily archive videos: A step-by-step guide.
1. Download youtube-dl (https://youtube-dl.org/)
2. Watch Null's video on how to use youtube-dl (https://www.youtube.com/watch?v=clMu41B9r1o)
3. Download the desired video.
4. Download Avidemux (http://avidemux.sourceforge.net/)
5. Open the video in Avidemux.
6. Find the correct timestamps for the clip.
7. Watch a schizo taste herself off her dog's tongue
7a. Vomit.
7b. Clean vomit.
8. Now that you have the correct timestamps, use the double arrow buttons to find the keyframe just before the part you want starts (Avidemux cuts at keyframes only) and the one directly after it ends, to ensure nothing is lost.
9. At the bottom of the window, there will be a red "A" button and a white "B" button, "A" determines the start of your clip and "B" determines the end. Once you select your clip, you'll see a thin blue border around it on the video timeline.
10. Click on "Save Video" at the top, making sure both the Video and Audio Outputs show "Copy" (otherwise, it's going to rerender the entire fucking video and waste minutes of your life that you wish were valuable.)
10a. Also, make sure the Output Format says "MP4 Muxer" so it gives you the video in a KF-readily uploadable video format (you can change the default in your settings by going to "Edit"->"Save current settings as default" once you change the default muxer - this will save you time in the future).

You now have a video you can upload normally, either by dragging and dropping, or by going to the right edge of the textbox menu, clicking on the 3 vertical dots that show "More options..." when you hover your mouse, and then clicking the little movie camera option "Insert video".

BONUS STEP. Because the farms has a 100mb limit on video uploads, you might run into the problem I did, which is that the output file was 103mb. Avidemux does not rerender the file, so you end up with the same level of compression as the file you started with. You can either cut the file in half now and repeat steps 8-10 (quicker) or, if you want it to stay as one file, pop it into the video editor of your choice (even something as basic as Windows Movie Maker can work) and attempt to rerender it at the next lowest resolution; you may have to screw around with the render settings. Once the video file is under 100mb, you can upload it (assuming it doesn't look like indecipherable dogshit, in which case, just split the video instead).
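If you'd rather skip the GUI, ffmpeg's stream copy does the same keyframe-bound, no-rerender cut as the Avidemux steps above. A hedged equivalent (filenames and timestamps are placeholders):

Bash:
# Copies the streams without re-encoding; cuts land on keyframes, like Avidemux:
ffmpeg -i input.mp4 -ss 00:12:30 -to 00:14:10 -c copy clip.mp4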
 
Last edited:

awoo

Awootist
True & Honest Fan
kiwifarms.net
Anyone have experience using Internet Archive CLI? I believe it supports archiving any URL on demand so you could feed it long lists of URLs.
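For feeding it long lists, the Wayback Machine's public Save Page Now endpoint also accepts a plain GET per URL, so a shell loop works without any CLI at all. A hedged sketch (urls.txt is hypothetical):

Bash:
while read -r url; do
    curl -s "https://web.archive.org/save/${url}" > /dev/null
    sleep 10    # the endpoint rate-limits aggressive use
done < urls.txt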

What's the best Youtube video archiver currently and how do I archive them to the farms directly?
youtube-dl. There are many guides online. You can upload directly to the site or upload to the Internet Archive.
 

BlancoMailo

True & Honest Fan
kiwifarms.net
Another un-redacting conundrum, this time with a particularly revolting Twitter user. Trying to see our Pokemon-loving Muslim's Twitter handle; I can almost make out the whole username:

[attached image]

Trying to get better at uncovering badly-erased text.

The original tweet being responded to for reference: https://twitter.com/healthyles/status/1244857986276868096

The issue with the censoring in this one is that it happens to be deceptively effective in just the right areas to make it look easy while still being a bit of a pain in the ass.
Here's what I'm seeing (note, I'm assuming only standard English characters):
JoiteanTa(r or n)kar185
@Un____Shadow

I did a few runs: the first just using curves, the second using curves and color inversion, the third using color inversion followed by messing with the levels, and on the fourth I tried starting with the exposure, then the shadows/highlights, followed by sharpening. Some of the results are more extreme than others and some work better in different areas; when you combine the various pieces together, you get a better and quicker understanding of the overall puzzle than you would by trying to perfect a single method.
[attached images: curves; curves + invert; invert + levels; exposure/shadows-highlights/sharpen]
 

Leonard Helplessness

kiwifarms.net
The original tweet being responded to for reference: https://twitter.com/healthyles/status/1244857986276868096

The issue with the censoring in this one is that it happens to be deceptively effective in just the right areas to make it look easy while still being a bit of a pain in the ass.
Here's what I'm seeing (note, I'm assuming only standard English characters):
JoiteanTa(r or n)kar185
@Un____Shadow

I did a few runs: the first just using curves, the second using curves and color inversion, the third using color inversion followed by messing with the levels, and on the fourth I tried starting with the exposure, then the shadows/highlights, followed by sharpening. Some of the results are more extreme than others and some work better in different areas; when you combine the various pieces together, you get a better and quicker understanding of the overall puzzle than you would by trying to perfect a single method.
[attached images]
That's very helpful; it was enough to make an educated guess of JolteonTanker135. Searching the name got me the Twitter handle UmbreShadow. Mostly I'm trying to learn ways to uncover text like this when it's obscured; playing with curves and layers is a great guideline.
 

Operator of the Wired

kiwifarms.net
I have bash aliases for youtube-dl and streamlink, as well as the details on how to monitor livestreams 24/7, and how to download from Trovo with that method. It's a long post so I'm spoilering it to save space.

Only elite Linux users are going to be able to utilize all this!

Bash (or your shell of choice) aliases.

To get your video with max quality, subtitles or youtube auto-subs, and the uploader's name prepending the filename:

alias vidget='youtube-dl -o "/YOUR/DIRECTORY/HERE/%(uploader)s - %(title)s-%(id)s.%(ext)s" --prefer-ffmpeg --write-sub --sub-lang=en --write-auto-sub --sub-format best --embed-subs'

To do the same as the above, but download every video in a playlist, labelled in order, into your current directory:

alias youtube-dl-ordered='youtube-dl -o "%(playlist_index)s-%(uploader)s - %(title)s-%(id)s.%(ext)s" --prefer-ffmpeg --write-sub --sub-lang=en --write-auto-sub --sub-format best --embed-subs'



A script to quickly execute streamlink for your link in a terminal:

Bash:
#!/bin/bash
set -e -o pipefail

# Define your folder to save in.
DIR="/YOUR/DIRECTORY/HERE/"

streamlink --hls-live-edge 99999 --hls-segment-threads 5 --hls-live-restart -o "$DIR/$(date +%Y-%m-%d-%R).mp4" "$1" best

exit



A script that you can run every minute via cron to continuously monitor for a livestream and start downloading it as soon as it starts. Note that youtube likes to lie about when people are live and even display old content as "live", clogging up whatever storage you use.

Bash:
#!/bin/bash
set -e -o pipefail

# youtube-dl was in an unusual place for execution, likely due to pip
# installation, so set PATH explicitly.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/user/Scripts:/home/user/.local/bin/

# Channel ID goes here.
NAME="MadattheInternet"

if [ $(ps aux | grep "$NAME" | grep youtube-dl | wc -l) -le 0 ]
    then
        echo "No process, commencing download..."
        youtube-dl -o "/YOUR/DIRECTORY/HERE/%(uploader)s - %(title)s-%(id)s.%(ext)s" --prefer-ffmpeg --write-sub --sub-lang=en --write-auto-sub --sub-format best --embed-subs --cookies="/YOUR/DIRECTORY/HERE/youtube-dl-cookies.txt" "https://www.youtube.com/c/$NAME/live"
    else
        echo "Download underway, skipping execution..."
        # This deduplicates downloads; otherwise your folder will fill with
        # duplicate streams and your bandwidth will eventually be saturated.
fi

# Note that some channels use "channel/ID" instead of "c/ID". Before adapting
# this script to your target, check this first.
# You can also use just "$NAME" on the youtube-dl line and have this work with
# any youtube-dl compatible link in the NAME field.

exit



For this zoomer Trovo platform, there's an extra step: you MUST use streamlink, and put this custom extractor into your ~/.config/streamlink/plugins folder, creating the folder if it doesn't exist. The following is the same cron-managed auto-loader as above, with streamlink as the runner program and a Trovo channel set as the example. This would likely also work well for Twitch channels.

Bash:
#!/bin/bash
set -e -o pipefail

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/user/Scripts:/home/user/.local/bin/

DIR="/YOUR/DIRECTORY/HERE"

# Channel URL goes here.
NAME="https://trovo.live/StillMad"

if [ $(ps aux | grep "$NAME" | grep streamlink | wc -l) -le 0 ]
    then
        echo "No process, commencing download..."
        # The uploader name "Kiwi Farms" is set manually; streamlink, to my
        # knowledge, has no feature for extracting usernames.
        streamlink --hls-live-edge 99999 --hls-segment-threads 5 --hls-live-restart -o "$DIR/Kiwi Farms - $(date +%Y-%m-%d-%R).mp4" "$NAME" best
    else
        echo "Download underway, skipping execution..."
fi

exit



To run the monitoring scripts above once every minute, I use crontab -e to add lines like:

* * * * * /YOUR/SCRIPT/DIRECTORY/trovo/josh

Youtube may start giving you "429: Too Many Requests" for doing this. There's a somewhat complicated process for getting your cookies to work around it, but I think it's worth your time to be able to save every livestream a person you're interested in ever does.



Does one of these programs inexplicably stop working? Don't forget to use pip install --user --upgrade <program> from time to time.
 
Last edited:
