Links
- https://ffmpeg.org/ - The FFmpeg home page
- https://trac.ffmpeg.org/wiki - The FFmpeg Bug Tracker and Wiki
- https://git.ffmpeg.org/gitweb/ffmpeg.git - Browsable source code
Docs
Opus
How to make Ogg/Opus files.
FFmpeg libopus codec documentation is here.
The following takes in some video and makes an Ogg/Opus file.
ffmpeg -i input -c:a libopus -b:a 128K -frame_duration 60 -metadata title="Some Title" -metadata artist="Some Artist" -bitexact output.opus
The following takes in a video and outputs a 96k audio/webm file. Technically, audio/webm files can also use the .webm extension, but naming them .weba makes it easier for a webserver to figure out the media/MIME type.
ffmpeg -hide_banner -i "$input" -vn -c:a libopus -b:a 96k -f webm output.weba
CAF
Apple iOS, for whatever dumb reason, doesn’t support open audio formats like Opus in standard containers like Ogg or WebM. Weirdly, it does support Opus in its own special “Core Audio Format” file that no one else uses. See this HTML5 Audio Formats test from dogphilosophy.net for more info. Anyway, if you do want to make these files (and I don’t suggest you give in to this), here’s how to make a CAF file with FFmpeg.
ffmpeg -i input.mp4 -vn -c:a libopus -b:a 96k -f caf output.caf
Concatenating Audio
Adapting the Concatenate page in the FFmpeg wiki, this is how I’d concatenate a bunch of MP3 files into a single Opus file. Note that it’s bash specific.
readarray -td '' argarray < <(printf -- '-i\0%s\0' *.mp3)
ffmpeg "${argarray[@]}" -filter_complex "concat=n=$(( ${#argarray[@]} / 2)):v=0:a=1" -map_metadata -1 -bitexact -y output.opus
And here’s how you’d concatenate the audio of several files while stripping the video out:
myfiles=(*.webm)
readarray -td '' argarray < <(printf -- '-vn\0-i\0%s\0' "${myfiles[@]}")
ffmpeg "${argarray[@]}" -filter_complex "concat=n=${#myfiles[@]}:v=0:a=1" -map_metadata -1 -bitexact -y output.opus
This also works for generating audio/webm files.
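For example, reusing the arrays from above and just forcing the WebM muxer (the output name here is arbitrary), I’d expect this to work:
ffmpeg "${argarray[@]}" -filter_complex "concat=n=${#myfiles[@]}:v=0:a=1" -map_metadata -1 -bitexact -f webm -y output.weba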
TODO: I should figure out how to add chapter markers when concatenating. Kyle Howells has a blog post on adding chapters to MP4s that might be helpful for me here.
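A sketch of what I’d probably try first: FFmpeg has an FFMETADATA text format that can carry chapter markers. The chapter file below is made up (timestamps in milliseconds), and I haven’t checked how well chapters survive in an Ogg/Opus container, so I’m writing to Matroska audio here to be safe.
;FFMETADATA1
[CHAPTER]
TIMEBASE=1/1000
START=0
END=180000
title=First file
[CHAPTER]
TIMEBASE=1/1000
START=180000
END=425000
title=Second file
With that saved as chapters.txt, something like this should attach the chapters to the already-concatenated audio:
ffmpeg -i output.opus -i chapters.txt -map 0 -map_metadata 1 -map_chapters 1 -c copy chaptered.mka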
Clipping segments
Sometimes you have a long video or audio recording you just want a segment of. Here’s an example of how to get a snippet, adapted from this stackoverflow answer.
ffmpeg -copyts -ss "1:23:45" -i input.webm -to "1:25:00" -map 0 -c copy output.webm
This starts at 1 hour, 23 minutes, 45 seconds in and ends at 1:25:00. The format of these time positions is described in the man page for ffmpeg-utils under the “Time duration” section.
It seems like -copyts is pretty necessary; otherwise the -to argument doesn’t work as expected.
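As a sanity check on that time format, the same clip can be written in plain seconds (1:23:45 is 5025 seconds and 1:25:00 is 5100):
ffmpeg -copyts -ss 5025 -i input.webm -to 5100 -map 0 -c copy output.webm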
WebP
I kind of like WebP, which compresses images fairly well. I should investigate AVIF and JPEGXL later.
Here’s an example of converting a PNG (like a document) to WebP.
ffmpeg -hide_banner -i test1.png -f webp -quality 90 -preset text -compression_level 6 -y "test1.webp"
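For document scans where I’d rather not lose any pixels, libwebp also has a lossless mode; I haven’t compared sizes, but something like this should work (output name is just a placeholder):
ffmpeg -hide_banner -i test1.png -f webp -lossless 1 -compression_level 6 -y "test1-lossless.webp"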
And here’s an example of taking an image from a pipe (a scanner) and saving it to a WebP:
scanimage --device "escl:http://192.168.1.XX:80" --format=png --resolution=300dpi --mode=color | ffmpeg -hide_banner -f image2pipe -c:v png -i - -f webp -quality 90 -preset text -compression_level 6 -y "output.webp"
WebM
WebM is a pretty cool format: basically a restricted subset of Matroska that only allows certain free codecs (VP8/VP9 video and Vorbis/Opus audio).
My screencaster tool records WebMs, though I noticed those files aren’t seekable (since they’re essentially a concatenated stream). Re-encoding them with the following simple command fixes that:
ffmpeg -i input.webm output.webm
This (on my system at least) re-encoded it using libvpx-vp9, and shrank a simple 5-second screencast from 375k to 47k.
TODO: I should check out the FFmpeg page on Encode/VP9
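From what I remember of that page, constant-quality VP9 looks roughly like the following, with the CRF value being something to tune (lower means higher quality):
ffmpeg -i input.webm -c:v libvpx-vp9 -crf 33 -b:v 0 -c:a libopus output.webm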
GIFs
Clément Bœsch has a pretty good blog post on making GIFs with FFmpeg. You should read that first.
Something like the following is what you can use to make stickers.
ffmpeg -i input.mp4 -vf "fps=8,scale=150:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse=dither=bayer:bayer_scale=5" -loop 0 -y output.gif
Metadata
Add metadata
ffmpeg -i input -metadata title="some title" -metadata somefield="some value" -c copy output
Strip metadata
FFmpeg seems to usually add an “encoder” metadata tag. I think the following will strip it: when you copy the stream you’re not actually re-encoding it, so the encoder doesn’t add its tag, and I believe -bitexact is what keeps the muxer from writing its own encoder tag. You can also specify an encoder tag like -metadata:s:a:0 encoder="something", but that doesn’t get rid of the existing tag (and -metadata:s:a:0 encoder="" doesn’t work).
ffmpeg -i input.opus -map_metadata -1 -c:a copy -bitexact output.opus
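To check that the encoder tag is actually gone, ffprobe will dump whatever tags remain:
ffprobe -hide_banner -show_format -show_streams output.opus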
Find encoder options
Like, here’s how to find which options work for the mjpeg (JPEG) encoder:
ffmpeg -hide_banner -h encoder=mjpeg
This outputs (truncated):
Encoder mjpeg [MJPEG (Motion JPEG)]:
    General capabilities: threads
    Threading capabilities: frame and slice
    Supported pixel formats: yuvj420p yuvj422p yuvj444p
mjpeg encoder AVOptions:
  -mpv_flags         <flags>      E..V....... Flags common for all mpegvideo-based encoders. (default 0)
     skip_rd                      E..V....... RD optimal MB level residual skipping
     strict_gop                   E..V....... Strictly enforce gop size
     qp_rd                        E..V....... Use rate distortion optimization for qp selection
     cbp_rd                       E..V....... use rate distortion optimization for CBP
     naq                          E..V....... normalize adaptive quantization
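To find the encoder name to ask about in the first place, FFmpeg can list everything it was built with (the grep is just to narrow things down):
ffmpeg -hide_banner -encoders | grep -i jpeg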
Pipe images
Use -f image2pipe with a video codec parameter (e.g. -c:v mjpeg or -c:v png) to input/output different formats.
Another example:
<input1.png ffmpeg -hide_banner -f image2pipe -c:v png -i - -f image2pipe -c:v mjpeg -q:v 13 - > output1.jpg
Here, -q:v modifies the quality of the output JPEG.
Nix
Nix JPEG XL
As of 2023-11-01, I didn’t see an --enable-libjxl flag for the ffmpeg derivation. It may need to be added with an override.
Todos
Cinemagraphs
I’m still working on a cinemagraphs page that documents how to create cinemagraphs with FFmpeg. (See an example on my makkoli/makgeolli page or on my page on Berkeley’s Hidden Waterfall.)