<sub>2026-03-14 @0940</sub>
#ffmpeg
# Converting Screen Recordings to GIFs with FFmpeg
I have been recording short screen demos more often lately. When I am working through something and want to show the result in my notes or documentation, a static screenshot sometimes does not cut it. A short clip showing something in motion is clearer. The problem is that `.mov` files from the macOS screenshot tool are not always easy to drop into documentation, notes, or a blog post. A `.gif` tends to just work.
I knew FFmpeg could handle this conversion. What I did not know was how to do it well.
> [!NOTE]
> For examples of `.gif` files I've generated, check out [[2026-03-08-orbit-sims]].
## The First Few Attempts
My early attempts at converting `.mov` to `.gif` produced results that were technically correct but visually rough. The colors looked washed out, or banded, or just off in a way I could not fully explain. The file sizes were also larger than I expected for something that looked that bad.
After some digging I found that GIF has a hard limit of 256 colors per frame. That means how you choose those 256 colors matters a lot. FFmpeg has a way to handle this properly, and once I understood it the output improved noticeably.
## The Command
This is the command I ended up using:
```bash
ffmpeg -i input.mov \
  -vf "fps=15,scale=800:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" \
  output.gif
```
The part that makes it work well is the `split` in the filter chain. Instead of using a generic color palette, this command analyzes the actual frames of the video and builds a 256-color palette optimized for that specific content (`palettegen`), then applies that palette to produce the final output (`paletteuse`). The two named streams, `[s0]` and `[s1]`, are just two copies of the same scaled video so that the palette step and the output step can both read the same source.
The `scale=800:-1` part handles aspect ratio automatically. Setting the width to 800 and the height to `-1` tells FFmpeg to calculate the correct height rather than me having to do the math or risk stretching the video.
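The same idea can also be written as two explicit passes, with the palette saved to an intermediate image. This is a common alternative form of the same `palettegen`/`paletteuse` pipeline; the filenames here are placeholders, and the filter chain matches the single-command version above:

```shell
# Pass 1: analyze the scaled, rate-limited frames and write a 256-color palette.
ffmpeg -i input.mov \
  -vf "fps=15,scale=800:-1:flags=lanczos,palettegen" \
  palette.png

# Pass 2: apply that palette to the same processed frames to produce the GIF.
# [1:v] refers to the second input (palette.png).
ffmpeg -i input.mov -i palette.png \
  -filter_complex "fps=15,scale=800:-1:flags=lanczos[x];[x][1:v]paletteuse" \
  output.gif
```

The single-command `split` version avoids the intermediate file, but the two-pass form makes it easier to inspect the generated palette or reuse it across several clips.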
## What It Looks Like in Practice
A 16-second recording at 800 pixels wide came out to about 1.3 MB. That is reasonable for what it is. The colors matched the source well enough that I did not feel the need to dig further into more options.
Once I had the command in my notes, using it became easy. I record something with the macOS screenshot tool, run the command with the filenames swapped out, and I have a `.gif` ready to embed. That reduction in effort is honestly why this workflow has stuck.
## Why GIF Still Makes Sense
GIF is old and limited. I am aware of that. But it is also supported almost everywhere without any configuration, it loops automatically, and it does not require a video player. For short demonstrations inside documentation or a notes app, that matters. I am not trying to encode a film. I am trying to show a 10-second thing that would take a paragraph to describe in words.
---
## Notes to myself
1. Look into whether `.webm` or `.webp` animated files are better supported in Obsidian Publish and whether they would replace `.gif` for blog posts without the color tradeoff.
2. Try adding `stats_mode=full` to `palettegen` and see if it makes a visible difference on recordings that have more color variation.
3. Figure out if there is a way to batch convert multiple `.mov` files in a directory using a shell loop so I do not have to run the command manually each time.
4. Explore whether adding a dither option like `dither=bayer:bayer_scale=5` to `paletteuse` is worth it for recordings that include gradients or dark backgrounds.
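For item 3, the batch conversion can probably be sketched as a simple shell loop over the directory, deriving each output name from the input name. This is an untested sketch assuming all the recordings sit in the current directory and use the `.mov` extension:

```shell
#!/usr/bin/env bash
set -euo pipefail

# With no .mov files present, the glob expands to nothing instead of a literal.
shopt -s nullglob

for mov in ./*.mov; do
  # Swap the extension: demo.mov -> demo.gif
  gif="${mov%.mov}.gif"
  ffmpeg -i "$mov" \
    -vf "fps=15,scale=800:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" \
    "$gif"
done
```

Adding `-n` to the `ffmpeg` invocation would skip files whose `.gif` already exists, which matters if the loop runs more than once over the same directory.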