How to automatically pixelate every face in a video with AI
- Step 1: Drop your video
- Step 2: Set mosaic block size — MediaPipe FaceDetector tracks every face on WebGPU
- Step 3: Download the pixelated video — JAD applies a pixelate filter over each face's bounding box
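The core of step 3 can be sketched as a block-averaging pass over each detected bounding box. This is an illustrative stand-alone implementation, not JAD's actual code: the `Box` shape and `pixelateBox` name are hypothetical, and the detection step (MediaPipe FaceDetector in the real pipeline) is assumed to have already produced the box.

```typescript
// Hypothetical sketch: pixelate one bounding box inside a grayscale frame.
// `frame` is rows of pixel values 0-255; `box` would come from a face
// detector (e.g. MediaPipe FaceDetector) in the real pipeline.

interface Box { x: number; y: number; w: number; h: number; }

function pixelateBox(frame: number[][], box: Box, block: number): void {
  const h = frame.length, w = frame[0].length;
  for (let by = box.y; by < Math.min(box.y + box.h, h); by += block) {
    for (let bx = box.x; bx < Math.min(box.x + box.w, w); bx += block) {
      const yEnd = Math.min(by + block, box.y + box.h, h);
      const xEnd = Math.min(bx + block, box.x + box.w, w);
      // Average all pixels in this block...
      let sum = 0, n = 0;
      for (let y = by; y < yEnd; y++)
        for (let x = bx; x < xEnd; x++) { sum += frame[y][x]; n++; }
      const avg = Math.round(sum / n);
      // ...then write the average back, producing one flat mosaic block.
      for (let y = by; y < yEnd; y++)
        for (let x = bx; x < xEnd; x++) frame[y][x] = avg;
    }
  }
}
```

Running this per frame, once per detected face, yields the mosaic effect; a colour implementation would do the same averaging per RGB channel.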
Frequently asked questions
What's the difference between face blur and face pixelate?
Face blur applies a Gaussian (smooth, gradient-edge) blur to the face region. Face pixelate applies a mosaic — the region is downsampled to large pixel blocks and upscaled back, creating the stepped 'block' appearance. Visually, mosaic is more recognisable as intentional anonymisation (common in broadcast journalism), while Gaussian blur can look like a technical artefact. Both achieve similar anonymisation strength at high settings.
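The visual difference is easiest to see on a 1-D "edge" signal (a dark region meeting a bright one). The toy functions below are illustrative, not JAD's actual filters, and `boxBlur` is a simple sliding-window average standing in for a Gaussian blur:

```typescript
// Smoothing blur: a sliding-window average, so edges become gradients.
function boxBlur(signal: number[], radius: number): number[] {
  return signal.map((_, i) => {
    let sum = 0, n = 0;
    const lo = Math.max(0, i - radius), hi = Math.min(signal.length - 1, i + radius);
    for (let j = lo; j <= hi; j++) { sum += signal[j]; n++; }
    return Math.round(sum / n);
  });
}

// Mosaic: average each block, then hold the average, so edges become steps.
function mosaic(signal: number[], block: number): number[] {
  const out = signal.slice();
  for (let start = 0; start < signal.length; start += block) {
    const end = Math.min(start + block, signal.length);
    const avg = Math.round(signal.slice(start, end).reduce((a, b) => a + b, 0) / (end - start));
    for (let i = start; i < end; i++) out[i] = avg;
  }
  return out;
}

const edge = [0, 0, 0, 0, 0, 0, 255, 255];
boxBlur(edge, 1); // [0, 0, 0, 0, 0, 85, 170, 255] — smooth gradient
mosaic(edge, 4);  // [0, 0, 0, 0, 128, 128, 128, 128] — stepped blocks
```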
How large should the pixel blocks be for effective anonymisation?
At a block size of 20px, identity is strongly obscured for faces smaller than 200px in the frame. For very close-up faces (above 400px in frame), use 30–40px blocks. Smaller block sizes (10–15px) reduce facial detail without fully anonymising — use larger sizes for compliance-oriented anonymisation.
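The guideline above can be encoded as a small helper. This function and its name are hypothetical, not part of JAD, and the mid-range value (30px for faces between 200px and 400px) is an assumption interpolated from the thresholds stated above:

```typescript
// Hypothetical helper: map detected face height (in frame pixels) to a
// mosaic block size. Thresholds follow the guideline: 20px blocks for
// faces under 200px, up to 40px blocks for close-ups above 400px.
function recommendedBlockSize(faceHeightPx: number): number {
  if (faceHeightPx < 200) return 20; // small face: 20px blocks suffice
  if (faceHeightPx <= 400) return 30; // assumed mid-range value
  return 40; // very close-up face
}
```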
Can I pixelate specific regions manually instead of using auto-detection?
JAD's face pixelate uses automatic detection on every frame. For manual region selection (e.g. a specific object, logo, or background element), use the video redactor tool, which lets you draw a fixed redaction region with adjustable size and position per frame.
Privacy first
All video processing runs locally in your browser using WebAssembly and FFmpeg. No file is ever uploaded — only metadata counters are saved for signed-in dashboard stats.