How to blur faces for privacy before publishing
- Step 1: Drop your video
- Step 2: Set blur intensity and tracking sensitivity — JAD runs BlazeFace face detection per frame on WebGPU and tracks faces frame-to-frame
- Step 3: Export the anonymised video — JAD composites a Gaussian blur over each detected face and re-encodes the video (a minimal sketch of this per-frame pass follows this list)
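For the technically curious, here is a rough sketch of what a per-frame detect-and-blur pass like Steps 2 and 3 can look like in the browser. It assumes the public @tensorflow-models/blazeface package, the TensorFlow.js WebGPU backend, and a Canvas 2D compositor; the function name blurFacesInFrame and the blurPx / minScore parameters are illustrative, not JAD's actual internals.

```ts
import * as tf from "@tensorflow/tfjs-core";
import "@tensorflow/tfjs-backend-webgpu";
import * as blazeface from "@tensorflow-models/blazeface";

// Load the backend and the model once, not per frame.
const modelReady = (async () => {
  await tf.setBackend("webgpu"); // assumption: fall back to WebGL/CPU if unavailable
  await tf.ready();
  return blazeface.load();
})();

// Assumes the canvas matches the video's intrinsic dimensions, so the
// detection boxes and the canvas share one coordinate space.
async function blurFacesInFrame(
  frame: HTMLVideoElement,
  canvas: HTMLCanvasElement,
  blurPx = 16,     // "blur intensity"
  minScore = 0.75  // rough stand-in for "tracking sensitivity"
): Promise<void> {
  const model = await modelReady;
  const faces = await model.estimateFaces(frame, /* returnTensors */ false);

  const ctx = canvas.getContext("2d")!;
  ctx.filter = "none";
  ctx.drawImage(frame, 0, 0, canvas.width, canvas.height);

  for (const face of faces) {
    // probability may come back as a number or a one-element array.
    const p = face.probability as unknown as number | number[] | undefined;
    const score = Array.isArray(p) ? p[0] : p ?? 1;
    if (score < minScore) continue;

    const [x1, y1] = face.topLeft as [number, number];
    const [x2, y2] = face.bottomRight as [number, number];
    const w = x2 - x1;
    const h = y2 - y1;

    // Re-draw only the face rectangle through a Gaussian-style CSS blur.
    ctx.filter = `blur(${blurPx}px)`;
    ctx.drawImage(frame, x1, y1, w, h, x1, y1, w, h);
  }
  ctx.filter = "none";
}
```

Running detection and compositing per frame keeps the pipeline simple and fully client-side, at the cost of the per-frame tracking caveats discussed in the FAQ below.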
Frequently asked questions
How accurate is the automatic face detection?
JAD uses TensorFlow.js with BlazeFace, a lightweight face detection model optimised for browser inference. Detection accuracy is high for front-facing subjects in reasonable lighting (>90% recall). It may miss faces that are partially occluded, very small in frame (below roughly 30px), angled away from the camera, or in very dark or backlit conditions. Review the output at 1x speed before publishing for any compliance-sensitive use case.
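If you want to audit detection quality programmatically rather than only by eye, a hypothetical helper like the one below can flag frames whose detected faces fall under the ~30px floor mentioned above so they can be queued for manual review. flagSmallFaces and the Detection shape are assumptions for illustration, not part of JAD.

```ts
// Illustrative shape: one record per detected face per frame.
interface Detection {
  frameIndex: number;
  box: { width: number; height: number };
}

// Return the sorted list of frame indices containing a face smaller than
// minSidePx on its shorter side, i.e. frames most likely to have misses.
function flagSmallFaces(detections: Detection[], minSidePx = 30): number[] {
  const flagged = new Set<number>();
  for (const d of detections) {
    if (Math.min(d.box.width, d.box.height) < minSidePx) {
      flagged.add(d.frameIndex);
    }
  }
  return [...flagged].sort((a, b) => a - b);
}
```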
Will the blur track faces as they move across the frame?
Yes — detection runs independently on each frame and the blur region is repositioned per frame. Because the blur follows the per-frame detection rather than motion-based interpolation, a face that is momentarily undetected (occluded, fast motion) can go unblurred for that frame. For critical compliance use, supplement the automatic pass with manual review.
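One lightweight way to target that manual review is to look for single-frame dips in the number of detected faces, which is exactly where per-frame (non-interpolated) blur can momentarily vanish. The findDetectionGaps helper below is an illustrative sketch under that assumption, not a JAD feature.

```ts
// faceCounts[i] = number of faces detected on frame i.
// Returns frame indices where the count dips below both neighbours,
// a common signature of a briefly missed (and therefore unblurred) face.
function findDetectionGaps(faceCounts: number[]): number[] {
  const gaps: number[] = [];
  for (let i = 1; i < faceCounts.length - 1; i++) {
    const prev = faceCounts[i - 1];
    const next = faceCounts[i + 1];
    if (faceCounts[i] < Math.min(prev, next)) {
      gaps.push(i);
    }
  }
  return gaps;
}
```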
Does face blurring qualify as sufficient anonymisation under GDPR?
Under GDPR, data only counts as anonymised once individuals can no longer be re-identified. Standard Gaussian blur can be partially reversed with image-processing techniques in some cases, so for high-risk contexts (research, journalism involving vulnerable subjects) consider a higher blur intensity or pixelation instead. For everyday editorial use (social publishing, corporate video), strong Gaussian blur is generally considered sufficient and is industry-standard practice.
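Pixelation on a canvas is straightforward: downscale the face rectangle and draw it back at full size with image smoothing disabled, which collapses each block of pixels to a single value rather than merely low-pass filtering them. The pixelateRegion helper below is an illustrative sketch of that approach, not JAD's implementation.

```ts
// Pixelate a rectangular region of an existing canvas in place.
function pixelateRegion(
  ctx: CanvasRenderingContext2D,
  x: number, y: number, w: number, h: number,
  blockSize = 12
): void {
  // Downscale the region onto a tiny scratch canvas (averaging within blocks)...
  const small = document.createElement("canvas");
  small.width = Math.max(1, Math.round(w / blockSize));
  small.height = Math.max(1, Math.round(h / blockSize));
  const sctx = small.getContext("2d")!;
  sctx.drawImage(ctx.canvas, x, y, w, h, 0, 0, small.width, small.height);

  // ...then draw it back at full size with smoothing off to get hard blocks.
  ctx.imageSmoothingEnabled = false;
  ctx.drawImage(small, 0, 0, small.width, small.height, x, y, w, h);
  ctx.imageSmoothingEnabled = true;
}
```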
Privacy first
All video processing runs locally in your browser using WebAssembly and FFmpeg. No file is ever uploaded — only metadata counters are saved for signed-in dashboard stats.
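As a rough picture of what "local only" processing looks like, the sketch below re-encodes a file entirely inside the page with ffmpeg.wasm. It assumes the @ffmpeg/ffmpeg 0.12 API with default core loading; JAD's actual pipeline may wire this differently.

```ts
import { FFmpeg } from "@ffmpeg/ffmpeg";
import { fetchFile } from "@ffmpeg/util";

// Re-encode a user-selected file in the browser; no video data leaves the page.
async function reencodeLocally(file: File): Promise<Blob> {
  const ffmpeg = new FFmpeg();
  await ffmpeg.load(); // fetches the WASM core (point it at self-hosted assets in production)

  await ffmpeg.writeFile("input.mp4", await fetchFile(file));
  await ffmpeg.exec(["-i", "input.mp4", "-c:v", "libx264", "-preset", "fast", "output.mp4"]);

  const data = await ffmpeg.readFile("output.mp4");
  return new Blob([data], { type: "video/mp4" });
}
```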