Its face de-identification tech, developed by three AI researchers who work with the company, modifies your face slightly in video content, so that facial recognition systems can't match what they see in the footage with images of you in their databases. You can see this in action in this video (screenshot above), in which certain details are tweaked, such as the shape of a person's mouth or the size of their eyes. Facebook says this can be used with pre-recorded content, as well as with live video.

This technology could allow for more ethical use of video footage of people for training AI systems, which typically require numerous examples to learn how to emulate the content they're fed. By making people's faces impossible to recognize, these AI systems can be trained without infringing on the test subjects' privacy.

I imagine this might soon become a standard requirement for government agencies and companies that capture footage of people, whether for security or other purposes. For example, since the state of California recently banned law enforcement agencies from using facial recognition tech, it could mandate that all footage of civilians be processed using a system that incorporates face de-identification tech in order to protect their privacy.

VentureBeat succinctly explained how this de-identification method works, and also noted that Facebook has no plans to use it in any of its own products. The social network uses facial recognition to identify your friends in uploaded photos for easier tagging, and to alert you when you appear in someone else's pictures. The company turned facial recognition off by default last month, a small step towards respecting its users' privacy.

Find out more about Facebook's face de-identification AI in this research paper (PDF).
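To make the idea concrete, here is a minimal toy sketch of the matching logic being defeated. It is not Facebook's method: it assumes a made-up linear "embedding" (a random orthonormal projection standing in for a real face-recognition network) and shows how a small, targeted perturbation can rotate a frame's feature vector away from the stored database template so the cosine-similarity match fails.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a face-recognition model: project 64 "pixels" onto a
# 16-dim feature vector using orthonormal rows (via QR), then L2-normalize.
# Real systems use deep networks; this only illustrates the matching logic.
W = np.linalg.qr(rng.standard_normal((64, 16)))[0].T  # 16x64, W @ W.T == I

def embed(image):
    f = W @ image
    return f / np.linalg.norm(f)

def same_person(a, b, threshold=0.9):
    # Cosine similarity above the threshold counts as a match.
    return float(a @ b) >= threshold

face = rng.standard_normal(64)       # a frame from the footage
db_identity = embed(face)            # the template stored in a database

# De-identification idea: perturb the frame so its feature vector rotates
# away from the stored template. We push it toward a direction orthogonal
# to the template, scaled so similarity drops to exactly 0.7 -- below the
# match threshold -- while the pixels change only modestly.
f0 = W @ face
d = rng.standard_normal(16)
d -= (d @ db_identity) * db_identity     # make d orthogonal to the template
d /= np.linalg.norm(d)
c = np.linalg.norm(f0) * np.sqrt(1 / 0.7**2 - 1)
deidentified = face + W.T @ (c * d)      # W @ deidentified == f0 + c*d

print(same_person(embed(face), db_identity))          # True: raw frame matches
print(same_person(embed(deidentified), db_identity))  # False: match defeated
```

The real system learns perceptually minimal, temporally consistent changes with an adversarially trained network rather than hand-constructing a perturbation, but the goal is the same: keep the face looking natural while pushing its recognition features away from anything in a database.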