Vishnu Mohandas, a software engineer formerly at Google, made a significant career shift after learning that the company had helped the US military develop AI for analyzing drone footage. The revelation led him to leave his job on Google Assistant in 2020 and to rethink his relationship with Google's services, most notably by no longer backing up his photos to Google Photos. Mohandas was deeply concerned that his personal content could be used to train AI systems, even ones with no military connection. He worried that he had no control over how such technologies might use that content in the future, and he felt a moral responsibility to build a better alternative.
Driven by those privacy and security concerns, Mohandas, a self-taught programmer based in Bengaluru, India, set out to create Ente, a privacy-focused platform for storing and sharing photos. Unlike conventional services, Ente is open source and uses end-to-end encryption, putting user trust and data protection first. The paid service has attracted more than 100,000 users, many of them privacy-conscious. Even so, Mohandas found it difficult to explain why anyone should move away from Google Photos, given the service's convenience and popularity.
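This account doesn't detail Ente's actual encryption scheme, but the core idea behind end-to-end encryption is simple: each photo is encrypted on the device with a key the server never sees, so the service stores only ciphertext. Here is a minimal sketch of that idea in Python, using the `cryptography` library (the function and file names are hypothetical, and a real system would add key management and metadata protection on top):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_upload(photo_path: str, device_key: bytes) -> bytes:
    """Encrypt a photo client-side; only this ciphertext is uploaded."""
    nonce = os.urandom(12)  # AES-GCM requires a unique nonce per message
    with open(photo_path, "rb") as f:
        plaintext = f.read()
    # Prepend the nonce so the same device can decrypt the blob later.
    return nonce + AESGCM(device_key).encrypt(nonce, plaintext, None)

device_key = AESGCM.generate_key(bit_length=256)  # generated and kept on the device
encrypted_blob = encrypt_for_upload("family_photo.jpg", device_key)
```

Because the key never leaves the device, even a compromised or curious server cannot read the photos it stores.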
In May, an intern at Ente devised a practical way to illustrate the privacy concerns surrounding Google's AI capabilities. The result was the website https://Theyseeyourphotos.com, which serves as both a marketing initiative and a provocative demonstration of how much detail AI can extract from personal images. Users can upload any photo to the site, which runs it through a Google Cloud computer vision program and returns a detailed three-paragraph description of its contents. By turning Google's own technology back on itself, the site aims to make people reconsider their reliance on it, exposing how closely their images can be scrutinized.
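The article doesn't specify which Google Cloud model or prompt the site uses, but a pipeline like the one described could be sketched with Google's `google-generativeai` Python SDK and a vision-capable Gemini model (the model name and prompt below are assumptions, not Ente's actual choices):

```python
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # a Google AI API key
model = genai.GenerativeModel("gemini-1.5-flash")  # any vision-capable model

photo = Image.open("uploaded_photo.jpg")
response = model.generate_content([
    "Describe this photo in three detailed paragraphs: the people, their "
    "apparent emotions, any visible brands or objects, and the setting.",
    photo,
])
print(response.text)  # the kind of three-paragraph description the site displays
```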
When Mohandas uploaded a personal family photo, the AI's analysis was alarmingly detailed, down to specific brands and styles, such as his wife's watch. More troubling, the output associated that watch model with Islamic extremists. In response, the Ente team edited the prompts to keep the AI's descriptions objective and non-offensive, producing output that is less inflammatory but still revealing.
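Ente's real prompt wording isn't published in this account, but the kind of adjustment described might look like the difference between these two purely hypothetical instructions:

```python
# Hypothetical prompts, for illustration only.
UNRESTRICTED_PROMPT = (
    "Describe this photo in three paragraphs. Infer as much as you can about "
    "the people shown, including income, beliefs, and affiliations."
)
SOFTENED_PROMPT = (
    "Describe this photo in three paragraphs. Report only what is visibly "
    "present, keep the tone neutral and objective, and avoid speculation "
    "about religion, politics, or group membership."
)
```

Constraining the prompt reins in the most inflammatory inferences, though, as Mohandas found, it does not eliminate the model's assumptions altogether.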
Even with those modifications, the AI still made assumptions about Mohandas and his family, highlighting biases inherent in such models. The results described the family's emotional expressions and apparent socioeconomic status, and picked out details that could be checked against the photo's metadata, such as the time shown on the watch. The model's readiness to interpret and judge a subject's surroundings and attire underscores the risk of uploading personal pictures to platforms without robust privacy protections.
When approached for comment, a Google spokesperson declined to address Ente's initiative directly and instead pointed to support documentation. Google maintains that uploads to Google Photos are analyzed only to power features that help users manage their photo libraries, not for any malicious or commercial purpose. Users can limit certain kinds of analysis, but without end-to-end encryption they cannot entirely prevent Google from accessing and storing their images. Mohandas's concerns, and his work on Ente, reflect a growing demand for more secure, privacy-friendly alternatives at a time when personal data is increasingly vulnerable to exploitation by powerful tech companies.