Don’t type anything into Gemini, Google’s family of GenAI apps, that’s incriminating, or that you wouldn’t want someone else to see.

That’s the PSA (of sorts) today from Google, which in a new support document outlines the ways in which it collects data from users of its Gemini chatbot apps for the web, Android and iOS. Google notes that human annotators routinely read, label and process conversations with Gemini, albeit conversations “disconnected” from Google Accounts, to improve the service. (It’s not clear whether these annotators are in-house or outsourced, which might matter when it comes to data security; Google doesn’t say.) These conversations are retained for up to three years, along with “related data” like the languages and devices the user used and their location.

Now, Google affords users some control over which Gemini-relevant data is retained, and how.

Switching off Gemini Apps Activity in Google’s My Activity dashboard (it’s enabled by default) prevents future conversations with Gemini from being saved to a Google Account for review (meaning the three-year window won’t apply). Individual prompts and conversations with Gemini, meanwhile, can be deleted from the Gemini Apps Activity screen.

But Google says that even when Gemini Apps Activity is off, Gemini conversations will be saved to a Google Account for up to 72 hours to “maintain the safety and security of Gemini apps and improve Gemini apps.”

“Please don’t enter confidential information in your conversations or any data you wouldn’t want a reviewer to see or Google to use to improve our products, services, and machine learning technologies,” Google writes.

To be fair, Google’s GenAI data retention and collection policies don’t differ all that much from those of its competitors. OpenAI, for example, saves all chats with ChatGPT for 30 days regardless of whether ChatGPT’s conversation history feature is switched off, except in cases where a user is subscribed to an enterprise-level plan with a custom data retention policy.

But Google’s policy illustrates the challenges inherent in balancing privacy with developing GenAI models that feed on user data to self-improve.

Liberal GenAI data retention policies have landed vendors in hot water with regulators in the recent past.

Last summer, the FTC requested detailed information from OpenAI on how the company vets data used for training its models, including consumer data, and how that data is protected when accessed by third parties. Overseas, Italy’s data privacy regulator, the Italian Data Protection Authority, said that OpenAI lacked a “legal basis” for the mass collection and storage of personal data to train its GenAI models.

As GenAI tools proliferate, organizations are growing increasingly wary of the privacy risks. A recent survey from Cisco found that 63% of companies have established limitations on what data can be entered into GenAI tools, while 27% have banned GenAI altogether. The same survey found that 45% of employees have entered “problematic” data into GenAI tools, including employee information and non-public files about their employer.

OpenAI, Microsoft, Amazon, Google and others offer GenAI products geared toward enterprises that explicitly don’t retain data for any length of time, whether for model training or any other purpose. Consumers, though, as is often the case, get the short end of the stick.