Alphabet, the parent company of Google, has been all in on artificial intelligence: it bought (and later sold) Boston Dynamics, made scientific advances through DeepMind, and more recently made AI the focus of this year’s Google I/O conference following the release of its chatbot, Google Bard. Now the company is cautioning its employees to be careful what they say to AI chatbots, Bard included.
According to a Reuters report, Alphabet has told its employees not to divulge sensitive information to AI chatbots, because the companies behind the technology store that data.
This is sound advice no matter who gives it, and it carries extra weight coming straight from the source. More generally, routinely revealing private or personal information anywhere online is not a smart idea.
Because ChatGPT, Google Bard, and Bing Chat are built on large language models (LLMs) that are continually being trained, anything you say to one of these AI chatbots can be used to train it. The companies that created these chatbots also store the data, where it is accessible to their staff.
Of Bard, Google’s AI chatbot, the company explains in its FAQs:
“When you interact with Bard, Google collects your conversations, your location, your feedback, and usage information. That data helps us provide, improve and develop Google products, services, and machine-learning technologies, as explained in the Google Privacy Policy.”
Google further urges users to “not include information that can be used to identify you or others in your Bard conversations,” noting that it selects a subset of conversations as samples to be examined by expert reviewers and retained for up to three years.
According to OpenAI, AI trainers likewise review ChatGPT conversations to help improve the company’s systems. The company states on its website: “We review conversations to improve our systems and to ensure the content complies with our policies and safety requirements.”