Alphabet, Google's parent company, has warned its employees against entering confidential or sensitive information into Bard, Google's own chatbot, which has been gaining traction for some time. The company has also cautioned staff about the broader risks that come with using these tools. This is not the first time a company has raised such concerns, and for good reason: while these chatbots continuously learn and evolve, they also have a tendency to make mistakes.
After Samsung, even Google is warning its employees against the use of chatbots like Bard
Beyond keeping sensitive information out of chatbots, Google has also advised its engineers not to use code generated by a chatbot directly. Simply put, while the code produced by Bard or other chatbots can be accurate, there is always a risk that it is incorrect, which would then lead to errors and results the engineers were not expecting.
Google is not the only company to warn against the use of AI chatbots. Samsung has gone further and banned its employees from using ChatGPT or any other chatbot over the same concerns. At the same time, we have heard that Samsung is working on its own large language model, so we might see something from the company fairly soon. Apple has also restricted employees from using Bard, so it is safe to say the concerns are real.
I do understand the concerns over the use of chatbots, especially at large companies like Apple, Samsung, and Google, where employees have access to plenty of information that should not get out. Hopefully, these measures will keep confidential and sensitive information out of the wrong hands because, at this point, it is not just unreleased products that could leak; a lot more is at stake.
The risks associated with chatbots make it clear that they are not reliable enough to be treated as a source of definitive answers. And feeding sensitive information to a chatbot is definitely not something we would suggest to anyone, since that information can end up in the model's training data and, from there, in someone else's hands.
Source: Reuters