ChatGPT privacy tips: 5 things you should never share

Welcome to our beginner’s guide on ChatGPT privacy tips. Google recently updated the privacy policies for its apps, warning users that it will use their online and public data to train ChatGPT’s rivals. Unfortunately, there is no way to challenge this change short of deleting your Google account, and even that might not stop Google’s Bard and other ChatGPT alternatives from learning from your previous online posts.

This Google policy update is a sharp reminder to be cautious when sharing information with AI chatbots. Be careful what you disclose until these systems can be trusted with user privacy. Here are a few examples of the kinds of data you should hold off sharing with AI until stricter laws are in place worldwide to safeguard user privacy and copyrighted content.

Governments around the world will eventually set best practices for generative AI programs to protect user privacy and copyrighted material. The future promises on-device generative AI that doesn’t report back to a central server, but we are currently in a period of unregulated generative AI research. Products like Humane’s Ai Pin, or Apple’s Vision Pro if Apple decides to enter the generative AI market, could be examples of such developments. Until then, it’s prudent to treat ChatGPT, Google Bard, Bing Chat, and similar platforms as guests in your home or workplace. Use the same caution with these AI systems as you would when sharing private or confidential information with a stranger.

I’ve already underlined the importance of withholding personal information from ChatGPT, but let me elaborate on the specific kinds of sensitive information that should be kept out of the hands of generative AI companies.

Personal information that can identify you:

Avoid giving ChatGPT and other chatbots personal information that can be used to identify you, such as your full name, address, birthday, or Social Security number. Prioritize your privacy and take the appropriate precautions. Note that OpenAI added privacy features only after ChatGPT had been available for some time. These features let you stop OpenAI from using your prompts for training. It’s crucial to understand, though, that relying exclusively on this setting might not guarantee that what you share with the chatbot stays private: the setting could be disabled, or a bug could reduce its effectiveness.

Even though ChatGPT and OpenAI have no intention of abusing your information or profiting from it, it is still used to train the AI model. Additionally, a cyber attack in early May resulted in a data leak at OpenAI. Such incidents can expose your data to unauthorized parties. While it might be difficult, it is not impossible for someone to trace your personal information from such a leak. In the wrong hands, that information can be used for harmful purposes such as identity theft. It’s critical to exercise caution and take precautions to safeguard your personal data from any threats.

Protecting your privacy and personal information is crucial, especially in a hostile online environment. Use caution and refrain from disclosing private information that could jeopardize your identity and security.
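To make this advice concrete, one option is to scrub obvious identifiers from a prompt locally, before it ever leaves your machine. The sketch below is a minimal illustration using simple regular expressions for email addresses, US phone numbers, and SSN-style patterns; the `scrub_pii` function and its patterns are my own illustrative assumptions, not part of any official OpenAI tooling, and real redaction would need far broader rules.

```python
import re

# Illustrative patterns only -- real PII detection needs much wider coverage.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def scrub_pii(prompt: str) -> str:
    """Replace recognizable identifiers with placeholders before sending."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(scrub_pii("Contact me at jane.doe@example.com or 555-123-4567."))
```

The point of the design is that redaction happens on your side: whatever the chatbot’s own privacy settings do or don’t do, the identifiers never reach its servers.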

Usernames and passwords:

Login credentials are the information hackers most frequently seek during data breaches. Usernames and passwords can grant unauthorized access, particularly if you reuse the same pair across several apps and services. This is a good moment to reiterate how crucial it is to use password managers like Proton Pass and 1Password to store all of your credentials safely.

While it’s intriguing to imagine a time when we can tell our operating systems to log us into apps, that convenience should never mean handing your login information to a generative AI. There is absolutely no advantage in doing so.
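As a purely illustrative safeguard, a chat client could refuse to send any prompt that looks like a credential disclosure. The heuristic below, the `looks_like_credentials` name, and the keyword list are all hypothetical assumptions of mine, not a feature of any real chatbot app; it simply flags "password:"-style patterns.

```python
import re

# Hypothetical guard: flags prompts that resemble a credential disclosure.
CREDENTIAL_HINT = re.compile(r"(?i)\b(password|passwd|pwd|api[_ ]?key|token)\b\s*[:=]")

def looks_like_credentials(prompt: str) -> bool:
    """Return True if the prompt appears to contain a secret being shared."""
    return bool(CREDENTIAL_HINT.search(prompt))

if looks_like_credentials("my password: hunter2"):
    print("Refusing to send: prompt appears to contain a credential.")
```

Note that the check still allows ordinary questions *about* passwords; it only trips on "key: value" shapes that suggest an actual secret is being pasted in.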

Financial information:

Similarly, there is no justification for giving ChatGPT your personal banking information. OpenAI will never ask for your bank account details or credit card numbers, and ChatGPT has no use for them. As with the preceding categories, this sensitive data needs to be handled carefully, because it could cause serious financial harm if it ends up in the wrong hands.

Be cautious if you come across software posing as a ChatGPT client for a PC or mobile device that asks for financial details; this can be a warning sign of ChatGPT-themed malware. In such situations, delete the application instead of disclosing any data, and stick to the official generative AI applications offered by reputable companies like OpenAI, Google, or Microsoft.

You may safeguard yourself from potential hazards and make sure that your online experience is safer by placing a high priority on the security of your login credentials and financial data.
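To illustrate how financial data could be caught before it leaves your device, the sketch below flags digit runs that pass the Luhn checksum, which well-formed payment card numbers satisfy. This is an assumed local filter of my own, not part of any official client, and the `contains_card_number` heuristic is deliberately simple.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum: True for well-formed payment card numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:   # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_card_number(prompt: str) -> bool:
    """Flag 13-19 digit runs (ignoring spaces/dashes) that pass Luhn."""
    for run in re.findall(r"\d[\d\s-]{11,22}\d", prompt):
        digits = re.sub(r"\D", "", run)
        if 13 <= len(digits) <= 19 and luhn_valid(digits):
            return True
    return False
```

Because the Luhn check is a structural property of card numbers themselves, this filter works offline with no lookup service: ordinary phone numbers and short digit strings pass through untouched.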

Work secrets:

In ChatGPT’s early days, confidential code from Samsung personnel made its way to OpenAI’s servers. Following this incident, Samsung banned generative AI bots, and other businesses, including Apple, did the same. Interestingly, Apple is also building ChatGPT-like products of its own. Google, too, has put limits on workplace use of generative AI, despite its intention to use internet data to train its ChatGPT rivals.

These events are a blatant reminder to protect your workplace secrets. If you need help from ChatGPT, look for more creative ways to get it rather than sharing sensitive details about your job.

Health information:

Of all these categories, health information is the most complicated to navigate. It is advisable to hold off on giving these bots any specific medical information. While it may be tempting to feed chatbots hypothetical “what if” scenarios involving particular symptoms, it is crucial to refrain from using ChatGPT for self-diagnosis or researching disorders at this time. Generative AI may eventually be able to perform these functions, but until then, avoid disclosing your health information to ChatGPT-like services unless they are personal AI products running on your own device.

For instance, I personally used ChatGPT to look for running shoes that would address particular medical concerns, without disclosing too much personal health information. Your most private thoughts are another type of health information to consider. Some people might use chatbots for therapy instead of qualified mental health providers. It’s not my place to judge whether that approach is suitable, but the main point stands: ChatGPT and other chatbots cannot provide a level of privacy you can rely on.

Your private thoughts will inevitably travel through the servers of OpenAI, Google, and Microsoft before being used to train the chatbots. We have not yet reached the point where generative AI products can act as personal psychologists, though we might eventually get there. If you feel the need to turn to generative AI for emotional support, exercise caution and be mindful of the information you share with the bots.

ChatGPT isn’t all-knowing:

I’ve covered ChatGPT’s limitations and the kinds of prompts it can’t help with in earlier pieces, where I made it clear that the information offered by tools like ChatGPT is not always dependable or accurate. Keep in mind that ChatGPT and other chatbots can give inaccurate information even on health topics such as mental health or other medical conditions. Always ask for the sources behind the responses you get, and never let yourself be persuaded to hand the bots additional personal information just to get answers more tailored to your needs.

Additionally, there is a chance that malicious applications posing as generative AI programs will harvest your personal information. If that happens, you might not understand the repercussions until it’s too late, and hackers can use the data they obtain against you in a number of ways.

To maintain your security and safeguard your privacy, approach generative AI with caution. Always double-check the information you get, be careful what personal details you disclose, and keep an eye out for potential threats and scammers.
