Rob Shavell, Contributing Writer
If you’re a parent, Big Tech is not your friend.
Concerns about how companies like Meta and Google track children online aren’t new, but the rise of AI is raising the stakes. Tools marketed as ways to help kids learn or stay safe are, in reality, giving them direct access to generative AI systems, which may deliver misinformation, collect excessive personal data, or blur the lines between bots and people.

Take Google’s Family Link parental control system. While the company bills it as a way to help parents control their kids’ Android devices, the system now provides access to Gemini, the company’s AI chatbot, despite known issues with “hallucinated” answers and harmful content. Google claims it won’t sell children’s data, but the company made similar promises about its education tools, which were later found to violate student privacy.
The AI era is here, and tech companies are racing to hook the next generation. Parents need to understand what’s happening and what they can do to protect their kids.
The Problem with AI for Minors
How concerned should parents be about the AI boom? AI tools are unpredictable and potentially dangerous for children for several reasons, including:
Hallucinations
AI tools often produce content with significant errors for no apparent reason and sometimes insist those mistakes are correct. This can cause problems anytime AI is used in education. Google itself acknowledges that Gemini may make mistakes when children use it for homework help.
False Claims of Humanity
Some AI tools “forget” that they’re bots and insist they’re real people. This can be confusing, or even distressing, for children, especially when it becomes difficult to tell whether they’re chatting with a human or a machine. Google has said it will be up to parents to explain to their children that Gemini isn’t a person, an acknowledgment that these interactions can be misleading.
Dangerous, Harmful, or Predatory Messages
Perhaps most worryingly for many parents, AI tools are notoriously bad at following safety guidelines. Even the biggest tech companies have struggled to ensure their bots behave responsibly, and some lesser-known platforms, like Character.ai, have even fewer guardrails.
According to The Economic Times, some chatbots appear to encourage users to grow emotionally dependent on them and may even promote harmful behaviors. In one reported case, a chatbot suggested a child should harm his parents over screen time limits. Bots have also been linked to at least one teen suicide, raising serious concerns about their psychological impact. While Gemini may not yet have stumbled quite as badly, even tools with more safeguards still occasionally produce harmful or hateful outputs.
Excessive Data Collection and Profiling
Most big tech companies collect extensive information on children that they may share or sell, exposing kids to aggressive marketing, scams, and online manipulation. In many cases, data brokers use this data to build detailed profiles of children, which predators and scammers can then use to contact minors directly through email addresses, phone numbers, or even home addresses.
Different AI tools offer different levels of privacy protections against this kind of data collection and sale. Some allow users to opt out of having their personal data used to train bots. Others offer temporary chat modes that quickly delete information that users share.
However, privacy experts have warned that AI tools from multiple companies collect and store far more user data than they need. Gemini tops the list of tools that collect too much data, surpassing even DeepSeek, the Chinese AI that has many experts up in arms over data privacy and security.
Accidental Data Sharing
Even if an AI company has protections against using chat sessions or personal info to train bots, it may expose the information in other ways. Recently, Wired reported that Meta AI displays excerpts from user conversations in a public-facing feed that showcases the bot’s capabilities. It’s unclear whether users even know the company is sharing their info.
Earlier this year, OpenAI disclosed that some private ChatGPT chats had been accidentally exposed to other users. There is no guarantee that any information you or your children share with any AI tool, including Gemini, will stay private.
As tech companies continue to target minors, it falls to parents to protect their children.
How Parents Can Respond
Here are some things that parents can do now to limit data exposure and protect their kids online, especially when it comes to AI tools.

Disable AI Access Whenever Possible
You can opt out of allowing your children to access Gemini through Google Family Link on your Android phone. Many other devices also offer protections that will either notify you when your child starts interacting with an AI app or block the interaction outright. Often, you can also go to your phone’s app permission settings to disable location tracking and other privacy-invading features for AI tools.
Use Child-Focused, Privacy-Respecting Platforms
If you feel that your child should learn to use AI tools for educational purposes, or you want better options than the typical Big Tech chatbots, a few alternatives are available. Look for tools with strong data minimization and deletion policies that demonstrate compliance with the Children’s Online Privacy Protection Act (COPPA), a federal law that requires companies to get parental permission before collecting children’s data.
There are also apps like Khan Academy’s Khanmigo. While Khanmigo is an AI chatbot, which means it still comes with certain risks, Khan Academy is a well-known and respected nonprofit. The organization assures parents that it will notify them about any problematic interactions and that chats will remain visible to guardians or school administrators.
Talk to Your Kids About Protecting Themselves
Even with the best protections in place, there’s still a chance that your children will run into AI bots sooner or later. And those AI bots will probably want to collect their data.
So make sure that you:
- Teach your kids not to share personal information with AI bots or other online platforms
- Explain what a problematic interaction with a chatbot might look like
- Teach good digital hygiene, like not oversharing on social media or posting photos publicly
Regularly Monitor Devices and Accounts
If you do allow your children to interact with AI in some capacity, make sure you put plenty of oversight in place to keep them safe. Keep an eye on changes to privacy policies, and report any issues with harmful or dangerous messages directly to the platforms. You can also alert the Federal Trade Commission (FTC) or local data protection agencies if you believe a platform has mishandled your child’s data.
Remove Personal Information from Data Broker Sites
If you are worried that an AI tool may have sold your child’s personal information, do a quick search online to see if your child’s profile has been added to any data broker sites. If it has, you can probably opt out. Most sites, though not all, have a page that allows you to request that they remove your child’s information. If you can’t find the page directly on the site, type “opt-out [data broker name]” in your search bar, and you should be able to find it.
Note that even if you opt out, many data brokers will repost profiles within a few months. Searching for and removing profiles should therefore be a regular practice for parents who want to keep their children’s information offline.
AI Is Here to Stay
As AI-based chatbots become more commonplace and hungrier for data, it’s all the more important to know what measures to take to protect your children online. Big Tech is obsessed with data collection, which means you are the only one who can keep your kids’ personal information safe.

Rob Shavell is CEO of DeleteMe, an industry leader in personal data protection and the creator of the Privacy-as-a-Service industry category. Rob has been quoted as a privacy expert in the Wall Street Journal, New York Times, The Telegraph, NPR, ABC, NBC, and Fox. Rob is a vocal proponent of privacy legislation reform, including the California Privacy Rights Act (CPRA).