    ChatGPT for teenagers: 2025

    Enhanced parental guidance tools in ChatGPT for teenagers:

    OpenAI has launched a new parental control for ChatGPT, one of the world’s most popular generative AI chatbots.

    Parents can now link their account to their teen’s account, creating a safe and age-appropriate experience.

    This update comes amid a San Francisco court case in which the parents of a 16-year-old boy, Adam Raine, claim he committed suicide after allegedly being encouraged by ChatGPT.

    How the controls work:

    • Parents send an invite to their teen to link their accounts.
    • When a teen accepts, parents can manage settings from their account.
    • Teens can also send invites to parents.
    • If a teen unlinks their account, parents receive an immediate notification.
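    The linking flow described above can be sketched as a simple state model. This is purely illustrative code; the class and method names are invented for this sketch and are not part of any OpenAI API.

```python
# Hypothetical sketch of the invite/link/unlink flow described above.
# All names here are illustrative, not an actual OpenAI interface.

class LinkedAccounts:
    def __init__(self):
        self.links = {}          # teen_id -> parent_id
        self.pending = set()     # (inviter, invitee) invite pairs
        self.notifications = []  # messages sent to parents

    def send_invite(self, inviter, invitee):
        """Either side (parent or teen) can start the link."""
        self.pending.add((inviter, invitee))

    def accept_invite(self, parent, teen):
        """Once accepted, the parent can manage the teen's settings."""
        if (parent, teen) in self.pending or (teen, parent) in self.pending:
            self.pending.discard((parent, teen))
            self.pending.discard((teen, parent))
            self.links[teen] = parent

    def unlink(self, teen):
        """If the teen unlinks, the parent is notified immediately."""
        parent = self.links.pop(teen, None)
        if parent is not None:
            self.notifications.append(f"{teen} unlinked from {parent}")
```

    Note that invites work in both directions, but only the teen's side can trigger the unlink notification, matching the behavior described above.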

    Extra safeguards:

    Linked teen accounts automatically come with content filters. Parents can turn off some of these settings, but teens can't change any of them themselves. The filters reduce graphic content and block viral challenges, romantic or violent roleplay, and unrealistic beauty ideals.

    Features for Parents:

    Through a control page, parents have access to the following features:
     • Set Quiet Hours – Choose specific times when the teen cannot use ChatGPT.
     • Turn Off Voice Mode – Disable the voice conversation option.
     • Turn Off Memory – Stop ChatGPT from saving memories of the teen's conversations and using them in replies.
     • Remove Image Generation – Disable ChatGPT's ability to create or edit images.
     • Opt Out of Model Training – Prevent teen conversations from being used to improve ChatGPT's AI models.
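    The quiet-hours setting above is easy to picture as a time-window check; the window may cross midnight (say, 22:00 to 07:00). This is a hypothetical sketch of how such a check could work, not OpenAI's implementation; all function and key names are invented here.

```python
# Hypothetical quiet-hours check like the one described above.
# Handles both same-day and overnight windows. Illustrative only.
from datetime import time

def in_quiet_hours(now: time, start: time, end: time) -> bool:
    """Return True if `now` falls inside the parent-set quiet window."""
    if start <= end:                      # same-day window, e.g. 13:00-15:00
        return start <= now < end
    return now >= start or now < end      # overnight window, e.g. 22:00-07:00

def chat_allowed(now: time, settings: dict) -> bool:
    """Block usage during quiet hours; the other toggles gate features, not access."""
    qh = settings.get("quiet_hours")      # e.g. (time(22, 0), time(7, 0)) or None
    return qh is None or not in_quiet_hours(now, *qh)
```

    The overnight branch is the part that is easy to get wrong: a naive `start <= now <= end` check would never match a 22:00-to-07:00 window.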
    Robbie Torney, Senior Director for AI Programs at Common Sense Media, said:
    “These parental controls are a strong starting point for parents to help manage their teens’ ChatGPT use.”
    He added that parental controls alone may not be enough, and that they work best when combined with:
     • Having family conversations about responsible AI use,
     • Setting clear family rules for technology,
     • And parents actively understanding and monitoring their teens’ online activity.

    How to Avoid Excessive Dependence on AI:

    Alex Ambrose, a policy analyst at the Information Technology and Innovation Foundation (a research and public policy organization in Washington, D.C.), said that parental controls are a step in the right direction for managing children's online safety, especially because they give families flexible options.

    Ambrose also pointed out that not every household has parents with the time or skills to focus on their children's online activity, and that even for those who do, monitoring tools are helpful, so the platforms' implementation of these systems is a positive sign.

    Vasant Dhar, NYU professor and author of Thinking With Machines: The Brave New World of AI, said, “OpenAI is signaling that it is concerned about safety and harm issues for kids, and that’s a good start. If children know their interactions are monitored, they’re less likely to go in the wrong direction.”

    Eric O’Neill, ex-FBI counterintelligence operative and author of Cybercrime: Cybersecurity Tactics to Outsmart Hackers and Disarm Scammers, said that parental controls give families a chance to set boundaries before children become overly dependent on AI.

    He said: “AI is powerful, but reaching for it too quickly can kill children’s imagination and creativity and dull their ability to think. Parents need to intervene; otherwise, children will outsource their thinking. I worry about a future where there are no blank pages left.”


    Inspired by Legal Pressure?

    Lisa Strohman, founder of Digital Citizen Academy (Scottsdale, Ariz.)—an education-focused organization that teaches students, parents, and educators safe and responsible technology use—agreed that parental controls are a good start.

    She also said:

    “But honestly, after working in this field for 20 years, I think this is just risk mitigation they’ve done because of recent difficult situations.”

    She added:

    “Some controls are better than none. But parenting can’t be outsourced.”

    “Being realistic is essential. Can companies that want us to use their products frequently have really strong safeguards that limit the use of their products?”

    AI ethicist Peter Swimm (founder of Toilville, Bellevue, Wash.) called the controls “woefully inadequate” and said:

    “These are just put in place to avoid lawsuits.”

    He explained that AI results are unpredictable, and most importantly, AI chatbots are designed to give the user what they want—even if it’s wrong.

    He also shared a personal story:

    “I have an 11-year-old daughter, and I don’t let her use AI without supervision, because it can be problematic and dangerous. If children don’t have the proper context, it can reinforce negative outcomes.”

    Suspicious Partner:

    Giselle Fuerte, founder and CEO of Being Human With AI (Spokane, Washington), said that parental controls are crucial for AI chatbots, as using them without supervision can expose children to harmful and unsuitable interactions.

    She compared it to rating systems: just as we rate video games and movies for children, we must also build controls for AI systems, which deliver powerful, personalized engagement regardless of the user’s age, maturity, or consent.

    Yaron Litwin, CMO of Canopy (which makes software to monitor children’s devices and online activity), added that children these days rely on chatbots not only for school and research, but also for companionship, advice, and other tasks. The problem is that chatbots’ falsely confident answers, subtle biases, and artificial intimacy can negatively impact children.

    So he made it clear: as long as children have chatbot access, parental controls are essential to keep them safe.

    AI Responsibility: Creating User-Friendly Boundaries

    David Proulx, co-founder and chief AI officer of HoloMD (an AI-powered healthcare tech company), explained that the goal is to let children use new technology effectively without any negative impact, which is why it’s important for chatbots to have parental controls.

    “These can be risky for children because they’re always on and always agreeable,” he explained. If children turn to AI chatbots instead of the people around them, he warned, that is a bad sign and can seriously harm them. He suggested setting simple limits on children’s chatbot conversations, noting that restricting late-night use helps break dependency, and that smart guardrails should focus on behavior rather than content.

    For more information on setting up and using these new features, visit OpenAI’s parental controls introduction page.


    The loss of thousands of children in the digital world is a picture of our collective failure.
