A University of Cambridge study reveals that AI-powered toys for young children often fail to recognize emotional cues and fall short in supporting key developmental play. The findings, published in 2026, suggest parents should approach these interactive gadgets with caution.
Researchers observed children playing with Gabbo, a chatbot-enabled toy from Curio Interactive. In one telling exchange, a child said, "I love you," to which the toy replied with a procedural warning about interaction guidelines. The study combined surveys, focus groups, and monitored play sessions involving children, parents, and childcare professionals.
While some language skill benefits were noted, the toys frequently misinterpreted what children said and responded inappropriately to emotional expressions. Professor Jenny Gibson, a developmental psychology expert who co-authored the study, pointed to a significant gap in the design process. "What's missing is the expertise of what is good for children in these kinds of interactions," she said. Gibson questioned whether tech investors would prioritize children's wellbeing over profits.
The report advocates for new regulations, including clear labeling of capabilities and privacy policies for AI toys. It also recommends parents keep such devices in shared family spaces for supervised use.
This research arrives as AI companions face legal scrutiny over psychological safety concerns, with some lawsuits alleging certain chatbots have encouraged harmful behavior. Major tech firms have implemented additional safeguards in response.
Curio Interactive was aware of the research but not directly involved in it. The company did not immediately provide comment.
Gibson expressed surprise at parental enthusiasm for such toys, emphasizing the stark lack of independent research on AI's effects on early childhood development. She urged companies to collaborate directly with child development experts during product design.
Source: CNET
