A recent study reveals a surprising contrast in learning methods: while ChatGPT and other large language models (LLMs) offer convenience, they may not foster deep understanding. The research, co-authored by marketing professors, found that relying on these models to synthesize information produces shallower knowledge than traditional Google searches.

The study spanned seven experiments with more than 10,000 participants, who learned about everyday topics such as gardening using either an LLM or a Google search. The key finding: participants who used LLMs felt they learned less, wrote shorter and less informative advice, and were less likely to adopt it. The pattern held even when the researchers controlled for the information participants were exposed to and the search platform they used.

The authors attribute the gap to active engagement: a Google search requires navigating links, reading sources, and interpreting them yourself, which leads to deeper understanding. They suggest that while LLMs have real benefits, users should be strategic about when to rely on them, especially when the goal is deep, generalizable knowledge. Future research aims to explore ways to make learning with LLMs more active, addressing the challenge of preparing students for a world in which LLMs are integral.