🤖 Tech Talk: Is ChatGPT making us lazy thinkers?
Plus: 10% of GenAI apps used by firms are high-risk; AI tool of the week: ChatGPT Record mode to transcribe audio meetings; AI handles 45% of insurance workflows, speeds up resolution; and more…

Dear reader,
In January 2023, I asked: Will ChatGPT rewire our brains? The reason: ChatGPT had garnered 100 million users within two months of its release in November 2022, and many wondered whether artificial intelligence-powered tools like these would rewire our brains and weaken our reading, writing, comprehension, and communication skills.
To be sure, new technologies have historically triggered such concern. Nicholas Carr, in The Shallows (2010), argued that the internet weakened deep thinking even as it improved our ability to scan information. His views remain unchanged in the book’s 2020 update. Clay Shirky, who teaches new media at New York University, offered a broader take in his book ‘Here Comes Everybody: The Power of Organizing Without Organizations’: “When we change the way we communicate, we change society.”
Indeed, each leap—from wheels to smartphones—reshapes our worldview. While some decry smartphones for spreading misinformation or distracting youth, millions have gained skills via YouTube. In communication, especially, change feels more profound because it alters how we think, connect, and create—provoking both anxiety and awe.
In September 2024, I asked: 'Can AI agents ideate better than humans?' AI agents are AI models capable of autonomous decision-making and action to achieve specific goals; in simple terms, they work without human intervention.
Experts, however, remain divided on this subject. For instance, an August 2024 study from the University of Bath and the Technical University of Darmstadt suggested that large language models (LLMs) may impress with their capacity to process larger datasets and generate increasingly sophisticated language, but that these models were unlikely to evolve and develop complex reasoning skills. In March 2024, though, Stanford University and Notbad AI researchers indicated that their Quiet Self-Taught Reasoner (Quiet-STaR) AI model could be trained to think before responding to prompts, representing a step towards AI models learning to reason.
In September 2024, Stanford researchers in a study titled 'Can LLMs Generate Novel Research Ideas?' asked whether AI research agents can "autonomously generate and validate new ideas". In other words, can AI agents think and reason too? The researchers posited that while recent advances in LLMs had raised hopes about their ability to speed up scientific discovery, there was no evidence that LLMs could generate novel, expert-level ideas or complete the research process.
At present, AI and generative AI (GenAI) models have made many advances in reasoning. They also hallucinate (fabricate facts) less than they did earlier. Early this year, Chinese AI lab DeepSeek shook the world of AI by introducing DeepSeek-R1, an open-source reasoning model that challenged OpenAI's o1 model and was developed at a fraction of the cost of existing AI models while also consuming less power. In response, OpenAI released the o3-mini model, followed by GPT-4.5 (internally codenamed Orion). In March, the world witnessed another autonomous general-purpose AI agent, Manus, from China, which some heralded as a second "DeepSeek moment".
These rapid advancements have once again prompted researchers to ask whether ChatGPT-like models can rob us of our thinking power.
In a new 206-page study titled 'Your Brain on ChatGPT', researchers from the Massachusetts Institute of Technology (MIT) set out to understand the "cognitive cost of using an LLM in the educational context of writing an essay".
In the study, released this month, the researchers compared how essay writing by 54 participants was affected by the tool used: large language models (LLMs), search engines, or no tools at all (brain-only). Each participant wrote essays over three sessions using just their assigned method. In the fourth session, LLM users had to write without help, while brain-only participants tried using LLMs.
Note: The 54 participants, aged 18 to 39, came from five universities in the Greater Boston area: MIT, Wellesley, Harvard, Tufts, and Northeastern. Of them, 35 were undergraduates, 14 were postgraduates, and 6 were post-MSc or PhD professionals working as postdocs, researchers, or software engineers.
How they fared
Using electroencephalography (EEG) to track brain activity, researchers found brain-only users showed the strongest cognitive engagement, followed by search-engine users. LLM users had the weakest brain activity and lowest memory recall. When LLM users had to write unaided, their performance dropped further. In contrast, brain-only users introduced to LLMs showed a spike in visual processing and memory recall. Essays by LLM users were more uniform and scored lower. They also felt less ownership of their writing and struggled to recall it afterward.
Conclusion
The study suggests that while LLMs are efficient, they may reduce learning, memory, and critical thinking if overused. The researchers, however, added that "longitudinal studies (research studies that follow the same people over a long period of time to observe how things change) are needed in order to understand the long-term impact of the LLMs on the human brain, before LLMs are recognized as something that is net positive for the humans".
10% of GenAI apps used by firms are high-risk: report
Companies use an average of 66 GenAI apps, about 10% of which are high-risk, noted the 'State of Generative AI 2025' report released by Palo Alto Networks this month. While GenAI adoption offers significant productivity benefits, the report cautions that unsanctioned use, emerging threats, and a lack of governance have rapidly expanded the attack surface for organisations, particularly across India and the Asia-Pacific region.
The report also warned of a surge in unsanctioned GenAI usage, emerging risks like “Shadow AI” (AI tools used by employees without the knowledge or approval of their company’s IT or security teams), jailbreaking (bypassing a model’s built-in safety restrictions to gain access or make it do things it wasn’t meant to do), and exponential traffic growth—especially in India—driven by tools like Grammarly and Microsoft Copilot.
Highlights
Surge in usage: GenAI traffic rose by 890% in 2024.
India’s top apps: Grammarly (32.56%), Microsoft Power Apps (19.98%), and Copilot (16.37%) led GenAI use.
Risk exposure: 10% of the average 66 GenAI apps used by companies are classified as high-risk.
DeepSeek boom: Traffic linked to DeepSeek-R1 jumped 1,800% within two months of its January 2025 launch.
Data loss climbs: GenAI-related data loss incidents doubled in 2025, now making up 14% of all data security breaches.
Shadow AI risk: Unapproved GenAI use creates blind spots for IT teams, complicating data control.
Jailbreaking threat: Many GenAI models remain vulnerable to jailbreaks that generate harmful or illegal content.
Sectoral risks: Tech and manufacturing account for 39% of global AI coding activity, heightening IP exposure.
AI Unlocked
by AI&Beyond, with Jaspreet Bindra and Anuj Magazine
The AI feature we have unlocked this week is: ChatGPT Record mode to transcribe audio meetings
What problem does it solve?
Meetings are critical for collaboration, but capturing their essence is often difficult. Manually scribbling notes often misses key points, leading to miscommunication or forgotten action items. Post-meeting, summarizing discussions takes hours, and transcribing audio manually is tedious, error-prone, and time-consuming. This chaos frustrates teams, delays decisions, and risks losing valuable insights from brainstorms or client calls. ChatGPT Record solves this by automatically transcribing audio, generating structured summaries, and transforming them into actionable outputs, saving time and ensuring clarity.
How to access: Currently, it’s only available for the macOS desktop app and for ChatGPT Enterprise, Edu, Team, and Pro workspaces. Visit chatgpt.com.
ChatGPT Record can help you:
- Transcribe meetings: Instantly convert audio from meetings or voice notes into text.
- Summarize discussions: Create structured summaries saved as canvases in your chat history.
- Transform outputs: Convert summaries into emails, project plans, or code scaffolds.
- Reference past recordings: Use prior transcripts for context-aware responses.
Example:
Imagine you’re leading a team brainstorming session for a product launch. The room buzzes with ideas: marketing strategies, feature tweaks, and timelines. But you’re struggling to keep up.
Here is how ChatGPT Record can help you:
- Start recording: Click the Record button, grant microphone permissions, and confirm team consent per local laws.
- Speak freely: As your team debates pricing and launch dates, ChatGPT transcribes live, displaying a timer. You pause to clarify a point, then resume.
- Generate notes: After the meeting ends, hit Send. The transcript uploads, and a canvas appears with a summary, highlighting marketing ideas, assigned tasks, and deadlines.
- Transform: Ask ChatGPT to draft a project plan from the canvas, including a Gantt chart outline. Export it as a PDF and share it with stakeholders.
What makes ChatGPT Record special?
- Real-time transcription: Live transcription with pause/resume flexibility.
- Actionable outputs: Summaries can be repurposed into plans, emails, or code.
- Privacy-first: Audio files are deleted post-transcription; transcripts follow workspace retention policies.
Note: The tools and analysis featured in this section demonstrated clear value based on our internal testing. Our recommendations are entirely independent and not influenced by the tool creators.
AI handles 45% of insurance workflows, speeds up resolution
AI is transforming insurance from a reactive service to a proactive experience. Nearly half of all customers now receive their policy within 15 minutes, down from 4 hours earlier, according to a new Policybazaar report.
Highlights
Up to 45% of all insurance workflows are being handled by AI, reducing manual overhead and minimizing errors
48% of customers now get their policy in just 15 minutes, down from 4 hours earlier
More than 30% of first-contact queries are attended by AI chatbots, up from 15% a year ago
15% drop in resolution turnaround time (TAT)
14x improvement in Early Claims Factor due to fraud-detection AI
Customer satisfaction score (CSAT) has risen to 94%
GenAI bots, currently in beta, can now explain complex insurance terms
Policyholders are getting better answers in real time and in their preferred language
In term plans, nearly 11% of cases are flagged by AI for potential fraud; in savings plans, it’s around 16%
You may also want to read
AI’s biggest threat: Young people who can’t think
Apple explores AI search future with potential Perplexity takeover: Report
OpenAI scrubs mention of Jony Ive partnership after judge’s ruling over trademark dispute
Grok 3.5: Musk's answer to ‘Biased’ AI aims to rewrite human knowledge
US accuses DeepSeek of aiding China's military and dodging chip export rules
Zuckerberg leads AI recruitment blitz armed with $100 million pay packages
Hope you folks have a great weekend, and your feedback will be much appreciated — just reply to this mail, and I’ll respond.
Edited by Feroze Jamal. Produced by Shashwat Mohanty.