AI has made it easier than ever for student developers to work efficiently, tackle harder problems, and pursue ambitious projects. But for students earning technical degrees, these new capabilities also create genuine tensions around learning.
How much should I use AI? What should I use it for?
With 90% of technology professionals now using AI in their daily work, according to Google’s DORA 2025 report, understanding how the next generation navigates these tools matters more than ever. Contrary to fears that students use AI to cheat or that it makes them intellectually lazy, our research with UC Berkeley students reveals something different. Students treated AI as a learning partner rather than a shortcut, using it strategically for some tasks while deliberately turning it off for others.
The research
Our team of four student researchers (Andrew Harlan, Mindy Tsai, Kenny Ly Hong, and Karissa Wong) conducted a mixed methods research project with UC Berkeley students in Computer Science, Electrical Engineering, Design, and Data Science to understand how they’re integrating AI into their academic work.
A separate UC Berkeley study (conducted by Edward Fraser, Jessie Deng, and Eileen Thai) used eye-tracking technology to observe how developers with one to five years of experience actually interact with AI coding assistants. Both student teams were supported by dedicated mentors, with Googlers Harini Sampath, Becky Sohn, and Derek DeBellis advising the mixed methods research, and UC Berkeley Professor John Chuang, PhD, advising the eye-tracking study.
Together, these studies reveal three key insights about how students balance AI’s capabilities with their need to develop genuine expertise. The patterns emerging among students closely mirror what DORA research has found in professional developers.
Finding #1: The 24/7 office hour
AI as a tutor, not a shortcut
When asked to describe their relationship with AI, every student in our study used educational terms. They referred to AI as a “tutor” or “teacher,” not an assistant or productivity tool.
“AI is a teacher…in the sense that it is most helpful for understanding dense content and potentially parts of code that are prewritten in the database to allow for fundamental understanding of the project.”
“I use [AI] as my own private tutor…to [cover] any specific topics in the classes or lectures…not just in CS classes but in all classes.”
This framing matters because it reveals strategic use rather than dependency. Rather than asking AI to complete assignments, students described using AI metacognitively to identify gaps in their knowledge, clarify confusing concepts, and guide their learning process. They used AI to summarize academic papers mentioned in lectures so they could decide which ones warranted deeper reading. They asked AI to explain why their code produced specific errors.
One student explained their workflow:
“When I don’t understand what my professor is explaining, I ask AI to help me understand the concept or what a piece of code is doing. If I don’t know how to begin a lab, I give the prompt to AI to figure out where to start, then write the code myself and ask AI to correct my work.”
For students with learning disabilities, this constant availability addresses a real access gap:
“As a student with a learning disability, I need more time to understand a problem. AI has helped me a lot—it’s like having a 24/7 TA.”
By extending access beyond limited office hours, AI allows students to iterate on their understanding without waiting for help. This frees up cognitive space for higher-level thinking:
“I spend less time actually coding and more time on big picture ideation. Now, my time is spent thinking through logic, concepts, and coming up with ideas creatively, rather than producing code manually.”
These accounts portray AI as a scaffold for exploration rather than a producer of finished work. This mirrors what DORA research found: when AI handles routine toil, developers can focus more energy on delivering user value.
Finding #2: Active resistance to overdependence
Building guardrails to protect learning
Despite embracing AI as a learning tool, students expressed genuine anxiety about becoming too dependent on it.
“If AI disappeared, I’d struggle more with figuring out how to solve things on my own.”
In a recent study using EEG to measure brain activity during essay writing, researchers found that AI users showed weaker cognitive engagement than participants using search engines or no tools at all. Frequent AI users who later wrote without assistance also remembered less of their own content and felt less ownership over it, a pattern the authors termed “cognitive debt.”1
Our research revealed a positive signal: rather than passively accepting this risk, students responded by establishing deliberate boundaries.
One mechanical engineering student described how she’s developed a competency-based system over years of working with electronics:
“When I use basic sensors like a servo or ultrasonic, I can still code that myself. But when I have more complex sensors where I don’t necessarily know the exact functions, that’s when I’ll use AI.” She explained her reasoning: “I have the background to understand why things aren’t working, but I don’t always know the direct language to fix it, so AI is good for helping overcome that.”
For a recent project building a tactile storytelling tool, she knew the basic concept but needed help structuring the counting and comparison system. “AI was really useful in setting up that structure, but I still had to code after to fine-tune it.” She’s clear about the division of labor: “I’m still working with doing the code myself. I wouldn’t say that I’m just handing it off like a technical expert. I’m working in tandem with it. I have to be the initiator of what I want it to actually do. If I just give it a blind request, it’s not useful at all.”
Even when students do engage AI, they often set explicit rules:
“Sometimes I tell AI not to give me the full answer, just to guide me in the right direction.”
Students have developed several specific strategies to prevent overreliance:
Limiting access to powerful models:
“I don’t want to pay for AI tools because it could lead me to overuse the models.”
Alternating between assisted and unassisted work:
“I have actually gone back to hand-coding for certain things, like a for-loop for example.”
Warning against “vibe coding”:
“AI tools can definitely be a good companion to boost developer productivity. However, one needs to be very mindful and not get used to vibe coding. It’s very important to understand and validate the code AI is generating and use it appropriately.”
This anxiety is itself a form of metacognitive awareness. Students recognize that the path of least resistance may not be the path of greatest learning. This mirrors DORA’s findings: despite 90% adoption, about 30% of practitioners report little to no trust in AI-generated code. Effective AI use requires mastering critical evaluation and verification, not just adoption.
Finding #3: Knowing when to use AI and when to turn it off
What the eye-tracking data reveals
A separate study using eye-tracking technology provides behavioral validation of these self-reports. When researchers observed developers with one to five years of experience interacting with AI coding assistants, they found stark differences in AI engagement depending on task type:
- During interpretive tasks requiring deep understanding: <1% of visual attention on AI
- During mechanical tasks like boilerplate code: 19% of visual attention on AI
Developers actively ignored AI suggestions during complex work, even when those suggestions were accurate and could have saved time. AI creates cognitive load during deep-understanding work, and experienced developers know when to turn it off.
Strategic selectivity, not blanket adoption
Students in our interviews echoed this context-dependent approach:
“I typically use AI to generate ideas for a starting point.”
“Despite knowing AI was allowed, I wanted to go through the friction of learning and failing and having space for creativity.”
Customization matters
Most AI coding assistants now let developers toggle inline suggestions, enable on-demand only modes, or adjust suggestion frequency. By experimenting with these settings, developers can align AI behavior with the cognitive demands of different tasks, reducing disruption during deep work while maintaining assistance for routine tasks.
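As one concrete illustration, VS Code exposes this kind of control for GitHub Copilot. The snippet below is a sketch of one possible settings.json configuration (JSONC, so comments are allowed), not a recommendation; exact setting names vary across assistants and versions.

```json
{
  // Keep VS Code's inline suggestion UI available.
  "editor.inlineSuggest.enabled": true,
  // Per-language Copilot control: on by default for mechanical coding,
  // off for file types where deep reading and writing happen.
  "github.copilot.enable": {
    "*": true,
    "markdown": false,
    "plaintext": false
  }
}
```

A developer might pair a configuration like this with on-demand invocation (e.g. triggering completions manually) during interpretive work, matching the attention patterns the eye-tracking study observed.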
What this means for the industry
Students are modeling the future of AI-augmented development
The students in these studies are ahead of the curve. They’ve developed a practical literacy: knowing when to engage AI, how to verify its output, and when to work manually to preserve understanding. For teams navigating AI adoption, the student experience offers direction:
- Experiment with customization to find configurations that support rather than disrupt work
- Build verification practices into workflows rather than accepting suggestions uncritically
- Create space for unassisted work on complex problems where understanding matters more than speed
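The second practice above can be sketched as a simple habit: treat AI output as untrusted until it passes tests you wrote yourself. In this hypothetical example, `ai_suggested_median` stands in for an unreviewed AI suggestion, and `verify` holds developer-authored test cases written independently of it.

```python
def ai_suggested_median(values):
    # Stand-in for a hypothetical AI-generated function, not yet reviewed.
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

def verify(fn):
    """Developer-authored test cases, written independently of the AI output."""
    cases = [
        ([1, 3, 2], 2),
        ([4, 1, 3, 2], 2.5),
        ([7], 7),
    ]
    return all(fn(inp) == expected for inp, expected in cases)

if __name__ == "__main__":
    # Accept the suggestion only if it passes the developer's own tests.
    print("accept" if verify(ai_suggested_median) else "reject")
```

The point is not the median function itself but the workflow: the tests encode the developer's understanding of the problem, so passing them is evidence the suggestion matches intent rather than merely looking plausible.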
As AI becomes foundational to software development, the question isn’t whether to adopt these tools but how to work with them thoughtfully. The students at UC Berkeley are showing us one answer: with curiosity, caution, and a commitment to genuine learning that technology can support but never replace.
To learn more about how professionals across the industry are navigating AI adoption, download the DORA 2025 State of AI-assisted Software Development Report. You can also read the full research articles from our collaboration with researchers at UC Berkeley.
1. Kosmyna, Nataliya, et al. “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task.” arXiv, 10 June 2025, doi:10.48550/arXiv.2506.08872. Accessed 28 Jan. 2026.


