Leading and growing teams in the age of AI
Emphasising a love of learning
Bevan Wentzel
April 9, 2025
The problem
The explosion of AI capabilities in recent years has been both exciting and worrying for the development community. On one hand, AI has the potential to unlock an enormous boost in productivity; on the other, there is a risk of further erosion of job opportunities for software developers, especially those just out of university^3.
The growing sentiment is that software development will evolve from being purely about "building" into a more specialised management and orchestration role. Meanwhile, a massive cohort of software developers is exiting university without the skill set required to fill this new role, and some would argue that an AI agent will soon be able to replace the junior builder.
How do we find people who are capable of adapting to this new way of working while still having a grasp of the fundamentals?
Finding talent
It's no secret that educational institutions are struggling to detect AI usage and enforce bans on it, as well as to create assessment mechanisms that can't easily be gamed with AI^2. This has a knock-on effect in industry, where doubt is cast on a fresh graduate's actual exposure to the curriculum. How do I, as a hiring manager, weigh a college education in my hiring decisions with these issues in mind?
Then there is the issue of assessing junior candidates. Gone are the days of sending off a take-home assignment to assess their competency.
We need to adopt new paradigms and techniques for assessing candidates. This may mean going back to basics with an old-school whiteboard assessment; however, that is time-consuming, and I've found whiteboard assessments ineffective at determining a candidate's skill. What may be required is that we embrace AI workflows in our assessment pipeline. What this looks like exactly, I'm not sure, but I have explored a few avenues:
- Take-home assignments where AI usage is encouraged but the scope is larger and the standard is set much higher.
- A live-coding / prompting session where the candidate demonstrates their ability to effectively judge an LLM's generated code.
- Providing an example of broken or sub-standard code and asking the candidate to improve or fix it (a sketch of what such a snippet could look like follows this list).
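To make that last idea concrete, here is a minimal, hypothetical Python snippet of the kind of sub-standard code a candidate (or an LLM) might be asked to critique and improve. It isn't from any real assessment; it simply "works" while hiding a few common pitfalls.

```python
# Hypothetical review exercise: the function runs, but hides common pitfalls.
# Things a reviewer should spot:
#   - the mutable default argument `seen=[]` is shared across calls
#   - the bare `except` silently swallows every error, including typos
#   - `email not in seen` is a linear scan on a list, so the loop is O(n^2)
def collect_unique_emails(users, seen=[]):
    for user in users:
        try:
            email = user["email"].strip().lower()
        except:
            continue
        if email not in seen:
            seen.append(email)
    return seen
```

A strong candidate, whether they wrote the fix themselves or are judging one proposed by an LLM, should be able to name these issues and explain why a `None` default and a set-based de-duplication are the safer choices.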
Growing talent
We all know learning takes time. Short-cutting it by rushing through material and jumping straight to a result may be satisfying in the short term, but it slows a person's learning in the long term.
AI has narrowed the gap between "getting started" and "completion," which can create a false sense of accomplishment without the person actually having learned anything or struggled to get there. In human psychology, dopamine can play a vital role in learning^1: small dopamine boosts throughout the process help the brain connect the neural pathways needed to develop a skill. I remember the first time I created an anchor tag in a hand-written HTML document that linked to another page; the rush I got from such a trivial piece of learning was immense.
There's a possibility that AI is replacing the slow learning that comes with frequent, small dopamine "hits" with "fast" learning that bypasses this process entirely.
Instilling a love for learning
Teaching people to love learning is key to their success.
- Celebrate small wins
- Safe, guided exploration through pair programming
- Emphasis on growth over output
- Implement a learning/growth plan
- Teach the value of testing your assumptions
A learning-focused mentality, rather than a results-focused one, will not only produce better outcomes overall but also give developers the confidence to use AI tools effectively (and validate their output) rather than avoid them entirely.
AI as a learning tool
While I may have been focusing on the negatives of AI when it comes to growth and learning, there are real benefits to using it as a tool for structured learning. A few examples of where AI can help:
- Creating a learning plan or curriculum.
- Using AI tools as a search engine or knowledge base.
- Assessing or suggesting improvements to your code as you learn (after you've struggled and produced something yourself), then reinforcing the learning by implementing the suggestions yourself and testing your assumptions.
"Getting sh*t done"
Moving fast and breaking things has been the mantra of many a startup. It worked because you could build, make mistakes, and then learn from those mistakes. It's easy to forget that part of the "move fast and break things" cycle was fixing the things you broke. It's a lot easier to learn from a mistake you made yourself than from a mistake an AI has made on your behalf.
In conclusion, we all need to be hyper-cognizant about learning. It's what makes us human. Learning used to be an implicit bonus while building, and if we're not the ones doing the building, who is doing the learning?
Citations
- ^1 Treadway, M. T., et al. (2012). Dopaminergic Mechanisms of Individual Differences in Human Effort-Based Decision-Making.
- ^2 Fintan Hogan (2025). AI cheats 'slip under radar' as few university students penalized.
- ^3 Sarah Perkel (2025). The bar for junior coding jobs has 'risen dramatically,' cofounders of interview prep firm say.