7 June 2025, 7-minute read
One of the responsibilities of my new role at Sevenoaks School is to run various programmes relating to entrepreneurship and innovation. This included (rather excitingly) organising the school's inaugural hackathon. I was aware this was uncharted territory for both me and the school, so I reached out to Australian wizard and hackathon aficionado, Tom Bizzell, for pointers. For context, Tom completed a hackathon tour of Europe, taking part in 18 hackathons with 13 podium finishes! We sat down to talk all things tech at Geoffrey's cafe by King's Cross – a conversation that lasted almost two hours.
Immediately, Tom asked me if I'd heard of Lovable – to which I answered 'No.' Gripped by a cocktail of disbelief and fierce excitement, Tom opened Lovable on his iPhone and asked me for an idea for an app. I shared one that I had come across and he typed it into the prompt box, then promptly put the device face-down on the table. "We'll let it do its thing and return to it in a bit," he told me.
Over the next ten minutes, I shared the vision for this year's hackathon: open-ended challenges that incorporated relevant technologies or tied into topical STEM themes. Among them was a brief centred around building a solution incorporating some form of local AI inference. From Microsoft's Copilot+ PCs to Apple's M4-powered iPads, there has been an abundance of products launched with a focus on local ML inference capability (usually measured in tera-operations per second, or TOPS). It seemed clear to me that this was the perfect time to start thinking about how best to leverage this edge-computing power. Tom's reaction cemented the notion that this was worth pursuing and would provide enough of a challenge to fill a 12-hour build sprint.
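For readers who haven't come across the term: 'local inference' simply means the model runs on the device itself rather than on a cloud server. Here's a minimal sketch of what that looks like in practice, assuming Python with the Hugging Face transformers library installed – the model named below is purely an illustrative choice, not part of the hackathon brief:

```python
# A minimal sketch of local inference: everything below runs on this machine.
# After the one-time model download, no network calls or API keys are needed.
# (The model name is illustrative; any small on-device model works similarly.)
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("This hackathon project runs entirely on my laptop!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

The same idea scales from a laptop CPU up to the dedicated neural accelerators in the devices mentioned above – the point of the brief being that the data never has to leave the device.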
Tom then showed me the result of Lovable's LLM-powered labour. To my amazement, the tool had built a fully functional web app, complete with a working user flow, controls, and inputs! He told me all the code lives in the user's GitHub, and if a backend database is needed, Lovable supports the open-source Supabase. He fired some quick changes at the engine, asking it to shift some UI elements and change how a certain sign-up feature worked – both of which Lovable executed flawlessly.
Now, I can’t speak to the technical debt of the code generated by Lovable. Regardless, this is exactly the sort of tool that empowers anyone to build a functioning demo of their idea at, say, a hackathon. Having used various LLMs to ‘vibe code’ before, I was astounded at the difference between that IDE-based workflow and Lovable’s solution – it was evident why this was Tom’s first and most important revelation to help with the hackathon.
Tom then went on to tell me about his journey in vibe coding. He held a successful dinner at his apartment where guests were encouraged to use Lovable to build an app demo, and he even founded, built, and sold his own AI startup single-handedly. Tom reiterated that he had never had any formal computer science or programming tuition; Lovable made it that easy for him to build his product. He did spend his fair share of time tying bits of code together, and it took him 900+ prompts to reach a release-ready stage, but what resulted was a product clients were willing to pay for. You can read more about Cheslin (now ääni) here.
Tom and I went on to talk about his experience at school and how it transformed his view of the world. “I remember someone coming in to show us Zapier,” he recalled. “He built a working phone bot right in front of us.” Tom reminisced about this moment with lucid reverence: it was the genesis of his entrepreneurial interests. “He showed me then and there that people like me could build real, working services!” Tom went on to build his own Zaps (Zapier flows) and was blown away by how easy it was to build his own internet services and bot backends. He told me he had a bot that could handle phone calls from his friends (sound familiar? 😉). Seeing these work was a huge confidence boost for him. From there onwards, it was all about turning cool ideas into real, working prototypes – a drive Tom clearly still embodies.
Grinning from ear to ear and still revelling in the vivid memory of the Zapier tutorial, Tom was thrilled that today we find ourselves at a similar junction. Here I was with the opportunity to bring that same awakening to a new generation of young ideasmiths. And Tom was totally on board! Where it was Zapier and Microsoft Flow (now Microsoft Power Automate) that empowered people like us to build no-code backend prototypes when we were at school, tools like Lovable and n8n are doing the same for today's innovators. The key difference is in the level of development these new AI-powered tools permit: their output is far more comprehensive and complete from a product standpoint.
I had noticed something on a related strand in one of my lessons recently – a generational gap that I wasn’t expecting to reveal itself so early in my career. I had asked a student who was lacking design direction in one of my D&T classes to compile a moodboard, the rationale being that in completing the exercise, the student would home in on his preferred design direction and aesthetic for the project.
In my time at Loughborough Design School and in prior design projects, I had created many moodboards and collages – the process was so ingrained it was almost second nature: go to Pinterest, find some existing works (products, interiors, websites, etc.) that match the ‘vibe’ of your project vision, and let the algorithm help you find harmonious images. Some of us at LDS had even pre-compiled Pinterest boards with specific design aesthetics for this very purpose.

You can imagine my startlement when the student in question didn’t open Pinterest or Behance or any image search engine, but rather OpenAI's ChatGPT. They asked it to generate an image of a room interior with hardwood floors and a modern furniture aesthetic. To clarify – the student didn’t ask ChatGPT to generate the entire moodboard (that would clearly have been a circumvention of the objective), but rather to generate the components instead of curating them.

The paradigm shift from search and curation to generation was so stark that I shared the anecdote with Tom. He wasn't surprised, and summarised the situation with beautiful concision: “Whereas you and I are ‘web-natives’, these students are ‘AI-natives’.” Tom and I had grown up with iGoogle, YouTube, Ask Jeeves, and, most importantly, an ever-accessible search engine (more recently, one in our pockets at all times). It was an unwavering constant for as long as we could remember, and it conditioned us to approach tasks with the assurance such a resource brings. The unwavering constant for students entering secondary education today will be access to generative AI: large language models, latent diffusion image generation models, and so much more (think music, video, etc.). Any educational programmes we design need to take this into consideration.
In the considerable time we spent at Geoffrey’s, Tom and I touched on many other topics. I came away from our conversation with a renewed energy for the work I do at Sevenoaks School, but perhaps more importantly, a viscerally optimistic (and grateful) outlook on where we are as a species and the unprecedented equity of opportunity today's technology affords us. What a time to be alive!
Safe to say, the man is full to the brim with interesting perspectives, experience, and drive, wrapped in a dangerously disarming personality. He’s off to Australia for the next chapter in his adventure, and you can read more of what he has to share on his Substack.
Meanwhile, I have a lot to digest and a hackathon to prepare for! Till the next one.
- Yuvraj