Stop Treating AI Like a Toy – Building Real Workflows and AI Digital Twins w/ Tim Rayner

About This Episode

In this episode of Digital Nexus, Chris and Mark are joined by Dr Tim Rayner — AI philosopher, author of Hacker Culture and the New Rules of Innovation, and educator at UTS Business School — to unpack how AI can move from “smart autocomplete” to a genuine teammate that helps people think better, learn faster and build braver.

If you’re a founder, product lead, educator or operator trying to make sense of AI beyond the hype cycle, this one goes deep into the human side of intelligent systems: judgment, values, learning, and what it means to flourish in an automated economy.

In this episode you’ll learn:

Why universities are being forced to rethink the traditional degree

How one business school is restructuring its graduate programs around AI, project-based learning and “build-first” mindsets, and what that signals for employers and students.

From tools to teammates

What separates “AI as a fast calculator” from “AI as a collaborator”, and how to frame agent workflows so they support, rather than replace, human judgment.

Hacker culture and innovation inside big organisations

Tim’s take on the “hidden hackers” in every company, and how leaders can give them space, scaffolding and safety to run meaningful experiments instead of gimmicky pilots.

Cognitive offloading vs. cognitive laziness

When it is smart to lean on AI for heavy lifting — and where you need to keep humans in the loop so your team doesn’t atrophy its own thinking.

Philosophy as a practical AI skill

How ideas from Socrates and modern ethics show up in real product decisions: from incentives and power, to who benefits, who’s left out, and how you decide what “good” looks like.

Future-of-work reality check

The kinds of jobs and skills Tim expects to grow as AI spreads — and why curiosity, critical thinking and the ability to work with systems matter more than any single tool.

Designing AI into your workflows without breaking trust

Concrete examples of where AI is already creating value in research, analysis and decision support, and the guardrails you need so teams feel supported, not monitored.

More Episodes

Continue exploring our conversations about technology, innovation, and the digital future.


How Relevance AI & Build Club Turn Demos into Customer Ready AI Products w/ Annie Liao

Digital Nexus Ep 42: Annie Liao — From Community to Company: Building AI Products That

The Loveable for Games – Tempest AI, the one-prompt game creator w/ Jack Wakem


Can AI make game dev truly “prompt to play”? Jack Wakem (Founder, Tempest AI) breaks

From Mental Health App to Indy: Jeff Quach on Building AI Families Actually Trust


From hacking mainframes to building Haven (mental health) and now Indy, an AI co-pilot helping

Episode 44 Transcript

Stop Treating AI Like a Toy – Building Real Workflows and AI Digital Twins w/ Tim Rayner

Speaker: AI philosopher, educator and builder, author of Hacker Culture and the New Rules of Innovation. Mark and I are pleased to welcome Tim Rayner.

I am a philosopher, and I hope to be forgiven for that. I think the answer is humility. You really can’t go past the Three Horizons framework there. Now, working with an AI that operates at their level… AI is kind of smart-stupid. You know, it’s kind of like people I knew in the university: they’re brilliant at doing one thing. Where do you see us going with AI? The routine tasks are going to be automated, which will lead to the transformation of jobs. AI pilots are getting zero ROI, but ninety-five percent of employees in these companies are using AI on the sly. You know, the shadow AI phenomenon. Because human plus AI, that’s the magic. Philosophy teaches you to stay with the question and really explore the question. One of my key philosophical tenets is that a life well lived is a journey of discovery, basically seeking to find out what you are capable of being at the culmination of your life. I think if you start your life with a fixed idea of who you are and just pursue that groove, then for a person like me it’s just a path to stagnation. There’s grassroots experimentation going on at the moment. It’s probably the most rich and dynamic social experiment in technology we’ve ever known, and most of it is just going on under the radar. Cognitive archaeology: excavate all that stuff, code it into AI agents, and then work with those agents to create that mirror effect, to be engaged with and to be used as a thinking partner. It was evaluated and trained by experts to ensure that it gave really sharp, reasonable responses to really deep, probing questions. AI is not so great at that. Practical wisdom is all about contextual understanding; it’s experience and wisdom that separate us from this machine.
Education needs to be reinvented with a focus on AI. Absolutely. And the universities are so behind the ball on this. It’s interesting, because they’ve got that old model. The industry complaint has always been that what’s being taught in the unis isn’t what’s happening in the real world, and hence why you have these guest lecturers coming in to tell you what’s really happening. But generative AI is so new that both sides are on an almost level playing field. So that’s interesting. Yeah, it’s incredibly threatening to most academics, because it is, you know, knowledge on tap. You mean cheating tools, right? It’s not “CheatGPT”. I mean, it’s only cheating if the universities haven’t properly thought about the structure of integration. If you walk into an exam where you’re not supposed to use it and you use it, you’re cheating. But in the outside world it is now such a fundamental part of everything that people are doing. If they’re giving coursework to people and they’re not considering the fact that those people are using AI… you bring up the level. Yeah. I think everyone needs to be building first. We need a build-first mindset. I think you should go into university, learn how to build, and then be guided down a course of AI development: working with AI, learning, completing activities, building something new. You can have experts come in and review the work and engage with what you’re doing. There’s a massive, massive opportunity in AI building. Like, we had Build Club’s founder, Annie, on the show, and that’s grown into a phenomenal thing: they’re in fifty countries now, which is just mind-blowing. And they’ve got educational connections, with people going to them to make videos for different AI courses and things.
So there’s this phenomenon going on out there, if you can capture it. And, like, you know, you’re tapped into UTS Business School. I can see so many people having even more interest in going to business school if there was an AI component to what they’re doing, and for students as well, because UTS is doing outward AI programs and supporting businesses. We did a pitch for one. I don’t know who they gave it to, but they were building a sort of AI education framework for organizations, to make sure they’re going through the right steps and guardrails when it comes to data organization, considerations around tool usage, security impact, all that type of stuff. So UTS is doing stuff. It’s interesting that, as important as that is on a business level, it’s not trickling down into the student side of things. Well, I think the challenge for UTS is the same challenge that many businesses are facing at the moment. Embracing AI requires you to rethink your business model. How are you delivering value? And can you continue to deliver the same value at the same price point? Because if you can’t, then you’re just sailing straight ahead into an iceberg. It’s so true. And I don’t think the executive at UTS have really figured out where to go and what to do. I think that’s what’s creating the, you know, rabbit-in-the-headlights effect. I think it’s the same thing in corporate as well. Yeah, definitely. Is there an opportunity there? I reckon there is. Well, let’s go back to first principles. What does AI do? It uplifts people that barely have skills into having skills, people that had skills into being even more expert, and people that are experts into superstars. Like, everyone is uplifted.
It means, as we were chatting about in the pre-engagement, that there are things that even for me I would never have considered doing before, but that are now worth considering, because with the impossible you take out the “im”. A lot of stuff is possible these days because of AI, so maybe there’s got to be something there. And Australia needs to have this change, because we can’t just keep digging stuff out of the ground and shipping it. Exactly right. Exactly. I don’t know much about AI take-up at the executive level; I kind of suspect that it’s slow at that level. And I think that is a big blocker. Because you’re right, AI is raising everyone’s game, but it’s the top-level strategic thinkers who need to be elevating their game and rethinking the foundations of their businesses at this point. That’s a really challenging thing, though, because there’s so much change, and it’s leading us into such strange and uncertain territory, that leaders whose responsibility is to look after the financial wellbeing of their organisations are in an incredibly difficult position. There’s so much risk that they’re carrying, and you can understand why they want to take things slow while the rest of the world is speeding up. And so they become the blocker. The irony, though, is that AI, if used in a certain way, can be a de-risking tool. So you can use the tool to help you use the tool, kind of thing. Well, exactly. That creates the risk. Well, it’s a risk cycle. It circles, guys. Yeah. But yeah. Anyway. Are we recording, by the way? No, we haven’t been. But I mean, everything that was gold I can one hundred percent still use. I was gonna say, this should still be part of the show. If you’re watching this, that means that Chris has left this in and it is part of the show. Sure. But just before we dive in, I was going to add to it as well.
I was going to say something, so let’s go first. I just came from a session this morning with a business. I won’t name the business, obviously, but we were talking about AI integration within the organization. And one of the key things that we said is that sometimes businesses are thinking too much around AI. They’re like, oh, AI is the big thing, we need to make sure everything is AI. When you break it down to the fundamentals, it’s like: guys, you need to pick key pain points within that journey and solve those. Don’t just think you can plug this in and it’s a magic bullet. It’s not a magic bullet, right? It’s: where are the pain points? Where are the opportunities to fix things? Can AI support that? And then move forward from that position, testing small things. The biggest issue they were describing is, “yeah, we integrated Copilot and no one uses it”. And I’m like, well, first, is anyone actually using Copilot? Number two, did you train anyone how to use it? Do they know, fundamentally, what to do with it? Well, exactly. Not at all. Exactly, exactly. We were approached by a large organization, who I will not name, who were interested in doing some AI training, and we were preparing to pitch them on an in-depth six-week AI training program. But it turned out they wanted a four-hour workshop, and they wanted us to tell their staff about AI in the first two hours and then run a build-a-thon in the second two hours. And we were like, oh God, no. Come on, look: invest in human capability development. Invest in innovation development, because there are people within every organization who, given the chance, will rise to the occasion. They will use these tools to elevate their own thinking and their own practice.
And you’ve got to bring those people up, make them central, and enable them to help you figure out how to de-risk the transition and lead the organization on transformation. It comes back to that chicken-and-egg thing, right, where you make all these decisions on this thing and you’re like, oh, this is the thing, I’m going to do this. And then you’re like, oh wait, I haven’t done this; oh wait, I haven’t done that. And you end up all the way back at the point where you should have started: what’s your pain point? What are the things that are broken? First principles. Exactly. And then let that process guide the decisions. Yeah. Whether it’s transformation, whether you’re doing innovation, whether you’re a project manager, whether you’re the guy serving coffee at the front, it all has to come back to that first-principles mentality. Don’t just give someone something and be like, hey cool, this is going to solve the problem. Yeah, exactly. It just blows my mind how many organizations are saying, here’s Microsoft Copilot, now you’re enabled, go ahead and create change. And by the way, we expect you to be thirty percent more productive by the end of the quarter. It’s fascinating. I’m going to put in a sideline question here, just to lead into some of this stuff, because you wrote a book around, I guess, the change in what is happening in the digital industry. And I want to get the name right: Hacker Culture and the New Rules of Innovation. Thank you very much. I had it written down, but now we’ve gone off on a different tangent, so I’m out of order. What I liked about it is that it touches on the change in how innovation occurs within organizations, where we used to have these really long-form ways of doing things.
And now we’re into this sort of hacker culture within organizations, obviously with a certain framework. We’re not just hacking and breaking for the sake of it; it’s hacking with purpose, but doing it in a smaller, bite-size mentality. Yep. How do you think that has changed from when you wrote the book to what we’re seeing now, with AI changing how we’re working? Do you think we’ve shifted again, or are we still in that sort of era? I think we’re in the process of shifting back to where we were around the beginning of the twenty-tens. Yeah. The argument in the hacker culture book is essentially that the culture of experimentation, iteration and discovery grew up in the open source movement. I mean, it goes back to the early days of computer hacking, but it was spread around the world with open source. And then you had entrepreneurs like Jack Dorsey and Mark Zuckerberg and the whole Silicon Valley tech scene, who cut their teeth on open source hacking and culture and then became tech entrepreneurs, and introduced that whole mindset and way of doing innovation into the tech scene at the beginning of the millennium. That kind of ruled the roost up until the GFC. And after that, I noticed a lot of business leaders were saying, hacking is great if you want to build apps, but if you want to build big things like rocket ships and electric cars and all this kind of stuff, you need a five-year strategy and, you know, investment and stage gates and all that. So we kind of went back: because things went from hacking the web product to building the hardware product, we went back to more of a long-term vision of innovation. But I think with AI, the focus is turning back to that hacking mindset.
And I think Build Club is an amazing illustration of where innovation is at today. It’s just about bringing people together, creating a safe space for people to run experiments, try things out, share ideas. I love this. And whatever you say about people sometimes using these tools incorrectly, the vibe-coding platforms squarely fall into this category, bolstering that idea of building things, hacking them together, testing and innovating, and then potentially building them out into something bigger. Yeah. It’s just something I’ve noticed more and more: the businesses who are doing this sort of hacking and innovation approach really properly are utilizing these tools to bolster and expedite that process as well. Yeah. What inspired you to write the book? My work’s always lived at the intersection of philosophy, innovation and human capability development. And I’ve kind of zigzagged between those things in the course of my career. It has been a matter of constant navigation, trying to find this sweet spot. Hacking your own life, is it? Exactly, exactly. I actually think my interest in hacking came out of a realization that my life and my trajectory were essentially about experimentation and discovery. I actually believe, as one of my key philosophical tenets, that a life well lived is a journey of discovery: basically, seeking to find out what you are capable of being at the culmination of your life. I think if you start your life with a fixed idea of who you are and just pursue that groove, then, yeah, look, that’s good for some people. But for a person like me, it’s just a path to stagnation. Yeah, I need to be exploring.
And I also need to be building one thing on the other, like pushing boundaries, learning new lessons and then going, okay, so now I’ve learned this based on what I learned before; where can I take things next? Taking the AI side of things out of it, if it was any other technology… let’s go back a hundred-and-whatever years, to when cars came about. People like yourself, and like us, that want to push those boundaries, we would have been experimenting with new types of wheels and new types of chassis. Then, you know, dot-com and the internet, all the technological innovations that have come about: there are always those early adopters that might not have created the thing, but they’re definitely part of that first five percent of folks that are getting in. Then others will get in, but it takes the ones that are building, experimenting, breaking things, making new ways of doing old things. I think that’s the stage we’re in right now. And it’s exciting, right? Oh, absolutely. And what makes it just incredible is the fact that we’re building with synthetic intelligence, which is something we’ve never done before. We’ve never had the capacities that we have now by virtue of that synthetic intelligence. And we’ve never been in a position like this, where we literally do not know where this can take us: our economies, our businesses and ourselves as biological entities. It makes us less, less biological by the year, I think. Oh yeah, there’s a whole, like, Cyberpunk, the game, for folks that might be watching, where it’s all cybernetic parts, and all those science fiction movies are about that. But on that question of intelligence, I think it’s made people wake up and go, hang on a second. Like, this thing sounds pretty reasonably like a human.
And sure, there are the tells. And, you know, we stirred the pot a little bit on LinkedIn when you put out posts that talk about those tells, and people got really protective, like, oh no, I hate when AI writes stuff. Anyway, as an aside, that was your biggest post. That was my biggest. The likes were fine, a hundred plus, but it was the two hundred comments that were the killer. Yeah, because people were really pro on one side, like, just get to the substance of the thing, and anti on the other side, like, I hate AI everything. It’s like, okay, you’re using it right now. Anyway, going back to the intelligence side of things: we’ve got this thing that sounds like us, and it’s like, okay, where does the machine sound like us? But what about the reverse? We’re discovering about ourselves that we’re more formulaic, that there’s more of an algorithm running our brain, it seems, than we first thought, because they’re so similar. What do you think about that? Look, there is clearly an element of pattern matching to human intelligence. And additionally, as social animals, we are enculturated to repeat patterns that we have inherited from our tribes, our cultures, our languages, things like that. I mean, you listen to people talk and they’re just kind of saying the same kinds of things. That’s how we work as social animals. AI has been trained on how we behave, and so it reflects that back to us. And as it reflects us back to us, we reflect back on ourselves. And so we learn about our own modes of behavior and capabilities. What I find fascinating about that AI mirror is that it leads us to reflect on depths that we perhaps had not previously acknowledged. I think that’s really what super risk is all about.
It’s about creating that cognitive recursion that enables us to go even deeper into ourselves and bring out tacit knowledge and tacit understanding that we can then apply in our work. Yeah. But you studied… you got your PhD in philosophy? Yeah. What drove that connection from philosophy, majoring in it and doing a PhD, into tech? Where did you see the connection, I guess, that’s led you into this world of AI? Look, when people think of philosophy, they tend to think of old men with white beards and togas. It seems very old and fussy. Holding the big books and, you know, with a quill. Exactly. And that is the view that I had before I started. But when I went in there, I discovered that there was a community of people who maybe didn’t have beards and weren’t so old, but who were going deep into some fundamental questions about our lives. And they were also thinking really broadly. What you learn in philosophy is to think across centuries. You start with the ancient philosophers and you learn about the whole development of the history of thought and ideas through the centuries. And when you put all that together, it really brings home to you just how much our present moment is just an iteration of this long journey, of this long conversation, as human beings have explored and built and learned and run up against the limits of their knowledge in the pursuit of understanding more. What I realized was that philosophy is essentially the discipline positioned right at the forefront of that movement. It’s not a bunch of fusty old men. I mean, there are a lot of fusty old men in philosophy, there’s no doubt about it. But essentially, the few people who really study philosophy are not crusty old men. They’re quite the opposite.
The radical fringe of philosophy, which is where I always tried to be, is trying to position itself right at the forefront of what we understand now, dealing with all those weird questions that are being thrown up in the dust of the tech innovators and the business innovators, trying to process all these questions that are coming up and make sense of things. And so when I realized that academia wasn’t for me, the only place that made sense was entrepreneurship and innovation. As soon as I moved into the startup world, I thought, oh my God, I’ve been hanging out with the wrong people for the past fifteen years, because suddenly I found myself amongst people who also wanted to be right at the limit of things, beyond the limit of things, but who were actually building stuff. They were doing things. That’s fascinating. We say that about anything, not just AI: whether it’s blockchain, which we’ve done some work with, or other areas of technology, the tech alone is not the solution. It has to be coupled with expertise. And when you think about what AI is, this reflection of language based on maths and all that kind of stuff that large language models are, it is closer to that philosophy. Philosophy is this science of thinking, right? So I’m just saying that AI is quite a perfect fit for it. Yeah, and I like that you point out that, through centuries of time, it’s not just conversations, it’s the conversation. The conversation of humanity is what this has all been about. So it’s interesting what you discover through Foucault, Heidegger, Socrates and all these great thinkers, and now you’re applying it with AI to help business leaders. Yeah, yeah. It’s interesting.
How do you find that? Philosophy tends to be a lot about asking the question rather than providing a specific answer. It’s not a finite thing, right? How do you find that, especially when you’re dealing with leaders, who want an answer? They want that now thing, the what’s-next. How do you find the transition between the two? Look, I don’t think leaders, I don’t think strategists, are going to come to a philosopher in search of the answer to their questions. But I think philosophy is more like a capability development thing. Because philosophy teaches you to stay with the question and really explore it: try to understand the different dimensions of the questions you’re dealing with, look at the question from different angles, consider different kinds of responses. And when you develop those kinds of skills and capabilities, that really increases what you can put into the mix when you’re trying to come up with a single answer. I love that. Especially with a lot of the work that I do… and I’m the first person to say I have no skills in philosophy whatsoever. You might be surprised. Well, potentially. Exactly right. Very philosophical answer. But hearing your response there draws a lot of similarities to asking the right questions when it comes to dealing with customers, and understanding their needs and wants when you’re building products. Yeah. Asking those questions in the right manner, sticking with those questions and the hypotheses, and inevitably coming up with a solution. There are a lot of lines to draw between those two, and how you can relate them to create the things you’ve created, or speak with the people you’ve been able to speak with. Right. Yeah. Which I’m assuming is what led to a lot of the books that you wrote.
Look, most of the things that I’ve done have come out of a sort of passion to learn something, or to move my thinking, move myself, in a certain direction. Like I said earlier on, my career has been a zigzag through all kinds of things. I’ve been a consultant. I’ve been a researcher. The business I co-founded with my partner in twenty-twenty was a human-centered research business. I’ve worked in startups. I’ve worked for large companies. I’ve tried to launch various ventures of my own. But I’ve never, to this point, found a venture that ticks all the boxes for me. And the three boxes are, fundamentally, philosophy, innovation, and human capability development. When I really dug into AI in twenty-twenty-four, it was this watershed moment for me, because I suddenly realized that here was a tool that threw up so many fundamental questions that it demanded, and demands, a philosophical response. It’s full-on innovation; it’s innovation on steroids. And the interaction with AI is changing us: it’s raising our game and building our capabilities. So I was just like, bang, bang, bang. Philosophy, innovation, capability development. Sounds like the Holy Trinity. Maybe we’ll just call it Trinity. Who knows? Let’s get into the Matrix. Yeah, well, actually, speaking of the Matrix, something came up the other day, and I’d heard about it before as, like, a feature of AI, just given how it’s engineered. There’s a lot of good, but then there are things where it’s weaker, because these are probability models and blah, blah, blah. “Lost in the middle” is what they call it: it remembers the things at the start of a long document and the things at the end, but the stuff in the middle gets lost. And it’s similar to how we think as humans.
We’ve got recency bias, we’ve got primacy bias, and so we also get these things lost in the middle. And it’s just this fascinating thing, again a reminder of how similar we both are. And if we’ve got something that is similar to us, but doesn’t sleep, and doesn’t get overly emotional if we don’t want it to (we can get away from the sycophantic ChatGPT of back in the day), then imagine people in, say, the philosophy space, or business leaders, being able to chat with personas that are based on those historical thinkers, having conversations that you just physically can’t have, but that you could potentially have if you set up the agents in the right way. I think this is fascinating in terms of what that unlocks. What do you think about that? Yeah. No, I think you’re absolutely right. I mean, I have more to say about the co-intelligence and the similarity between artificial intelligence and human intelligence. But look, I passionately believe that the future of work is an environment in which professionals are building clones and replicas of themselves to expand their reach and their capability and amplify their core expertise. I’m so passionate about that goal that I would say I’m devoting the next ten years of my life to actually participating in making it happen. It’s a vision that I feel in my bones. What do you see as the impact it’s going to have? If we talk about the future of work, where do you see us going with AI, and, I guess, its impact, both positive and negative? Well, look, obviously the composition of the workforce is going to change. A lot of the routine tasks are going to be automated, which will lead to a transformation of, you know, the mix of jobs.
But I'm far from convinced that we're going to see the massive redundancies that some people are projecting, primarily because, and this takes us back to the question of artificial intelligence, by itself AI is kind of smart-stupid. Yeah. You know, it's like people I knew in the university who are brilliant at doing one thing, but ask them to do something practical or to show any kind of empathy or compassion for another human being and they're lost. The dumbest smart person you've met, kind of thing. Genius-level stupid. Don't get it to do anything else; just get it to do this one thing, and it's great, and it's very fast and amazing at that talent. But put that synthetic intelligence together with a human being who has contextual awareness, who has compassion and human empathy, who has skin in the game, something in jeopardy, so they actually worry about outcomes, and that combination of AI plus human is super powerful. That's why I'm suspicious about the idea of massive job loss. At the moment, business leaders are focusing on what they do best, which is automating as much as possible to increase productivity, lower costs, and get rid of unnecessary jobs. But I think pretty soon we will come to recognize that there is much greater potential on the human capability development side of things, because human plus AI, that's where the magic happens. We've already started seeing this kind of thing with the few people who are into it. If you go into businesses, what I've seen is that adoption is just not there across the board.
They might have installed Copilot and say, oh yeah, look at all the Copilot use, and I'm just using Copilot as the example; it's not that Copilot is bad at all. But not everyone is using it, and a lot of people simply haven't been trained in how to use AI. So what if that other ninety-five percent suddenly got that upskilling? Then there'd be more ideas, because right now it's only the five percent generating them. There's untapped potential there. Yeah, absolutely. And what you say about people using Copilot and not getting value out of it makes me think of one of the findings from the recent MIT GenAI Divide report. We've talked about that one a couple of times, actually: ninety-five percent of businesses running AI pilots are getting zero ROI, but something like ninety percent of employees in those companies are using AI on the sly. The shadow AI phenomenon. Those people are not spending their time playing around with the version of Microsoft Copilot their workplace gave them. They're going home and installing Claude Code, or they're playing around with n8n, using it right now in the background as we speak. So yeah, there is this massive grassroots experimentation going on at the moment. It's probably the richest, most dynamic social experiment in technology we've ever known, and most of it is going on under the radar. The media is mainly focused on big AI and the companies installing it, but all that grassroots activity is where I think the innovation is actually going to come from, with all kinds of startups coming out of it. That's what's going to change the way organizations operate. I love this, that grassroots.
That shadow activity happening in the background, it's so true. Even today I was having a discussion with some people, and it was actually a scary shadow-type situation: Copilot, again as the example, is the tool that's been integrated, and they're like, yeah, it's horrible, I just use Grok on the side, I think it's so much better. And part of me was just like, oh my goodness, using Grok in the business, I could just see a security alert all over it. But it's a perfect example: he didn't want to pay for his own platform, he's using the free one that actually gives him a lot of value, and he's getting more out of that system. Yeah, it's happening. Let's get into the philosophy side of things. You brought all of these things together, the entrepreneurship, the philosophy, and the human impact side, to create Super Risk. For the folks who might not know what Super Risk is, can you describe it in a bit more detail? I want to dive deeper into that, because I find what you've created fascinating, including what's been happening through these first few cohorts. But yeah, what is it? How would you describe it? Super Risk is an advanced AI education program and a capability accelerator, or expertise accelerator. We're focused on teaching people how to build AI agents that are trained on their own expertise. As for the audience we're going for, we figure we have this niche lying between the AI literacy courses that introduce people to fundamental AI skills, the tooling, the basic mindsets and approaches, and, on the other side, the more technical programs that are focused on the consultants, the engineers, the builders.
In the middle, you've got this great gap, which is just waiting to be filled: knowledge workers, people in organizations who are being told to use Copilot but know there is so much more potential that AI is offering them, and they just don't know how to access it. So Super Risk is going for those people. We're showing knowledge workers, subject matter experts, managers, and executives how to excavate their expertise from the back of their brains, because that's a difficult process. I call it cognitive archaeology. Excavate all that stuff, code it into AI agents, and then work with those agents to create that mirror effect, where you see yourself reflected in the performance of the AI. You kind of objectify yourself; you understand what you're doing even better than you did before, and that creates a process of learning and elevation. So the Super Risk program has one foot in AI engineering and AI building, and another foot in professional development. Wow. Bringing those things together, because I've seen one or the other, but not something that really brings them together. And I think it would be fascinating: imagine the feedback when you have a meeting or write a report, or you give it a whole bunch of the stuff you've written, and you have this thing that knows you, that has been trained on details about you, that can give you pointed feedback. Not just, hey ChatGPT, rate my thing, because it's going to rate it based on what? If you give it a framework where it actually has a good understanding of the mission you're trying to drive and your background, you're going to get much more pointed and intelligent answers, the kind where you'd go, wow, I would have paid a lot of money for this. Yeah.
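The "cognitive archaeology" idea, excavating expertise and coding it into an agent, usually boils down in practice to synthesizing interview answers into a persona's system instructions. A minimal sketch of that synthesis step; the field names and wording are hypothetical, not Super Risk's actual format:

```python
def synthesize_system_instructions(profile):
    """Fold a professional's interview answers into one system prompt
    for a digital-twin style agent. Every field name here is
    illustrative; a real program would capture far richer material."""
    return "\n".join([
        f"You are a digital twin of {profile['name']}, a {profile['role']}.",
        f"Core expertise: {', '.join(profile['expertise'])}.",
        f"Values that guide your advice: {', '.join(profile['values'])}.",
        "Answer as this person would: draw on their experience, flag "
        "anything outside their expertise, and ask clarifying questions "
        "before giving advice.",
    ])

prompt = synthesize_system_instructions({
    "name": "Alex",
    "role": "operations lead",
    "expertise": ["logistics", "vendor negotiation"],
    "values": ["transparency", "long-term relationships"],
})
print(prompt)
```

The resulting string would be supplied as the system message to whatever model or platform hosts the agent; the mirror effect comes from iterating on these instructions as you spot gaps between the agent's answers and your own.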
I mean, look, what I'm trying to do in Super Risk is fill a gap that seems to me to have been left vacant by the tool focus that has been applied to understanding AI. When ChatGPT came out, it was designed as conversational AI. It was trained very much to be engaged with and used as a thinking partner. It was evaluated and trained by experts to ensure it gave really sharp, reasonable responses to really deep, probing questions. So you've got this synthetic intelligence that, out of the box, is ready to be engaged with and worked with as a thinking partner. But what is the mindset we apply to it? We see it as another tool in our toolkit, another piece of software we can plug into our existing workflows to get things done a bit faster. With all respect to everyone who is using AI just to make their work go faster, which is a fine use case, that actually reflects a bit of a failure of imagination, because what we have here is not simply a tool alongside our other software tools. It is an out-of-the-box thinking partner we can use to engage and elevate our own thinking and go on a journey, which is a fascinating thing. But this brings me to what I wanted to say earlier about the difference between human intelligence and artificial intelligence. You're going to see right here how I tend to zigzag from innovation to philosophy to human capabilities; I'm about to zigzag to philosophy. Aristotle, the great ancient Greek philosopher, said there are three fundamental domains of human knowledge. Aristotle? You must have heard of him, some great guy. Who's that guy? There's theoretical knowledge.
You've got the laws of physics, math, science, all that kind of stuff, right? AI is great at that; it's got all that stuff solved. Then you have technical knowledge, which is basically art and craft: how do you do things, how do you make things? Human beings are great at that, but AI can also show you how to do it, because it's trained on YouTube and Reddit and everything like that. The third kind of knowledge, however, AI is not so great at, and that's what Aristotle calls practical wisdom. The reason AI is not so great in the practical wisdom space is that practical wisdom is all about contextual understanding. Like we were saying earlier about those academic PhDs who can solve Fermat's Last Theorem but can't do something incredibly practical: they have the theoretical understanding, but they don't have simple practical wisdom. And it's practical wisdom that actually enables you to apply theoretical knowledge and technical knowledge and put them to some practical use. That's why AI needs human beings. Human beings can be pretty dumb when it comes to the laws of physics and pretty inept when it comes to building stuff using technical knowledge, but we're actually really good at just looking around and going: given what I know about this context, I kind of feel in my gut that we should be doing that thing. AI can't do that, because it doesn't know our context, but we do. And that's why the partnership between AI and humanity is the sweet spot. Putting that into context, this kind of explains where a lot of people are experiencing job pressure right now when it comes to AI. It's not the people in middle management yet, not the people in upper management. It's those who are entering the workforce for the first time, who are doing the basic creative activity, the basic math, the data entry.
It's all those entry-level jobs that haven't yet involved that wisdom-application part, which the later part of your career does, and which, as you're suggesting, AI doesn't have the ability to plug in. It's that experience and wisdom that separates us from the machine, right? Exactly. So they're feeling this pressure because they don't yet have that wisdom. Well, look, they've probably got more than they give themselves credit for. I think fundamentally wisdom is the ability to identify what good looks like. Mhm. And every teenager thinks they know. Well, they have a point of view, right? They do. And sometimes, within certain realms of knowledge, like fashion or art or music or media, the contextualized point of view of a sixteen-year-old is probably a whole lot sharper and more interesting. They're the creators of all the subcultures that happen in the world, right? That's where it all comes from. Exactly, exactly. I think as human beings, from our earliest age, we develop this awareness of our environment and the ability to identify: what is a good outcome in this environment? What's a bad outcome? How should we act and behave in order to achieve that good outcome? That's practical wisdom. Yeah. And I think that is what we bring; we can contextualize a lot better. And look, the way the large language models have been trained is far different from what some people in this space are trying to build, the world models that actually have more of that contextual insight and thinking. Something based only on language is going to be far more limited, because of what is there in language. Language is a very strong part of how we share and how we build up intelligence.
But it's only a small part. That contextual side of things is interesting. It's almost like, in the future, when they do build these world models, are we then training them like children, literally like babies and toddlers and teens, showing them the world and what's good about it? We are now, right? We are now. Yeah. And you see these robots and all these things out there. It's fascinating where the world is going. But going back to the framework you're trying to help people understand: I like what you said before, that Super Risk is helping build that capability and leadership. And what you said about just using it for workflows. It's such a shame, because there's so much more we could do given the creativity that's out there. When I work with companies, workflows are one part; that's the thing we do today. Growth and thinking about opportunities, that's the tomorrow, the stuff we should start thinking about. So it's fascinating that you're seeing that in, I was going to call it the tribe, the trifecta. Yeah, I wrote it down. So I think it's really good, because I want to see more of that creativity. I don't think we know yet what tomorrow is going to bring. We don't know what tomorrow is going to bring, and I think we're all invested in finding a path to get us to the point where we can actually see across the horizon. We're in such an unprecedented situation, such an unprecedented moment in history. When I think about how to compare where we are today to historical moments of transition in the past, I don't think it can be compared to electricity, even though there are parallels between AI and electricity.
Simply because AI can enable new kinds of workflows and new ways of doing things, but electricity does not talk to us. Electricity doesn't give us feedback. Well, you get an electric shock. Yeah, that's the only kind of feedback: get it wrong, ow. Sorry. If I really try to think of a parallel historical moment to today, I think one has to go back to the age of revolutions at the end of the eighteenth century, where you had massive upheavals in France, in the States, and elsewhere, and it undercut the underpinnings, the foundations, of society. When you're in those kinds of situations, there's no way of predicting the future. You've just got to go through the process, hope that everything doesn't fall to pieces, and try to build as positively and optimistically as you can. On that French Revolution point: with my partner, during the French film festival, we watched a film, I think it was called The Deluge, one of those films about Marie Antoinette, her final moments with the King, whatever his name was. In any case, I didn't realize this, but it wasn't just the one revolution. It was rupture, repair, and then rupture again, and for decades they went through multiple rounds of that kind of change until they settled into the new system that came about. I thought it was going to be a clean cut, you know, with the whole guillotine thing, but it wasn't; it was this up-and-down thing until they figured it out. Sorry, what were you going to say? I was going to say there wasn't a week in France without a revolution. That was their thing. You weren't French if you hadn't experienced the revolution.
Going through that Super Risk experience, how are you finding people facing this tool they've made, this twin of themselves, by the end of it? Have you had any interesting responses to their experiences afterwards? Yeah, for sure. I should say that people don't get to the point of seeing themselves reflected in the agents they build until the very end of the program, because, as you know, there's so much involved in building and training an AI agent. So we start with the process of cognitive archaeology, which is a matter of spending a bit of time working with an AI agent that questions you, gets you to reflect on your professional life and experience, the challenges you have, the things that drive you, and then synthesizes all of that into a set of system instructions, giving you an agent you can work with that will begin to reflect you as you work. That's the first mirror experience people have. But then we introduce them to an advanced AI prompt engineer that they work with to rebuild that agent. Then they go through a process of evaluations; there's a whole week of evaluations. They work with another engineer agent: they identify the problems they're having with their assistants, and then use that engineer to make improvements to their system instructions, leveling up the performance of their assistant. And in the final week, they've got that assistant and they're working with it, actually putting it to work in their profession.
And finally they work with a last agent that helps them think about a cohort of assistants they could build, starting with this one agent, to capture all the dimensions of their expertise. I think that's where people need to get to if we're talking about building AI agents to support you in your work. It's really a question of identifying the key pain points you have, the key areas where you need support, and then building assistants that can not only solve those problems for you but do it the way you would do it. So you've got these clones that are easy to work with, that reflect you at your best and enable you to bring more of the good stuff you're already bringing to your context. And it's in that moment right at the end, when people actually have the dimensions of their person laid out, that I think they start going: oh, I'm actually pretty cool. Look at that guy. This is really what inspires me about the whole thing. Going back to the human capability development thing, and this goes back to some of my earliest experiences, I've always felt that most of us have magic inside us. Everyone has some little magic spell, some little thing that they do really well, and that makes them great, super, you know? But then we go into a job, we go into the workforce, we submit to the grind, because we have to, because that's a career and that's what gets rewarded, right? And if we're lucky, we manage to find a way to a place where we can actually bring that magic to life in the workplace. But the majority of people aren't that lucky.
Maybe they have to unleash that magic elsewhere in their life. So it kind of breaks my heart that so many people go through their entire professional lives with a sense of: ah, I wish I was somewhere else doing something else, because I've got something to give. What I'm trying to do in Super Risk is show people that, with the help of artificial intelligence, if you can build AIs trained on your expertise, then you can bring that magic to your professional context. You can understand it, unlock it, and unleash it, because I think that's what work should be about. I dream of a future workplace where everyone is able to step in and basically go: hey, look at me, I've actually got magic to bring here, I can do really special stuff, and if you don't believe me, I've got a little team of AI agents here; let me unleash them. Have you seen my team? Yeah, it's a similar kind of vibe, but touching on the superhero thing, which is the creative element. The way I always position everything I do is that everyone has a creative part inside of them, even if you're an accountant, a mathematician, or the person who puts data into a spreadsheet at work. Sometimes they do it creatively. Yeah, exactly. In a lot of innovation projects, you get people coming in who overlook everyone in the business that brought them into the process, whereas actually those people's specialty and skill make innovation much more than just the staple way of doing things. Their creative touch adds a flair and a perspective no one else has thought about, which is the magic of innovation. Yeah. But we've got to give people the confidence to bring it out, and facilitating those situations, getting people to believe in themselves, is so important. Yeah. In that sense,
What's a good example of someone's role who's gone through this process and applied this twin of themselves? An example someone could relate to, maybe in terms of what they're doing? Do you mean in terms of what they've built? Yeah, application into their role. Say I'm a person in a particular role going into your process and coming out with my twin: what would that twin be able to do for me? Okay, an example. A friend of mine was in the first cohort. He's an Indigenous Australian leader, and he's also a startup entrepreneur; he runs a company called the Scale Institute. I don't know if you know Stephen Rutter. I don't, but we'll look him up. Anyway, he has six businesses on the go. He's got his training and education business, he's got an Indigenous investment business, he's on the boards of various foundations, and he's struggling to juggle all these things and also align them and find the core that brings them together. So what he wound up doing was building a team of agents. One of them was kind of front-of-house: it took emails from people and figured out where to route them, which aspect of his world each inquiry was relevant to, and it would play a concierge role for the person making the inquiry. Then behind that he had a series of agents, each of whom was helping him with a different aspect of his work. That was the support team he was working with to deal with the different parts of his business.
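The front-of-house routing agent described here would, in most real builds, sit behind an LLM; but the core decision it makes can be sketched with a simple keyword scorer. This is a hypothetical illustration, not how the Scale Institute agent was actually built:

```python
def route_inquiry(email_text, desks):
    """Route an inbound email to the business area whose keywords
    match best; fall back to a human inbox when nothing matches.
    A production agent would replace this scoring with an LLM call."""
    text = email_text.lower()
    scores = {
        desk: sum(text.count(keyword) for keyword in keywords)
        for desk, keywords in desks.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "human_review"

# Hypothetical desks for a founder juggling several ventures.
DESKS = {
    "training": ["course", "workshop", "training"],
    "investment": ["fund", "invest", "capital"],
    "board": ["board", "governance", "foundation"],
}
print(route_inquiry("Interested in your next workshop on AI training", DESKS))
```

The fallback matters as much as the routing: anything the agent can't confidently classify goes to a person, which is one concrete way of keeping a human in the loop rather than letting the concierge guess.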
But right at the center he had this single-source-of-truth agent, a kind of operations agent trained on everything, that understood all the elements of his business. That was really the main agent he was dealing with when he was thinking about his work. It was like his thinking partner, his partner in the business, that would guide his thinking and his relationship with the other agents in his network. Most of what this cohort produced was still in design form at the end of the program; I still don't know how far he's got with actually building these things. But the design was brilliant, and he came away thinking: okay, now I not only have more capability to handle the work I'm doing across all these different ventures, but I also have a deeper understanding of who I am and what I'm trying to achieve, because I can see it reflected in these agents I've designed. For someone not going through the course, how could they get started exploring this themselves? Are there tools or recommendations? Well, people are doing this kind of thing already. I've been talking to lots of people who are doing similar things using custom GPTs. As I'm sure you know, you can build all kinds of incredible assistants using custom GPTs: give them training files to build context, give them a clear understanding of who you are and who you're working with. What we're doing at Super Risk is working with a platform called TypingMind, which is great. You can access pretty much every major AI model through TypingMind.
They've also got a whole bunch of software plugins you can add to your agents, and you've got a knowledge base as well; at least with a personal license key for TypingMind you get a pretty good knowledge base. You can also set up an external RAG training system. So you've got everything on that one platform for building very sophisticated agents, with software plugins and deep knowledge through a RAG framework. Amazing. Yeah. I've got one more question. Do you have anything else? No, go ahead. My last question is: how do people find out about Super Risk? How do they get involved? Go to the website, see what we're offering. You can register to learn more about the program, and you can also sign up. Awesome. We're not running a December cohort; everyone's too busy for this kind of thing in December. But we're going to be starting in January with a bit of a bang. Twenty twenty-six is going to be a big year, I think. I think it is too. Fantastic, and we're not far away. Looking forward to seeing what all the cohorts get to create. Yeah, so am I. I think we should have some kind of showcase at some point. That's a great idea. It's very hard to showcase as it is, though, isn't it? Yeah: here's my coded bot, it walks in or something. But I think there'll be ways. A lot of these builds involve personal or business-confidential material, which makes it hard to really show the value. But we could have scenarios where they show: hey, normally I would do something like this with client A; here's what it was able to do for me. I'm sure there will be ways. Yeah. I tried to do a whole bunch of YouTube shorts where I was building AI agents in real time.
It took me about half an hour to build an agent, and I thought it was a really exciting thing to do. But I can tell by the number of likes and shares I'm getting that it's actually insanely boring to watch someone build an agent. That probably has a lot to do with YouTube's algorithm, too. Yeah, there are going to be ways to do it, with music and all that. And even if you put it out again, sometimes it's just the timing; you have to put a video out a few times before it gets to the right audience. Even if it doesn't land straight away on YouTube, someone in the business world you're trying to reach is going to find value in it, so just keep going, you know? Yeah. Look, I think we're still at a very early stage. Oh, definitely, an extremely early stage. Adoption-wise, people are just at the beginning of their learning journeys, so it's a greenfield opportunity for anyone working in AI at the moment, which is fantastic. People, check out Super Risk. Tim, thank you so much for joining us. Awesome conversation. Thank you very much, Chris. Thank you, Tim. Cheers, mate. Well done, enjoyed it. Great, thank you. Yeah, that was awesome. Right on time.