Further Comments
Join legal technology experts Damien Riehl and Horace Wu as they explore the intersection of law and technology. In each episode, they discuss the latest trends, tools, and innovations shaping the future of legal practice, from litigation tech to transactional solutions.
We Crossed the Point of No Return Long Ago
Damien Riehl and Horace Wu record a market catch-up episode covering AI hype and positioning in legal tech. They discuss in-house self-serve AI tools for business users and whether they implicate unauthorized practice of law. They examine what remains uniquely human for lawyers — trust, intuition, integrity — arguing each may erode as AI improves, while also giving examples of human lawyers’ counseling value in Damien’s real boundary dispute. They address AI adoption barriers in firms, billable-hour incentives, shifting apprenticeship models, productizing scarce legal expertise via license fees, and end with guarded optimism and a plan to discuss guardrails for “vibe coding” next.
00:00 Agentic Legal Hype
02:04 Harvey Legora Market Map
03:22 Self Serve UPL Risks
05:57 Claude Code Copyright
10:21 Prompts vs Outputs
12:18 Lawyers Role in AI Era
13:35 Trust Intuition Integrity
15:22 Vibe Coding Trust Shift
21:02 Optimism and Policy Paths
23:06 Radiology and Automation
24:11 Chess ATMs Lessons
25:08 Jobs Disrupted Then Rebound
25:35 Will Lawyers Become Luxury
26:31 Antique Cars And Old Law
27:27 Fence Dispute Real Test
28:11 Counseling Beats Drafting
29:08 Specialists Add Hidden Value
30:19 Centaur Skills Still Matter
31:12 Training Without Apprentices
34:33 Dragon Riding New Work
37:20 AI Adoption Incentives Clash
39:56 Picking The Right Use Cases
41:51 Vibe Coding Versus SaaS
45:14 Invisible AI Wins Adoption
46:53 Productizing Legal Expertise
Mr. Damien Riehl. Hello. Horace Wu. I, I always love talking with you. I just went from listening to a great episode with us and Jae and Ed, to now speaking on an episode with you. So really thrilled to be here. Are you sick of me yet? Uh, I, that is never possible. Oh, it's so kind. Uh, please tell my wife that. I am pretty sure Paige, my wife, has never listened to any of my episodes. And Paige, if you're listening right now, I, I take that back. Well, she's not, so, so, Damien, you're safe. That's good. A lot has happened in the last week, gosh, so many things. So for the listeners, we, we try to vacillate between having guests and maybe having these catch up things between Horace and me. This is one of the latter. We are just gonna be riffing on what's happening in the market, like there's Harvey and Legora stuff, and today's April 1st. We are recording this on April Fool's Day. I may or may not be lying to you on some things today, Damien. See if you can catch my lies. That's, I, I think all of us, as we follow the AI hype cycle, uh, see a lot of truth and lies. Sometimes intentional. Let's start with marketing, right? Like there's just so much hype marketing at the moment. I got off a call with Jake Jones, who is the CEO of Flank. Um, I don't know if you've met him before. Um, very, very nice guy who is based out of Germany. Um, I guess he's a Brit by background just because of his accent. And they've created an agentic legal system for in-house teams. Um, and he's talking about how, you know, they aim to get like 80% of the way there with the agents who are autonomous, but then let the human in the loop fix the last 20%. And we talked about the philosophy and all that. He and I made some predictions at the end of the episode, um, about where the market's going. And, and my prediction was, well, the agent stuff is gonna be hyped for the next six months. 
And after that people are gonna pivot towards "we are data first", rather than being AI first and, and so on. And he gave a really, uh, amazing sort of chart. A two by two, and I'll share it with you. And I'm going to say this is entirely Jake's idea, not mine. So, so he gets all the credit. Focusing on Legora and Harvey, who are kind of leading the market in terms of noise. You look at kind of what they offer... is that good or bad? Well, VC money's gotta be spent on something, right? That's true. It's not gonna spend itself, uh, continue. Sorry. VCs genuinely want you to spend that money. So he, he breaks it down in a two by two matrix. Um, on one axis is, are you offering software or services? On the other axis is, are you offering it to SMEs or enterprise? And he kind of mapped what Harvey and Legora do. Harvey and, uh, Legora, being very product centric, are more likely to focus on expanding horizontally to adjacent markets. So legal to compliance, compliance to insurance, insurance to et cetera, et cetera. He made the really wonderful prediction that if you look at where Harvey is, and if you look at the new model law firms and where they're targeting, there is a gap where Harvey potentially will provide legal services to enterprise, not small, medium-sized enterprise, but actual large enterprise. So predictions in the market, nobody knows what's going on. Everyone's having fun making guesses. Damien, what's the kind of wildest guess that you've heard in the last couple of weeks? I wanna pull on that thread about going to enterprise business customers, not the in-house legal customers, but the business people within the enterprise. I've been thinking a lot since LegalWeek about a legal operations person for a company whose name, you know, that I won't say. 
This in-house legal ops said that we've created a bunch of self-help tools within the law department saying, here are the frequently asked questions, and we'll have the AI be able to pull from our knowledge base to be able to answer those questions maybe without even a human in the loop. And then if necessary, put a human on the loop, to oversee or, if absolutely necessary, then put a human in the loop, to be able to say, you know, it's a blocker to get this done until the human does the thing. So anyway, so I've been thinking about this self-serve versus human on the loop observing versus human in the loop where they're a blocker. And I was thinking about the self-serve. And the question is, is self-serve from an in-house counsel to their business people the unauthorized practice of law, right? Because I think it is, right? I mean, because ostensibly, the business person is the client, right? And the lawyer is not actually observing the tool that is giving the self-serve advice. And so is this committing the unauthorized practice of law? Who's committing the unauthorized practice of law? Because it's not the lawyer. They're not the one; they are authorized. They're absolutely authorized to practice law. Yeah. They, they are. If the lawyer for the law department built the tool, is the lawyer on the hook for committing the unauthorized practice of law, despite that lawyer themselves being able to practice law, right? So, so these are the kinds of UPL tangles that we get jumbled in. Alright, so now that's scenario one. I, as an in-house counsel, provide this to my business person. Scenario two is, what if a third party does that very same thing? And what if that third party is run by a lawyer, just like that in-house lawyer is a lawyer? And what if maybe there's high quality tools on both sides, both on scenario one and scenario two. 
They're grounded in yesterday's case, yesterday's statute, and yesterday's regulation, and mitigate the risk of hallucination for all those things. And what if double blind studies say that both scenario one and scenario two outperform lawyers in providing both accurate and reliable legal information? Then under both scenario one and scenario two, is there a difference between them? And are we gonna prosecute scenario one? No way. Who's gonna prosecute an in-house counsel for the unauthorized practice of law with their business people? Scenario two, should it be different? You are asking a really pointed question around something that nobody has any ideas about. It reminds me of a conversation this morning, and we actually had echoes of this conversation the last few days, where the source code for Claude Code has been leaked. Yes. And famously I think they said this was entirely written by AI in 20 days. Yes. We both know the, the judgments have held that AI-created works are not copyrightable. Yes, that's right. And, and really that's with good reason because, you know, if AI-created works were copyrightable, then I would be the author of 471 billion melodies. Because I've brute forced every melody that's ever been and ever can be. And my machine cranked them out at 3000 melodies per second. Actually, take that back, 30,000 melodies per second. So really, if AI created, that is my machine created thing, were copyrightable, then I, no, too bad, no more music for anybody. I've just, I've just wiped out the slate. So in the same way, you know, if we just let AIs crank them out, that will just edge out all the human images and all the human writings. Because then you'll be able to get a monopoly on those things. So I think the policy saying if AI-generated, then uncopyrightable, is the right policy. Otherwise there's nothing left for humans to do. That's point number one. Point number two as applied to Claude and Claude Code. 
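The brute-force enumeration Damien describes is mechanically simple: every melody is just one ordered combination of pitch choices, so generating "every melody that's ever been" is a single Cartesian-product loop. The sketch below is purely illustrative, not the actual system he used; the function name and the parameter values are this editor's assumptions.

```python
from itertools import product

def all_melodies(num_pitches: int, length: int):
    """Yield every possible melody of `length` notes, where each note is
    one of `num_pitches` pitch choices (represented as 0-indexed ints)."""
    yield from product(range(num_pitches), repeat=length)

# Illustrative scale of the task: 12 pitch choices over 8 notes gives
# 12**8 = 429,981,696 melodies; at 30,000 melodies per second that is
# roughly four hours of machine time.
total = 12 ** 8
hours = total / 30_000 / 3600
```

The combinatorics grow exponentially in melody length, which is why the output counts reach the billions once rhythm or longer note sequences are added.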
Man, we just have uncopyrightable turtles all the way down. Because listeners and people that have been in my talks know that one of my favorite cartoons is the person who said, look, I took a bullet point, turned it into an email I pretend I wrote, and the recipient said, look, I took an email and turned it into a bullet point I pretend I read. And so it's, it's funny but profound because if it started as a bullet point, and it ended as a bullet point, what's the point of the email in the middle? Because there could be a thousand versions of that email. There could be a million versions of that email, right? You could brute force the expressions. And so as applied to copyright, ideas are the bullet points. Bullet points are uncopyrightable ideas because they're too short. Um, in the middle is the, if human-created, copyrightable expression. But the copyright office has said no, if a machine makes it, then it is uncopyrightable. So you have uncopyrightable idea to an uncopyrightable machine-created expression back to an uncopyrightable idea, and you have uncopyrightable turtles all the way down. And so there's a real question as to whether Claude Code is an uncopyrightable turtle. That is, they took an idea to be able to say, I want this large language model to create code, and then turned that uncopyrightable idea into uncopyrightable code in the form of Claude Code. And now someone has leaked that uncopyrightable code and changed it to Python. So taking expression number one of the Claude Code code. And then expression number two, a totally different expression of the ideas in Python. So different expressions. And if you take an uncopyrightable item, that is the Claude Code uncopyrightable item, and then you make a derivative work of that uncopyrightable item in the form of Python, that's totally cool, because I can take Tom Sawyer, you know, that book has been in the public domain for over a hundred years. 
I can take that public domain book and I can make as many derivative works as I want. I can put Tom Sawyer into Python and nobody would care. So there's a real question as to, now that it's been in the public domain, that is the Claude Code has allegedly been leaked and converted into Python, is it game on, as uncopyrightable turtles all the way down? Well, you might have seen overnight, they've issued, I think, 8,000, uh, infringement notices, uh, asking people to stop using it or delete it or to stop forking it. So we are about to find out if, if this is actually copyrightable. Um, but two more thoughts on, on, on what you've said. First is, uh, a flippant one, because it's April 1st, the reason why an idea gets dressed up into bullet points and it gets undressed into an idea again is the same reason why I wear a suit to a professional meeting. I am not a, I am not a smarter person simply because I wore a suit, but I'll get noticed and paid attention to because I've got all these bullet points and it looks like there's more substance. So, a hundred percent. And, and that's why I, I actually have jet black hair, but I, I want to have gray hair so people take me more seriously. People spend time dyeing their hair black, and you're like, no, no, no, no, no. Gimme that bleach. Exactly. Well, and the second idea, um, and, and this is an interesting one, um, going to the substance of law, is, when I was talking about this with a firm this morning, their, um, head of innovation, um, essentially said, well, but hang on. That, that doesn't feel right. You know, for, for Claude Code to have been created, someone had to issue instructions. Someone had to, um, steer the machine. Someone had to prompt it and improve it and iterate. Shouldn't that be protected somehow? Uh, my response to that is that for something to be copyrightable, it has to have what courts have called a modicum of creativity. That it has to have a small amount of creativity, to be able to be copyrightable. 
And, but the thing is that what that person has described is the prompting that goes into the output. That is not the output. The Python builders created that derivative work, and the cease and desist letters are going against these derivative people. They're not copying the prompt. And so this is not copyright infringement 'cause they're not making a copy of what that person that you talked about is claiming to be copyrightable, saying, oh, that prompt is creative. Cool, but nobody copied the prompt. They copied the output of the prompt, which is uncopyrightable because created by a machine. So if, if they were to copy the prompt, sure, if they were to copy it verbatim. But if they took the ideas from that prompt, the uncopyrightable ideas... Like in the legal sphere: "This is an agent that does legal research, and tries to pull out the facts and find what is the law in California and Texas." These
are all ideas: totally uncopyrightable. So if someone steals the prompt that I just said, and broadcasts it to the world, if they take my ideas, I can't do a damn thing about it. Because it's not copyrightable, even though there was a modicum of creativity. See, I, I could stomp my feet and I could say, oh, woe is me. They're, they're taking away my valuable things. But really the question is like, what is valuable anymore? Well, that's a really good segue into what's the point of lawyers if agents and all of these machines can satisfy 90% of legal needs and perform 90% of the work that would've been created? Like we talked about this before, and you mentioned Jordan Furlong on a couple of, um, past episodes. But let's really drill down on this because I, I've had a lot of conversations recently where people are like, no, I don't, I don't know what I should be doing. I don't know what my junior lawyer should be doing. And so maybe we can spend a few minutes breaking that apart, like actually splitting the hairs and talking about each strand. I was lucky enough to be able to hang out with Jordan Furlong this last week. I was at the ABA Tech Show in Chicago. He and I had a really lovely dinner, he and his wife, there was a whole group of really smart people with Patrick Palace and Tom Martin. Anyway, they hosted this lovely thing that I was able to hang out with Jordan with. And Jordan, pretty much everything he says is right. Kind of like Jae and Ed, everything that they say is right. Jordan had given the keynote at the ABA Tech Show. He spoke in front of, it looked like 400, 500 people, uh, that are in this massive hall. And just speaking all sorts of truth. Largely expanding upon what you and I have talked about in the past, like what is, what is left for humans to do. And he has said, you know, at least three things and he expanded upon those three things. But at its nub are: trust. You, as a client, can now trust me to be able to say, your problem is my problem. 
Two is intuition: I'm not gonna have ChatGPT chatting in my ear while I'm talking to the judge or around the negotiating table, right? Intuition, you're gonna need to know the law cold. And number three is integrity. You, as opposing counsel, know that I'm not gonna screw you over, that I have integrity. AI doesn't have integrity. AI doesn't have intuition, and AI, really, can you trust it? Uh, we've seen Terminator after all. And so anyway, he said this and it was amazing. After the keynote there was a Q&A session. And the first question of the Q&A session was, I, I heard what you said about, you know, there are human things, trust, integrity, intuition. And that humans will rely on humans for those things for a long time going forward. At the same time, the questioner said, people are having sex with robots — so, so how do you reconcile that? How do you reconcile, I'm gonna go, I'm gonna go to a human lawyer for trust, integrity, and intuition, and I'm also gonna have sex with a robot? Uh, what, what, how do these things, uh, reconcile? So I'm gonna make a flippant comment again, April 1st, right? You go to whomever delivers the best service. So, it's, it's the, the world's oldest service maybe. But let's, let's break the, those three things down. Trust, intuition, and integrity. Um, in terms of trust, again, taking the, the devil's advocate angle here, isn't that just a matter of developing it over time? I mean, do you see five years from now, 10 years from now, 20 years, a hundred years from now, humans' trust of machines will reach the threshold of like, yes, I trust this just as much as I would trust a human lawyer? That's, a hundred percent, right. And maybe we're here today. So this morning, so we've talked about my work, when I was at the Vatican in November. And, and when I was there, I, I befriended my favorite vibe coding Roman priest. His name is Father John D'Orazio. So this morning, we were vibe coding together. 
And this is getting to trust, where we, he and I, are trusting the machine more and more the more that we work with it. So what we are doing is we're vibe coding over this thing called Ontokit. FOLIO, you know, is a fork of SALI. Um, we're gonna allow crowdsourcing of FOLIO, where if you say, Damien, this, a FOLIO tag seems pretty good, but it's got this translation that's wrong. Or maybe we should have these children. And so what we're allowing you to do is to be able to add different tags and be able to rearrange things. And what it's gonna do is it's actually gonna create a GitHub repo without you as the user even knowing it. It's gonna create a branch on the repo. It's gonna do a commit. And all of that's abstracted away from the user. So anyway, this is called Ontokit. And so he and I were jamming this morning on building this, he for the Catholic OS and me for FOLIO. And he was doing a bunch of PRs, pull requests, I was doing a bunch of PRs, and as intermediaries, he was using something called Code Rabbit, where it's an AI that goes through and makes sure that nothing is gonna be broken. It runs a bunch of linting and tests and that kind of thing. And I, on my side, had Claude Code, where I was doing the same thing for his. And so this was John and me, humans and humans working with Code Rabbit and Claude Code, robots and robots. And we were kind of working with each other in this kind of dance with the four of us. And I found myself typing into Claude Code, "I trust you, I trust you." Uh, because I said, uh, they said, do you? That's right. And as the words "I trust you" came out, I thought Horace and I have talked about, well, no, I trust humans. I don't trust the machines. Right? But here I am trusting Claude Code because it's a far better coder than Father John D'Orazio or I. This is today in the year of our Lord 2026. 10 years, to your point, 20 years from now, 50 years from now, how much will our children trust AI? 
And do we actually need the lawyers to have that trust? Precisely. So that's pillar number one, possibly being eroded already. Pillar number two, intuition. And I'm gonna argue again, devil's advocate, that intuition is just a matter of speed of processing. If you can brute force enough ideas to pick out the best one, isn't that the same as intuition? Yes. I think that's right, because right now, we've all seen the voice recognition like the voice AIs, where you ask something of the AI, dot dot, dot, pause. Then the AI responds, and that is awful for everybody. And you're like, oh, this sucks, and this will never take the place of humans. Um, but you're right that as the AI gets better and faster, and as the processing comes more quickly, you know, you and I bounce ideas off of each other really quickly. So if the AI, slash when the AI, is able to approach the speed at which you and I bounce our ideas, then intuition is just a matter of speed. You're a hundred percent right. So again, Nvidia producing faster and more chips and people throwing more processing power at it, maybe intuition's gonna get eroded away at some point. So then it leaves us with only the third pillar, integrity, which I would argue is the weakest of the three pillars because AI supposedly has no ulterior motive. It doesn't have anything that people can go, well, why are you telling me this? Do you have your own agenda going on behind the scenes? No, AI is programmed and it's been trained through reinforcement learning to be helpful. It does have guardrails, but you can actually see those guardrails. You can see the markdown files, you can see the Claude files, you can see the skills. You can see all the things that could be quote unquote, an ulterior motive. So in that way, the machines are actually, maybe have more integrity than any human because I can't see your markdown files. At least more transparent integrity. That's right, and, and, uh, and I can change the AI's markdown files. 
Can I change your markdown files? I can't. Number one, I can't. With enough dollars, Damien, with enough dollars, you can change anyone's markdown file. That's, uh, this administration has certainly shown that. And so, yeah, so I, I think that, I think that the transparency and malleability of the AIs — that is, seeing the markdown files, being able to change the markdown files in a way that you can't with humans. That certainly cuts against my argument about integrity. So if we don't have trust, maybe 'cause we're gonna trust the AIs. We don't have intuition because the processing speed is gonna get faster. And if we don't necessarily have integrity... and I would say one other aspect of integrity is that, at least with deterministic harnesses around the AIs, maybe those AIs are more predictable than most humans. Humans are very jagged in the way that they sometimes react one way in a situation, and then in the same situation they'll act a different way. And so in this way, we as humans are very probabilistic. But I wonder how much more deterministic the AIs are, therefore more predictable, therefore more integrity. Well, and that's the thing. If we take away all three pillars, one argument at a time, what is actually left that humanity can provide beyond a legally reasonable and competent piece of AI? These are all fine questions. And, you know, if you had asked these questions even three months ago, I think our answers would've been totally different. And so then the next, They were different three months ago! They were, they were. Yeah, even two weeks ago. And so now let's say, okay, a year from now, where are we gonna be three years from now? Five years from now. I think humanity is gonna have a reckoning. Um, my, uh, I joke about my potato farm. Uh, but it's, it's less and less a joke now. It is true. 
Um, it, it's funny that the Minnesota Department of Economic Development reached out to me, and just about an hour ago we had a nice conversation talking about the state of long range planning. And the person that did the interview is focused on artificial intelligence. And saying that for the state of Minnesota, like what should we be thinking about? What are all of the awful things that can happen with AI and all the good things that can happen with AI? So we spent about an hour and it was essentially me bombarding her with my, not an 18 minute TED talk, but a 60 minute TED talk, where I was actually going through all of the access to justice issues. And all of the ways that we can help that 92% of legal needs, if the cases, statutes and regulations are actually readily available. And, and how the unauthorized practice of law statute really needs to be rethought, from both the legislative and the executive and the judiciary within Minnesota. So anyway, so I gave her much more of a 60 minute TED talk than she was even bargaining for. But she said, she's been doing these interviews with dozens of people. She said, you are the first one that has an optimistic view rather than a pessimistic view about AI. I said, you know, honestly, like in the multiverse, maybe 90% of the paths that we're going down are gonna end horribly, right? Maybe ten, five percent are gonna be a good path. And I, I said, you know, I wanna choose to maybe try to push us toward the 5% rather than just throwing my arms out and saying we're doomed. Right? So really my goal with this podcast, you know, you as a listener and, all of our listeners, let's go toward the 5% of good paths. Because it doesn't make any sense to just throw our hands up in the air and say, we are doomed. And, and between doing nothing and doing something, at least doing something can yield a better outcome. That's right. There's, there's the old story about the person walking on the beach and throwing the starfish in. 
And there were, you know, the beach was littered with hundreds or thousands of starfish and the guy said, what are you doing? Like, that's not gonna make a difference. He's like, made a difference to that one. Right? So we're, we're, we're, we're pushing toward the 5%. Yeah. Well, um, I mean, I don't, I don't wanna paint us all with negativity and pessimism. But, um, I saw an article this morning which came from, uh, one of those like you know, medical journals and, and so on. And it was talking about how the president, CEO of, uh, NYC Health and Hospitals is now readying their hospitals to replace radiologists with AI. And you may recall this was one of the, the big talking points in the last couple of years, which was there was an early prediction from, uh, Geoffrey Hinton saying that AI is gonna make radiologists redundant, and it didn't happen. It created a bigger demand for radiologists. Well, now we might be on the other, on the other side of that labor market. So, signals, signals everywhere in the market right now. A hundred percent. I love slash hate that radiologists are now coming down. So in a sense, I, I really dislike what you've just said; in the other sense, I, I, I kind of think that what you've said is continuing a trend, where chess, for example, AI beat us in chess back in the 1990s. But then they said, well, AI beat us in chess, but now we have Centaur chess, where you have the humans plus machines playing against the humans plus machines, where the machines suggested some things. And the human plus machine did better than either the human alone or the machine alone, that the centaurs beat the machines. So that was the story for a while, but then it turns out that now the machines even beat the centaurs. Yeah. It's just, it's just, uh, the machines, uh, all the humans do in the Centaur chess is bring the AI down and the AI is like, "Hey, step aside. Hold my beer. Uh, I got this." So anyway, that's chess. Number one. Number two is ATMs. 
Where people said, hey, people thought ATMs were gonna put tellers outta work, but it turns out because it was so much less expensive to start a branch, now we have more branches, which requires more tellers, and we have more tellers than we ever have in the past. So that's the happy story. That was true until it wasn't. Because now, there's a whole bunch of branches that are going away, because people don't even need that. And the tellers are, they're all gone. Uh, well, almost all the tellers. So chess went up and then it went down. ATMs, counterintuitively, you need more tellers, but now you don't. Now I think radiologists are kinda the same thing. Like, we thought that radiologists were gonna go away. Turns out we needed more, and maybe we don't. So I think this is, we've seen this movie before. Lawyers, are we, are we on that rollercoaster ride as well? It's the right question to ask. And, and you know, we as lawyers have always been a tool for the rich. If you are a rich person that wants to be able to effectuate a result, then you hire a lawyer to be able to get good advice, to be able to effectuate that result, whether it's a transactional result or litigation result. I wonder if we'll just continue to be for the rich. If I'm a rich person, then maybe it'll still be worth the trust, integrity and intuition. Even though I know that the, the person is maybe not as intelligent as the AIs or doesn't have access to yesterday's case, it just makes me feel better as a rich person to be able to pay money to this person to be able to, you know, hedge my bets. I, I wonder if, you know, maybe that's always been true. Rich people have been using lawyers as tools of their richness, and maybe it will continue to be true that even if we don't need the lawyers, the rich people will continue paying the lawyers. What do you think? You know what this feels like to me? 
This feels like people who collect antique cars and, and yes, you can buy a modern car with computers on board and so on, but I really like driving this gear shift. This, this, this stick shift car. And, and I really like the feel of 1920s leather. Like, it, it feels almost antiquated. Um, and I believe there will be a certain class of people who will always like that and want that and need that. But with the rise of like the, the new model law firms and, and the alternative services, I think that's changing. I think it's, it's changing who needs the service and also the economic model of who has the money to pay for the services is also changing. Um, so don't ask me 'cause I have no idea what's gonna happen in two years. Let, let me put some optimism into what we've been doing pessimistically, uh, quite a bit, because we are, of course, optimists at our core. Yeah. We've talked maybe in this podcast in the past about my neighbor who wants to build a fence almost right against my house. That's come to a head, uh, where they've said it's gonna go in when it thaws, and greetings from Minnesota, where it is now April 1st and there's a lot of thawing going on. The lawyer said that it will happen in April. So we are now, we have come to a head; there's no settlement in sight. They don't even really want to offer anything reasonable. So yesterday we filed a lawsuit against my neighbor. It's now in the courts and we'll see what the courts do. As part of that lawsuit, I've worked with some very human, lovely lawyers. Human lawyers that I trust and that are worth my time and money and energy to be able to work with. Their names are Jake and Mark. Jake has been helping me a lot, from October when this started all the way through now. Not just telling me the law, because I have Vincent to tell me what the law is. And I've vibe coded about 20 different versions of my complaint and 20 different versions of my motion for a temporary restraining order. 
So I don't need him for the documents. I don't need him for, really, the law. But he's a really good counselor. That is, he says, you know, Damien, you could do that, but do you wanna do that? Like, let's, let's think about the second order effect of that. And even though I jammed with the machine a lot, and I do jam with the machine a lot, that machine didn't give me as good of advice as Jake did. Because he's truly a counselor. He's what the best lawyers have always been: to say, okay, let's think through the psychology of this. Not just, you know, what are the odds of this getting the result you want, but what are the psychology and the probabilistic humans on the other side? That is, my neighbors. And the probabilistic humans from the judiciary side, the judges that are gonna do this. So let's think about optics, even if we don't think about the law. So anyway, so that's, that's point number one; that's really good. Then, more recently I brought in my second lawyer, a friend named Mark, who I clerked with at the Minnesota Court of Appeals. So he's been my friend for 25 years. He actually lives in the neighborhood. Really his focus is on boundary disputes, like mine. So if you go to his website, the second bullet is, you know, negotiating residential boundary disputes. So he has a lot of really good insights on these types of disputes, that is hyperfocused on the type of problem that I have. And even though Jake is a really good lawyer, and I like to think of myself as a really good lawyer. Some might disagree. But between Jake and me, Mark came up with some really good points that neither Jake nor I had even considered. And that's because he's so hyperfocused on this particular problem that I have. So let's put some optimism. Here's to Jake. Here's to Mark, who are the lawyers that give humanity and the counseling. 
And Mark, who is so hyperfocused on this particular problem that I have, provided insights that really good lawyers hadn't even thought about, much less consumers, who are not lawyers like Jake and me. So maybe there is some optimism in saying that we as lawyers have something to offer. Not only to other lawyers, like Mark offers Jake and me, but definitely to the consumers that need us. I like that optimism. There's a lot of truth to that, because if you were to give two people of differing skill sets the same access to Claude and to GPT, they're gonna get highly varied outputs, because one person knows what questions to ask and the other one doesn't, and one person knows what to follow up on and the other one doesn't. And we've seen this play out in reality as well. So I think there is still a lot to be said for people who have expertise and can apply that expertise. We go back to the centaur model, and I don't think we are at the point yet where the machine without the humans can beat the centaur. Will we get there? Maybe. But I don't think we should be basing our lives on the negative. I don't think we should be living out our days going, oh, well, it won't be necessary in two years, so let's forget it right now. And I think that brings us to a really interesting question that a lot of law firms are asking: what happens to the apprentice model? What do we do with training junior lawyers? And the conversations right now, and we kind of mentioned this in the finale episode of season two, come down to this: if it is true that law firms are dropping so many summer clerks and graduates from their cohort, who's gonna become the senior lawyers in a few years? How do you train them if there are no people to be trained?
Enrollment in law school has gone up dramatically, and some attribute that, in the United States at least, to our current president: people saying, oh, I need to do something, and what do I do? I go to law school, to enact law, to fight against injustice. At least that's what some people attribute the large influx into law schools to. With that large influx in mind, I hosted a panel at my alma mater, Mitchell Hamline School of Law, and I asked the panelists a question. I said, aren't you gonna see this drop off? I was essentially thinking about what you and I have talked about, you know, nobody getting hired post law school. Aren't you worried that, if nobody's getting hired post law school, then what idiot, what incoming 1L, is gonna say, oh, well, I'm happy to spend $200,000 for no job at the end of that? You're either only going to get idiots, or you're gonna be taking money away from people who are idiots, right? I didn't say it that way, but essentially I was asking that question of the administrator. And the administrator said, we're not seeing those numbers. This is not something that we're seeing. They're seeing the uptick, and they're really not seeing the downside. So let's assume that the uptick is true, that there are more law students than there ever have been. There are more people optimistically thinking, oh, I'm gonna get that big law job, even though maybe you and I know that that's not true. Now, we are building tools, that is, you are building tools as Horace, and I'm building tools along with my team as Damien.
If we're building tools to enable the solo and small-firm lawyer to do things that used to be possible only at the biggest law firms in the world, tools that bring up the floor of what it means to be a solo or small practice, maybe we don't need the big law firms' first-year classes of 150 or 200 or 300 anymore. Or do all of those first-year associates from big law just hang out a shingle, now that the floor is a lot higher, and serve the 92% of legal needs that are unmet because we lawyers are too expensive? And when you ask how we apprentice those folks, I mean, maybe an AI could be a pretty good apprentice. And maybe they're not a solo; maybe that person joins up with an older, more experienced lawyer and they have a two-person shop. Then we can truly do an apprentice model like Abraham Lincoln had back in the 1800s, where an apprentice just studied under someone, and the practice was handed off. So I wonder, you and I often look through a big law lens, and I'm certainly guilty of that. You know, what's gonna happen to the first, second, third, fourth year, before you become a partner at year 10? Which, by the way, used to be year seven, and before that was year six, right? So they've kind of always been pulling up the ladder behind them. Was that really a good apprentice model? Or maybe the best apprentice model is going out and doing it? Or maybe pairing with somebody else? I'm gonna have to think about that. I think the model for how people work is changing very quickly. It used to be the apprentice model: you have a senior person who has gained all this experience over time, who's been battle worn and knows the ins and outs of stuff. And then you have this junior person who is their squire, in the olden world, who follows the knight around and learns through hands-on practice how to do things. And this model is how humanity has always operated, right?
The knight trains the squire. The squire eventually becomes a knight and brings on their own squire. And law firms mimic that model. But now you have artificial intelligence, which is a completely different way of operating. It's a completely different way of working and solving problems. It's not like a human; it's not making human mistakes. It doesn't actually learn over time. It's just a new context window, with perhaps a memory injected as context. So how do people work with that? What is that like? And we try to use this analogy of, well, yeah, AI is like a tool. But it's not like any tool we've had before, right? It's not like picking up a sword and shield for the knight. It's something else. It's a dragon. They're now riding a dragon. So what do you do with that? We have to reconceptualize the entire model. So to your question, I don't know. I don't know what's gonna happen to the apprentice model, but it's certainly gonna look very, very different from now on. I think that's right. I've said this a bunch in my talks. People ask, what should I be reading about AI to do my best work? And my response, and I've said this before on the podcast, is: what do you read to learn how to swim? You don't. You just swim. You can't read any book that will teach you how to swim. So it's kind of the same with AI. It's also the same for lawyering. How do you learn to be a good lawyer? Is it through apprenticing, or is it just by lawyering? Is an apprenticeship just a guardrail, kind of guiding you along the way to good lawyering, the way we needed the squireship of the olden days, and the big law apprenticeship, if you can call it that, and certainly the small law apprenticeship? Or can the AIs kind of be that harness? Guardrails to maybe guide you, essentially raising the floor so you don't actually fall through the floor.
And that's something that a mentor kept mentees from doing in the past. Maybe AIs can serve that kind of apprentice role for those who don't have mentors. But certainly, for those who are lucky enough to have mentors and people they can apprentice under, that's probably the right way to go. So I'm gonna take this in a slightly different direction, because I think what we are assuming is that people are happy to use AI, that people will eventually adopt AI. I've been talking to firms, and the adoption of AI... it's that saying, I forget which movie or where it came from, where someone goes, "Hey, I thought you were dead," and the person goes, "Oh, reports of my death have been greatly exaggerated." It's like that now with adoption of AI. A lot of people were saying, oh, we've got great adoption, you know, whatever percentages; these are our monthly active users. And when you look at what monthly active users actually means: so they opened it once a month? What's the actual adoption like? And can we ever get to the point where we are properly leveraging the power of AI if people are not willing to use it? If they don't actually know how to use it? What does that look like? I mean, what are you hearing around the market at the moment about actual AI usage and adoption? I was just hearing this morning, somebody said, my associates hate it. They think that it's gonna put them out of a job, so why would they use a tool that's gonna put them out of a job? Especially under the billable-hour model. If I can say, hey, partner, I'm using AI, quote unquote, wink wink, nudge nudge, but I still spent, you know, 20 hours on this thing that honestly I could have spent five hours on. But those extra 15 hours made you a lot more money, partner, right?
So what incentive does the associate have to use the tool that will put them out of a job, unless there are institutional incentives? That is, the firm says, no, you're gonna use AI, and we're going to judge you by your token use. A lot of enterprises on the corporate side are saying, we are gonna look at the number of tokens you're pushing through Claude Code, and the number of tokens you're pushing through all of these AI coding tools. Because if you're not pushing enough tokens through, then maybe you can find someplace else to work. That is an incentive to truly have people use AI. Query whether law firms are providing that incentive. And if you're billing by the hour, why would you provide that incentive and say, we're gonna judge you by your token use? So: show me the incentive, and I'll show you the outcome. And the billable hour, I think, is the root of all evils, or at least many of the evils that law has. So until we fix that and jump over to the flat fee, subscription fee, success fee, all of those things that are not the billable hour (and a lot of firms are doing that), of course you're gonna continue to have hand-wringing: "I just can't get my lawyers to go against all the incentives that we've laid out for them." Well, I think that's one thing; the billable hour is one thing. The other is, I think the models perform really well up to a certain point, and lawyers are not yet used to where to take it from that point. You know, I was talking to someone about how they were using AI to create a first draft, and I was like, why don't you just use a template precedent, like you always have, right? But anyway, they were using AI to create the first draft, and they found it was creating inconsistencies from draft to draft. And again, why are you using AI to do a first draft?
They were doing it, and they said, eventually I got tired of that and I stopped using the tool, because it was not consistent. And that's when I finally said, why aren't you using a template? And I think we've had these last three years where people have been throwing AI at all possible use cases, regardless of whether they should or shouldn't, just because they could. And on the sensible picking of use cases, the sensible application of where AI is strong versus where humans are strong or other technologies are strong, we haven't quite reached a consensus yet in the market, and there's still a lot of R&D and experimentation going on. I think once these products, these sorts of use cases, become more concrete, better understood, and more recognized as a standard way of doing things, the adoption of AI will accelerate. 'Cause people will say, yes, for this use case, gen AI is perfect. Right now we just don't have a consensus. You know, the future is here, it's just not evenly distributed. Right, that's definitely true. The future of which use cases are good for AI and which are not: that's here, it's just not evenly distributed. And what you said actually reminds me of a LinkedIn conversation I was having with Jason Barnwell, friend of the pod. We were having this conversation in public. I was commenting about vibe coding, and I think it was related to when I was on the plane, doing an SSH tunnel back to my Linux machine over here, vibe coding on this machine even though I was at 30,000 feet, and saying, what a world we live in. And I was thinking in my fever brain how awesome this is: what does this do to SaaS? What does this do to any legal tech software? Like, if I, as some idiot, can be at 30,000 feet vibe coding my dreams into reality, what hope does SaaS have?
And Jason brought me back to reality, saying, "Hey, have you met your average lawyer?" And he made the really good point that we as lawyers and law firms have a build versus buy versus rent decision, right? Building seems pretty good, 'cause you get exactly what you want. But building is really hard, because lawyers are not UX people. We're not UI people, right? So building is hard. Buying is super easy, right? You just pull it off the shelf. But the problem is most of your users think the thing you buy sucks, because it doesn't have the button you need, or it has 10 buttons where it should only have one, right? So buying is easy, but it sucks. But then you have the third option, kind of renting. And I was thinking maybe there's a number four: customizing what you buy. And I thought maybe somebody could vibe code to the extent that I, as a vendor, provide an open source version of my thing. So if it has 10 buttons and you only want the one button you need, man, you just vibe code it down to one button. And then you get the one button, right? And somebody else wants 20 buttons? Cool. Now they get 20 buttons. So in saying that people can vibe code their own versions of the software in this kind of malleable way, I thought maybe this is the way. Jason Barnwell brought me back and said, yeah, have you seen how many people stick with the defaults? Even though you can go into the system settings and change them, and it's two clicks that could make their lives so much easier, two clicks is too hard for them. So for those people for whom two clicks is too hard, Damien, why do you think they would vibe code at 30,000 feet? Maybe the thing that excites you, Damien Riehl, doesn't excite 99% of other people. And he said, I love a delicious meal, but I hate cooking.
But I know other people love cooking, so I'm happy to give my money, in the form of restaurants, to those people who love to cook. And it just reminded me that the user is not like me. That's a UX mantra. We think in our brains the way software should be, the Platonic ideal, but the user is not like me. Your and my excitement about vibe coding (and by the way, we've done a podcast episode on this that will be released one day), we are so excited about vibe coding, in a way that 99% of the populace is not. So, I think yes, people are lazy. Yes, people will stick to the defaults. And mostly that's because they don't want to learn anything new. For me, if I need to get from A to B, I don't want to learn to fly a helicopter. I just want to get from A to B, right? Sure, you can teach me to fly, but that feels like a waste of my time. And I think that's what a lot of people feel when they have a piece of software in front of them and they need to get the output. Which is why there's this phrase that Pablo Arredondo has used publicly, and I have my own spin on it. He says "ambient AI," and I say "invisible technology." And I think you would quote Steve Jobs here: the best UX is no UX, right? The best UI is no UI. And I think that's where we eventually have to end up for this technology to be adopted en masse. For everybody to use AI, make it completely invisible, so they don't even know they're using AI in the first place. I think that's right, and that goes back to our access-to-justice theme. You used to just do a plain old Google search, and now people don't even realize that all of the output they see on page one is AI. And so this is ambient AI. Essentially, people are saying Google's gotten a lot better because of AI.
Remember when Google first got into the AI game and it was such an awkward experience, you know, trying to catch up to ChatGPT? Look at it now. Without our even noticing, the Google search results, the synthesized answers, are so good that a lot of links aren't even clicked on anymore. And meanwhile, ChatGPT is falling off a cliff. That's right. There's another whole episode to be had about Google's business model of ads: if people aren't clicking on links, then where does that ad revenue go? And of course there are all sorts of antitrust discussions to be had there. Yeah. Well, those are bigger topics than we can cover on this podcast today, I'm afraid. So my final question. This is about selling legal expertise via license fee. There was a LinkedIn post that Joe Cohen made last week about what he's hearing from client meetings after he joined Harvey. And one of the themes is selling legal expertise via license fee. That's the Harvey model and the Legora model: codifying everything and then letting clients log into a portal. I wanna ask, what do you think of that? Or is that too big a topic to cover today? It could be too big a topic. It sounds like you're saying, productize my legal knowledge. Go ahead. Exactly, exactly. Is there actually a future in productizing the knowledge? I think maybe, if that knowledge is scarce. Whatever is scarce is valuable, right? Scarcity equals value. Abundance means "not valuable." So there's a real question: is the thing that I'm going to productize just, essentially, how to do civil litigation, where there have been a thousand books on how to do civil litigation? That's not super valuable. In contrast, if you are the world's expert on private equity funds and the ways to flip a company from, you know, x to 100x... Last episode, yeah. That's right. That's right.
They have a lot of scarce knowledge that is worth spending money on. So I think whether one can productize legal knowledge depends on how scarce the knowledge is that you're gonna try to put into a product. So let's go to optimism. I'm optimistic that we are going to come out the other side of this. I think there's gonna be a lot of societal turmoil, and a lot of unrest. There are gonna be a lot of pitchforks. I'm not excited about that; I'm not optimistic about that. I think that will probably happen. But I am optimistic about coming out the other side of the pitchforks and maybe having a more equitable society as a result, where we are gonna have more access to legal information than we ever have had in the past. And I think society will be better. I think the lawyers that are gonna do more counseling work, like the counseling Jake and Mark have given me, I think they're gonna be happier. I'm optimistic that the lawyers will be happier, and their constituents and clients will be too. What are you optimistic about? So this week we, as a company, as a team, signed up to Claude Enterprise for all of our engineers. And it's a move that I resisted for a very long time because of complacency: like, once you start using it, people surrender their thinking, surrender their hard work. And what we have found is that we can actually overcome this. With enough training, effort, and guidance, we can, as people, learn not to be overly dependent on generative AI. And I'm optimistic that we are gonna find a way through all the risks and dangers that have been researched and presented to us. So that's my optimism this week. So knowing that we don't have much time: how are you going to get rid of that complacency? It's about teaching people what the risks are. It's about teaching people what gen AI is good at and what it's not good at.
It's about showing them through experimentation: hey, if you do this, here's the outcome, versus here's something more specific, which gives you a better outcome. And teaching people to work with this new type of intelligence that is not anything like a human intelligence. I love it. Here's something we'll save for the next episode, as a teaser for everybody else. I've had this very same conversation with my neighbor, who I vibe code with. We are using a harness called "Getting Shit Done," GSD. You can use Claude Code to do things really quickly, but it sometimes does things poorly, whereas GSD puts a nice harness around it: okay, let's go through the proper stages, where you do a PRD for each one, and we're gonna spend a lot more time. The benefit of number one, just Claude Code, is that you do it fast, but shitty. The problem with number two is that you do it slowly, and my ADHD brain is like, I'll get to it later because it's taking so long. So I think we societally are gonna be trying to find the balance between these two. And it sounds like you are saying, hey, let's go more with number two, put more guardrails around this, and think harder about it. We are gonna talk about this more in the next episode, because I've got my own theories around this after having played with Claude a lot and done my own vibe coding with games. I think there are strengths and weaknesses to each one, but let's cover that in the next episode. Awesome. Horace, I love every single conversation with you. Likewise, Damien. I am looking forward to the next one, and we are soon gonna be releasing episode one with Jae and Ed. So, every episode is amazing, and I'm looking forward to our next one. And thank you, everyone, for listening. Thank you, everyone.