November 17, 2025 | 00:21:29

Austin Sun: The Two Biggest Mistakes Companies Make with AI
AI Chronicles with Kyle James
Show Notes

In this episode of the AI Chronicles podcast, host Kyle James discusses the innovative AI startup Clausey with founders Austin Sun and Larry Galan. They delve into the challenges of contract management that inspired the creation of Clausey, the advantages of an AI-first architecture, and the importance of data security. The conversation also explores the differences between open source and closed source LLMs, the results seen by Clausey’s clients, and future initiatives focusing on agentic workflows to enhance efficiency.

 

Links:

 

Clausey: clausey.ai

 

GPT Trainer: Automate anything with AI -> gpt-trainer.com

 

Key Moments:

  • Clausey was founded to solve personal pain points in contract management.
  • AI is integrated at the core of Clausey's product.
  • The platform allows for multi-document context preservation.
  • Data security is a top priority for Clausey.
  • Open source LLMs can significantly reduce costs.
  • User experience is crucial in AI product design.
  • Clients have expressed a strong demand for effective contract management solutions.
  • Agentic workflows will streamline repetitive tasks.
  • The AI landscape is rapidly evolving with new models.
  • Clausey aims to enhance automation in business processes.

Chapters

  • (00:00:00) - Introduction to Clausey and AI Implementation
  • (00:01:44) - The Founding Story of Clausey
  • (00:03:47) - AI-First Architecture in Clausey
  • (00:06:39) - Preserving Context with AI
  • (00:08:25) - Data Security and AI Usage
  • (00:10:33) - Open Source vs Closed Source LLMs
  • (00:15:32) - Results and Customer Insights
  • (00:18:38) - Future AI Initiatives and Agentic Workflows

Episode Transcript

Kyle James (00:01.016) Hey, welcome to the AI Chronicles podcast. I'm your host, Kyle James. Today we'll be discussing how a company called Clausey is using AI inside of their own business, and we'll share the exact steps that you can take to implement AI for yourself. Now, before I dive into that, listen closely. Are you looking to implement AI inside of your own company, or maybe just struggling to get your AI to stop hallucinating? Speak to GPT Trainer. GPT Trainer literally builds out and manages your AI for you, eliminating hallucinations for good. Go to gpt-trainer.com. I promise you, it'll be the biggest time-saving decision that you've made all year. Trying to set up AI on your own is like trying to build a house from scratch. Sure, you could do it, but the time and frustration it's going to take you to get it finished may not be worth it. It's a thousand times faster and safer to hire professionals. Schedule a consultation today. Once again, that's gpt-trainer.com. Today I have with me on the show Austin Sun and Larry Galan, who are the founders of an AI startup company called Clausey. With over 10 years of experience in data engineering and legal, Austin and Larry are building Clausey to gather documentation insights and implement automation using LLMs to unlock intelligence from documents. Really excited to have these guys on the show. Hey Austin, Larry, welcome in.

Austin Sun (01:30.893) Thank you.

Larry Galan (01:31.029) Great to be here.

Kyle James (01:32.908) Yeah, for sure. So we'll be going kind of back and forth here and there, so really, really appreciate you guys being on. Give us a little bit of background here. What is Clausey? How did you come up with it? What exactly are you guys doing? Give us a little backstory there.

Larry Galan (01:46.561) Sure. Thanks for having us on, Kyle. In my past life, I've heard that people start businesses for one of two reasons.
One, because an idea has a big TAM, or two, because an idea solves a personal pain point. For us, it was really the latter; it was the personal pain point. Both Austin and I previously worked at different startups, and we both ran into the same problem over and over, which is: contracts are really difficult to manage, especially if you're doing B2B SaaS or even B2C. You have a lot of customers, there's a lot of customization, and every year I would spend hours, same with Austin, just negotiating the fine details of each contract. And then there are all these little things that oftentimes get in the way, like auto-renewals or termination clauses, things that can pose a risk to your business and are very difficult to track. So the idea for Clausey really came from: well, AI has had this amazing revolution over the last two years. Can we use it to build a better product, to track all of the key fields in our contracts, to get automated reminders, and to just feel comfortable as individuals that there are no risks going through our business that we're not aware of?

Kyle James (03:03.81) Hmm, yeah. So it's almost like the company was founded based on the pain you guys were initially having. It was like, hey, we're having these problems ourselves. You were working hand in hand with these contracts, and there are so many things that create issues. Especially with a contract, anything at all, like delays, can hurt business, I would imagine, depending on how big the contract is.

Larry Galan (03:31.233) 100%, and it's so easy to lose track of stuff.

Kyle James (03:35.084) Yeah. So, okay, now you're using AI over at Clausey. Man, tell me why. What exactly is happening on the AI side that's maybe taking you to the next level, especially for those who are using the platform?
Larry Galan (03:50.817) So when we started, Kyle, we really wanted to create an AI-native platform. And what does that really mean? It means that AI is at the core of the product that we're building. There are a lot of competitors in this space; there's a big acronym for the space, CLM, for contract lifecycle management. But the reality is most of them are over 10 years old. They're not taking advantage of an AI-first architecture in how they built their products. So what does that mean for us? Well, really what it means is a very simple user interface. You upload a document, all the data is immediately extracted by an LLM, and it's double-checked to ensure we have fidelity of data and that it's 100% accurate. The second thing it means is that, because we're building using an AI-first architecture, there are all these advanced frameworks that have emerged over the last few years that allow you to do well above and beyond what ChatGPT is doing. So let me give you a very basic example. Right now, if you go to ChatGPT and you upload a document, let's say it's a contract or a resume or anything, you can ask a set of questions exclusively about that document. But your ability to preserve context between documents is very limited. With the system of AI agents that we're implementing at Clausey, that problem goes away. All of a sudden, you can preserve multi-document context, which is pretty revolutionary, because you can now ask a question like: go through all my contracts and tell me what my projected spend is for the next three months. Go through all my contracts and tell me where I have variations of different insurance clauses.

Kyle James (05:29.794) Hmm.

Larry Galan (05:34.205) Eventually, if you go a step further and you integrate it with a solution like MCP, you can solve even more complex problems, like: go through all my contracts, connect to my QuickBooks, and reconcile expenses. So it's building on top of each other.
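To make the multi-document query Larry describes concrete, here is a minimal sketch in Python. Everything in it (the contract records, the `monthly_spend` field, the `projected_spend` helper) is a hypothetical illustration, not Clausey's implementation; the point is that once an LLM has extracted structured fields from each contract, cross-document questions reduce to simple aggregations.

```python
from datetime import date

# Hypothetical per-contract fields, as an LLM extraction step might produce them.
contracts = [
    {"vendor": "Acme Cloud", "monthly_spend": 1200.0, "ends": date(2026, 6, 30)},
    {"vendor": "DataCo",     "monthly_spend": 450.0,  "ends": date(2026, 1, 31)},
    {"vendor": "LegalTool",  "monthly_spend": 300.0,  "ends": date(2025, 12, 15)},
]

def projected_spend(contracts, start, months):
    """Sum monthly spend across contracts still active in each of the next `months` months."""
    total = 0.0
    for m in range(months):
        # Advance month by month from `start` (day-of-month ignored for simplicity).
        year = start.year + (start.month - 1 + m) // 12
        month = (start.month - 1 + m) % 12 + 1
        first_of_month = date(year, month, 1)
        for c in contracts:
            if c["ends"] >= first_of_month:
                total += c["monthly_spend"]
    return total

# "Go through all my contracts and tell me my projected spend for the next three months."
print(projected_spend(contracts, date(2025, 12, 1), 3))  # 4800.0
```

The hard part, of course, is the extraction step that fills in those fields reliably; the aggregation itself is the easy layer on top.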
But in order to do that, you really need to start from the ground up and build using an AI-first architecture. And that was our objective.

Kyle James (05:55.308) So you mentioned "preserve context," and I think that example was really good. So when people go through ChatGPT, they're not getting as thorough a look, almost like a magnifying glass? Is that the way of saying it? Kind of break down what preserving the context means, because I think a lot of people might have questions on that.

Larry Galan (06:17.813) So Austin is actually leading our full AI architecture. He can provide a better answer on this.

Austin Sun (06:21.976) Yes, good callout. So with something like ChatGPT, you have a tool, a platform, that is pretty interactive with whatever the user types in. So it gives you flexibility, for sure: if you put the work into writing the prompts, you can get a lot out of it. And second, a lot of people assume that with ChatGPT you just open it and ask a question:

Kyle James (06:22.414) Awesome, you're on the spotlight, my friend.

Austin Sun (06:51.336) Tell me this, tell me that. Just one sentence, which we call a zero-shot prompt. That normally does not work. So we especially add, I would say, a more complex, more detailed level of instruction about how to make the prompt work in a way that works perfectly. Then we do a lot of testing on that. So we have a framework for the prompts. Number one, it works; we know how to tune the prompt to understand the contract in certain ways. And the second secret is that we are not using ChatGPT. The reason for that is, number one, it's not secure in most cases, and second, when we compare, there are answers from other providers that are better.
And then lastly, we can leverage a lot of the potential of self-hosting an LLM, for the customer who says: I care about privacy 100%, I want to host this in a private place, how can I do that? So that's the path we're choosing. We are not building a ChatGPT wrapper, but we are using the magic of the LLM to answer these kinds of questions.

Kyle James (08:11.212) Yeah. I see this a lot with different people I'll have on the show: companies will take their secure documents, data, contracts, agreements, maybe personal information, whatever it is, and their own employees are throwing it into AI. You mentioned something there: you're not just using ChatGPT, you essentially have a protective way where your data is being protected. How does that work? And what are some of those maybe big mistakes companies are making who say, "I'm just throwing it into Claude, Anthropic, or Google Gemini, or ChatGPT, and it's giving me decent answers"? What would you say to get them to maybe rethink this?

Austin Sun (08:57.88) So to give you more background, I do have a security background, and a legal background as well. I would say for the industry right now, for a lot of the applications we have, security is not about someone becoming a hacker and hacking your password. No, it's not that. Security for AI is more about whether the quality of these answers puts me in a dangerous place in a business situation, which is what hallucination is: the model tells you about something that does not exist and tries to make up an answer for you. That is what we consider the first way it's not secure.
It's not a secure answer if it makes me too comfortable making a business decision. And then the second layer is traditional cybersecurity, which is things like how to protect passwords. For example, we do not save any usernames or passwords. We leverage OAuth, which is a pretty standard way to do authentication: the user's password is protected by Google or by Microsoft. And then we lean a lot on the cloud environment, which has been proven over 10, 20 years as a secure method for hosting applications worldwide. So that's how we think about it, in those two ways.

Kyle James (10:17.548) Yeah, yeah. Cause it's like a both-and. Or, go ahead, Larry.

Larry Galan (10:23.124) One thing I might add to that, Kyle, is I think this could be an entire podcast unto itself, because it really gets at the heart of: do you want to be with an LLM provider that's closed source or open source? We, for example, have very strict data privacy agreements with the LLM providers that we're using. But as Austin mentioned, we're also exploring the ability to self-host LLM models. And we think in the future...

Kyle James (10:51.213) Tell me about that, I want to hear about that too, but keep going.

Larry Galan (10:54.44) Sure. So the idea there is, rather than, let's say, using ChatGPT if you're a Fortune 500 company or a medium-sized business, and calling into their LLM through an API, which is closed source (you don't know the weightings on the model, but it works very well), you can actually use an open source model like Llama or DeepSeek and self-host it on your own infrastructure, where theoretically it doesn't even need to be connected to the internet. It's just residing on a server, running on GPUs, and the data is 100% secure. There is basically no external API, if you will. There is a unique ecosystem and connection associated just with your data that you can interact with. So...
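A minimal sketch of what the self-hosted setup Larry describes can look like, assuming an OpenAI-compatible server on localhost (servers such as vLLM or llama.cpp's llama-server expose one, so no data leaves your network). The endpoint URL, model name, and contract text below are all hypothetical:

```python
import json

# Hypothetical local endpoint; an OpenAI-compatible server running on your own
# hardware means the request body below never crosses the public internet.
LOCAL_URL = "http://localhost:8000/v1/chat/completions"

def build_request(model: str, question: str, contract_text: str) -> dict:
    """Build an OpenAI-style chat payload for a self-hosted model."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "Answer only from the contract provided."},
            {"role": "user", "content": f"Contract:\n{contract_text}\n\nQuestion: {question}"},
        ],
        "temperature": 0,  # deterministic extraction, not creative writing
    }

payload = build_request(
    "llama-3-8b-instruct",  # hypothetical model name
    "What is the termination notice period?",
    "Either party may terminate with 30 days' written notice.",
)
# This is the JSON body you would POST to LOCAL_URL with any HTTP client.
print(json.dumps(payload, indent=2))
```

The same payload shape works against the closed source APIs too, which is what makes swapping a hosted provider for a self-hosted model relatively painless.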
Larry Galan (11:44.202) We think this is one of the reasons why, for example, Facebook built their model to be open source. And there's a big discussion here about what's going to prevail in the long run. Will businesses want a closed source model, like ChatGPT or Anthropic's? Or will they want something they can completely customize and entirely control security over?

Kyle James (12:06.958) Yeah. Let's talk about that: open source or closed source, why would a company... Because I know DeepSeek is a huge, really a whale of an LLM, right? But with it being open source, why would a company go, hey, I'm going to go open source, versus going closed source, like ChatGPT or Claude and Anthropic? I know you mentioned one reason, the customization, but break down a little bit more how one would benefit versus the other.

Larry Galan (12:36.214) I think, and Austin, feel free to chime in, the biggest reason most people would want to use an open source LLM is cost. All of a sudden, you're not paying the API cost. You just need to buy the GPU, or rent the GPU yourself and host it on the cloud. But all of a sudden, your cost to process data is a lot cheaper in the long run if you're going through vast amounts of data. Another potential advantage is if you're building a multi-agent framework. You might have different LLMs working for each agent in the process. For example, we're building a master agent where the most powerful LLM is located, but for a lot of tasks, you actually don't need the most powerful LLM working every time. You can actually use a slightly less powerful LLM and get an answer that's just as good.
And so by using an open source framework, you can cut that cost dramatically, because agentic workflows will typically increase your processing costs by 15 to 20x, depending on how many LLM agents you're using in your framework.

Kyle James (13:50.092) Yeah. I can almost see this happening once a company has it built out: hey, here's our agentic framework, they have the process, they have the workflow going, and it's like, okay, how do we cut it back? I think a lot of companies are like that, where they create something, they market something, they sell something, and then they go, okay, how can we save more and cut costs? This is where I think you're going a little bit deeper: okay, we can cut it down by making sure we're not using the newest model that's taking 300 message credits; we can reduce that down to 10 message credits, or whatever it is, for this task. But that other task requires more memory, more context, things like that, so okay, it might make sense to really do more of that customization there. What do you say?

Larry Galan (14:35.586) That's exactly right, Kyle. Just to put it in perspective for some of your listeners: when ChatGPT came out, the first version, back in, I believe, November of 2022, it was charging $4 per million tokens. Earlier this year, DeepSeek was released. So ChatGPT is obviously a closed model.

Kyle James (14:50.702) Hmm.

Larry Galan (14:57.878) DeepSeek is an open source model, but DeepSeek also has an API that you can connect with. The cost for their API was about $0.14 per million tokens. So in the span of two years, you had a decrease of over 96% in API pricing. Now, obviously, ChatGPT is still charging a premium. But then on top of that, DeepSeek is actually a far more capable model than the initial version of ChatGPT. So it's sort of a win on both quality and price.
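The pricing comparison in this exchange is easy to sanity-check. A quick sketch using the two prices as quoted in the conversation (real current prices vary by model and provider); the drop from $4 to $0.14 per million tokens works out to roughly 96.5%:

```python
def cost_usd(tokens: int, price_per_million: float) -> float:
    """API cost in dollars for a token count at a per-million-token price."""
    return tokens / 1_000_000 * price_per_million

old_price = 4.00   # ChatGPT API at launch (late 2022), $/M tokens, as quoted
new_price = 0.14   # DeepSeek API, $/M tokens, as quoted

# Processing 10M tokens of contract text at each price point:
print(round(cost_usd(10_000_000, old_price), 2))  # 40.0
print(round(cost_usd(10_000_000, new_price), 2))  # 1.4

# The price drop as a percentage:
print(round((1 - new_price / old_price) * 100, 1))  # 96.5

# Even with the 15-20x agentic-workflow multiplier mentioned above, the open
# source price still lands below one pass at the closed source launch price:
print(round(cost_usd(10_000_000, new_price) * 20, 2))  # 28.0
```

That last line is the economic argument for routing most agent steps to a cheaper model and reserving the most capable one for a master agent.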
But in the meantime, ChatGPT has also improved its models.

Kyle James (15:35.756) Yeah, it's like a race that's happening here. ChatGPT has that first dibs, right? Everyone knows ChatGPT, and people are slowly discovering all these other LLMs. But in this case, if you really want to go to that next level, especially for enterprises who are pulling a lot of data and using a lot of tokens, it might be worth doing the open source, if you have the team that can...

Larry Galan (15:57.834) ...and self-hosting.

Kyle James (16:01.73) I'll start transitioning here, kind of shifting gears. What types of results have you been seeing, maybe both internally for your team and then obviously working with a lot of your clients? What have they been seeing that's maybe easier than it was before?

Larry Galan (16:16.576) I would say it's speed of processing and accuracy. And Austin, feel free to chime in. The way we built our platform, and I think AI enables this, is that what makes most LLMs these days pretty groundbreaking inventions is not just the underlying technology; in our mind there's also been an improvement in the user experience. You log into ChatGPT or Manus, and it's one screen that has a list of all your chats and all your histories. We're trying to go the same route, where we don't want to have more than two to three screens in our application. You can interact, you can drag and drop documents and immediately get things from them, you can customize things, but it doesn't require a PhD to be able to navigate the app. So that would be, I think, in our mind, the biggest revolution that's accompanied not just the development of the LLM, but actually how products are now being designed from a user interface perspective.

Austin Sun (17:17.644) Yeah, just from our experience when talking with our customers.
So I got the impression, and maybe it's just that I've been in this industry for many years, that the people in our customer segments, when we talk with them, are struggling with these knowledge workflows. And we can clearly see that they have a demand. I remember one customer from Eastco said, literally, she had been looking for a solution for two years until they saw our email: Clausey has something that can work for them, and they need to manage their contracts and then get insights from them. So that is, for me, the most exciting point: we see that our own personal puzzles around how to handle contracts...

Kyle James (17:52.75) Hmm.

Austin Sun (18:13.474) ...other people have the same pain point, and we have market feedback that proves the solution is going to help a lot of other people.

Kyle James (18:22.59) Yeah, I think it's cool, because it goes back to what you said in the very first question: it was our problem, it was our pains that we were experiencing before. And now you're taking that and going, okay, there are other people who are experiencing the same thing. And two years of searching is a long time to find the right software and solution that can make things easier. That just goes to show how much real value it brings to the market, because you guys are building something that's tailored to the pain and solving customers' problems. And what would you say, especially over these next couple of months, since things have changed and will change: what are some of the upcoming AI initiatives that Clausey's planning, and where do you see AI playing some of the biggest roles in your operations next?

Larry Galan (19:11.734) I think undoubtedly it's agentic workflows. ChatGPT came out and allowed people to answer questions very accurately.
Now we have systems like RAG that basically allow users to analyze various different documents. The next phase of this is really agentic workflows. Now that you can preserve context between documents, you can create tasks. You can interact with other applications, like email, like QuickBooks, like Gmail, and handle a lot of the stuff that is very painful right now for medium-sized businesses. It's oftentimes really simple stuff, like downloading things into an Excel file, re-sorting it, making sure it comes in the right format. I think in the next few years, the agentic workflows that are now being developed, and that we're certainly focusing a lot of our time on, will help reduce a lot of the time that's spent on those unnecessary but very time-consuming tasks.

Kyle James (20:18.71) Yeah, absolutely, I could definitely see that. I mean, ChatGPT and Claude came out with their multiple integrations within their platforms. So you have a lot of companies building on the real pain points, and you have ChatGPT and the LLMs going, okay, we want market share too, because we're the LLM. And I think you're exactly right, Larry: it's workflows making things easier, where instead of asking just one question, you ask a question or have it do a task, and it completes four or five tasks in one sitting versus manually one by one, you know what I mean? So, as we start wrapping up the conversation, Larry, Austin, it's been great having you guys on. Where can people go to learn a little bit more about you guys, and maybe a little bit more about Clausey, that you'd recommend they check out?

Larry Galan (21:07.938) Sure, so you can check us out at Clausey.ai, C-L-A-U-S-E-Y dot ai. Also, you can follow us on LinkedIn. We have a Clausey.ai page.
And we post a lot of pretty good content there pretty regularly.

Austin Sun (21:24.997) We have some blog posts, like white papers, on our website. They're broken down by topic and by industry, and they show exactly how and why AI can help.

Kyle James (21:37.39) Awesome, love it. Thanks, guys, it's been great having you on today. I appreciate your perspective and input, and I'm sure everyone listening has been enjoying it. Thanks again, everybody, for listening in. And remember, please, please, please, I hope it's clear now: if you're going to implement AI into your business, don't try to do it yourself. The time it takes, and the issues the AI could cause, may not be worth it. So schedule a call with GPT Trainer and let them build out and manage your AI for you. Once again, that's gpt-trainer.com. Signing off for now, have a great rest of your day, and I'm looking forward to seeing everyone on the next episode of AI Chronicles.
