Episode Transcript
Kyle James (00:01) Hey, welcome to the AI Chronicles podcast. I'm your host, Kyle James. Today we're going to be discussing how Vector HX is using AI inside of their own company. Vector HX transforms customer touchpoints into strategic advantages by designing informed customer experiences, expert user experiences, and relevant employee experiences. They do all of this through deep research and expert digital design to help their clients' businesses succeed. Before I dive into that, you need to listen closely. Are you looking to implement AI inside of your own company, or just struggling to get AI to stop hallucinating? Speak to GPT-trainer. GPT-trainer literally builds out and manages your AI agent for you, eliminating hallucinations for good. Go to GPT-trainer.com. I promise you it'll be the biggest time-saving decision you've made all year. Trying to set up AI on your own is like trying to build a house from scratch. Sure, you could do it, but the time and frustration it's going to take you to get it finished just isn't worth it. It's a thousand times faster and safer to hire professionals. Once again, that's GPT-trainer.com. So today I have with me on the show Eric Karofsky, the CEO and founder of Vector HX. Hey Eric, welcome to the show. So glad to have you on today.
Eric Karofsky (01:23) Thanks, Kyle. Great to be here.
Kyle James (01:25) Awesome, man. So tell me, give us a little bit of background on Vector HX. How did you come to found that company?
Eric Karofsky (01:33) Yeah, so for about 20 to 25 years I was a consultant for large agencies, worked with some great brands, and led projects for companies like Michelin, Royal Caribbean, Fidelity, and others. And then I went client side and worked for the B... I started to hear a lot, and realized that companies are recognizing how important customer experience is and how they can support it and drive it beyond just analyzing and sending out Net Promoter Scores. A lot of statistics started coming out, such as Deloitte's research saying that companies with a customer-centric mindset are around 60% more profitable. But people know that overall user experiences aren't great. Relating this now to the AI side, I started seeing a lot of people jumping on the AI bandwagon prematurely. The best example there is these chatbots that started coming out that are really just poor...
Kyle James (02:36) Mm-hmm.
Eric Karofsky (02:38) ...experiences, kind of like the automated phone systems of the '80s and '90s. That really just creates more frustration. So given my unique position of strategy and UX and product and AI knowledge, I realized it was a good opportunity.
Kyle James (02:43) Right. Okay. Gotcha. And so you made that transition. How long ago was that, when you finally founded Vector HX? Three years ago? Okay. And so you made that transition, and I guess one of the bigger pieces of that was the AI inside it. When you first founded the company, was AI at the foundation of it, or did you implement it maybe later on, over the first year or two?
Eric Karofsky (03:01) Three years ago. So, one of my clients, a former machine learning scientist at the Broad Institute, which is just an incredible genomics research powerhouse, left there to become head of AI at one of the large pharmas. And he asked me if I could start doing some work for him. And that turned into more work and different things. That's actually what made me go ahead and leave and start my own company: that sort of work.
Kyle James (03:32) That's cool. Wow.
Eric Karofsky (03:45) And I just saw some great opportunities in what he was doing, what his team was doing, and realized that everyone was in the same boat, trying to jump on the AI bandwagon. But there were challenges, and places where I felt I could really help.
Kyle James (04:01) Wow, that's incredible. It's funny how that connection with a colleague turned into "Hey, can you help out in this area?", and then it was one opportunity after the other, and that sparked the interest to say, "You know what? I think I'm going to take that jump." I'm guessing you're glad you did, and obviously business has been well.
Eric Karofsky (04:21) Yeah, it's been a wonderful ride so far. It's been great. I've been lucky enough to have some great clients and work on some really cool projects.
Kyle James (04:29) So you mentioned some of the initial projects you were working on through your network. What were some of the challenges, especially on the AI side, that you were trying to solve when you founded the company?
Eric Karofsky (04:45) Yeah, so one of the things that we saw was that within the enterprise, there's really poor adoption. McKinsey actually has a great stat on this: more than 80% of organizations aren't seeing tangible impact on enterprise-level earnings from their use of AI. I started asking why, and I started realizing that AI right now, in large enterprises anyway, is in the engineers' hands. The engineers are the ones adopting it, playing with it, cultivating it, and doing really cool things. But the problem is that those things aren't getting adopted, and their projects often end as skunkworks projects. We started looking into it, and one of the reasons why is that there are often very poor user experiences. This is usually an engineer or a team of engineers creating what they feel is the right thing. They have great insight, but they're not always the users, so they're not touching base with the users. Standard software design principles weren't being followed, because it's all moving so fast. We also saw a lot of issues around trust and explainability: users see this AI as a black box, so how willing are they really to bet their career on a recommendation?
Kyle James (06:04) Mm-hmm.
Eric Karofsky (06:12) Other items: there's not always a strong alignment with actual business needs. They're developing cool things, but it's not always aligned with the KPIs of the organization, so it doesn't get funding. So you put all that together: if it's a bad experience, if I don't trust it, and if it's not solving my business need, I'm not going to use it.
Kyle James (06:12) Hmm. Sure.
Eric Karofsky (06:35) And we started asking: how do we go ahead and start building better user experience, better human experience, into this? There's a great quote from Nina Kotler, and I love her title, the associate chief medical officer for clinical AI at Mass General Brigham. Her quote is: "You could have the best AI tool in the world, a highly accurate one, but if users don't like it and won't use it, then it's totally worthless."
Kyle James (07:02) Wow. Man, that's wild. It's almost like discovering gold: if you can't get the gold refined, or the diamond cut and presented in the store, it doesn't matter. It's just in your hand. It's just a tool, right, until it actually gets into the business and is being seen by others? It works, the customers are happy, but...
Eric Karofsky (07:17) Right.
Kyle James (07:26) It seems like the user experience and the customer side are such a huge part of it, and it can't just end at the engineers' hands. It's got to get to that next phase, and it sounds like that's what your team is doing: taking it to the next phase.
Eric Karofsky (07:42) Yeah. It's often about taking a step back and saying: how do we go ahead and really understand the user and the business problem? And then let's design for that.
Kyle James (07:52) So walk me through it, kind of step by step. You get a new opportunity and you've got the project scope. What does the process look like for those who say, "Yes, I'm going to work with Eric on getting this fine-tuned"?
Eric Karofsky (08:07) Yeah, so it can go a few different ways depending on how it comes in. But one of the things we do is help make those AI interactions as transparent and understandable as possible. We'll do things like refocusing the initiative to make sure it focuses on the business problems. Sometimes that requires interviews with different stakeholders, because ultimately those stakeholders are the ones with the budget who need to go ahead and fund this small project, make it a bigger project, and actually roll it out.
Kyle James (08:42) Right.
Eric Karofsky (08:47) Other things we'll do: making sure we're measuring impact through real user and business success, those KPIs, not just technical metrics. Accuracy is critical, but we need to make sure it's driving ROI, and ask how we align the work with that ROI. And then there's making sure we design for the experience that users truly value. The goals, almost the adjectives, of what drives adoption keep changing. Back in the earlier days, for user experience folks, a good application was judged on: is it intuitive? Is it consistent? Is it forgiving and efficient? And that was really appropriate for desktop and mobile applications. But when you start thinking about voice, when Alexa and Siri came out, I don't know, five, ten years ago, all of a sudden there's a personality there, and you need to start thinking: how contextual is it? Is it conversational? Is it reliable? Is it personable? And also, is it not annoying? Sometimes you want a quick answer from Alexa and she'll go on and on and on.
Kyle James (09:52) Right, right. Yeah, yeah, it's true.
Eric Karofsky (09:58) You kind of need to balance that out. And then when we start thinking about some of the newer generative AI models, like the LLMs: how deep is the knowledge before it starts hallucinating, and how do you make sure the user understands that? How adaptive is it to what it knows about you and to the changing models? And how explainable is it?
Kyle James (10:10) Mm-hmm.
Eric Karofsky (10:23) You know, one of the things that I thought was really interesting: a few weeks ago, OpenAI rolled back a model because it was just too flattering. And I actually noticed this. I put in a blog post and said something like, "Evaluate this for me." And the response was, "Wow, what a great podcast. What a great blog post. You're really onto something." And it's like, what? It just felt wrong.
Kyle James (10:36) Wow. Yeah, it felt like a human, better than a human. I've not been flattered that much in my entire life. Like, wow.
Eric Karofsky (10:51) Yeah. And honestly, it wasn't that insightful. Evidently they rolled that back and tweaked some of the information to make it a little bit more relevant.
Kyle James (11:00) Yeah, that's so funny. That makes me think about my wife. I've been trying to get her to use some of the AI, and she's been using it pretty consistently now. She'd asked it a question, and she said, "Look what it said to me. Look how it validated me." It was hilarious, because it's like, wow, maybe I can learn a couple of things from the AI as far as affirmations and things like that. But no, it's absolutely incredible how much is changing. And when you mentioned earlier...
Eric Karofsky (11:31) Yeah, that's right.
Kyle James (11:40) ...with some of these different roles: what you're sharing with me is that you take so many different approaches. If you've got one idea, there's this angle, that angle, this perspective; this person needs to have eyes on it; we've got to get the funding approved. Do the people who come to you have all these answers laid out? Or are they saying, "Here's my idea, help me build this thing," and then you're taking it in and going, okay, let's take all the different angles and approaches to make sure it's successful? Is that what you're seeing?
Eric Karofsky (12:10) Yeah. And sometimes it's pretty elementary stuff, just things the engineers aren't thinking about because they're thinking about solving other problems. One of the things we'll do is use a design thinking approach, and that's kind of a loaded term that might mean something specific.
Kyle James (12:19) Yeah, yeah.
Eric Karofsky (12:29) What it's really about is: how do we get a bunch of people in the room and start talking, people who are diversified, who come in with different opinions and different thoughts and different expertise? And how do we come up with, maybe by structuring a brainstorm session...
Kyle James (12:45) Mmm.
Eric Karofsky (12:46) ...some sort of alignment, while making sure that the solution being crafted isn't just crafted for the engineer who's coding? It's crafted for a wider audience, where they can get some real, deep information. Another specific example: "trust" gets thrown around a lot and often, and I learned my lesson with this. I came in and started talking to an engineer about how we need to develop trust...
Kyle James (13:04) Mm-hmm.
Eric Karofsky (13:11) ...and trust is a very technical term. It's also a very non-technical term on the user experience side, where we're talking more about transparency and clarity of interactions. So you need to make sure you're speaking the right language. You also want to help people prioritize.
Sometimes there are a lot of different ideas out there, and you want to choose...
Kyle James (13:13) Yeah. Sure. Right. Mm-hmm.
Eric Karofsky (13:33) ...the right ones, and you want to roll them out in a systematic way, making sure it's the most appropriate path to get the adoption that you need.
Kyle James (13:44) Yeah. No, I think that's a big one too, because with many people, especially those who are the idea creators here, right, or the engineer: there are so many directions you can go and so many things you can do, and you have to pick and choose what's going to be the best investment of your time, what's going to get the highest ROI. I think that's paramount. So it's not just one item; it's pursuing multiple items to make sure that you get the result, the success that they're aiming for. So talk to me a little bit about some of the clients you've been working with. What types of results have you been seeing on your end, and what are they seeing on their end?
Eric Karofsky (14:30) Yes, I'll tell you about one specific project, just a really cool project and a great mini case study. We're working with a large pharma, the pharma that I was speaking about beforehand.
Kyle James (14:42) Mm-hmm.
Eric Karofsky (14:42) Bringing a drug to market takes about 10 to 15 years. And one of the first things that pharmas do is something called a literature review. That means they go out and take a look at all of the different articles that have been written in the scientific publications on clinical trials or other research that's been done. They want to learn, and make sure that they can take what's already been done and build off of it. That informs everything from an understanding of the market opportunity to an understanding of the different treatment options, biomarkers, and all sorts of technical things. To do that, they use specific vendors who literally go through all of these different publications and pull the research. They'll come out with, let's say, a thousand different articles, and they'll work with the pharma to narrow that down to about a hundred of the most relevant. And then what they do, and it's a really time-consuming, excruciating process: the vendor looks through each PDF, pulls out all of the factual information, and drops it into a spreadsheet.
Kyle James (15:36) Hmm.
Eric Karofsky (15:58) And it's just excruciating, especially when you think about these publications. They're dense. They're written by PhDs for PhDs. They're hard to understand. There's a lot of information. The whole process takes about six months and $250,000. And depending on the size of the organization, they'll do many different literature reviews. It gets very expensive, and it takes a long time.
Kyle James (16:06) Yeah, right.
Eric Karofsky (16:25) With AI, what we're doing is bringing that down to about two weeks and $20,000. And that $20,000 is actually soft cost; it's just the cost of the employees you're already paying. It's a huge win. And it's likely a lot more accurate, because doing that sort of tedious work invites errors.
Kyle James (16:31) Wow.
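To make the workflow Eric describes concrete, here is a minimal sketch of that kind of extraction loop: pull the text out of each shortlisted publication, ask an LLM for a fixed set of fields, and drop the results into a spreadsheet for a human to validate. This is not VectorHX's actual pipeline; the field names, model choice, prompt, and library choices are illustrative assumptions.

```python
"""Sketch of LLM-assisted literature extraction (illustrative, not VectorHX's pipeline)."""
import csv
import json

from openai import OpenAI   # pip install openai
from pypdf import PdfReader  # pip install pypdf

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical fields a reviewer would normally copy into the spreadsheet by hand.
FIELDS = ["study_design", "sample_size", "endpoints", "biomarkers", "key_findings"]

def extract_facts(pdf_path: str) -> dict:
    """Pull raw text from one publication and ask the model for structured facts."""
    text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "You extract facts from clinical publications. "
                        f"Return a JSON object with exactly these keys: {FIELDS}. "
                        "Use 'not reported' when a field is absent; never guess."},
            {"role": "user", "content": text[:100_000]},  # naive context-length guard
        ],
    )
    return json.loads(response.choices[0].message.content)

def build_review(pdf_paths: list[str], out_csv: str) -> None:
    """Write one spreadsheet row per paper, ready for human validation."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["source"] + FIELDS, extrasaction="ignore")
        writer.writeheader()
        for path in pdf_paths:
            writer.writerow({"source": path, **extract_facts(path)})
```

The point of a sketch like this is where the human effort moves: as Eric notes next, people still confirm and validate each row; they just stop doing the copying.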
Eric Karofsky (16:47) And it's also not using the scientists at the pharmas for what they're trained for. They're trained for much heavier things than looking at a PDF and confirming and validating that everything was put in correctly.
Kyle James (16:57) Right, right. So in other words, they're able to shave down the time, and then obviously the funds: those data scientists and the other people associated with it aren't having to spend as much time on it, and they can allocate their time to other things that maybe require a lot more brainpower, I guess you could say. Wow. Thanks for that case study; those are really powerful results you're getting.
Eric Karofsky (17:22) Exactly. Yeah. And the big win there, just to jump in: not only is it saving a lot of money and time, which is great for the bottom line of the pharma, but when you think about what the pharma is doing, it's bringing a drug to market maybe six months sooner, which is wonderful for the population.
Kyle James (17:33) Yeah. I imagine even revenue-wise, getting that drug out onto the market in two weeks versus six months or longer pushes revenue to a whole new level, for so many different customers and doctors and physicians out there. So, transitioning a little bit here: obviously you're using a lot of AI, you've got really good clients you're working with, good case studies. Looking into the future, especially with AI changing so much and the train just going wild, right, what are some of your upcoming AI initiatives, and where do you see AI playing a role in your operations next?
Eric Karofsky (18:27) Yeah, so there's a lot of work coming in around AI adoption, the things that I was talking about beforehand: how do we go ahead and take these ideas, align them with the business needs, and make sure good user experience principles are built in? That's a lot of the work that's coming in and that I'm talking with people about. If I think about another way of looking at it, a lot of it is around finding information within databases. These large organizations just have so many different databases, because different divisions have different standards, plus there are all sorts of acquisitions going on and different naming conventions.
Kyle James (19:07) Sure.
Eric Karofsky (19:09) So, another very quick case study: processes and procedures are a really big deal in most organizations, and following them to the letter can be legally important.
Kyle James (19:17) Mm-hmm.
Eric Karofsky (19:24) One of the applications that we're working on building right now: this company has over 500,000 documents across scores of different databases, with all sorts of different naming conventions and metadata associated with them. It's really hard to find information. But what makes it even more difficult is that if you're following a process or a procedure, you likely need to know not only the process that you're working on...
Kyle James (19:32) Hmm.
Eric Karofsky (19:54) ...but also the bigger process it's usually part of, and the sub-processes that are important. You can't find that sort of information now. That's just not around. But through AI, what we're doing is letting it loose on all the different databases. It's capturing metadata, keywords, relationships, versions, tagging, and putting it all together.
Kyle James (19:57) Hmm. Sure.
Eric Karofsky (20:16) And the UI that we ended up coming out with says: okay, you're searching for this document, and here's that document that we found. But it's part of all of these other documents, and here are some other documents that support it. And by the way, here's the exact same document in Spanish, which might be very important if it's a global organization, or in whatever other language. So that becomes really important. We're seeing all sorts of use cases like that cropping up.
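As a rough illustration of that idea, here is a toy sketch of a relationship-aware document index: each record carries the metadata the crawl captures (keywords, version, language) plus explicit links to its parent process, sub-processes, supporting documents, and translations, so a single search hit can be expanded into the surrounding context Eric describes. The record fields and the naive keyword matching are assumptions for illustration; a production system would sit on the real databases and use proper retrieval (embeddings, BM25) rather than string overlap.

```python
"""Toy relationship-aware document index (illustrative assumptions throughout)."""
from dataclasses import dataclass, field

@dataclass
class DocRecord:
    doc_id: str
    title: str
    language: str = "en"
    version: str = "1.0"
    keywords: set[str] = field(default_factory=set)
    parent_process: str | None = None                 # the bigger process this belongs to
    sub_processes: list[str] = field(default_factory=list)
    supporting_docs: list[str] = field(default_factory=list)
    translations: dict[str, str] = field(default_factory=dict)  # language -> doc_id

class ProcedureIndex:
    def __init__(self, docs: list[DocRecord]):
        self._by_id = {d.doc_id: d for d in docs}

    def search(self, query: str) -> list[dict]:
        """Naive keyword match, standing in for real retrieval."""
        terms = set(query.lower().split())
        return [self._expand(d) for d in self._by_id.values() if terms & d.keywords]

    def _expand(self, doc: DocRecord) -> dict:
        """Attach the related context a user following a procedure actually needs."""
        get = self._by_id.get
        return {
            "document": doc.title,
            "part_of": getattr(get(doc.parent_process), "title", None),
            "sub_processes": [get(i).title for i in doc.sub_processes if get(i)],
            "supporting": [get(i).title for i in doc.supporting_docs if get(i)],
            "translations": {lang: get(i).title
                             for lang, i in doc.translations.items() if get(i)},
        }
```

A query like `ProcedureIndex(docs).search("sterilization procedure")` would then return not just the matching document but its parent process, sub-processes, supporting documents, and, say, the Spanish edition of the same document, which is the "here's everything around your hit" behavior described above.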
Kyle James (20:22) Wow, yeah. That's huge too. I mean, visually my mind went to streets: you have a main street that AI can go down, or that you can go down as a company, but now that AI is involved, you're able to go down the cul-de-sacs and the small dirt roads and search for more that you never would have spent time on, because it was such a small task for humans to do. Now AI can take that step further and dig deeper into the data side.
Eric Karofsky (21:12) Exactly.
Kyle James (21:14) That's amazing. As we wrap up here, Eric, for everyone listening in today who's curious to learn more: where should people go to learn more about you and about Vector HX?
Eric Karofsky (21:27) Yeah, the best places to go are my website, vectorhx.com, and it's kind of like vector human experience, vectorhx.com, and LinkedIn: our company LinkedIn page and my personal LinkedIn page. And feel free to ping me. I love talking with new people, just to hear about what's going on and to collaborate a bit.
Kyle James (21:48) Awesome. Very cool. Amazing. Thank you so much, Eric. Man, it was great to have you on the podcast today; hopefully you enjoyed it as much as I did. And remember: if you're looking to implement AI into your business today, don't try and do it yourself. The time and stress that the AI could cause just isn't worth it. Schedule a call with GPT-trainer and let them build out and manage your AI for you. Again, that's GPT-trainer.com to schedule a consultation. Signing off for now.
Eric Karofsky (21:52) Kyle, thank you. Yeah, this was fun. Thank you.
Kyle James (22:16) Eric, it was a pleasure, my friend. Have a wonderful rest of your day, and looking forward to seeing you on the next episode of AI Chronicles.