September 17, 2025

00:19:28

Rebecca Shaddix: Using AI to Run Research That Was Once Impossible

AI Chronicles with Kyle James


Show Notes

In this episode of the AI Chronicles podcast, host Kyle James speaks with Rebecca Shaddix, founder of Strategica Partners, about the integration of AI in business consulting and market strategy. They discuss the evolution of Strategica Partners, the challenges of transitioning from in-house roles to consulting, and the transformative impact of AI on market research and client interactions. Rebecca shares insights on how AI has changed their workflow, improved research capabilities, and the future initiatives they plan to implement as AI technology continues to evolve.

 

Links:

 

Strategica Partners: strategica.partners

 

GPT Trainer: Automate anything with AI -> gpt-trainer.com

 

Key Moments:

  • Strategica Partners was born from a need for effective go-to-market strategies.
  • Transitioning from in-house to consulting can be anxiety-inducing but rewarding.
  • AI was initially met with resistance but is now essential to their workflow.
  • AI acts like a high-potential junior employee needing oversight.
  • The implementation of AI has significantly reduced time and costs in research.
  • AI allows for more comprehensive and confident data analysis.
  • The ability to conduct in-depth experiments has improved with AI.
  • AI helps in synthesizing insights for client-facing work.
  • The future of AI in consulting includes more autonomy and less oversight.
  • AI is transforming the way companies approach market research and strategy.

Chapters

  • (00:00:00) - Introduction to AI in Business Consulting
  • (00:01:23) - The Birth of Strategica Partners
  • (00:02:33) - Transitioning from In-House to Consulting
  • (00:05:24) - The Role of AI at Strategica Partners
  • (00:07:50) - Game-Changing AI Implementations
  • (00:10:49) - Transforming Client Research with AI
  • (00:15:27) - Future AI Initiatives at Strategica Partners

Episode Transcript

Kyle James (00:01.699) Hey, welcome to the AI Chronicles podcast. I'm your host, Kyle James. Today we're diving headfirst into how a business consulting and market strategy company called Strategica Partners is using AI inside of their own business. And we'll share the exact steps that you can take in order to implement AI for yourself. Now, before I talk about that, listen closely. Are you looking to implement AI inside of your own company, or maybe just struggling to get your AI to stop hallucinating? Speak to GPT Trainer. GPT Trainer literally builds out and manages your AI for you, eliminating hallucinations for good. Go to gpt-trainer.com. I promise you, it'll be the biggest time-saving decision that you've made all year. Trying to set up AI on your own is like trying to build a house from scratch. Sure, you could do it, but the time and frustration it's going to take you to get it finished may not be worth it. It's a thousand times faster and safer to hire professionals. Schedule a consultation today. Once again, that's gpt-trainer.com. Today I have with me Rebecca Shaddix, who's the founder of Strategica Partners. Rebecca is a seasoned marketing executive and go-to-market strategist with a proven track record of driving significant revenue growth across various industries. And might I say, the host of the Time Billionaires podcast. So excited to have this conversation today. Hey Rebecca, welcome to the show. Rebecca Shaddix (01:25.346) Thanks for having me. It's great to be here. Kyle James (01:27.831) Yeah. So give us some context here. Like how did Strategica Partners come to be? Like what exactly is it that your team does? Rebecca Shaddix (01:34.944) Yeah, it's a great question. Like so many great things in life, Strategica Partners was a happy little accident of just consulting while I was still in-house leading marketing teams at software companies.
I was at an education software company as the director of marketing when a former colleague, who was the head of product and was starting a new company, asked me to build out their go-to-market plan. Similarly, a former VP of sales I'd worked with, who was at a UK-based company, asked if I could build out their US go-to-market plan. And I just really love the high leverage that comes with really intentional go-to-market decisions, especially at those incremental, really important inflection points for software companies. And so Strategica Partners started as just a little accident in helping people get more traction and leverage by making pretty key early decisions for the go-to-market strategy with some effective market research. So really, market research and market insights were the foundation initially, and leveraging that into growth and revenue plans. Kyle James (02:36.129) Yeah, absolutely. And tell me about that transition period where, you know, you were working for another company at the time and then you started getting people who were asking you for help with this. Like, was it that you were doing kind of part-time consulting and then it finally shifted over, or was it like, hey, I'm getting a lot of questions about this, I enjoy doing it, and I'm going to go ahead and make it the job? Like walk me through that transition period that you had. Rebecca Shaddix (02:59.404) Yeah, it's a good question. The decision to leave being in-house and go consult full-time was really an anxiety-inducing and fraught one that I had to run through my value system, because I love being in-house. You know, it's a good question. So there's a lot of safety and comfort that comes from doing something that is working. So leading marketing in-house was working. I was working for a company that was growing pretty consistently. Kyle James (03:08.802) Hmm. Kyle James (03:12.153) Tell me more about that. What do you mean, anxiety?
Like that's a big one. Rebecca Shaddix (03:28.942) Product lines were growing pretty consistently. It was at more of an incremental growth phase when I finally decided to leave being in-house and go consult full-time. The company had been acquired by a PE firm about a year prior. And so there was a lot of comfort and predictability with a team I knew, things were going well, a product line I knew. It was a lot easier to stay where I was. And then with the decision to go consult full-time, I had never run a business before. I had to learn things like payroll and invoicing, things that were really new and different. And so the decision to ultimately jump was one I had to run through my core values, and it was obvious when I did what the decision was, which was to consult. But because it was the scarier decision, it really just brought to light that it would be easy to stay where I was. I knew things could go well and predictably. There was also the big question I had about my resume. I knew that people could calibrate what it meant to be the director of marketing at a company growing a certain rate that had been acquired in a certain industry. But I didn't know the resume potential of what it would mean to now be consulting and what that would mean for my career trajectory. So ultimately, I just had to look at what I really enjoyed doing, and that was growing companies really quickly and seeing my work have an impact on products that could change people's lives, getting into more people's hands. And so running through the, yes, this is scary, but am I taking up space, making decisions from a place of empowerment and not fear, not sitting on the bench? That was ultimately the jump. And I will say, I love being in-house. I love consulting. There's lots of pros and cons of each.
And I just think that it's really interesting to think that the consequences of any one decision could be significant for a career, but if you run them through your principles, it's really hard to regret a decision that you know was aligned with your values, as opposed to just doing something that felt safe, kind of regardless of how things turn out. Kyle James (05:29.072) Yeah. Yeah, for sure. And so since making the jump, I mean, obviously you're using AI right now at Strategica Partners. Tell me a little bit: were you using it beforehand, both in-house and then also in consulting? And what specifically were some of the challenges that you were trying to solve when you started implementing this new technology of AI in the first place? Rebecca Shaddix (05:38.274) Okay. Rebecca Shaddix (05:52.834) Yeah, it's a great question. So when Strategica first launched in 2019, AI was pretty nascent. I had launched an AI-based product the year prior, but even the messaging around AI still had as much resistance and anxiety, especially in risk-averse industries, as it did excitement. So we didn't really lead with AI. We really led with the value proposition of: this is what the tool does. And AI has completely transformed our workflow. It's been integral to the workflow in the last couple of years. But I think of it now as kind of a very high-potential junior employee who is capable of a lot, but needs a lot of training and oversight to make sure that the results are really what we want. And so now it's become essential to everything. We're really constantly asking, how could we be more effective, more efficient? Can we do better research, more in depth? Can we use AI to synthesize inputs that are really hard for us to handle manually?
And that's become a powerful market research tool, especially when we do things like look at survey responses that could be in pretty disparate formats that would take a human a long time to go through. But pretty reliably, AI can look at different formats of answers and start codifying certain patterns. So to answer your question, initially AI was pretty nascent. Now it's absolutely essential, and it's really accelerated the process and the quality of the results and the go-to-market strategies that we're able to produce, and it gives us a more in-depth testing ground for how we think about testing things, even before we would look at the more expensive, cost-intensive options, even beta testers for new things. Kyle James (07:34.795) Yeah. Yeah, for sure. So what would you say has been the biggest, you know, time saver since implementing AI? Maybe time, maybe money, both internally for your team when you're using it, and even maybe when you're working with some of your clients. What's been the thing where you've said, man, this right here has been game-changing for us in this area? What would you say would be that game changer for you? Rebecca Shaddix (07:59.278) Yeah, I'd say that running in-depth experiments used to be a lot more time- and cost-intensive and sometimes prohibitive based on what the company's goals were. So we really adhere to a concept that we call an acceptable mistake or an acceptable trade-off: something that is not going to compromise the goals of a project and is okay to, if you want to think about it this way, cut corners on internally. We can go slower if the goal is to have more comprehensive testing, for example; that would be the acceptable mistake.
But for a lot of the really in-depth research and experimentation, companies that are either just getting started or don't want to expose a product to a real user base early on may not have had statistically significant results as a basis that we could test things on pre-AI. And inversely, it could take a really long time for us to go through manual user feedback and survey results that may use similar terminology but really took a critical eye to be calibrated appropriately. And I was even anxious about giving more junior folks the power to score it, to basically categorize what respondents were saying. So taking disparate inputs, disparate responses to form fields, and starting to develop patterns and identify groupings and non-obvious breakdowns has been the biggest game changer. So as far as the market research goes, what we think of as being time- and cost-prohibitive has actually changed and grown. It's a lot easier now to collect and analyze more results than we could have before. And evaluating them has become much more effective because we don't need a human to manually go through all of these disparate recordings, et cetera, and pull out things that are interesting to click into. Honestly, it used to be sometimes that we would ask salespeople to flag Gong or Chorus calls, Kyle James (09:50.499) Yeah. Rebecca Shaddix (09:55.736) then go sit through them manually to see what the insight was, document that manually, and try to pull out a pattern. That took a long time. It was pretty fallible. It relied on salespeople thinking it was worthwhile to flag it to us. It relied on us being able to actually go through and listen to the insight that was relevant. But I think even more importantly, the way that we used to categorize persona groupings wasn't as granular as the behavior tracking that we're doing now.
Kyle James (09:59.193) Hmm. Rebecca Shaddix (10:23.98) So it used to be a lot of firmographics, demographics, which products they're using; we would break down by the size of the customer and the revenue. But there's a lot of nuance, especially in a B2B motion, in who's buying and who's using and how it's being used and what's driving retention of a product. And a lot of that was just missed. And so the things that actually look like the activation motion that we want to replicate, we've gotten a lot more granular with, able to segment a lot more granularly what this actually looks like for the behaviors that we want to replicate to drive quicker time to value. And I can give more detail on any of that. Kyle James (11:03.575) Yeah. So would you say, and you don't have to necessarily give me percentages, just, you know, trying to put it in my mind here, that internal usage has been the predominant thing, like helping teams? But then also I see that you're saying, with the surveys that you're getting from some of the customers, your team used to spend the time, but now the AI is spending all the time and not the human side of it. But I guess the question I'm trying to get to, Rebecca, is what types of results have you been seeing? I know I mentioned the time part of it, but maybe more towards the client-facing side. How has it impacted some of the client experiences since you've been utilizing more of this AI and fine-tuning a lot of even the surveys that you mentioned, even just different projects as a whole? What would you say those results are? Anything worth mentioning? Rebecca Shaddix (11:54.368) Yeah, I'd say it's transformed the type of research we have confidence in conducting at scale.
So it used to just be that if a company wasn't a certain size, with a certain team size, with a certain customer base size, there were a lot of insights that we just didn't think were time- and cost-effective to even try to collect. And we really acted blindly in this for the sake of moving quickly, but now it actually is a lot more possible to get insights that were truly not reasonable to expect to collect, because the lag time was just too significant when there was so much human intervention involved. So I think it's completely transformed what we think is possible to collect and act on. And really, I think it's changed our internal standards for the type of data we should have before we draw conclusions or assumptions, and even how we evaluate the data that we do have. I mean, you can pull up any dashboard and have four different conclusions about what it means, what could be driving it, what should be causing it. And all of this is through the bias of your perspective, what you've seen before, and it's never comprehensive. And so that's one of the big changes: just the way we have confidence in testing the micro-assumptions that go into interpreting the data sets that we already had, and then testing and iterating with confidence that we can actually act on the results. So we've always adhered to this framework that goes: define your problem statement first, then we're going to define two to three hypotheses for each, then we'll design experiments to test those. And so the problem statement is usually big and high level; the CEO sets it. This is the biggest problem that we need to solve. The hypotheses then could be about what's driving that on the product side, the marketing side, it could be the customer-facing teams, et cetera. And then we'd run experiments to test those and see if... Kyle James (13:31.505) Hmm. Rebecca Shaddix (13:50.082) those hypotheses are correct.
But the cadence of those experiments and how much fidelity we have in testing the hypotheses has completely changed now that we actually have more confidence in the data that we can collect and evaluate accurately, right? Because your inputs have to be reliable for the outputs to be actionable. That's true of any data set, any system, with or without AI. And now that we have more confidence in the reliability of the inputs, Kyle James (14:01.273) Mm. Rebecca Shaddix (14:17.538) the outputs are much more actionable, and they're fueling this accelerated loop of better hypotheses and more reliable, robust experiments that are quicker to run, fueling better insights, with this growth loop really happening because of it. It was a lot slower, more manual, more fallible before AI was a really core part of this process. Kyle James (14:41.431) Yeah. Yeah. I love that. I was picturing, when you were sharing, just how it slows you down, because in this case you have to get a really clear vision. I love your problem statement, like the process that you're going through there. And I was literally picturing my wife and I, we went to Virginia a couple months back, and we were driving through the hills. Like I'm driving, she's in the passenger seat, and we're going 50 miles an hour, but we were missing so much, because one, I have to keep my eye on the road, and then when I'm looking off I only see a couple of things. But I almost visualize it as the AI slowing you down to allow you to see the details. Like you can see the trees, you can see the hillside, you know what I mean? That's what I pictured. And in this case. Rebecca Shaddix (15:17.557) Yeah. Yeah. And the shape of the leaves on the trees, if I can borrow your analogy. Yeah. I like the definition of mindfulness as noticing something new. And I think, to leverage your analogy, AI is letting us notice something new about data sets that we've had access to for years.
And so we can notice something new and actually act on it and have confidence in the insight in a way that you don't, to your point, when you're taking this 50-mile-an-hour, 30,000-foot view. Kyle James (15:24.995) Come on, I love that. Rebecca Shaddix (15:47.426) But also just things that felt impossible or impractical to actually try to get data on, to act on, for companies of certain sizes, for products with certain goals. It's completely different now. Kyle James (15:59.553) Yeah. Yeah, absolutely. I love that. I appreciate you sharing that perspective. And as we transition here, tell me: there's a lot happening in the AI space. It's so fast right now, it's kind of scary. But what are some of the maybe upcoming AI initiatives that Strategica Partners is planning for, and where do you see AI playing some of the biggest roles in your operations next? Rebecca Shaddix (16:09.933) Mm-hmm. Rebecca Shaddix (16:23.03) Yeah, right now, to your point, we still have very human supervisory roles with AI. And so one of my favorite prompts, whenever we do any kind of analysis where we ask AI to evaluate patterns in the data, is to ask it to write an email to an executive explaining the rationale and inputs for it. And really we use that internally to ask: do I, as the human with experience conducting research like this, agree with the factors that were weighted, et cetera? Because of course, the more robust a system gets, the less we know exactly what went into the conclusions and the outputs. And this is right now our intermediary attempt to say, do I, the human, trust the inputs that you're weighting? And it's still largely internal. We would never share that email with a client, for example. But I think there's really two things that I've seen AI do well in and expect it to continue to do well. One is, like I mentioned, collecting and aggregating and analyzing disparately formatted data sets, so there's some uniformity and we can evaluate it.
And two is just the language. It's really great at ideating new messaging and language. And so I'm excited for AI to take a less human-intensive role in client-facing work. And what I mean by that is, right now it's very much a thought partner: AI is helping us synthesize insights that a human then aggregates and translates to a client and develops campaigns and messaging around. I think we're getting to a point where we can trust more of the outputs from AI without as much human oversight. And the better we train it, the more we constrain the goals, et cetera, the better we are. Again, thinking of AI as a junior employee with a ton of potential who is really eager and can do a lot, but needs a lot of oversight: I think it's going to need less of that oversight and become better at interpreting the goal and the output the more we train it and the more it advances. Kyle James (18:31.42) Yeah, absolutely. I love that. And as we wrap up today's call, Rebecca, where can people learn a little bit more about you and a little bit more about Strategica Partners that you'd recommend they check out? Rebecca Shaddix (18:43.444) Yeah, let's connect on LinkedIn, Rebecca Shaddix. And I also have a podcast on a totally unrelated topic, which is time management, called Time Billionaires, which is about how to spend the 90-second to 15-minute gaps between the structured parts of your day, which I call micro-moments. I post about those on LinkedIn as well, or you could follow the Time Billionaires podcast. Kyle James (19:03.499) Awesome. I love it so much. Rebecca, it's been a pleasure having you on the show today. I definitely enjoyed your perspective. I know the people listening in probably feel the same way, and hopefully we'll have you back on the show in the future. Who knows? Yeah. Awesome.
And again, for those listening in, if you're looking to implement AI in your business, please don't try and do it yourself. The time and stress it could cause may not be worth it. Schedule a call with GPT Trainer and let them build out and manage your AI for you. Rebecca Shaddix (19:14.976) That would be great. Yeah, this was a fun conversation. Thanks, Kyle. Kyle James (19:31.897) Once again, that's gpt-trainer.com. Signing off for now. Have a great rest of the day, everybody, and looking forward to seeing everyone on the next episode of AI Chronicles.
