Getting to an amazing FP&A Data story – Brandon Wilson

Continuing our “Masters of FP&A Data” series, we have the privilege of hosting Brandon Wilson, Founder and CEO of Steady Dynamic. Brandon works with clients ranging from early-stage startups to Fortune 500 companies, lending expertise to those who lack experience building digital solutions. He encourages big ideation, assuming technology can solve any problem, and works to prioritize and constrain scope relative to business objectives. “FP&A has to some degree implemented predictive analytics,” he says, “whether that’s custom or bespoke modeling or just using tools. The next generation is unlocking prescriptive analytics. That is not just presentation or data for the purpose of extracting insights, but insights that come with recommendations and the ability to run multiple scenarios. Real-time data acquisition and analytics is also empowering us to do things more on a daily, if not hourly, basis, and see things way sooner from an analytics and forecasting perspective.”

In the second week of data-nerding-out we have a treat:

  • AI’s transformation of finance and financial modeling
  • Predictive analytics to prescriptive analytics 
  • Moving from cost center to “value add” in finance through the data
  • How best to deliver and integrate your data to enhance financial functions
  • Studying sentiment analysis in your CRM to investigate pipeline 
  • Complex models in FP&A including clustering and Naive Bayes
  • SQL vs NoSQL and FP&A
  • The Panama Papers
  • What the data environment looks like for FP&A in the next two years 
  • Zero shot prompts and chain of thought prompting 
  • Reverse engineering to get the best AI finance results 

Connect with Brandon Wilson on LinkedIn:
https://www.linkedin.com/in/bkwilson/

Glenn Hopper:

Welcome to FP&A Today, I’m your host, Glenn Hopper. Our guest today is Brandon Wilson, a digital transformation pioneer with a comprehensive background in custom software solutions, automation, and AI. Brandon has driven multimillion-dollar initiatives in both startup and enterprise environments, mastering innovation in the digital era. He excels in sales and business development, establishing platforms that address challenges in digital product development, process automation, AI, and large language models. His specialties include strategic planning and analysis, new business development, key account management, digital transformation, product management, go-to-market strategy, project governance, client relationship building, and team leadership. As a solution architect, Brandon focuses on embedded systems, IoT, AI/LLMs, and Web3, empowering individuals in software and technology. He collaborates with clients to create digital solutions and conducts competitive analysis to ensure market positioning. Brandon leads cross-functional teams, ensuring synergy and exceptional results, and maintains strong relationships with key clients and stakeholders through excellent communication skills. We’re excited to have Brandon Wilson on the show today to share his insights and experiences in driving digital transformation and innovation. Welcome, Brandon.

Brandon Wilson:

Thank you, Glenn. Happy to be here. Appreciate it.

Glenn Hopper:

Yeah, I know we’ve got a lot to cover today. One of the things I’ve loved about this show is the number of data geeks I’ve gotten to talk to lately, and I’m putting you in that bunch. I think that’s probably a moniker you would proudly wear.

Brandon Wilson:

Yes, I’ll accept it. Gladly.

Glenn Hopper:

I guess just as a little background, give me a bit. I mean, I went through kind of your professional bio, but tell me about your career journey and what led you to your current role at Steady Dynamic.

Brandon Wilson:

Yeah, sure. So, I started my career, and spent most of my career, in telecom equipment. And I had the opportunity to lead an initiative for the introduction of what was a very early version of smart home sensors and devices. This was done for the cable television industry through a platform called iControl. And although the sensors at the time were simple, they were really the first, I guess, broad use of what I’ll call smart home features. And so this is where I kind of caught the bug of how devices were going from unintelligent to intelligent, and how software, and then data collection from those sensors, could actually automate, trigger, and enrich an experience, for instance, in smart home applications. That was probably a good 15 to 18 years ago at this point. But that’s really where I first caught the bug and started to see the importance of not just intelligent or smart devices, but then leveraging data from those devices for a number of beneficial outcomes.

Glenn Hopper:

Gotcha. And there’s something about telecom, and I think it’s because there’s so much data. My start was in telecom as well, and I think that’s where I got my love for data, realizing how you could integrate data from different sources and put it into financial planning and operational planning and all that. I’ve talked to several people in telecom, had a few guests on the show from that background, and to a person they were very data-centric. And you can see from the telecom start how that...

Brandon Wilson:

Well, I mean, they sit on significant pipelines for data collection, if you think about it. And so it makes sense for that industry to be kind of the pioneers in that space.

Glenn Hopper:

Yeah. And for me, it’s been two decades since I’ve been in telecom, and I miss it. I haven’t been at another company that had the level of data that we did in telecom. So I still kind of miss those days where I <laugh> had, you know, just a full universe of data to play with like that.

Brandon Wilson:

Yeah. I can remember watching the progress of AT&T’s Pleasanton, California data center and watching how much it grew to accommodate the vast amounts of data they were collecting. It was a wild ride to watch, from its very early days to how quickly it grew. There is, again, just a ton of data collection out there. And I guess, you know, part of today’s discussion is really being productive with that data. How do you use it? What are the pros, the cons, the risks, the rewards, and how do you really bake it into doing more advanced analytics as it might relate to financial planning and forecasting and business health management? It’s a really exciting time to be in this space, ’cause there’s a lot of great progress.

Glenn Hopper:

It is, and we’re about to shift it up to another level. The proliferation of data, and the availability and the amount of data that’s out there, has grown, but now, with all these supercomputers being built to power the training of these new LLMs and frontier models, we’re about to see the next level of the data. So yeah, like when you and I started our careers in telecom and were watching that, you’re seeing the next level of it. So tell me, from that start, now you’re the founder and CEO at Steady Dynamic. Tell me about Steady Dynamic, what you guys do, and some of the key services and solutions you offer.

Brandon Wilson:

Yeah, sure. So at our core, we are a custom software development consultancy and agency. We build bespoke custom software applications for our clients. That in large part includes traditional web and mobile applications, but the better part of the last, say, two years has seen a significant uptick in the implementation and integration of AI. And probably the majority of those applications and use cases are for internal business process, either the promotion of efficiency or enrichment. So thinking, you know, kind of the right data in the right hands at the right time for the right reason. And it’s been really interesting to watch. I think we’re probably coming out of kind of phase two of what I’ll call the AI maturity model, where a lot of our clients have done the homework, maybe even some experimentation.

So they recognize there’s value there. But I think it’s in that same process of experimentation and diligence that they’ve also recognized the limitations, and that there’s a need to call in the pros to help ’em move to what I’ll call more production environments. There, not only are they getting the results that are expected, specifically from the AI engine portion of the application, but they’re also being mindful of the security and governance that go along with it. And so we’re seeing a dramatic uptick in that client persona that has done the experimentation, knows there’s something there, is prepared to invest, but is now requiring outside help to make sure there’s a better chance at harvesting ROI, and again, really built in a production way that is secure and safe to roll out for both internal and external use cases.

Glenn Hopper:

Yeah, and one of the things when you and I were talking before the show, I realized we’re both going after the same goal, but we’re coming at it differently. You’re coming into the house through the door, and I’m climbing in through the window <laugh> as the finance guy, in that we’re both trying to get access to the data to make the business run better. With what you guys do, it’s on process and operations, and the automation, and using data to drive decisions and build out what you can do with the data, the amount of data you’re collecting and everything. And for me, I want the same thing, but my initial desire for this data was because I thought, I can use this to help drive my models. Give me more data and I can make a more accurate model and better predictions.

But we’re going after the same thing. And I think when you engage with a client, probably the CFO is not the first call that you have with the client. But I do think there’s crossover in all this data. So I’m wondering, as you come into a business and you’re seeing how they use the data, and now understanding how finance can use it, and you talked about us being at this sort of next level of AI maturity as we get more mature with our data, how can you see AI, generative AI, and the traditional machine learning and AI that’s been out there, further transforming the finance industry? And I’m thinking particularly in finance forecasting, but I know it has other applications as well.

Brandon Wilson:

You know, I think for the most part, most people, let’s say in the FP&A industry, have to some degree implemented what I’ll call predictive analytics, whether that’s custom or bespoke modeling or just using tools that come in the SaaS platforms they’re using today, and they have a pretty good handle on how that applies. I think the next generation, and it is related to a number of advancements in the industry from a technology perspective, is the unlocking of prescriptive analytics. So, couple not just data presentation or data for the purpose of extracting insights, but now insights that come with recommendations and the ability to run multiple scenarios. Even thinking of it as kind of a digital twin. So you could simulate, or create millions of simulations, based on whatever inputs you want to change, in order to gain prescriptive analytics.

And then I think if you couple generative AI into that equation, you can harvest some of the best features of generative AI, in that it is natural language, in order to create reports and insights for different stakeholders in the FP&A industry, because they all might have different requirements or different views, so to speak. And so the rapid conversion of those prescriptive analytics, say, into a report generated for the accounting department, or the CFO, or the business unit leader, I think is a really exciting opportunity, because it makes the availability of that data and insights more consumable, in a sense, and more applicable to the stakeholder involved and what they’re really trying to gain from that data. And I think probably the other piece of the puzzle is getting to more real-time analytics.

And so, you know, the traditional cadence of an FP&A professional might be to do weekly and monthly and quarterly, and sometimes that work is very kind of forklift or kind of waterfall. You gear up a bunch of effort, you produce a document, and then you move on to the next checkpoint in the sequence. But I think real-time data acquisition and analytics is also empowering us to do things more on a daily, if not hourly, basis, and see things way sooner from an analytics and forecasting perspective.
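To put the scenario-simulation idea in concrete terms, here is a minimal Python sketch of the kind of what-if engine Brandon describes. All drivers, figures, and the collections "lever" are hypothetical placeholders, not anyone's actual model:

```python
import numpy as np

rng = np.random.default_rng(42)
n_scenarios = 100_000  # simulate many what-if scenarios at once

# Hypothetical quarterly drivers (illustrative values only).
base_revenue = 2_500_000.0
growth = rng.normal(loc=0.04, scale=0.02, size=n_scenarios)    # quarterly growth rate
churn = rng.normal(loc=0.015, scale=0.005, size=n_scenarios)   # revenue churn rate
collected = rng.uniform(0.90, 1.00, size=n_scenarios)          # share of billings collected in-quarter

next_q_revenue = base_revenue * (1 + growth - churn)
cash_as_is = next_q_revenue * collected

# Prescriptive angle: re-run the same scenarios after "pulling a lever",
# e.g. a collections push that keeps at least 95% of billings in-quarter.
cash_with_lever = next_q_revenue * np.clip(collected, 0.95, 1.00)

print("P10/P50/P90 cash, as-is:", np.percentile(cash_as_is, [10, 50, 90]).round(0))
print("P10/P50/P90 cash, lever:", np.percentile(cash_with_lever, [10, 50, 90]).round(0))
```

The point is not these particular numbers; it is that once the drivers are defined, running thousands of scenarios and comparing the distribution before and after a lever change is only a few lines of code.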

Glenn Hopper:

Yeah. And that real-time part is so key, because I can remember back when I started, where everything you were waiting on was some cron job to run overnight, and you’re just waiting for reports and everything. And now, as you shift to more and more live data and being able to use it, you see companies going to a real-time close, kind of the always-be-closing where you’re not just waiting for the end of the month. There’s still the official financial close that you have to go through, but you have access to all this data as you go along. And another thing, I think for finance people, when we think about analytics, I think about the level of analytics you can do and sort of the data maturity of a company. There’s the first thing that companies do, and hopefully in 2024 <laugh> we’re all there now, but I know there are still companies that are struggling here.

But just that descriptive analytics of: this is the universe of data we have. What does it mean? What do I know about my customers? What do I know about customers who’ve churned, about our sales process, our pipeline? Just taking stock of all your data and saying, here are the charts and graphs that show kind of where we are today. And then evolving that into, okay, based on this, I can now apply it to my modeling and I can have predictive analytics, which is great because it lets you come up with those more accurate forecasts based on more data. But I think the way this resonates with finance people is to move from that sort of cost center label that we get saddled with, where we’re considered backward looking. And even if we build good models, it’s like, yeah, that’s fine.

You’re modeling out a potential future, but how can I turn that into strategy? And you mentioned going to prescriptive, and for finance people, that’s: here’s our data, here are the KPIs we’re tracking, but now I’ve found the levers. If you want to change the future, pull this lever, and all that. So being able to identify that is that next level where, oh, now we’re suddenly a strategic partner. And I think AI and data is the way that we’re gonna really drive that home and increase our value in the company.

Brandon Wilson:

Couldn’t agree more. And taking that approach, to your point about changing it from a cost to a value driver, is exactly the point of why you would implement those types of strategies. If you think about just the impact of one of those levers being pulled, for instance in cash flow analysis and cash acceleration, these are really important variables in the health management of a business. And the larger you get, the more impactful and important some of those decisions are. So I couldn’t agree more, and I like saying it like that: the move from the cost to the value-add model by being able to know which levers to pull is a great way to look at it.

Glenn Hopper:

I think for a lot of our audience, certainly finance professionals, like everyone else we’re getting more data savvy, and we understand the value of it and how to use it more and more. But I think what a lot of us are challenged with is, we know data’s out there, but we’re not the gatekeepers of the data. We know there are certain data points in our CRMs and our ERP systems, maybe in project management tools. There are these data points that are out there, and if we could get them or understand them, and if you’ve got a company that has a really well-defined data warehouse where you understand the source of truth and what it all means, that’s huge. But I think one thing we need to figure out, and we don’t need to get into the technical details of how you extract this data, is: we have this data in these various sources, and once we solve for the delivery of it, how can that be integrated to enhance financial functions? If I know something about pipeline, maybe there’s this mentality of, well, that’s in my CRM tool, but we never won the deal, so is that data valuable to me? Or maybe there are different points that we don’t think about. Can you think of some examples of how that external data might be integrated to enhance modeling and other financial functions?

Brandon Wilson:

Yeah, sure. I mean, first and foremost, think of all the systems that might be viewed as disparate, right? So CRM, ERP, maybe project management tools, a number of things that are used to harvest data, or at least have the ability to harvest data. If you think about them all not as disparate tools but as interconnected tools, they allow, first and foremost, a more comprehensive 360 view of your total business. And then, maybe taking the example you started there, I’ll stick to cash-to-book as an example here. Even though you might not be able to do proper revenue recognition until the order is actually booked, you can study sentiment analysis or behavior patterns from your CRM tool, like the cadence of touch points with a client as you’re going through a sales funnel, to start looking at more prediction toward whether that client is gonna convert.

And so while that may not be traditionally viewed as revenue recognition, it’s a leading indicator that revenue is going to be booked, right? So you can get even further ahead of the actual revenue recognition event. And then on the other side, as an example, you could use various clustering models, ones that are just widely available and known, to analyze your expected payment terms. So how much cash is gonna be out, according to, for instance, the different sizes of companies, the different industries they represent, the different products they buy, and really get prediction models around: okay, their terms are net 30, but when do you actually expect to be paid? And so you’re actually lengthening the overall, let’s say, pipeline of analysis to maybe enable you to do better threshold analysis.

So you have a low watermark expected and a high watermark. And if you start blending in some of these other components that come from these other tools, you might gain not only better analytics but, going back to the levers you pull, maybe you can change those dynamics. Is there something you can do from the CRM perspective to accelerate that conversion in the sales pipeline? And on the other end, can you collect the cash payment quicker in the cycle, based on the understanding or expectations that the analysis provides?
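As a rough sketch of the payment-terms example, here is what clustering customers on size, industry, and observed days-to-pay might look like with scikit-learn's off-the-shelf KMeans. The columns and values are invented for illustration:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical AR history: one row per customer, pulled from ERP/CRM exports.
customers = pd.DataFrame({
    "annual_revenue_musd": [5, 8, 120, 150, 900, 1100, 6, 130, 950, 7],
    "industry_code":       [1, 1,   2,   2,   3,    3, 1,   2,   3, 1],
    "avg_days_to_pay":     [28, 31, 44, 47, 62, 58, 29, 45, 60, 33],
})

# Scale the features so revenue doesn't dominate, then group similar payers.
features = StandardScaler().fit_transform(customers)
customers["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# The per-cluster expectation feeds the cash forecast instead of assuming
# every invoice converts on its net-30 terms.
print(customers.groupby("cluster")["avg_days_to_pay"].mean().round(1))
```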

Glenn Hopper:

Yeah, and I love that, ’cause it’s funny: in recent years I feel like sales and marketing has really gotten deep into data analytics, and they’re using these tools a lot more. And I think finance and FP&A are catching up now, but sales and marketing really was on the leading edge with using machine learning and customer data to do their forecasts. But as we look at what data we have in the different systems, whether it’s predicting customer churn or understanding customer acquisition costs or lifetime value, it all goes in and feeds the model and the financials. So we are beyond just analyzing our financial statements. It’s, again, going to those levers of what impact: this is our pipeline, this is what we won, this is what we didn’t. How can we predict to make better forecasting models?

We need to know about as much data on the ones who didn’t sign up as on the ones who did. So it’s looking at the broader business, beyond just the GL and the other parts of the business, right, to do our jobs. Yep. Another thing in the data analysis: say our company has reached a level of data maturity where we now have known KPIs, we have known sources of truth, we understand the data in disparate systems and what to use when. But I think for a lot of finance people, we build models out in Excel and we kind of know, okay, I’ve got this data and I’ve got this data. Let me put it in and see how it informs the model.

But I’m wondering, one of the other things you and I talked about before the show is moving beyond Excel and moving into, whether it’s R or Python or something, kind of programmatic languages. Tell me some of your experience, and I think this would translate into finance, where you’re using these more complex models to find correlations that maybe aren’t readily apparent, that you wouldn’t even know were correlated if you weren’t doing a correlation matrix. Some of the complex models that you use to make sense of this big data, where you take the internal data sources that you have, and maybe you start looping in external data sources, macroeconomic information. Do you have some examples of that?

Brandon Wilson:

Yeah, sure. And although it’s an obvious statement, the expansiveness of data availability is really what drives the need for the models in the first place, because it’s simply not possible to do as a human. I’m not sure if you remember seeing the movie The Accountant with Ben Affleck in the room, and he’s writing everything and poring through papers and writing. Okay, not all of us are savants. This is why we use machine learning. And the good news is that there are a lot of off-the-shelf models that have been perfected and that are useful as they might apply to the finance profession and industry. So, you know, you go with the tried-and-true kind of OG linear regression, which is still what I’ll call a fairly simple model, relying on historical data to project the future.

But it’s still important from a perspective of vast amounts of data, right? I’m also a big fan of clustering, so like nearest neighbor models, which might reveal insights if you’re trying to look at other transactions, other entities, other scenarios, and how they relate to other knowns in the system, to be able to drive correlations that way. And then another example is using Naive Bayes, which is where it really starts to get cool: basically probabilistic outcomes based on prediction. And again, these only work if you have large data sets. So you would implement them only if you have significant data availability, in which case you kind of flip from human heuristics into proper machine learning and using these models. But like I said, the really good news is that a lot of the established, available machine learning models, while they may not have been designed specifically for financial analysis, directly apply. It’s always about picking the right model for the right reason.
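For a flavor of the Naive Bayes point, here is a minimal sketch, with made-up CRM features, of a Gaussian Naive Bayes classifier estimating the probability that an open deal converts, the kind of leading indicator discussed earlier:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical training data from closed deals:
# [touchpoints_last_30d, days_in_stage, deal_size_kusd]
X = np.array([
    [12,  20,  50], [ 3,  90,  40], [15,  14, 120], [ 2, 120,  35],
    [ 9,  30,  80], [ 4,  75,  60], [14,  18,  95], [ 1, 150,  25],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = won, 0 = lost

model = GaussianNB().fit(X, y)

# Probability of conversion for an open deal, usable as a leading
# indicator in the revenue forecast before anything is booked.
open_deal = np.array([[10, 25, 70]])
print(f"P(win) is roughly {model.predict_proba(open_deal)[0, 1]:.2f}")
```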

Glenn Hopper:

FP&A Today is brought to you by Datarails, the world’s number one FP&A solution. Datarails is the artificial intelligence-powered financial planning and analysis platform built for Excel users. That’s right, you can stay in Excel, but instead of facing hell for every budget, month-end close, or forecast, you can enjoy a paradise of data consolidation, advanced visualization, reporting, and AI capabilities, plus game-changing insights, giving you instant answers and your story created in seconds. Find out why more than a thousand finance teams use Datarails to uncover their company’s real story. Don’t replace Excel, embrace Excel. Learn more at datarails.com.

To your point on having to have the data to do anything with it, I think about companies that have had great success using machine learning for more than a decade now. E-commerce companies have so much data, and they’re able to do their clustering and segmentation and build out the models based on that. But if you’re a chain of dry cleaners with 20 locations, what data do you have that you’re able to use? So it is a challenge. On the low end, there are people who just don’t have enough internal data, so how can they use AI to improve the business? That’s a challenge. And then on the other side of it, there’s the too-much-information problem. Yeah. Trying to figure out what of it you can use. So from your viewpoint, as you go in and work with clients, what are some common challenges that companies face in accessing data and making use of it? And how do you help them address some of these challenges?

Brandon Wilson:

Yeah, sure. Probably the largest challenge to successfully implementing really any analytics or AI strategy remains the usefulness of the data. It was interesting: in prep for the show, I’ve known these numbers, or at least anecdotally the significance of them, but I actually looked it up, and there was a recent report from NTT Data that said 80% of the world’s data, so this is all data availability, is unstructured, and 90% of that unstructured data, so 72% of all data, is not used in any form of analysis. And it begs the question: why? <laugh> Well, sometimes the answer, I think, is people don’t really know how to use the data. As an example, I think people over-collect and store data, maybe in some cases unnecessarily. I had a recent conversation with someone who was contemplating the elimination of some historical data, and I said: sit down with your team and think about whether you can determine any usefulness to it. If you can, keep it; if you can’t, get rid of it, ’cause you’re paying for it <laugh>. You’re paying to store data, you’re just creating reams and reams of data that you’re not using.

But at a structural level, it’s important to note that it can be expensive to convert unstructured data to structured data environments, right? So getting from data lakes to data warehouses. I think that’s the right approach to implement if you’re really locked down in terms of the purposefulness of your AI or ML implementation, meaning you know exactly what you’re looking for and exactly the data that’s gonna support it, in which case you want to construct rock-solid, locked-down, repeatable, and scalable models. But what’s been interesting more recently, I think, is the proliferation and advancement of tools that allow you to interact with unstructured data. While that may not be the production choice, it provides an opportunity that historically wasn’t always available or even practical, and it has allowed people to experiment with unstructured data in ways that, okay, aren’t that recent, but generally speaking in data science are a more recent phenomenon.

And that can be through tools like Alteryx, which helps you with, I wouldn’t want to say autonomous, but at least assisted preparation of unstructured data for analysis. You’ve got, obviously, huge platforms like Databricks, which foundationally are data lake, not data warehouse, so to speak. There’s Talend, which is a great tool for organizing disparate sources. So there’s been a lot of progress in terms of the tools that help account for, let’s say, some of the challenges related to making data useful. And then not the least of which is what we’re seeing with the LLMs and some of these foundational models, where it doesn’t care whether it’s structured, unstructured, multimodal, or otherwise, right? It’s gonna take a look at just about every data source you’ve got and do its best with what you put in the engine. So yeah, I think it will always remain a challenge, and there’s a reason why, from that statistic, so much is unused. But I think, generally speaking, the technology that has more recently been introduced to the industry is making it easier to put that unstructured and unused data to work.

Glenn Hopper:

Yeah, and actually I want to go ahead and dive into that, because you and I talked at length about this, just thinking about where we are with LLMs right now and being able to take these highly trained generalists and apply them to our own data sets. We talked about the limitations of retrieval-augmented generation and how companies could actually use generative AI to really focus on their data and turn those generalists into very specialized tools for their companies. As we dive into this, let’s talk about it. I know SQL can mean massive databases, and I think about sort of historically, where I got my start, thinking I was super smart for being able to write SQL queries and get access to this data. But it’s grown so much.

And to your point on data not being used, so much of the data out there is unstructured now. I’m thinking about the NoSQL sort of structure and, maybe if you could touch on it, the internet of things, the amount of data that’s out there that’s not being used. There’s a treasure trove out there, but it’s so much, and it’s trying to figure out what’s useful in it. But one of the things you and I talked about was using graph databases in finance operations and the potential there. So I’m throwing a lot at you, but based on what we talked about before and where we see this going: talk to me about the advantages of taking all the structured and unstructured data, internet of things included, and putting it into a graph database, and what the advantages of graph are compared to maybe traditional indexing, to try to make sense of all this. If you could walk me through that, keeping in mind that our audience is gonna be switching off here if we get too far deep into the <laugh> technicalities of the data side of it, but thinking of the advantages of that for finance operations.

Brandon Wilson:

So I think first it’s helpful to give an oversimplified comparison of SQL versus NoSQL. If you think about traditional use of SQL queries, I would say that has been largely used when, again, you know exactly what you are trying to achieve. You know exactly the data source, the data type, it’s cleansed data, kind of the perfect implementation, right? You know exactly what the inputs are and exactly what the outputs are, and that’s probably historically been the primary use for SQL. Whereas with NoSQL you get a lot more flexibility with things like the schema, right? That can change over time in NoSQL. And so, it’s somewhat inappropriate to look at it this way, but I think NoSQL allows you to do more experimentation because it has fewer limitations of structure, if that makes sense.

And then graph databases are actually a derivative of NoSQL, so they really fit into that space. I think the best way to think about a graph database is that it is, by design, intended to analyze the relationships between data points, which are called nodes in a graph database. So it’s mostly concerned with the relationship between two data points as opposed to the data point itself, so to speak. What’s really great about a graph database is that it’s not bound by the SQL rules of joins, right? So you don’t have to do all these complex join functions that can get really complicated and out of hand, where if you mess one thing up, it defeats the whole purpose of the entire analytics or model. But I think what a graph database does, in the case of the financial and analytics industry, is really important, because, first and foremost, like Power BI and Tableau, it allows you to visualize data.

Seeing a graph of your data is impactful by itself, specifically when you think about complexity and dependencies. As an example, if you manage a company that has multiple subsidiaries, and maybe multiple domestications of incorporation, and a number of other complexities, it can be really interesting just to see the entire graph of how your company looks and how the relationships exist, specifically if you have to deal with things like intercompany transactions or transfer pricing. So it can really help you get a visual impression, which is the start of the analysis. But then the relationships, and the kind of semantic relationship analysis of those data points, is really what makes a graph database powerful. As an example, and probably most people watching this are familiar with the Panama Papers, they actually reconstructed all the complexity of all those offshore accounts in a graph database.

And how they got to it was actually by looking at the transactions themselves. So if you consider the data that would be used in a graph database, you start looking at the transaction itself, and then you start adding metadata to it and its relationships to, say, geography or entities, the different banks that were involved, the different, let’s say, more firmographic information, like the names that were used, the passwords, the email addresses, things like that. That’s what allowed them to construct the graph of how all those transactions, ultimately fraudulent transactions, were effectively moving through the system. And the use case of graph databases is currently really common for looking at fraud, money laundering, and things related to KYC and identity management. So it’s a really powerful tool. It’s maybe a little bit lesser known in the grand scheme of things, but again, if you’re trying to look at how data impacts other data, that semantic relationship between them, that’s really where you can get a lot of great value out of using a graph database versus a traditional relational database.
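To give a rough flavor of the node-and-relationship model, here is a small sketch using networkx rather than a real graph database such as Neo4j. The entities, amounts, and transaction types are invented:

```python
import networkx as nx

# Directed graph: nodes are legal entities, edges are intercompany transactions
# enriched with metadata, as in the subsidiaries / transfer-pricing example.
g = nx.DiGraph()
g.add_edge("ParentCo (US)", "SubCo (IE)", amount=1_200_000, kind="license fee")
g.add_edge("SubCo (IE)", "SubCo (PL)", amount=300_000, kind="services")
g.add_edge("SubCo (PL)", "ParentCo (US)", amount=150_000, kind="dividend")

# Relationship-first queries: everything flowing out of one entity...
for src, dst, data in g.out_edges("ParentCo (US)", data=True):
    print(f"{src} -> {dst}: {data['kind']} {data['amount']:,}")

# ...and the chain of entities connecting two endpoints.
print(nx.shortest_path(g, "ParentCo (US)", "SubCo (PL)"))
```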

Glenn Hopper:

Yeah. And for anyone who hasn’t seen them, to me these network diagrams from graph databases are visual and they are telling. It’s like looking at any other chart or graph: when you start seeing all the nodes, and then the edges coming off of them connecting to other areas, and you start seeing the relationships and how connected things are, it helps you visualize and understand how, if you were querying along that path, you might be able to access the data. Whereas if you’re looking at a traditional SQL database schema, there’s nothing visual about that. But seeing it in the networks, and I think the reason you and I were talking about it, and the reason it’s important now, is if we’re trying to take generative AI and use it on our data, that network, those connections, that graph database lets the LLM know where to go look for something specific.

Whereas if you just dump a bunch of documents into a vector database and the LLM is going through and searching, it’s got a limited context window. It may peter out and not get the result you want. But if you can start it down the path of following these graphs, and then you could have other indexes in there as well, it’s really like a map for how to get to your data. And to me, right now, that is very important: to be able to use some of this data that we’re not able to tap into right now, and use the benefits of AI to find these connections that we on our own couldn’t.

Brandon Wilson:

Yeah. Well said. And I think two parts. One, I’m still a huge fan of, just generally speaking, what visualization does to improve your thought process related to analytics. And to your point about being able to see it, the other great thing it does is allow you to see the change. As an example, if you’re doing scenario generation or simulations with a graph database, and, just since I said it, transfer pricing: you say, well, there’s a transfer pricing rule change, I’m gonna implement the change in the scenario and now see how it impacts the entities. It can represent that data like that. It’ll actually change the graph to match the variable changes, which I think is just really powerful, and also fun. <Laugh> I can’t believe I just said that was fun, but yes <laugh>.

Glenn Hopper:

That’s the first time transfer pricing and fun have ever been <laugh> put in the same sentence.

Brandon Wilson:

Alright, fair enough. Well, we did start this podcast by saying we were data nerds. But on your other point, too, you’re absolutely right. There’s a, I keep saying everything’s new, but I think it’s just relatively new, approach called graph RAG, which is what you described. And I think it’s a powerful addition to the arsenal, not only making data more interactive, but making the output of, let’s say, the conversation you’re having with these LLMs for the purpose of analysis more precise, more impactful, and even more insightful, right? So if you think about, as an example, your ability to say, well, I’d like to understand all relationships to this vertex, or this connection point between two nodes in a graph, and then you look for the commonality across that type of relationship on a massive data set. You can do those kinds of queries with natural language using an LLM and graph RAG versus, let’s say, traditional RAG, which, to your point, is picking up a level of context that might not have been present in a traditional vector database and RAG strategy. So yeah, I think it’s an improvement exactly as you described, which is the contextual understanding of the data as it relates to itself and one another, as opposed to just a giant static dataset.
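Here is a very rough sketch of the graph RAG idea: pull the neighborhood around the entities a question mentions and hand that subgraph to the model as context, rather than stuffing raw documents into the prompt. The graph, the helper names, and `call_llm` are hypothetical placeholders, not any specific library's API:

```python
import networkx as nx

def neighborhood_context(graph: nx.Graph, entity: str, hops: int = 1) -> str:
    """Collect the relationships within `hops` of an entity as plain-text facts."""
    nearby = nx.ego_graph(graph, entity, radius=hops).nodes
    facts = [f"{u} --{d.get('relation', 'related_to')}--> {v}"
             for u, v, d in graph.edges(nearby, data=True)]
    return "\n".join(facts)

def call_llm(prompt: str) -> str:
    """Placeholder: swap in whatever LLM client you actually use."""
    return "[model answer would go here]"

# Tiny illustrative graph of finance-adjacent facts.
g = nx.Graph()
g.add_edge("Customer A", "Invoice 1042", relation="billed_on")
g.add_edge("Invoice 1042", "SubCo (IE)", relation="issued_by")
g.add_edge("Customer A", "Support ticket 77", relation="opened")

question = "Why might Customer A be paying late?"
context = neighborhood_context(g, "Customer A", hops=2)
print(call_llm(f"Use only these facts:\n{context}\n\nQuestion: {question}"))
```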

Glenn Hopper:

Yeah, and for a lot of our audience, obviously a lot of very data-savvy people who are using more complex tools, I’m gonna take it all the way back and say, imagine I’m an FP&A person, and we still use Excel. I mean, we love to bash it, but Excel’s the first place I go to do any sort of analysis. It’s just the easiest way to access things, and I think for finance people, that’s always where we’re gonna go. We dove straight into the deep end, but if we back up to a higher level, we have access to things right now, as generative AI lets us use natural language to interact with our systems the same way we would interact with another human being, and not have to translate into Python or whatever we’re doing.

But I’m trying to picture now, a lot of our listeners may be hearing what we’re saying and think, yeah, that’s really great, but I don’t have access to that. What does this mean to me? So I’m thinking about the very near future. I was just reading about Llama’s new agentic RAG that’s come out with that model, and some different applications of that. All these things we’re talking about in the abstract, let’s try to take them to our users and help them understand what they’re gonna mean. So using LLMs, using these LLM-powered chatbots and AI, we’ve got democratization of data. We may have increased access to data that we didn’t have before. But of course, in having that, we have to have a data science understanding that we didn’t have before, to some extent.

Because it’s kind of like, if you’re not a finance person, you have to know the difference between EBITDA and net income to really have value there. So you’ve gotta know the terms. You may not have to be the machine learning engineer who knows how to write the code to get to them, but we’re talking about all this from a super data geek perspective. For frontline FP&A people right now who say, you know, I’m not gonna go learn Python and all this, MM-hmm, <affirmative>, how do you see the next, I don’t know how long, every time I make a prediction I’m wildly wrong, so I need to get out of the prediction game, but what does the data environment look like for, say, the next two, three, five years for finance professionals, and how are they gonna be able to leverage it? So all this stuff we’re talking about where we’re getting deep in the guts, graph databases and all these connections and being able to access the data: for end users of this data, what does the future look like for them, and how are they gonna be able to take advantage of this kind of technology?

Brandon Wilson:

Well, I think first and foremost, it starts with the availability of tools, not the least of which is the LLMs, and the implementation of things like ChatGPT, or I should say GPT-4, and the Llama models, which allow you to interact with your data without being a data scientist. So that’s obviously one of the greatest advantages. And I think your only limitation at this point, assuming you have data to interact with, is curiosity. The first, and probably best, thing that has happened with the progress of LLMs is the ability to just use natural language to have conversations with your data. And I’ve experimented with this extensively. You should assume it knows everything about data science that needs to be known, and you can tell it that you’re not a data scientist and instead just have a curiosity conversation, the what-ifs. You know, go back to very simple <laugh>, maybe some very simple Excel-formula ways of thinking.

You don’t need to worry about the complex models, at least certainly not to start. So it’s a great pathway, maybe, to get to more advanced data work. And I think the future of data is both a blessing and a curse. One of the things that’s going to impact us most significantly in the near future is deeper implementation of edge AI. So more data collection and more processing at the edge, which might include things like IoT devices, telemetry devices, but also decision making at the edge. I’m sure most people have heard about the recent developments around implementing small models on mobile phones. So instead of having to move lots of data to more centralized points, you’re gonna start seeing models working where the data lives, or at least where the interaction lives.

And while there’s gonna be some opportunity in that for better decision making and maybe more real-time processing of things, it’s also gonna create a lot more data <laugh>. So it’s a bit of a blessing and a curse. It has the propensity to allow more value creation, more value harvesting, but it also has the propensity to bury you in more of the problems you already have: too much unstructured data that’s already unusable and hard to manage. So yeah, I think it really takes some focus and dedication: getting to the very core, and I mean the core business value and use case for data, and being very focused on that, and maybe less on the technology and the tools around it, because there are many good technologies to help solve the technology problems. So just stay super focused on the use case, the business value, how you’re gonna implement it, how you’re gonna implement the change that goes along with it, and how you’re gonna measure it. I think if you get that recipe right, then the rest of the stuff, it requires expertise, but is somewhat secondary to just making sure you’ve got a really well-designed system of extracting value from data.

Glenn Hopper:

We knew this could potentially happen before we even started recording, because when you’re talking about edge AI and the small models, which are getting better and better: I haven’t played around with it yet, but even the latest GPT-4o mini, the performance on the benchmarks, and, going back to Llama, the 7-billion-parameter and smaller models, how good they’re getting right now, models you can run on your local laptop, and what that means. I guess for big projects and general things, using the frontier models is gonna be the way to go, with their billions or trillion-plus parameters and all that. But these small models are getting so much more cost efficient, and thinking about data privacy and security and the idea of fine-tuning those, but that’s a whole other episode <laugh> <laugh>, right?

So yeah, we’ll have to save that for a different day. But I guess, and maybe we should have started here before we dove straight into the deep end, I do think that at this point, excuse me, certainly at this point, most listeners of this show have gone out and done some work with these models. But I still have people asking me for prompt libraries, which just drives me crazy. I think that’s completely the wrong way to think about interacting with these models. If you want a prompt library, you should just have a big control panel where you’re pushing buttons: push the financial statement analysis button. Instead, it’s more like, I need to understand what this technology is and how to interact with it.

And one thing I encounter a lot is people trying to understand how to interact with these models, and understanding the way these models were trained, that RLHF, the reinforcement learning from human feedback: they’re designed to be zero-shot prompted. It’s like, you give me a question, I’m gonna do everything I can to answer that whole question all at once. So if you default to asking it a question, no matter how big or small it is, it’s gonna try to answer it there. But one of the things you can see better results from is sort of the eat-the-elephant-one-bite-at-a-time approach, where instead of saying, do this massive thing, we talk to it as we would an intern, or a coworker, or an analyst we’re working with. But getting back out of the weeds of how these models work and how we implement them, just as a user tip: talk to me about the difference between zero-shot prompts and something like chain-of-thought prompting, and any tips you have on how our users can interact directly with these LLMs to get the best results.

Brandon Wilson:

Sure, yeah. So I mean, I think what most people are probably familiar with, certainly in their beginning experience and experimentation, is zero shot. You’re basically giving it an instruction, regardless of whether it’s simple or complex, well thought out or not, and it’s really based on just a single-step answer. And like you said, it’s gonna do everything in its power to answer that, maybe with some conditions you’ve given it. So it kind of goes wide, and based on some settings you can change in the LLM, it might go too wide, it might go too narrow. It’s less predictable. And if you asked it the same question again immediately after you cleared your cache on the first one, you might get a different answer. So it’s got a lot more variability, but I think it’s a great place to start.

Where chain-of-thought prompting is more significant is as you’re refining. Think of it more as a conversation you’re having. You’ve pre-sequenced it, so you’re not actually doing it in real time, but think about it as steps in a conversation, where each step is still itself a zero-shot prompt in the chain, right? But each output from the results of a prompt is being used to refine and improve the output of the next sequence in the chain. And based on the way LLMs work with the memory that’s involved, it starts to construct a more refined understanding, because the data being used in the second prompt in the sequence is now informed by the results of the first one, and so on and so forth.

So it’s technically getting narrower as you make your progress through the steps. From an FP&A standpoint, speaking very colloquially, you might say: ChatGPT, or GPT, tell me what the forecast is for next quarter. And then, okay, now we have that result. And you say, now let’s add in the previous three quarters, I want to start looking at a comparison, and tell me, from the original forecast to the actual results of Q1, Q2, and Q3, where were the variances? Now let’s go to the next sequence. Now that I know what those variances are, let’s add in all the cost predictions for the fourth quarter. And then in the next sequence you’re saying, take those same things that were responsible for variance in Q1, Q2, and Q3, apply them knowing these cost impacts that are forecasted for Q4, and tell me if you see any correlation or any variance, anything I should be aware of.

And then at the very end of the final sequence, you say, now take all of this information I just asked you for and write a Q4 forecast prediction summary for me. So, as you can see, you’re kind of having that conversation, and, like you said, it’s almost like giving directions to someone on staff, or even yourself, saying, here are the things I need to collect, right? But it’s actually providing the analysis automatically as it goes through it. So hopefully that helps give a good example of the difference in uses. I think people initially were instructed that you should spend a ton of time perfecting the zero-shot approach, and I have seen 2,000-line prompts, amazing stuff, and maybe that does work.

So I’m not really commenting on that. But what I’ve had better luck with in creating the chain-of-thought approach is actually reverse engineering. If I tell the LLM what I’m trying to accomplish and how I want the end result to be prepared or demonstrated, I can start asking it questions about what it needs to help me fulfill that task. And if you keep going, kind of like a reverse-engineering recursiveness back through the sequence, you can get to the origin, and then you can take that sequence it mapped out and create your chain-of-thought prompt. So it’s at least a little bit different way of arriving at the same result. If you really already know how you would define the sequence, have at it and set it up as a chain-of-thought prompt. But if you’re not really clear on how to get certain bits of the information and the sequence correct, it will help you sort it out. As long as you can tell it what your goal is, it’ll help you go backwards through the sequence to the origin, which can then be the origin of your chain-of-thought sequence.
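To make the sequencing concrete, here is a minimal sketch of the chained-prompt pattern described above. `call_llm` is a stand-in for whatever model client you use, and the prompts simply mirror the quarterly-forecast conversation; they are illustrative, not a recommended library API:

```python
def call_llm(prompt: str, history: list[str]) -> str:
    """Placeholder: send the prompt plus prior turns to your LLM of choice."""
    return f"[model response to: {prompt}]"

steps = [
    "Give me a revenue forecast for next quarter from the attached actuals.",
    "Add the previous three quarters and compare them to the original forecasts.",
    "Where were the variances in Q1, Q2, and Q3, and what drove them?",
    "Add the forecasted Q4 cost impacts and flag any correlation with those drivers.",
    "Now take everything above and write a Q4 forecast summary for the CFO.",
]

history: list[str] = []
for step in steps:
    answer = call_llm(step, history)   # each answer becomes context for the next step
    history.extend([step, answer])

print(history[-1])  # the final, refined output of the chain
```

Each call is still a single zero-shot step on its own; the refinement comes from feeding every prior answer back in as context for the next one.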

Glenn Hopper:

Yeah. And if you think about it, it’s the way we would solve a problem. If somebody wanted me to do that, I would go through the same steps you described for doing the forecast: all right, I need to understand this, then do it. ’Cause there are also different approaches. I guess if you went with zero shot because you wanted to come up with a new way to do it, rather than going through, well, this is the way I would do it, that might be its own use case: say, what would you do with the forecast? And then feed it different information. But the danger of zero shot, I guess, is that a lot of times you don’t give it enough information upfront for it to drive those decisions.

Brandon Wilson:

Yeah, agreed. And it is more generalized. You’re never gonna get quite as precise an output. I think if you’re diligent about the chain-of-thought sequence and prompting, it’s just really two different use cases, if you ask me.

Glenn Hopper:

All right. In the interest of time here, we’re gonna make a hard right turn. I’ve got two more questions for you. Okay. So <laugh>, first off, and I always love this one because you never know what you’re gonna get, you get answers all over the board. The first question is, what’s something that not many people know about you?

Brandon Wilson:

Probably, because I spend a great deal of my time, both personally and professionally, talking about technology, that I am a very big fan of the outdoors, and the awe that comes with seeing really beautiful landscapes and outdoor environments. I have a very deep appreciation for actually turning technology off.

Glenn Hopper:

Yeah, you gotta have that balance, right? <Laugh> And you’re in Arizona, right? Oh yeah. So you have plenty of scenic beauty all around you, so that’s great. This one’s gonna seem weird, I think, considering where you probably spend most of your time. But again, I ask every guest this, and it’s always interesting to hear what people come back with. Asking you, as someone who probably doesn’t spend a whole lot of time in Excel: what is your favorite Excel function and why?

Brandon Wilson:

Yeah, so, well, I’ve spent plenty of time in Excel throughout my career and have a deep love and appreciation for pivot tables. My answer is actually gonna be a simple one, and it’s just using the filter tools. For me, arguably, and related to the data science side of things, I’m a big drill-down guy. I’m looking for data matches. So if something occurred at a higher level, if I keep drilling down, do I see the same pattern, as an example? Or do I see it across, let’s say, different instances of different funnels in what is ultimately in the Excel tables themselves? So I’m just a big nerd about changing the filters on things to keep helping answer those questions. If I saw a phenomenon in this business at the top level, was it across all geographies? I can filter that and start looking, and if I find two or three geographies where I did see the same thing, now I can filter it again and look at whether it was customer related or product related. So I’m a big drill-down guy, and I kind of live and die by the ability to toggle on and off all sorts of different filters.

Glenn Hopper:

I love it. And as a stats guy, I’m guilty of this, but it’s like p-hacking, where you’re going through and you’re like, well, this is the data I have. It’s not what you’re testing for, but you find things. But because we’re not in academia and we’re just doing business data analysis, I have actually found value in doing that very thing. It’s kind of like finding those correlations, where you just drill down and drill down and say, does the pattern repeat, or is there a new pattern that I didn’t see before? So yep, I’m right there with you. Yep. <Laugh> All right. Well, Brandon, I really appreciate you coming on the show. I guess finally, how can our listeners connect with you and learn more about you and your work?

Brandon Wilson:

Yeah, sure. So my company is Steady Dynamic. We’re based in Phoenix, Arizona, and we’ve got offices in Warsaw, Poland as well, so we’re a global company. Our website is steadydynamic.com, that’s steady dynamic, one word. And if people are interested, you can always find me on LinkedIn under Steady Dynamic, Brandon Wilson; I think my LinkedIn is bkwilson, but you’ll find me if you visit Steady Dynamic, and I’d be happy to connect. And also, Glenn, thanks for having me. I really appreciate this. I know we probably could have gone much wider, so we’ll have to organize a second one, but this was really enjoyable, and I hope the listeners benefit from the conversation.