Two-thirds of the SaaS Finance executives in this week’s webinar audience said they don’t feel they are getting a high level of return on their R&D investment. Tom Huntington, an experienced SaaS CFO, and I explored the R&D “Magic Number” and how the SaaS model and Lean Start-Up methodologies are helping to make R&D investment more accountable to revenue growth.
Here’s the podcast of the SaaS Conversation, followed by a transcription. Want to watch the webinar on demand? Click here.
SaaS Conversations with Lauren Kelley: Are you Getting What You Need From R&D Investment?
Lauren Kelley: Hello everybody. This is Lauren Kelley at OPEXEngine and I’m pleased to have here with me Tom Huntington, CFO of Qstream. We are going to talk about: Are we any good at R&D investment in the software and SaaS world? Thanks everybody for being with us.
Lauren Kelley: I’m Lauren Kelley, CEO and founder of OPEXEngine, the SaaS and software benchmarking platform and finance community for software and SaaS. We’ve been doing this for 12 years now. I think lots of people in the audience know us and have participated in the benchmarking. And thanks for that. I hope we have provided some valuable content. We work with software and SaaS companies with revenues between $1 million and $500 million and we do monthly webinars like this with Tom. We cover lots of different topics, operational metrics, and topics of interest to the finance organization. So with that, I’m going to ask Tom to introduce himself real quick and then we’ll jump into the topic.
Tom Huntington: Hi, I’m Tom, CFO of Qstream. This is my fifth startup. I’ve been CFO, CEO, cofounder, and product manager, done some product marketing and a little bit of sales, most roles outside of actually writing code. But my love is tuning the business model. There are two things I don’t like: I don’t like to be wrong and I don’t like low growth. I think a lot about how to gain leverage as we work down the income statement, and how to gain leverage in COGS. There’s a lot to say about how to tune sales and marketing, but when you get down to the R&D line it gets stuck. And so that’s what today’s topic is all about: how do we get leverage out of R&D investment?
Lauren Kelley: Tom and I have been talking about this for a little bit and we had done some research together a couple of years ago that we’re going to show you.
I’m going to launch a poll just so we get a better sense of who’s in the audience. I’m interested in how many people are in finance versus actually in R&D. How you feel about your own company’s R&D investment and whether that’s closely and tightly associated with your business metrics and performance returns. Tom, you’re a CFO of Qstream right now but you’ve had some interesting experiences and some great exits before that. Do you want to mention a couple of other companies you were at before Qstream?
Tom Huntington: My first startup was a telecom hardware company, Quantum Bridge. We made fiber-to-the-home systems; it was acquired by Motorola, and if you have Verizon Fios, there’s a 50% chance you’re using our equipment. I was also CFO of Vela Systems. We made mobile software for construction. That’s now a fast-growing division of Autodesk.
Lauren Kelley: The poll says 97% or almost everybody is in finance. I think we have a good discussion group. Second question is split one third to two thirds. Two thirds of the folks online are not feeling like they’re getting a high level of return on their R&D investment and about a third are. So that’s actually, that’s pretty good.
I’d like to start this off looking at a metric we’ve been tracking and calculating. I think most folks in SaaS finance are familiar with the concept of the magic number, which was popularized and articulated by the CEO of Omniture, who’s now the CEO of Domo.
This is comparing sales and marketing expense to revenue growth. There are some pretty simple, straightforward ways of doing the same thing with R&D: taking last year’s R&D expense and dividing it into new revenue growth in the coming year. Let’s say we’re talking about 2019. If you have $30 million of new revenue in 2019 and you divide into that your R&D expense from 2018, you come up with a number. Lo and behold, companies that have really successful business models like Salesforce.com have quite high R&D productivity. Apple has been one of the highest R&D productivity companies out there, and that may change slightly as things evolve with China, but they’ve been in the $14 to $15 range. Salesforce was at $10, which is quite high for a SaaS company, and they have continued to increase that since going public in 2004.
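The calculation described above can be sketched in a few lines. Note that the function name and the $10 million R&D figure below are illustrative assumptions, not numbers from the discussion:

```python
def rd_magic_number(new_revenue: float, prior_year_rd_expense: float) -> float:
    """New revenue generated per dollar of the prior year's R&D spend."""
    return new_revenue / prior_year_rd_expense

# Hypothetical example: $30M of new revenue in 2019 against an assumed
# $10M of R&D expense in 2018 yields a magic number of 3.0, which falls
# inside the typical $2-$4 range cited for private SaaS companies.
print(rd_magic_number(30_000_000, 10_000_000))  # 3.0
```

By this yardstick, Salesforce's $10 would mean ten dollars of new revenue for every dollar of prior-year R&D spend.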
Most private SaaS companies that we see, if you aggregate it all, the companies in the $20 to $50 million range are more in the $2 to $4 range and that’s decent. So it’s really the very best in class that are approaching the $10 that Salesforce is getting. If you think of that as sort of the standard that you’re trying to approach, I asked Tom to talk to us a little bit about his experience on why this is so difficult and also what he thinks can be done about it and also his experience at Qstream.
Tom Huntington: The R&D magic number tees up some important things. One is that we should expect revenue growth out of R&D investment; there are very few pure cost-saving plays, so most of what you build is going to be an upsell, a cross-sell, a churn reduction strategy, or an adjacent-market add-on, something that’s going to move the top line. We could take that $2 to $4 number Lauren mentioned and end the conversation right here and say, all right, that’s the target, let’s go do that. But we need to ask the question: where are we relative to that target, how far away are we, and how do we get there if we’re not at it? And secondly, how do we outperform? How do we turn that dial? Lauren and I teamed up about two years ago, and I looked at a hundred private SaaS companies and 44 public ones over a 10-year period, controlled for size and revenue, and also controlled for levels of sales and marketing spend. Is that what’s driving revenue growth? And if you keep that constant, can you observe something about R&D?
There was no correlation in any way that I looked at the data. The red dots are private companies; the blue are public companies. There’s more dispersion on the private side, but there’s no correlation in either. There’s no statistically significant relationship between dialing up R&D, which is the x-axis, percent of revenue spent on R&D, and revenue growth, which is the y-axis. You can turn that dial from 20% to 40% to 60%, and there’s no predictable reason that you will get any more for it. And that’s a real puzzle. I think there are some key issues here.
First of all, this is a dirty little secret of the industry. We are software as a service. Our business model is around building product. This is what we do, and we don’t know how to do it reliably from a financial perspective. The second thing is that this is also a huge opportunity because this is an unoptimized part of the business. So figuring this out is incredibly powerful, it’s going to give you more leverage than tuning sales and marketing or services. You’ve got more to gain because we’re collectively, as an industry, not as good at it. So, this is a big opportunity to create value. I’m a glass half full guy. I think it’s pretty exciting. So why is it so hard? It’s simple to say innovation is speculative. We know we’re placing bets. Sometimes we win, sometimes we lose. But that only goes so far and that’s not the basis for operations.
Lauren Kelley: I think it has changed a lot over the last 15 to 20 years. Having been in the tech industry for a long time, I think lean startup methodologies are now expanding beyond primarily early-stage companies. Everyone is following quicker iteration on efforts and projects, and when you couple that with sprints, and really tightly managed companies have very tight sprints, you get much more productive R&D. One of the things we’ve talked about in the SaaS finance meetups that we have with finance folks is the finance organization helping the R&D organization to think about sprint forecasting almost the same way we think about forecasting in sales. In sales, you evaluate whether you’re on top of your sales process by how good your forecasting is, because if your forecasts are always wildly off, then you’re really not on top of your sales process. Likewise, if your sprints are always wildly off, then you’re not really on top of your R&D and development process: either you’re overcommitting or underperforming, or both. But things are changing somewhat.
Tom Huntington: If you take some high-performing companies in this regard, like the Salesforce example we looked at earlier, you can ask the question: are they high performing because they have engineering discipline, like the sprint forecasting you just mentioned? Or are they high performing because they happen to have a great market and be in the right space? I would argue that those are one and the same. This is why measuring product development is hard: if you just focus on metrics that you can observe inside the engineering organization, you’re going to get a picture of some discipline. But ultimately what you want is a measure of payout, and that’s what the lean model is about. Run experiments, build something, see adoption, see sales, see whatever your market metric is, and then go back and build again and test again. And that requires two different sides of the house working in sync: the engineering side building, and the sales and marketing side testing your experiments.
Lauren Kelley: We’re seeing companies where it’s not two different sides of the house. We see some companies that have the Chief Customer Officer who owns customer success and product and associates all of that together and it’s running off the same metrics. I think this is so interesting because, and maybe I’m creating a false parallel or comparison, but I think it’s a lot like sales and marketing was 20 years ago, where in sales and marketing a lot of what was done was just measuring activity and how many meetings did you take and how many calls did you make. You had activity and then you had at the end of the day your revenue, and there wasn’t a lot of pulling that together. Our experience with working with lots of companies is that’s how it’s happening in R&D. There’s tracking of activity in R&D, but that activity isn’t as closely aligned with performance metrics for R&D associated with revenue growth as I think there should be.
Tom Huntington: One area of pushback you can get when looking to measure engineering activity is that this is a creative process: you need to let me build what’s in my head, or we don’t know how long it’s going to take to solve this problem from an engineering standpoint. If you work closely with engineering organizations, you see that this is a creative activity, and turning it into a process can sometimes feel like an oxymoron. If you have a team that understands lean process and is accustomed to running experiments and really working closely with customer success or sales and marketing, you may have less friction there. But if you don’t, you may run into this sort of creative objection, and getting over that takes some work.
Lauren Kelley: It sounds like, as a CFO, you’ve occasionally had these conversations where you’re told you don’t understand the creative process.
Tom Huntington: Ed Catmull, one of the cofounders of Pixar, wrote a great book called Creativity, Inc. about how to run a company built on the creative process. Pixar has very high performance with their movies, particularly in contrast to Disney, who we usually think of as being an animation powerhouse, and he has a lot to say about how to create processes around creativity. Some of those are transferrable to engineering.
I think there’s also a real problem, an estimating problem, around balancing optimism and realism. When you’re innovating, you’re creating something new. You have to be optimistic about it, and that can lead to overestimation. On the other hand, if you’re very realistic or too conservative, that can sometimes kill risk taking and kill a good idea. So finding that right balance is very hard, and everybody has a bias towards one or the other. Putting all that together requires teamwork; this is not a one-person job. I don’t have a formula for you to crunch in the finance department, and there’s not a single solution in engineering or any other part of the organization. People need to work together on it. And that’s always hard.
Lauren Kelley: Do you think it points to the fact that balancing estimation, optimism, and creativity against realism and business drive is just good management? If your R&D leader doesn’t have that balance, then you make sure that the next person down is the opposite of the leader. You could have a very creative, amazingly innovative leader in R&D, and you want to make sure that their number two is process and metrics driven. That’s also part of why SaaS is nice, because SaaS is such a collaborative business model and finance has to step up and play that role. In the past, in more siloed, traditional software companies, I think it was tough because R&D was its own castle. God forbid you harassed them by asking when they were going to deliver and what the cost was, because the response was just: these are tech companies, this is the gold, don’t touch it, you don’t understand. That’s not the way it works in SaaS anymore. You need that teamwork, not just within the R&D department and within finance, but between the different disciplines. If product management sits somewhere outside, then they need to be fully integrated. All of that needs to work together, tied together with the people who are trying to upsell and retain customers and who are tracking engagement with customers, so that it’s all pulled together. It’s a great collaborative enterprise now.
Tom Huntington: I very much see this as a people problem. If you work in a company where everybody gets it, and they bring this awareness and this collaborative view already, then great. I’ve worked in a few companies where everyone is already pre-wired to work together, to run experiments, and to test in the marketplace. SaaS has exploded; there’s a huge war for talent out there. And I think in many companies, you find yourself on a team where some people bring that experience and others need to learn it. A lot of this is taking this perspective and sharing it, copying it, and multiplying it across the organization.
Lauren Kelley: And keeping the business focus on: this is why we’re here, folks, this is what we’re here to do. We can’t just develop for the sake of developing.
Tom Huntington: These are the five issues that I’ve seen multiple times, and there may be others.
Lauren Kelley: We’re going to start talking a little bit about some of Tom’s thoughts on what we can do about that. The world has definitely changed from the time when I was running the business side of a business and the engineers would get really angry when a customer used their product incorrectly, complaining that it was all the customer’s fault for being so dumb. I think we’re pretty far from that, but it wasn’t that long ago. And when you are in big enterprise businesses you still get a little bit of that sometimes: why aren’t they using all these features? Why don’t they understand how great this application is? The reality is, it’s all about the customer.
50% of this next poll said that it is because measuring is hard. And I think that’s partially the nature of our audience, because everyone here is in finance and finance wants to measure things, but part of it is exactly that: even if you do just measure activity, that’s not that easy.
Tom Huntington: Measuring is hard, without a doubt, and I feel that’s a key issue. Innovation is speculative. If we can measure it well, it shouldn’t be speculative. We should be able to run an experiment and see the results and go from there. Intuit has a CEO fund where anybody can apply for 1 million bucks to go run an experiment and come back and bring results of that, and based on those results, you can get funding for new initiatives. I really like that because it exemplifies the experimentation process. Here’s some money, come back with results, and then we’ll go from there.
When I did this analysis with Lauren two years ago, I took the results around and showed them to a number of investors that I know and said: what do you think of this? Does this reflect your experience and your portfolios, and what did you do about it? And the answer pretty universally was: yes, this is a big problem and a real headache. A lot of the stories tie back to innovation being speculative. They felt like they gave companies and CTOs a lot of rope, a lot of leeway to go execute on their vision, and then things didn’t work out for a variety of reasons, and they came back and said: well, we shouldn’t have taken such a big swing. We shouldn’t have given them that much rope. We shouldn’t have waited so long to see results. Let’s measure sooner. Let’s make it less speculative.
Lauren Kelley: I think it’s all about the lean methodology, which would not have you waiting a year, even two years sometimes, with some companies to see the results. Let’s talk about what do you do about it.
Tom Huntington: One obvious answer from the finance department is: if we don’t know how to place these bets well, let’s not place a lot of bets. Let’s spend less. That makes sense, but there’s also some positive behavior around that, which is to use the budget constraint to drive better choices. I think of the product management process as distilling the great ideas from the merely good ones. In order to do that, you need constraints and you need to force people to choose. If you don’t have to make choices, you’re going to end up funding bad ideas. So, I have a lot of respect for the first one.
Lauren Kelley: I think the best companies have a good process for requiring a business case, and also for holding the team accountable to it. It’s one thing to argue that if we build this feature, it’s going to drive new customers or whatever significant revenue is in your company, but a lot of companies forget to follow up afterwards and ask: did it really track to that? In the best-in-class companies that I see, when they decide to invest in a new feature or a new product and there’s a business case made up front, that business case is put in the management reporting all along the way and there’s accountability to it. You might miss the mark by 50%, but you’d better have some good learning about why you missed that mark, because learning from your mistakes can be as valuable as the success, although the success is nice. Not enough companies follow through the whole closed loop of making sure that the investment is tracked all the way through.
Tom Huntington: A lot of that has to do with the timeframe of the business case. I remember reading an interview with a successful product manager here in the Boston area. She’d had multiple very successful products in a year. How do you have so many successes? So many product managers have failures; why are you so consistently in the black? And her answer was: oh, I fail all the time, but I do it quickly and in small amounts, and we learn from that and we fix it, and then it becomes great. It’s easy to require a business case, particularly if you’re a finance person. But what does a business case mean? I think a lot of your colleagues will have different impressions of that. Is it a one-slide thing or is it a 30-page thing?
In my view it has two essential elements: some level of investment, we’re going to build something, and some level of market acceptance or adoption, whether that’s usage or revenue or what not. I talked to a senior engineer at TripAdvisor who said that every night the CTO takes home the activity files and reads them, comes in the next morning and recommends changes to the website, and they’ll make some changes, and by the end of the day they’ll have some results, and he’ll go home and do it again. TripAdvisor gets 20-plus learning cycles a month, whereas if you have a monthly business case, you’re going to get one learning cycle a month. And if you have a quarterly or annual business case, well, you’re going to go a lot more slowly. We don’t all have high-volume B2C business models like TripAdvisor that allow that kind of experimentation.
Lauren Kelley: I think the bottom line is you can’t leave it to annual checkups. Let’s look at your next point: frequent milestones, demos, experiments, results. I think by going all the way up to the board, you’re talking about accountability and also visibility. It’s not some deep dark black hole that somebody off in the corner is working on; everyone is on board with what’s being done, and then tracking it.
Someday, I want to do an analysis of SaaS companies that have been founded by product people, engineers, and their percentage of spend in R&D versus SaaS companies that were founded by sales and marketing people, and their percentage of spend in product. I am just anecdotally guessing that you could segment that and see some difference there. But I think regardless, it’s important to have product people on the board, not just business people. Have you had experience with product people on your boards?
Tom Huntington: In some cases, yes. One investor that I talked to about the results of this study pointed out that a lot of board members are finance folks, obviously coming from venture or private equity, and a lot of them also tend to be former CEOs, who have a high incidence of sales backgrounds. It’s easy to get sales talent on your board, it’s easy to get finance talent on your board, it’s harder to get product talent, and even if you’ve got a product CEO, they may or may not want that oversight. I worked with the CEO at a previous company who was a product guy, and when the board asked him about product, he smiled. He said, we’ll get back to you, and he took that question, stuck it in his pocket, and it stayed there. And we never talked about product, and that’s a problem.
Lauren Kelley: I think that’s a great point, that if it’s a product CEO, it’s good to have a check of somebody else coming from a different perspective who’s equally matched on the product side. And if they’re a CEO that’s not a product person, they need the board member to support them as they dig into what’s going on in their own company.
I think we only have a little bit of time and we’ve got some great questions that are building up. Let’s go through your experience at Qstream because I know you guys have done some cool things to clean up the product and engineering side.
Tom Huntington: If you follow the press, it’s easy to find out that we did a $15 million Series B, led by Polaris, in 2016. The company doubled in size from ’16 to ’17 and increased R&D spend to 25%. It built and launched an upsell product and a year later cancelled that product. That’s the relevant portion of today’s discussion. I came in mid-’17 to help rationalize the growth plan and bring the company forward. As one of our investors likes to say, startups go from jungle, to dirt road, to highway, and some teams have a harder time making the transition between those stages.
We’re going from dirt road to highway right now. So, what did we do? Well, in 2018, we made product a regular board agenda item. Not every meeting, but at least every other meeting. Let’s make sure that we’re talking about it, that the board understands what we’re doing, that we have visibility into demonstrable progress, and let’s put product metrics in place. This is attachment rates in our sales; that meant rebuilding the SKUs in Salesforce. It’s adoption rates; that meant writing new reports in our product to see product usage. We also bought Mixpanel and use it to track a lot of activity in our product. And we now look at those on a regular basis. Still, some of the most important measurements are lagging indicators.
If you’re looking at lifetime value of a customer, you want them to buy the product or buy the upsell repeatedly over time. And you’re not going to know that until well after you decide to build something. Build it, launch it, sell it, have the customer adopt it, and then continue with it, and that could be a 15- to 18-month interval. So getting leading indicators, getting reliable leading indicators, is critical, because some of the best financial ones just take way too long.
Lauren Kelley: What would you say are good leading indicators?
Tom Huntington: I talked to a guy from Kaplan who said that after the first week of a Kaplan class, they can tell the scores people are going to get on their standardized tests based on study habits and on how they’re engaging with the material. I think it depends on your product and on the adoption process, but look for those indicators of success: user adoption and user engagement, what the behavior is. If they’re not using the product, they’re not going to pay a lot for it. I think everything begins with usage.
Lauren Kelley: I agree that customer engagement is definitely a leading indicator. I want to clarify, what do you mean by attachment rates?
Tom Huntington: Attachment of services, attachment of add-ons and upsells to our solution when we sell a core license. They’re ancillary things that we sell, and if you roll out a new feature you want to look at the incidence of people buying it, because if they don’t buy it, then they’re not going to use it. This works if what you’re measuring can be defined as a separately priced add-on. Sometimes it can.
Lauren Kelley: And attachment rate is interesting because it can be a positive indicator or a negative indicator. A lot of SaaS companies use professional services because their product doesn’t actually set up very well, or it’s very hard to get started or to engage with the product, and they use professional services to make that happen. That’s actually not so great if you’re promising an easy-to-use, easy-to-implement application. Some companies track it as a declining indicator to show that the product is getting better and better and that they’re actually accomplishing what they set out to do. Other companies look at attachment rates and that’s their business model: they want to sell this additional service on top of the application, so they’re tracking and managing it very carefully. It really varies with your particular business model.
Let’s take a look at some of these questions here. There’s a great question about filtering public companies against the Rule of 40, and whether companies that meet or exceed the Rule of 40 have higher R&D efficiency than those that don’t. We haven’t done that at OPEXEngine, but we can, because we’ve got all that data. So I’m going to take that as something that looks like an interesting analysis.
Another question is an accounting question. But because most people coming out of finance are oftentimes tracking things in R&D on an accounting basis, I think it’s worth looking at. They’re saying, as is common, that most companies’ R&D has many cost centers, some of which go into COGS and some into non-COGS, so your accounting requirements make it complicated because you have to separate those expenses into COGS and operating expense. When you’re looking at it from a performance perspective, you don’t want to exclude the things that are going into COGS just because of accounting, if they really relate to R&D. Do you have any thoughts about that, Tom?
Tom Huntington: Your capitalized software policy can affect this measure, right? Because you’re looking at R&D expense. You can look at it on a whole-company basis, where you’re going to think about comparability on things like internal-use software accounting. I look at it on a more granular basis: what are we doing in the next sprint or in the next quarter? And that really requires more of an activity-based costing approach, because you’re not putting the whole team on it. Lauren, I think both in your blog and in our conversations, you have pointed out the need to separate out sustaining engineering, or expenses allocated to tech debt. Taking R&D as a whole, I don’t think, gets you to the granularity at which you want to measure this.
Lauren Kelley: If you look at it at the granular level, you’ll be able to associate it with the productivity that came out of that particular activity. How would you think about, or measure, an investment in R&D that is a tech stack rewrite to address tech debt? At OPEXEngine we definitely advocate that companies track their spending on tech debt. I know it’s not as easy as saying, okay, I’ve got 10 people and their compensation is this, so now I know my tech debt expenditures. It can be a little tough to judge sometimes, but it’s an important metric. What I have seen from the best-in-class companies that we’ve worked with is that they know what they’re spending on tech debt, and it’s important not to spend too much on it. If you are spending too much on tech debt, either you’re fixing something and that’s going to change over time, or there’s a real problem there. But it’s also important not to spend too little on tech debt, because then you’re probably building up a big debt that at some point you have to pay off, like student loans. Roughly what I’ve seen, at least from the best-in-class CFOs I’ve talked to at pretty successful SaaS companies, is that they try to keep spending on tech debt running between 20 and 25%. And if for some extraordinary reason you need to go over that, then you do it at a point in time.
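A minimal sketch of that kind of check. The 20 to 25% band is from the discussion; the function names and dollar figures are hypothetical:

```python
def tech_debt_share(tech_debt_spend: float, total_rd_spend: float) -> float:
    """Fraction of total R&D spend going to tech debt / sustaining engineering."""
    return tech_debt_spend / total_rd_spend

def within_band(share: float, low: float = 0.20, high: float = 0.25) -> bool:
    """Check whether tech-debt spend sits in the 20-25% band described above."""
    return low <= share <= high

# Hypothetical example: $1.1M of tech-debt work in a $5M R&D budget.
share = tech_debt_share(1_100_000, 5_000_000)
print(f"{share:.0%} on tech debt, within band: {within_band(share)}")
# prints: 22% on tech debt, within band: True
```

The point is less the arithmetic than having a number at all: a finance team that tracks this share each quarter can make going above or below the band a conscious choice.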
Just like any benchmark, it’s important to know what the benchmark is. It doesn’t mean that’s exactly the blueprint for your business right at that moment, but it means you should know where you stand in terms of the benchmark and be making conscious choices to go above or below the benchmark at some point. Would you agree with that Tom?
Tom Huntington: Yes, I think that it eventually catches up with you. If you run lean on maintenance and sustaining engineering, and we can call it tech debt, the best use of that resource is refactoring, where you’re not just fixing problems or putting Band-Aids on previous problems, but actually refactoring and streamlining the code. I think the question around a platform rewrite gets at the heart of that on a larger scale: we’re going to refactor the whole thing. A full platform rewrite is typically going to take you a year plus. Let’s say for the sake of argument that you stopped building new things and just did that, that you were a hundred percent tech debt for a year. Well, your board is going to want to know why you’re doing that, and you’re doing it for a reason: because it’s going to lead to revenue growth in the future.
There aren’t enough costs in a SaaS business to do that purely for cost reduction. We have very high gross margins, so unless your gross margins are really terrible, you’re doing it to drive revenue growth. You end up with the same relationship: we’re going to spend on this rewrite because we want higher revenue next year and the year after.
Lauren Kelley: Just a couple of quick questions, one of which is actually close to home, because we’re building out some new products and redoing our platform, and I just had to answer this internally, so I’d be interested in your comments, Tom. On best practices for setting reasonable AWS spend for R&D teams: is there any kind of benchmark for the percentage of AWS spend relative to R&D spend?
Tom Huntington: I don’t have an answer to that. I think that your benchmarking template has a hosting line, so I would defer to your dataset there.
Lauren Kelley: We track hosting as a separate expense line, and it is usually part of COGS. But what are comparable companies spending on hosting? Based on this question, I was thinking it would be interesting to associate AWS spend with the dollars spent on a particular development project. Maybe it’s not possible because it’s just too random, but it’s something interesting to think about.
Another question: what do you recommend in terms of finance and R&D check-ins as it pertains to tracking to budget, whether weekly, monthly, or quarterly? There’s the check-in on the budget, but there’s also the strategic check-in. What we see with the companies we work with: at an early-stage company there’s hardly anybody in finance, maybe a CFO and one or two other people, and you’re really tracking R&D just as an expense and trying to stay on top of it. That runs all the way up to the point where you have a fairly full-fledged finance organization, perhaps with a BizOps organization, and it makes sense to have weekly and monthly check-ins; that’s typically a company at $100 million or more. There you would have weekly, monthly, and quarterly meetings: the weekly meeting maybe just on activity tracking, the monthly on expense, and the quarterly on strategic goals. What do you think about that, Tom?
Tom Huntington: You should be checking in as frequently as you can, and for the reasons we talked about: the smaller the experiments and the faster the cycle, the less speculative and the more successful you’re going to be. It depends a lot on how you define budget. If by that you mean costs tracked in general ledger accounts, I don’t think you’re going to get a lot of insight, because you’re going to have a salary line that holds most of the expense. Either you have the heads or you don’t; there aren’t a lot of other decisions to make in the general ledger around salary.
The real question is: what are they doing? What’s in the sprint? How many story points does it take, or what’s the T-shirt size of the thing they’re working on? And what are you going to get from it? If you think about the business case as being driven by some market adoption measure, it’s unlikely that measure is going to be in your annual board-approved budget. It’s more likely to be part of your sprint planning or your quarterly objectives.
Lauren Kelley: This is actually in answer to somebody who asked whether there are any good peer networks for SaaS finance, so I have to make a pitch for the OPEXEngine benchmarking community. For those of you who don’t know about us: we are an independent SaaS and software benchmarking community. All of our members are in the finance organization, from CFOs at smaller companies to VPs of Finance and heads of FP&A. We have a confidential benchmarking platform; it’s a give-to-get subscription. Tom has participated with a couple of companies. We work with hundreds of companies; something like 70% of the B2B SaaS companies that have IPOed have participated in the benchmarking. We do monthly webinars like this, which hopefully everyone is finding useful. If not, please give us feedback, because we’re trying to satisfy a need for peer best-practice sharing and information in a confidential way. We’re not an investment bank and we’re not selling any other services or software, so the benefit is that the data is unbiased and independent. We also hold what we call the SaaS finance meetups in the Boston area and in the Bay Area twice a year, and we’d love to invite everybody who’s not yet participating in the benchmarking to join us and continue the conversation. We have what we call the SaaS Q&A on our website for the questions we didn’t get to on this webinar; Tom and I will try to get back to you there. But we can keep talking now for the folks who want to stay on and listen to some more questions.
Another question that I thought was really good: how do you measure the productivity of your R&D team? What metrics do you use? How do your engineers measure the effectiveness of their work? From a technical perspective it’s not like sales, where it’s easy enough to say ARR per sales rep and use various other measurements. Any thoughts on that, Tom, in terms of specific individual measurements in engineering?
Tom Huntington: In my experience, which is generally in venture-funded SaaS, there are more problems with that than not. What gets measured gets done, and you don’t want to measure people by lines of code, because more lines of code mean more opportunities for bugs; some of the best code is very elegant. You want to be careful about measuring speed, because that can generate a lot of tech debt in the future. It really is a creative process, and we should build processes that recognize how to foster creativity and identify the value associated with it. I think the value is outside of engineering; you’re not going to find it in internal engineering metrics. The best way I know is to take the engineering activity and connect it to value outside of engineering. Product usage is the easiest way. You build a button: are people clicking on the button? You build a feature: are people using that feature? From there you can relate it to sales metrics, renewal metrics, and things like that. But you have to connect, right? You need to connect customer usage to the engineering.
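Tom’s "you build a feature: are people using it?" test can be sketched as a simple adoption calculation. The event format and function name below are assumptions for illustration; in practice the data would come from whatever product analytics tool you run:

```python
def feature_adoption(usage_events, feature, active_users):
    """Fraction of active users who triggered at least one event for `feature`.

    usage_events: iterable of (user_id, feature_name) tuples, a stand-in
    for a product analytics export. active_users: the denominator, so
    adoption is measured against the live user base, not all-time signups.
    """
    active = set(active_users)
    if not active:
        return 0.0
    users_of_feature = {u for u, f in usage_events if f == feature}
    return len(users_of_feature & active) / len(active)

# Hypothetical events: two of four active users clicked the new export button.
events = [("u1", "export"), ("u2", "export"), ("u3", "search"), ("u1", "search")]
print(feature_adoption(events, "export", ["u1", "u2", "u3", "u4"]))  # 0.5
```

An adoption number like this is the bridge Tom describes: per feature shipped, it connects engineering output to customer behavior, and from there to sales and renewal metrics.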
Lauren Kelley: One metric I’ve heard some companies talk about with good results is this: if you’re following a really tight sprint methodology, with weekly or biweekly sprints followed by testing and then staging to production, and a lot of bugs and problems are found in testing for a particular sprint team, there are ways to associate the documented bugs with the individual or the team. That can help you assess the quality of the code being written. And the testing should also be tied to whatever the original goals of the project were.
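The bug-attribution idea above amounts to a defect-density ratio per sprint team. A minimal sketch, assuming bugs are tagged by team in the tracker and normalized by story points delivered (both the data shape and the function name are our own illustration):

```python
from collections import defaultdict

def bugs_per_story_point(bug_log, points_delivered):
    """Bugs found in testing per story point delivered, by sprint team.

    bug_log: list of team names, one entry per bug found in testing.
    points_delivered: dict mapping team -> story points completed in the
    sprint. Normalizing by points avoids penalizing teams that simply
    shipped more work.
    """
    counts = defaultdict(int)
    for team in bug_log:
        counts[team] += 1
    return {team: counts[team] / pts
            for team, pts in points_delivered.items() if pts > 0}

# Hypothetical sprint: team alpha shipped 30 points with 3 bugs found,
# team beta shipped 40 points with 1 bug found.
bugs = ["alpha", "alpha", "beta", "alpha"]
points = {"alpha": 30, "beta": 40}
print(bugs_per_story_point(bugs, points))  # {'alpha': 0.1, 'beta': 0.025}
```

As with any per-team metric, this is most useful as a trend line per team rather than a cross-team leaderboard, given Tom’s "what gets measured gets done" warning above.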
There’s another question asking for examples of attachment rates and the impact on Salesforce SKUs. I’m assuming what you meant, Tom, and maybe just to clarify: it’s making sure that Salesforce is structured in such a way that you can easily pull the data out to show professional services tied to which products and which projects, since it’s not always set up that way originally, and that makes the analysis possible.
Tom Huntington: We have a head of sales who likes to sell very simple bundled orders to our customers: one line, one price, you get a whole lot. Acknowledging in Salesforce what’s in that bundle means we need to keep track of these things internally, even when they’re not expressed as line items on the order form. If you want to look at the sales data, you need to track it, and your CRM is typically the place to do that. It could be your invoices; it could be your GL.
Lauren Kelley: It’s the constant battle between keeping it simple and keeping it detailed enough that you get good data. Thank you so much, Tom. This was terrific, and thank you to the audience for being so great, asking lots of questions, and sticking with us on this topic, because Tom and I are committed to getting better productivity out of the R&D departments of SaaS and software companies. We are going to figure this out and make sure everybody’s managing it and tracking the metrics. So thank you, everybody. Looking forward to the next webinar. Thanks, Tom.