The Transformation Webinar Series
Establishing Visibility for Execs, Leaders, and Engineers
While 90% of managers believe in the importance of making their teams' work visible, only 24% of developers feel the same level of visibility is achieved.
Let's bridge this gap. Learn how to foster developer motivation, improve software quality, and set realistic organizational goals through enhanced visibility.
Video Transcript
Austin Bagley
All right. Welcome to the third episode of Transformation. We're excited to have Charlie Moad, Senior Director of Engineering at Salesloft, here with us today. Welcome, Charlie.
Charlie Moad
Hey Austin, glad to be here.
Austin Bagley
Super excited to have you. Charlie is an experienced engineering leader across multiple industries. Prior to his current role, he worked as the VP of Engineering for a startup that was acquired by Salesloft and as a senior engineering leader for a company that was acquired by Salesforce. One day we'll need to have you on to talk more about all that entails. For today, we're going to talk more about using metrics.
Austin Bagley
In this episode of Transformation, we're going to be talking about establishing the right visibility with engineering leaders and executives, and there's no better person to explain it than Charlie. Let's jump right in. Before we do that, though: first off, Charlie, tell me a little more about Salesloft.
Charlie Moad
Yeah. Salesloft is your revenue workflow platform. We help sales reps from prospecting all the way through closing, and we do that through actual workflow tools and integrations with all the tools you're already using. Then we bring all that data together to deliver insights. And even this year we introduced a new tool called Rhythm that will take all that signal and use artificial intelligence to turn it into action.
Charlie Moad
So that way your sellers are making the best, most efficient moves. It's all in service of buying back more time so you can focus on selling and being of service to your buyers.
Austin Bagley
Yeah, I love what your team is doing, and it's an incredible team that's constantly pushing the envelope. So it's really cool to have your experience, but also the backing of the amazing Salesloft company here. First off, let's lay some groundwork: why did you implement Pluralsight Flow in the first place?
Charlie Moad
This touches primarily on our number one objective this year, and really every year, which is driving a high-performance culture through continuous learning and continuous improvement. We want to measure ourselves to get better. We've used our existing tools like Jira and GitHub Insights in the past to try to do that, and we cobbled together a lot of manual spreadsheets quarterly to get a pulse on how our quality is looking and whether we're investing in the right areas. What we really wanted, especially in the kind of remote-first world many of us are in now, where I think this became more important than ever, was to easily create something that not only leadership could look at periodically, but that delivery teams themselves could use to start to ask themselves: are the processes we're running driving the results we expect, and are we being consistent about those over time? So, yeah, we reached out to Pluralsight and did a proof of concept that proved out really well. We connect Jira and GitHub to our Pluralsight instance, and by combining those, we get great visibility both top down and bottom up.
Austin Bagley
No, I love that. And one of the things that really impressed me about the way you implemented Flow was how you rolled it out across your entire organization. I think you used a couple of key principles behind that. Tell me more about how you did that.
Charlie Moad
For us, we have a very transparent, open culture, and we wanted these tools to reflect that too. So we decided out of the gate that anything an executive sees, any engineer should be able to see too. There are no hidden views where people don't know what's behind the screen, right? Every engineer can use this as a tool to see how they're ramping, or even to advocate for themselves for a promotion.
Charlie Moad
That's one of the things we talked about in the proof of concept. You know, it's easy to take this as, oh, you're measuring me in a new way that I'm not used to as an engineer. But on the flip side, you can use this to advocate for yourself, or a manager can use it to advocate for their team in promotions. So there are positives to going open as well. And then, speaking back to continuous learning, I can look across the metrics of the org, look at one team and say, hey, what are you doing to drive this outcome? Because we don't see that consistently across other teams.
Charlie Moad
So that was definitely part of it. Again, we have this sense that our delivery teams are autonomous, but that comes with accountability. Those are the two things we want to align. And part of that accountability is, hey, what are you investing in? What do you feel are the weak spots in your process and in your team?
Charlie Moad
And then show us over time how that's panning out and whether you're driving the results you expect. We really want this to be a tool for teams to use in retros to understand where things maybe went awry, the things you generally lose insight into unless you're keeping a more frequent pulse than those quarterly check-ins I mentioned before, which were very manual processes to pull together.
Austin Bagley
Yeah. And the benefit of not waiting for quarters is that you have the opportunity to make changes in the moment versus looking back months later, which is a great way to intervene and correct things, or to go improve things going forward.
Charlie Moad
Yeah, I was going to say, Flow has a plethora of metrics. So another piece of guidance we were given, and obviously good practice, is don't try to move everything at once. We really targeted the three metrics that we wanted to go after. The first two you see here, time to first comment and PR iteration time, really speak to whether you're getting quick feedback and whether you're reacting to that feedback. There's research out of Google that points to this as a huge lever to increase overall delivery speed.
Charlie Moad
If you're cranking out code and then you have to wait several days to get any feedback before you can change it again, obviously that's slowing you down, but you're also losing context, or rather, context switching more. So from an efficiency standpoint, it hits you from multiple angles. This is one where we really wanted to make sure people were getting that support from their team first and foremost. Outside of an operational issue or writing your own code, this is the next thing on the list we want our engineers doing: giving feedback to their team and making sure the team's work is flowing. Finally, coding days was one that we had a lot of talk about.
Charlie Moad
It's, I think, the scary metric that people feel like they have to work to, and maybe subconsciously or consciously game. But our take on it was, hey, let's look at coding days as a seek-to-understand. At our company we have what we call core hours: three hours a day where we block out time, no meetings for individual contributors, so that they should be able to code. And so whenever we look at our coding days, it's really a question of what's keeping you from coding. Are you not getting good product requirements? Are you just not getting the context? Are you not getting the support you need?
Charlie Moad
Are you in too many meetings? We actually learned we had public team Slack channels that were a huge distraction for some teams, and we were able to act on that, move them out, and find better ways to channel those support questions. Again, this is all in service of what's keeping you from coding, and I think that message has resonated really well as we rolled out that particular metric.
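As a rough sketch of the metric Charlie is describing (not Flow's actual implementation, which works from richer commit metadata), a coding day can be thought of as a calendar day on which an engineer made at least one commit:

```python
from datetime import datetime

def coding_days(commit_times):
    """Count distinct calendar days that have at least one commit."""
    return len({t.date() for t in commit_times})

# Hypothetical week for one engineer: three commits spread over two days
commits = [
    datetime(2023, 5, 1, 9, 15),   # Monday morning
    datetime(2023, 5, 1, 16, 40),  # Monday afternoon
    datetime(2023, 5, 3, 11, 5),   # Wednesday
]
print(coding_days(commits))  # 2
```

A low count per week is then a prompt to ask the "what's keeping you from coding" questions above, not a target to game.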
Austin Bagley
Oh gosh, there's so much stuff we could dive into here. We could go for hours across all of these things. A couple comments. Actually, I want you to dive in a little more on that last comment, because I think that was a really big insight for you: you had these Slack channels where everyone in the organization had access to the engineers, which was really bogging them down. Is that kind of what was happening, or tell me more about what you found there?
Charlie Moad
Yeah. Again, we're just a very people-first culture, which is a great thing. But sometimes, as we've scaled, you find that essentially everyone in the company can access your engineers. And for our engineers to hit a flow state and take advantage of those core hours, it's really hard when you're getting pestered on Slack by hundreds and hundreds of people out there. So for me, it's all about letting our developers hit a flow state. That's the thing we're trying to drive towards.
Charlie Moad
But then we also learned, in one example I thought was really interesting, that teams were doing a lot of work to prevent what may be a bug from becoming a Jira ticket. They might spend an hour or two dealing with a support question just so it didn't become work for the team. We wanted to flip that to say, no, you should spend a minute or two on it, and if it looks like something bigger, it should become work for the team. Because, as we talk about investment themes later, we want to see that, to understand whether a team is spending more than the expected amount to maintain a service or deal with customer defects. Those are the types of behaviors and discussions that have really bubbled up by starting to look at these metrics.
Austin Bagley
I love that. It's so cool comparing and contrasting with our last episode. Last time we had Dr. Cat Hicks talking about all the research behind how to help developer teams thrive, and what's cool about hearing from you is that I'm seeing all those things actually in practice, in action, happening in the organization. One of the things you talked about was looking at the whole engineer. There's a lot more that goes into crafting great software than the simple stories and the new work. So you're actually making sure you have visibility across not only the workflow, but everything that encompasses an engineer's day, and that's helping you make better decisions.
Austin Bagley
You're also blocking and tackling for engineers in a better way, so you're setting them up for success. I also love this principle of autonomy with accountability, because it really connects back to one of the key tenets of developer thriving from our research, which is agency. Seeing how you've deployed that within your organization really speaks to that principle: it's effective when you give developers that power and empower them to do what they need to do to do great work. So I love this. You've mentioned it throughout this entire time, but maybe we can recap for the audience.
Austin Bagley
You've talked about open access for Flow. What are the different roles within Salesloft that log in and use Flow?
Charlie Moad
Yeah. So any individual contributor software developer is an enabled seat from a metrics standpoint. They're the ones whose activity we're actually measuring through those various platforms so we can do the reporting. The others that log in would be engineering managers, directors, and our VP of Engineering.
Austin Bagley
And so every level here is seeing Flow.
Charlie Moad
Yeah, but really within our platform there are two roles the way we've set it up: a manager and a user. The manager can effectively organize who's on a team, and that's it. That's the only difference. Obviously, there's also the administrator, who has to set up integrations and things like that.
Charlie Moad
But as far as the day-to-day, we keep it that simple and cut-and-dried. And then we have a hierarchy in our structure: we have the engineering org, we have pods below that, and then the pods usually have about five delivery teams each. So we're able to roll up at various levels based on the question we're looking at.
Austin Bagley
Got it. OK, so let's dive into that. Everyone has access, you've turned the lights on for everybody, but not everyone's looking at the same thing. You've got people looking at different things.
Austin Bagley
Tell me more about what roles are looking at what types of data and insight within Flow.
Charlie Moad
You can imagine a delivery team is really looking at the Sprint report and the retrospectives. In their actual retros, they can bring those up to start to understand: hey, did we see movement across the metrics we're watching? If so, let's dig into the detail and see whether there were one or two cards that potentially changed this, or actions we need to take out of that. So they're using it in more of a short-term view to understand where things may be getting tripped up or where we see things go backwards in the flow.
Charlie Moad
Managers are using it in their conversations about what's expected of people in your role and what we typically see. And again, we're talking more about whether we have the output that lets us understand how we're trending. It's all about trending over time and less about the raw number, because, again, you can't even compare story points team to team; that's another good example. So managers are looking at that angle, and they can also look at investment themes to understand their team. What I like is when you look at an individual team and break it down by user: hey, why is this one engineer working on nothing but customer defects for weeks on end?
Charlie Moad
They're probably going to get burnt out. You can see that very clearly in the reports. That's something we'll keep an eye out for, to make sure we're actually distributing the work, obviously for knowledge sharing, but for work happiness too, right? From a leadership level, the question is: are we working on the right things across those investment categories?
Charlie Moad
So in our Jira we have investment themes for every single issue. Again, I mentioned it's really important to actually get all work reflected in Jira so we can concretely say how much time, or what percentage of time, teams are spending in certain areas, and that even helps us in planning on a go-forward basis. When we were building Rhythm, we went really heavy on strategic feature building and backed off on some other areas. Now that we have that out there, we're seeing things come back, and we're paying back some tech debt, for example.
Charlie Moad
But it's fully expected, and we're just making sure that trend matches the plan we laid out, and then over the whole year, how did we break down that investment? So that's really helping us, looking back at our last fiscal year, but also planning into our next fiscal year. I can more confidently say this year, OK, I know this team is going to spend about 15% of their time on customer defects, whereas last year it was more like a quarter.
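The investment-theme roll-up Charlie describes reduces to a simple aggregation. Here is a minimal sketch, assuming each Jira issue carries a theme tag and some effort measure; the theme names and numbers are hypothetical:

```python
from collections import defaultdict

def investment_breakdown(issues):
    """Percentage of total effort per investment theme from (theme, effort) pairs."""
    totals = defaultdict(float)
    for theme, effort in issues:
        totals[theme] += effort
    grand_total = sum(totals.values())
    return {theme: round(100 * t / grand_total, 1) for theme, t in totals.items()}

# Hypothetical quarter of tagged issues: (investment theme, story points)
issues = [
    ("feature", 60),
    ("customer-defects", 15),
    ("tech-debt", 15),
    ("feature", 10),
]
print(investment_breakdown(issues))  # {'feature': 70.0, 'customer-defects': 15.0, 'tech-debt': 15.0}
```

Comparing this breakdown quarter over quarter is what lets a leader say "this team spent a quarter of its time on defects last year; plan for 15% this year."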
Austin Bagley
Whereas before, you'd just kind of guess, yeah. And so you can really set them up with the right expectations. You're not changing things throughout the year, or, I mean, you are, but you've set them up at a more accurate starting place, so you know the resources you need and what expectations to set. You have a higher say-do rate because you're seeing the full picture of what's going on.
Charlie Moad
Yeah, and talking about the next feature that's not fully released but coming, Dev weeks: that one is an even nicer layer on top of story points alone, because now you're getting something to contrast with. Was the team being consistent in how they assigned story points across various work streams? And there are instances where we see variances that you wouldn't get through just ticket counts, obviously, but not even through story points alone.
Austin Bagley
I love the principles you've shared here, because in everything you've talked about, you've really embedded looking at objective data into your existing cadence of work. With teams, you're having them look at these three key metrics within their Sprints and within their retros: hey, let's dive in and see what's happening. Let's take a spirit of curiosity, try to explain what's going on, and figure out what we can go fix. But then if you go up a level to your leadership team, it's: now I have visibility into what people are working on.
Austin Bagley
How do I block and tackle, how do I plan, how do I update our stakeholders on what's going on and when we're going to deliver things for the product? So I love how you've embedded those into the cadence of work. The other thing, if we take a transition here, that I've been really impressed with at Salesloft specifically, is that you've done some deep dives into the data to help you uncover some hypotheses around the way you work. So let's focus on the executives and leaders now. I love some of the analysis you put in to better understand what was working across your teams and what needed to change.
Austin Bagley
So what are some of those insights? And I think I've got some screens here that you've prepared that we can talk through.
Charlie Moad
Yeah. So first of all, what you're seeing here is all data that we pulled out of the Pluralsight API. We combined the coding metrics API and the collaboration API to look at the first two quarters of our year, so we had almost six months of data.
Charlie Moad
We thought it was a good time to really check in, because we want to see over the long term. It's kind of dangerous when you look week to week, because people go on PTO and things like that. But over the long term, you expect that you're going to get a good pulse on how things are going. I had a desire to see, in one slide, not only across teams but within teams, where these metrics that we prioritize shake out.
Charlie Moad
So once we exported that data to a CSV, we were able to put it in Tableau, and what you're seeing here is a visualization, a box plot, of coding days. This box plot is good at conveying what I was asking for. As you look across this diagram, the boundary between the two gray rectangles is the median for the individuals on the team, and every dot is an individual.
Charlie Moad
And then these lines above and below, called whiskers, are showing you a min and max, but they'll only go out to two standard deviations. So where you see dots out there in isolation, those are true outliers, and those could warrant really digging in: why do I see that? It could be an error in your configuration, for example, or just somebody who's really new and doesn't have much data, but it definitely warrants looking at.
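The box plot Charlie describes differs slightly from the textbook version (whiskers are often drawn at 1.5 times the interquartile range); here is a minimal sketch of the variant he outlines, with whiskers clipped to two standard deviations and anything beyond flagged as an outlier. The sample values are hypothetical coding-days figures:

```python
import statistics

def box_plot_stats(values):
    """Median, whisker ends (clipped to mean +/- 2 standard deviations), and outliers."""
    median = statistics.median(values)
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    lo, hi = mean - 2 * sd, mean + 2 * sd
    inside = [v for v in values if lo <= v <= hi]       # points the whiskers cover
    outliers = [v for v in values if v < lo or v > hi]  # isolated dots beyond them
    return {"median": median, "whiskers": (min(inside), max(inside)), "outliers": outliers}

# One team's average coding days per week; the last engineer is an isolated dot
team = [4.2, 4.0, 3.8, 4.1, 3.9, 1.0]
print(box_plot_stats(team))
```

An outlier like the 1.0 here is exactly the "dot out in isolation" that warrants a conversation: a misconfiguration, a brand-new hire, or a support burden.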
Charlie Moad
So what's interesting here is, again, you see a wide variety of behaviors. Some teams, you'll see, are very consistent in their profile, but other teams have massive deviation. You said be curious earlier, and I love that. That's what I always emphasize as a manager: don't assume anything looking at this. You will be surprised when you dig in and ask, well, why?
Charlie Moad
Why are you seeing this? As a director, I'll go to my managers and say, does this match your expectations, or do these things surprise you? And then have them go talk to the developers too: hey, what's driving this? Obviously, we can also dig into the tool itself and, for coding days in particular, look at commit frequency. And in some of these, what we found is we had developers who were coding up features for days at a time on their laptop, never making a single commit. Obviously there's a risk element there, even if computers are a lot more reliable than they were when I was in college.
Charlie Moad
But there's still always that element of: it's safer to commit your work and push it. Just a basic good practice. Then for others, it's like I mentioned: oh, I spend a lot of time in our public Slack channel doing support, and that drives a completely different discussion. And on the flip side, for teams that range from very high to very low, you might wonder, oh, is this team really delivering on the heels of one or two people?
Charlie Moad
And we need to make sure that we're building a team profile where, if one person leaves, it's not detrimental.
Austin Bagley
And I think there's something people need to take away from the way you just described all of that, which is that you took every single one of these insights, and your immediate response was: how do I, as a leader, go and help these teams and support them in a better way? It isn't about, hey, what's up with this metric, why aren't we doing better?
Austin Bagley
It's like, OK, these are now insights for me to go and make changes. Hey, I've got to go block and tackle, I've used that phrase a lot. I've got to help them get out of these support channels so that they're staying focused on what they need to be doing, or I need to do some cross-collaboration or training so that the team is more consistent from top to bottom. I think that's what everyone should take away from this: it's about how you as a leader can use these insights to go help the teams, rather than just monitoring productivity.
Austin Bagley
Great. So that's one of the things I like about this, but you also did some really cool research around your code review process. I'd love to dive into some of the insights around that and what you've done on your team to use those insights.
Charlie Moad
So I mentioned at the onset that time to first comment and PR iteration time were the other two metrics, in addition to coding days. This chart here is the same type of chart I described before, but we're looking at time to first comment across delivery teams. What you see here is a large disparity across teams, and in particular you see some teams where people are, on average, waiting multiple days to get feedback. I think there's also a theme where, with teams of two, you get this behavior, and some of those teams of two are globally distributed. As a manager, this is in my face.
Charlie Moad
It's like, oh, we've really set them up for a sense of working in a silo, or getting very slow feedback. It's not really on them; that's just the dynamic of how they were structured, and maybe we need to reassess that. So as we look at this, it drives a lot of the discussions I talked about before. Some of the teams on the left, we might go to them and say, hey, what are you doing to get feedback so consistently fast? Several of our teams, actually I think a lot of our teams now, have dedicated Slack channels only for requesting PR comments, because those requests can get buried in the noise of the normal team channel.
Charlie Moad
So having a separate channel dedicated to that has helped drive good behaviors. That's just one example.
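For reference, the metric those teams are optimizing is simple to state. A minimal sketch (the timestamps are hypothetical; Flow derives this from PR event data):

```python
from datetime import datetime, timedelta

def time_to_first_comment(pr_opened, comment_times):
    """Elapsed time from PR creation to the earliest review comment, or None if uncommented."""
    after_open = [t for t in comment_times if t >= pr_opened]
    return min(after_open) - pr_opened if after_open else None

# Hypothetical PR opened Monday 09:00; first review comment lands at 13:30 the same day
opened = datetime(2023, 5, 1, 9, 0)
comments = [datetime(2023, 5, 2, 10, 0), datetime(2023, 5, 1, 13, 30)]
print(time_to_first_comment(opened, comments))  # 4:30:00
```

Tracking the median of this value per team over time is what the box plots in this discussion visualize.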
Austin Bagley
And for teams who have that shorter median time to first comment, and ultimately time to close, how is it benefiting them to work on those specific metrics?
Charlie Moad
Yeah, I think what's nice with this is, again, time to first comment is kind of proven through research to be a huge lever that brings other metrics with it. So essentially we're seeing code get shipped faster, we're catching potential issues quicker, and it boils down to this: getting feedback early and often is always going to be a plus.
Austin Bagley
Yeah, I like that a lot. And you've got a couple of comments on these teams on the left here as well. Do you want to cover that?
Charlie Moad
Yeah, I thought it'd be fun to tell a little story. These two teams here are actually in the same pod, as I described, and they work on the same code base. They'll usually take customer defects and divide them amongst each other. So they're very close, and you'd expect them to operate very consistently.
Charlie Moad
In time to first comment, you see these teams effectively have the same setup and the same process. Sometimes they'll even comment across teams, and you have a very, very similar profile. So here is PR iteration time. Again, this is, once I put a comment out there, the time to actually getting the final commit to push this out. It's kind of the inverse of time to first comment, the way I think of it. Well, what's interesting is that I expected these two teams to be right next to each other.
Charlie Moad
And actually, what was fun is I took these charts to this pod, and we all talked about it in an open forum to understand it. In this one in particular, you can see concretely that one team is acting on that comment very quickly, while the other is just stepwise higher. Even within the org, they're at very different ends of the spectrum. So this was just very interesting to me: what's driving that? Is it bad in one direction or another?
Charlie Moad
And what I learned is that the team on the left is very much running formal Sprint processes, where they work one ticket at a time: they'll get the comment, get it out there, and move on. The team on the right is working more of a Kanban style, so they'll get a ticket, put it out for PR review, and then just go pick up the next thing and move on. They were being much slower about going back to deal with those comments. It's like a race to get it out there, and then, OK, I'll come back to it when I can.
Charlie Moad
So we're still digging into other negative effects that we see in other metrics as a result of that. No concrete answers there, but my intuition would be that there's always risk; time introduces risk. In the sales world we say time kills all deals. Well, you can imagine here all the negative things that could happen if code sits out there.
Charlie Moad
So I would generally bias towards, hey, it's probably good to try to get work done so you're not running too many streams of work. It comes back to getting into a flow and not context switching as much.
Austin Bagley
Yeah, you have all the costs of switching. You go back to something you were working on a few days ago and have to get back into the swing of things. The repo may have changed in the time you were away, and now you have to incorporate all of that. So yeah, I definitely agree there's a lot of risk that comes from time. And you actually have one more slide talking about some of the insights you generated from your code review process.
Austin Bagley
Tell me more about what's happening here.
Charlie Moad
Yeah, so the previous three you saw were the box plots I talked about: I wanted to see across teams and within teams. But I've said a couple of times now that there's research saying a quicker time to first comment makes it quicker to merge code. I wanted to see that visually. So what we concretely have here is, on the left axis, time to first comment, and on the bottom, the time to merge metric.
Charlie Moad
And again, you would expect, if this research is right, that they would correlate. And I hope, if you look at this chart, you can see they do fairly concretely correlate; there's a pretty strong relationship between those two things. The distribution was also really fascinating. Naively, I would think, OK, hey, the quicker we merge, the better for the org.
Charlie Moad
And if you click through to the first animation, there are some clusters here that I want to talk through. This is a little bit of my subjective view, but I think it goes to show that you don't want to over-index on just driving the metric to zero, for example. These three teams, I jokingly call it debugging in production, for various reasons. One of the teams is actually a data engineering team, and a lot of the way they do work is deploying config changes to our production environment, to maybe tweak resources or change a backfill for one team.
Charlie Moad
So what we're actually measuring there is more how much they configure their applications than how much code they're writing. With that team, we're actually talking about how to take those configuration changes and exclude them from Flow, because they're not telling us anything about software development performance. The other two are the teams we talked about earlier, and as you saw on the time to first comment slide, they're two of the quickest teams we have. What you're not seeing on this slide is that in the last two quarters, we had a big spike in the number of bugs coming back.
Charlie Moad
And so for me, it's like, hey, we're moving too fast. It was good to show this to the team and say, hey, this gives you space to slow down, spend a little more time on the PRs, test your features. It's not a race to production, right? We have to balance getting things out, not perfect, you've got to get them done, the whole done-versus-perfect tension. But at the same time, this concretely showed me we were moving a little too quick.
Charlie Moad
So we've started to get that message out there. I thought it was really interesting that it correlated to the number of defects we were seeing.
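The relationship Charlie plotted can be checked numerically with a Pearson correlation coefficient. A minimal sketch with hypothetical per-team medians, in hours:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sx = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sy = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-team medians: hours to first comment vs. hours to merge
first_comment_hours = [2, 4, 8, 24, 48]
time_to_merge_hours = [10, 14, 30, 70, 120]
print(round(pearson_r(first_comment_hours, time_to_merge_hours), 2))
```

A value near 1.0 on real data would support the claim that faster first feedback pulls overall merge time down with it, though correlation alone doesn't establish the direction of cause.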
Austin Bagley
That is a big-time connection to our last episode, because we talked about myths of productivity. One of the myths is that productivity is never stopping. In reality, productivity is stopping; it's being adaptive and being sustainable, which you've really implemented here with the team. I love how you've taken the data to identify an area where you actually can slow down a bit.
Charlie Moad
So I call out these three dots; those are teams of two, for various reasons, and I should emphasize they're the only teams of two we have. So seeing where all three of them fall really highlights for me, hey, we put them in a position where it's probably really hard to be highly productive relative to what the expected profile should be. And so I found that really interesting as a cohort as well.
Austin Bagley
Yeah. The fact that they're all up here in the top right quadrant, that you don't have some, you know, down here in the left, it's a pretty clear directional insight, which is really interesting.
Charlie Moad
Yeah. So these two teams in particular are within the same pod, have the same manager, and popped up here. So there was a lot of conversation about what's driving this. And what we found is these teams were both running Kanban and were also very, very heavy on the upfront planning side. And one of the things we expected was that upfront planning would pay off at the time you go to write code.
Charlie Moad
But what's interesting is the metrics told a different story. And so now, based on this data, these teams are trying to break up their work even more. And anecdotally, I actually had a handful of one-on-ones with them today and checked in on this, and all of them are generally, I think, more comfortable with the work they're doing. They were saying the amount of changes they were trying to push through, and the cases they were testing, was kind of overwhelming. And now that the work is broken up, they feel a lot more confident with the changes they're introducing.
Charlie Moad
And then I even met with their QA engineer and asked, hey, are you doing OK if you're getting a lot more tickets? And they were like, actually, it's better, because I'm not having to test a bunch of scenarios. It's more just, you know, very simple tests, and I'm able to get them through quicker. So just universally, the data helped us have those conversations and try something new. This was a case where we really just said, let's rip off the Band-Aid and try a new approach. And, you know, early signs say it's really driving workplace happiness, and happier developers are going to be more productive too.
Charlie Moad
So that's something we'll keep having conversations about and digging into. But again, before Flow, we had no visibility into this.
Austin Bagley
Yeah, I like that. You actually go back to one of the principles you expressed at the top of this, which was autonomy with accountability. The data sparked the conversation, but then you relied on the engineering team to say, well, what do we do about this information? It wasn't a top-down mandate; it was a conversation you had together as a team: hey, what are some things we want to try differently?
Austin Bagley
And it looks like those worked, which is pretty awesome to hear.
Charlie Moad
Yeah. And then finally, I just kind of showed the blob in the middle. I think what we're learning, at least six months in, is that that general area is kind of our happy state for where we are as a team and the demands we have of our engineers. We'll be checking this quarterly for sure, and I'll be interested to see, as a complete cohort, are we seeing a general movement of that group or not?
Charlie Moad
And yeah, very interested because again, this is just one snapshot in time, the first snapshot. So we'll see where it goes from here.
Austin Bagley
This is awesome. I think what we're seeing here is a pretty advanced use case of Flow, where you're nerding out and diving into the API. You're pulling out insights that are having broad impacts across the entire organization. It's helping you uncover the best way of working for your teams, which is so great. We may actually have to have some sort of follow-up session where you go through your ninja skills with the API. But you also have some pretty cool stories right within the Flow app itself, in terms of helping you do some testing and hypothesis vetting to uncover whether there are better ways of working.
Austin Bagley
So I'm going to talk through some of those. And I believe the first case is you did some testing with GitHub's Copilot, correct?
Charlie Moad
Yeah. We were lucky enough to get a 30-day trial of GitHub Copilot, and we were given enough seats that we could open it up to all of our individual contributors. Effectively, what we did during those 30 days is ask anybody who wanted a seat, and we gave it to them. We weren't overly prescriptive on how much they had to use it or anything.
Charlie Moad
It was just available. At the end of it, I sent out a survey to everyone saying, hey, please fill this out. Part of that survey was understanding how often they took advantage of Copilot, their frequency of usage. I was also looking at their level within the org: were they associate, mid-level, senior, staff. Effectively, I wanted to get a sense of whether GitHub Copilot drives the results that they tout, and Flow seemed like a logical way we could see this. So what you're seeing here, during that 30-day trial window: on the left is about 30-plus people who said, yes, I used it, and I used it every day. On the right we have everybody else in the org.
Charlie Moad
Maybe they said, I used it a couple of times a week, or I used it once a week, or anything less, or they didn't even respond to the survey, right? So generally, what I wanted to see is those two buckets. And all the up and down arrows you're seeing compare the 30 days of the trial to the previous 30-day window.
Charlie Moad
As an org, we're going to see our delivery naturally ebb and flow, and I didn't want to confuse the data if, say, engineering throughput went up 20% across the board. So I really wanted to look at everyone else's throughput as well. Looking at raw throughput, everyone on the right went up 18.7%. That's kind of my baseline of what we would have gotten if we averaged everyone. But when I look at the Copilot users who used it daily: 85%.
Charlie Moad
So this isn't an error-tolerance type of difference. This is a massive change, right? Every single metric you see here is directionally right and pretty substantial. Efficiency rate is an interesting one. This is code that you push to production that doesn't get changed within a window,
Charlie Moad
I think it's 30 days. It shows they're slightly less efficient, but again, they're so close it's hard to know if that's related. But it was interesting; that is kind of the outlier where it's directionally inverted from what you would think. Another metric that people probably find the most interesting is TT100, time to 100 lines of productive code. And there you can see that 8.7 versus 29.6 change.
Charlie Moad
It's pretty substantial to drive that much more productivity. So all in all, I was expecting to see something, but I can honestly say I was shocked at the difference, is what it boils down to.
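The comparison method Charlie describes, where each cohort is measured against its own prior 30-day window so org-wide ebb and flow cancels out, can be sketched as follows. The raw throughput counts below are invented purely to reproduce the 18.7% and 85% figures quoted above; only the percent-change-per-cohort method comes from the discussion.

```python
# Sketch: compare a trial cohort against everyone else, each measured
# against its own prior 30-day window. Raw counts are hypothetical.

def pct_change(before, after):
    """Percent change from the prior 30-day window to the trial window."""
    return (after - before) / before * 100

cohorts = {
    "daily_copilot": {"before": 400, "after": 740},    # ~30 daily users
    "everyone_else": {"before": 2000, "after": 2374},  # rest of the org
}

for name, w in cohorts.items():
    print(f"{name}: {pct_change(w['before'], w['after']):+.1f}%")
# daily_copilot: +85.0%
# everyone_else: +18.7%
```

Measuring each group against its own baseline is what separates the tool's effect from a general org-wide uptick in delivery.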
Austin Bagley
Oh, this is the soda shake. So tell me more. Are you moving forward with Copilot now because of this? What are your go-forward plans, and do you have any advice for teams who are exploring Copilot?
Charlie Moad
Yeah, absolutely. So within Flow, I just created two teams, as they call it. One was the cohort who said they were using it every day, and the other was everybody else. So it's very easy to set up this comparison. I was just able to go to the report, pick the date range, and then flip between the two groups.
Charlie Moad
What we're doing on a move-forward basis is essentially, for people who raised their hand or expressed interest in a license, we're giving it to them immediately, but I'm keeping track of when they got access. GitHub also has some reporting on usage. And what I want to see is: this was a 30-day window.
Charlie Moad
What does this look like, you know, 90 days in, 180 days in? And get a sense of the long-term impact of having this tool.
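Tracking when each engineer received a seat, so that later 90-day or 180-day comparisons only include people who have actually had the tool that long, could look something like this. The names and dates are hypothetical.

```python
# Sketch: record when each engineer got Copilot access, then select the
# cohort that has had it for at least N days. Names/dates are made up.
from datetime import date

access_dates = {
    "alice": date(2023, 6, 1),
    "bob": date(2023, 8, 15),
    "carol": date(2023, 9, 1),
}

def cohort_with_at_least(days, today):
    """Engineers who have had access for at least `days` days as of `today`."""
    return sorted(u for u, d in access_dates.items() if (today - d).days >= days)

print(cohort_with_at_least(90, date(2023, 10, 1)))  # → ['alice']
```

Filtering the cohort this way keeps a 180-day comparison honest: someone who got a seat last week shouldn't dilute the long-term numbers.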
Austin Bagley
Yeah, you're really expressing one of my favorite use cases of Flow, which is: hey, I'm going to go make a big investment or a big change. This is one example, implementing a new, obviously pretty highly touted developer tool, but there are lots of other changes. I'm going to go reorg a team. I'm going to add in some more support here or there. I'm going to change the way that we do planning.
Austin Bagley
And then taking a step back, looking at the numbers, and saying, did the change that I instigated have the effect we were hoping for, yes or no? And I love that you've taken these really simple metrics to say, this is what that change resulted in. There are lots of nuances here, lots of things you can dive into further. But this is a great kind of first view.
Austin Bagley
The gap between, you know, a 72% lift in productive throughput versus 16%, that's way too much to explain through coincidence, right? And so I love that you're using this to really inform the way that you operate, which is awesome. This has all been wonderful conversation; I've had a great time talking with you about all the different ways you're using these data and insights to help drive your team forward. If you were to give any parting advice for engineering leaders who are implementing Flow, what advice would you give them, and what are some things you want them to focus on within Flow?
Charlie Moad
Yeah. I think what I've found most powerful, especially recently as we're having our quarterly performance conversations, is that it's really dawned on me that just having this baseline of data and this common view has made it so much easier to have those conversations. You no longer have to say, "I think" or "it feels like." You're just looking at the numbers together and seeking to understand, and, like you said, being curious about that as a manager. And it takes away the accusatory tone or the tension that you might have had in those conversations in the past.
Charlie Moad
And it gives you common ground on a move-forward basis to say, here are some of the expectations where I need to see change, if there are challenges. And I think, especially now in this remote world a lot of us are in, it's more important than ever. I'm not sitting in the same room next to someone, so I don't have that real instinctive pulse of how they're clicking with things. At the end of the day, we have to rely on tools like this to understand it.
Charlie Moad
This remote kind of environment is not going anywhere, and it might even scale up. So I've really, really liked that as a manager. And funny enough, it kind of aligns with what Salesloft does for sales reps. We have a lot of tools that look at activities and insights to help managers have better coaching conversations, and I remind my teams about this all the time. I'm like, oh, that's the software you're building.
Charlie Moad
Flow is doing the same thing for you as a developer as we are doing for sales teams.
Austin Bagley
So you should feel very aligned and embrace that approach. I love that. This has been such a great conversation, and you have such incredible insight into the way to really do this right: to implement these insights, to bring your entire team along for the journey, to be curious, to uncover insights, and then, what's most important, to go make changes and try new things because of what you see in those insights. I love everything you've shared here today. So thank you so much for the time.
Austin Bagley
Charlie, appreciate you taking the time out of your day to talk to us. We are super excited for all of you who have joined us to have spent this hour with Charlie. And we've got three more episodes coming. We're going to be talking about healthy metrics with Carol Lee. We're going to be talking about efficiency and optimizing your workflow.
And then we're going to wrap up with Manulife talking to us about how we advocate for our teams with data. And so great sessions coming up. Thank you so much for your time and we will see you next time.