Kevin is an experienced leader of high-profile, high-performing product, research, and shared technology engineering organizations. He is a frequently invited speaker at conferences internationally on building strong engineering teams, often talking about learning from failure, and he has extensive experience building products using Lean, Kanban, Scrum, and Extreme Programming methodologies.
In this episode, Kevin tells his favorite mistake story about the launch of “Spotify Now” when he was an engineering leader at Spotify. Why was there pressure to launch? What mistake did Kevin and team make regarding data from a small group of initial users? How did Spotify leverage its culture of “handling failure well”? What did Kevin learn?
Questions and Topics:
- How do you balance the cost of lost customers vs. the cost of embarrassment?
- Being surprised by the results of experiments
- Was Spotify Now a problem of a bad concept, bad execution, or bad design?
- Losing customers as “the cost of learning”
- Organizational learning to not get into this situation again?
- Doing retrospectives on EVERYTHING to remove the stigma?
- The Forbes article that Kevin was quoted in
- People who strongly believe in “accountability” — punishing failures — can you change their minds?
- Failure vs. mistake? — how would you compare those words?
- Tell us a little bit more about DistroKid – strengthening this culture of learning from mistakes?
Video and Blog Post by Kevin:
- Fail Fast, Fail Smart… Succeed! by Kevin Goldsmith
- Blog post version of the story at Spotify
Watch the Full Episode:
Subscribe, Follow, Support, Rate, and Review!
Please follow, rate, and review via Apple Podcasts or Podchaser or your favorite app — that helps others find this content and you'll be sure to get future episodes as they are released weekly. You can also become a financial supporter of the show through Anchor.fm.
You can now sign up to get new episodes via email, to make sure you don't miss an episode.
This podcast is part of the Lean Communicators network.
CTO Kevin Goldsmith On Leveraging Failure To Drive Success At Spotify And DistroKid
Our guest is Kevin Goldsmith. He's the Chief Technology Officer of DistroKid, the world's largest distributor of digital music. Kevin is an experienced leader of high-profile, high-performing product, research, and shared technology engineering organizations. He's often invited to speak at conferences on building strong engineering teams, and he often talks about learning from failure. I watched a version of that presentation. You can hear Kevin giving his presentations and learn more of his perspectives than we can cover in this episode. He has a lot of experience building products using methodologies including Lean, Kanban, Scrum, and Extreme Programming. His website is KevinGoldsmith.com. With that, Kevin, welcome. How are you?
Thank you. It's absolutely great to be here. This is one of my favorite ideas for a theme of a show.
Thank you. It's your favorite idea, and we'll hear about a favorite mistake and talk about failures and mistakes. I know you've thought and talked about this a lot. You've experienced a lot of lessons learned here. I want to ask first about your website, to delve for a second. You also describe yourself as a photographer and a musician. On the music side, what's your instrument or instruments?
The bass guitar is the instrument I’ve been playing the longest. I play other things as well, like piano, cello, and guitar.
There's some music linked on Kevin's website. I guess composer is the label you would accept as well.
Yes, composer seems a bit grandiose for what it is I do. I do write music.
I encourage people to check out Kevin's website KevinGoldsmith.com. As we normally do here, Kevin, I'm going to try to tee you up here. Looking at the different things you've done throughout your career, what would you say is your favorite mistake?
My favorite mistake is related to my time at Spotify. It's a product launch I worked on while I was there called Spotify Now. The reason it's my favorite mistake is because you've almost certainly never heard of it.
I have not, and I'm sorry to say I’ve never been a heavy Spotify user. I know a lot of people are. Go ahead.
Even if you were a heavy Spotify user, you would not know what this thing was called because it didn't work out. This was in 2015. For those of you who don't remember the history of streaming, I was working at Spotify and had been there for a few years. Apple was launching Apple Music, its streaming platform, and we were well aware it was coming. We decided that we were going to have a very big product launch right before Apple Music was announced.
We wanted to do that to say, "It's great that Apple's getting into the space." Obviously, given Apple's size, they were probably the largest retailer of music at that point. Them getting into streaming was a big deal. We wanted to say, "We're the true innovators here." We had a bunch of different things that were in various stages of development. We decided to pull them all together and put a unified theme around them: Spotify Now. We had a big announcement and launched it right before Apple Music launched. We had a huge press event in New York with all these famous musicians and artists and launched it. It did not go so well, to be clear.
Before delving into that, what was the difference between Spotify as it was then or as it is now and Spotify Now as a product?
Spotify Now was the idea that we were going to find the right music for every moment. That was the unifying idea. People use Spotify, to this day, for many hours a day. It's not like some games or social media where you're on it, then you're off it. People use Spotify for hours every day. What we wanted to say is, "You're at the gym, so you're going to listen to your workout playlist. You're in the car, so you're going to listen to your commute playlist, or we're going to present things we think are going to be appropriate for you right now." It also included a bunch of brand-new things for the platform and a bunch of new innovations. I'm trying to hold back on telling you exactly what they are because, thanks to the learnings from this failure, some of those things are still on the platform, well used, and worked out. A bunch didn't make it.
I didn't mean to derail you.
No, it's okay. I'm trying to build the suspense of the story. We had all these things in various stages of getting ready. We put a unified theme around them. Normally, Spotify is a company that works very lean, or at least it traditionally did when I worked there. That meant we wouldn't release anything to broad customer use until we'd released it to a small customer base and proven it.
Spotify was very good at handling failure well. This is an important part. I learned a lot from the company about how it handled mistakes. Given that this was a big event and the press were certainly paying a lot of attention to us because of the impending launch of Apple Music, we didn't want to do our normal thing and release it to a small group, because we were worried the press would find out as we were building all these features.
I was in charge of this effort from the engineering side, so I was running the project from that perspective. As the people building it used it, it didn't seem as exciting as we'd hoped. We were getting nervous. The company didn't want the press to find out about it. They wanted to keep it secret and have this big event. The engineering team said, "This is not normally the way we do things, and it doesn't feel right to us. We'd like to try it."
We did try it. We eventually convinced the company that we needed to, and we launched it to a very small cohort of users. We were very lucky none of them turned out to be in the press, so it didn't leak. We got amazing, massively unexpected results, and this is where my mistake came in. We got completely unrealistic results from this small cohort of users. We'd been working so hard, trying to get to this launch. It was a very ambitious project.
That convinced us that what we were doing was going to work out amazingly well. We were able to launch on time. It was a huge success from a project perspective. We launched it. Again, Spotify being very conscious of learning from mistakes and being smart about how it handles failure, we didn't launch to 100% of our users. We launched to a small percentage to test it and validate what we'd seen. What we found was that when we gave it to the general public, not only did it not impress them, it made people dislike the app. Giving these features to people made them quit using Spotify instead of getting more people to use it.
Quit and cancel paying?
Quit and cancel their subscription. Our pre-release results showed a 6% product retention increase. For a subscription product, retention means people continue to use your product, and 6% more people using your product, especially at the scale of Spotify, is amazing. That's why I said it was completely unrealistic. Instead of seeing a 6% increase, we saw a 1% decrease, which meant that for every 100 people we gave these new amazing features to, 1 of them quit using the app. It was not great.
If I can interject with a question here, the sample that you had, did it turn out to be unrepresentative of the broader user base? Was that a mistake? Was that part of it?
No, it wasn't. There was an engineering problem that resulted in that 6% test increase and then a 1% real-world decrease. That problem turned out to be that we had accidentally released two tests to the same cohort of users. One of those tests was all these new Spotify Now features. The other test turned off advertising in the free product, essentially making the free product the same as the paid product. Of course, if you take commercials away from a free product, you get a substantial increase in retention because you don't have ads interrupting your experience.
It was co-mingling the results of those two experiments.
When was it discovered that the ads had been turned off?
It was discovered after we launched. Once we launched and saw a 1% decrease instead of a 6% increase, we had no idea what had gone wrong. To your point, the group we gave it to was this group in New Zealand. We picked a random group of people in New Zealand and said, "Maybe the folks in New Zealand aren't representative of the larger set, so 6% maybe is weird," but we would've still expected to see maybe a 2% or 3% increase. When we saw a 1% decrease, we realized something bad had happened, but we didn't know what. Another thing you do when you want to handle failure well is avoid big changes, because with a big change, if something goes wrong, it's hard to know what went wrong. There are too many things.
This is exactly what we'd done, which we normally didn't do. We would normally do lots of small things, but because we had released a whole bunch of things together, we couldn't be sure whether it was something in the product we got wrong that was sandbagging everything else we were doing, or something else. Honestly, it took us a while to figure out, to walk things through, and to try a bunch of different experiments before we found this issue.
Sometimes it takes a while to figure out the issues. You have to try a bunch of different experiments to find them.
To me, in problem-solving language, whether it's Toyota production system and manufacturing, if there's some problem, one word that gets used a lot is containment. First, contain the problem before you can then figure out the root cause and figure out what to do. Was part of the containment either to stop the rollout or to pull back the changes?
We absolutely could have pulled the changes back. If we had not had a massive press event and hundreds of articles announcing all these great features, we probably would've done that, but we did contain it. This was Spotify having learned from places like Toyota and things like that. When we launched this to the public, we only launched this to 1% of users in our top four markets. By definition, it was contained.
The challenge we had was that we got such an unexpected result when it went to the public, and the mistake was so big because there were so many changes in one release, that we couldn't grow that percentage. We were losing customers. The cohort was small enough that, even though we wanted to do as many tests as we could, it was hard to get to statistical significance.
We wanted to run lots of tests and we couldn't. At a certain point, we decided to grow the container. One of the program managers did the math to find out how much money the company would lose by going from maybe 1% to 5%. We decided it was a reasonable price to pay for what we'd learn. Somewhere in there, through lots of experimentation, it literally took us months of different experiments to figure out what it was in these Spotify Now features that wasn't resonating with customers. We were able to release the rest of them. That process was incredibly painful.
It's a fascinating set of dynamics there: being under pressure from a competitor, and it was announced publicly. There are all kinds of business dynamics that would pull you away from the Spotify way. I can see where you could extrapolate the cost of lost customers, but it would not be easy to calculate the cost of embarrassment or the cost to reputation. You can't calculate that. How do you weigh that?
The great part about having so much understanding of the dynamics of your product is that we could literally figure out how much money we were losing every day and make decisions based on real cost. The reputational cost, all those things, that's an impact. We couldn't roll forward, no matter what. People were wondering, "I read this article about this new feature. When is this feature coming? I haven't seen it," and we couldn't give it to them. At that point, the reputational cost was going to be what it was. We didn't have a choice on that one, but we did have a choice on how we then rolled it out. That's how we worked with it.
You talked about problem-solving. It's tough. In general, in problem-solving, whether it's continuous improvement on an assembly line or with software, the ideal would be making one change at a time so you can very methodically test cause and effect. There are some situations where the urgency might be so strong: we've had a quality catastrophe, and we're producing nothing but defects.

In healthcare, we've got some high infection rates. You might say, "We don't have the luxury of doing one test at a time. We'll take the risk of co-mingling the results." With that urgency, you say, "We're going to try a bunch of things." Maybe that's a mistake, but maybe it goes to show it's very situational. What are your thoughts on figuring that out?
It's very situational. I like your analogies there. I don't want to equate what we do to testing in healthcare because the urgency is very different. We had an urgency we'd imposed on ourselves and then a financial urgency, but we could have kept it at 1% forever without much worry. The thing forcing it at that point was us trying to get this stuff out to customers. It is a little bit closer to the assembly line.
It was this problem: we'd failed so big that we didn't know what we'd done wrong. We had to reverse engineer all the decisions and test them after the fact to walk back and figure out what the issues were. Folks had worked hard to be able to launch on this date. This was at the beginning of the summer of 2015, after working hard through the winter to get ready, and then they lost their summer.
You don't mess with summer, but we had to mess with summer. The urgency was self-imposed, but also, we'd worked hard and wanted to get these things out. That is one of the reasons we decided to grow the cohort: we were locking up a lot of the product engineering organization, basically most of it, trying to figure this out, so we needed to move things forward.
There was that known cost of learning to move forward and figure it out. As you were running experiments and trying to get feedback, was it a bad concept or bad execution? If I said, "It's sleep time," and then suddenly some Swedish death metal starts playing, that would irritate you as a user. That would be a bad user experience. What were some of the core things that you learned, and were you able to fix it?
Through trial and error and different combinations of tests, we figured it out. There was a bunch of stuff that all came out together with this. This is when we introduced video in Spotify and added video to the application. This was also when we introduced podcasts to the application. We'd introduced a lot more algorithmic playlists. We'd also introduced a lot more editorial playlists. We'd added a brand-new user interface. If you've used Spotify, there are a couple of different paths through it. One is, "I know I want to listen to Taylor Swift. I want to listen to this song from Taylor Swift. I'm going to go search, find this song, and play this song." Another usage of the app was, "I just want to listen. I'm reading or I'm working. Give me something I want to listen to."
We'd done well as a company with, "I want to listen to this song by Taylor Swift right now," or, "I want to listen to something very much like this." We built an entirely new user interface around, "Play me the right music for right now. Give me what I want to hear." That did not work whatsoever. A lot of the problems were with that interface. While we thought it worked well, and while this test made it seem like it was working exceptionally well, it wasn't working at all. The other thing we added was the Spotify Running feature, which would adjust the music's tempo to your running cadence. We'd determine your running cadence and then match the music to it.
From the motion sensor in your phone at that point, probably?
Yes. Another thing that we did was the party mix DJ. That was part of this as well. All those did okay, but nothing like what we'd been expecting. Once we'd figured that out and removed that interface, we were able to ship everything else and get a modest increase in retention, something more like what I should have expected. My mistake in this was I saw something completely unrealistic, and instead of saying, "That seems odd. Maybe that's not right," it was like, "This is great. Full speed ahead. We've struck oil."
That's a great point, because one other framework, in the language that people use in different settings, is sometimes called the Deming Cycle: Plan, Do, Study, Adjust. You frame an experiment, you do or test something, and you study the impact. You did those things. It's such a good reminder to think, "Do we have bad data?"
The data we got confirmed my bias that what we were doing was amazing. As the engineering leader of the project, I wanted it to succeed. While I had concerns and doubts, this gave me permission to put them all away. I knew my doubts were wrong. Everything's going great, and this is going to be amazing and spectacular.
To come back to your analogy of healthcare or an assembly line, you have tolerances. If you're machining parts or you're administering a medication trial, those tolerances have a lower bound, but they also have an upper bound. When a result exceeds the upper bound, that's also a cause for concern. What I was looking for was to make sure we were above the lower bound, and I didn't set the upper bound to say, "This doesn't seem right." An outstanding result would've been a 2% or 3% increase. Seeing 6% was silly.
That seems like a clear lesson learned if you get results that are far outside of your hypothesis, do some digging. Have you been in a similar situation? I'm sure you're going to be mindful of that running future experiments.
I haven't had this exact situation repeat itself, but certainly, I've gotten much better at saying not only, "This doesn't look right," or, "This is concerning. We expected to do better," but also, "This seems too good. How confident are we? Let's make sure that the data we've gathered is correct." I absolutely learned from that.
It comes back to scientific problem-solving or scientific improvement. It's this process where you have a hypothesis, you run a test, and hopefully, you're predicting the results. You used a phrase about being surprised. We need to be prepared to be surprised. That's part of being a good scientist. That's a different mindset from, "We've got this idea. Of course, it's going to work." I'm sure you weren't so hell-bent on saying, "I know it's going to succeed. I can't admit that there's a problem," but it sounds like there was a different cognitive bias. That's being human.
I was the engineering leader, not necessarily the product leader, so these were not my ideas. It wasn't that I was so wedded to, "I am birthing my vision to the world, so it must be right." That wasn't necessarily the case here. It was more, "I'm working hard. I'm working with folks. I'm trying to motivate and encourage them." When I saw something that maybe should have made me question things but instead reaffirmed that what we were doing was successful, my wanting to be a successful engineering leader made it feel great, instead of making me say, "Something might be wrong here."
It sounds like there are a lot of engineering and business lessons. I know you've been away from Spotify, so I pose this as a rhetorical question. I don't want to ask you to speculate. What's the organizational learning for executives at Spotify saying, "Here are the business dynamics, and here are the pressures. How do we avoid putting ourselves in that situation again?" Was the launch of Apple Music a once-in-a-company-lifetime event?
I've been gone from Spotify for a number of years now, and the company has evolved like companies do. Certainly, during my time at Spotify, one of the things that impressed me about the company, having worked at lots of other places before, was that learning culture. When I joined, everyone kept talking about the biggest launch before Spotify Now, which happened before I joined the company: there had been a massive press event announcing a ton of new features that hadn't been built yet but were claimed to be coming soon, and then weren't released for quite a long time. Everybody remembered that.
One of the things that is really impressive about Spotify is their learning culture.
The company had learned from that mistake. When we did the Spotify Now event, we didn't even set the event date until we had extreme confidence that we would be able to launch on that date. In fact, for better or worse, the minute they finished the event in New York, we turned it on in Stockholm, because we'd learned from that prior launch. Because Spotify was exceptionally good at learning, and this was the biggest project I'd worked on at Spotify to that point, I also had all the retrospectives of all the prior projects. We did retrospectives on every project. I could go back and learn what worked well, what didn't work well, and what I should be looking out for. That helped me a lot.
The lessons that we learned from Spotify Now taught us a lot about how we released things from then on. An important part for the company as well is, to your point, we could have said, "This didn't work. Hopefully, everybody will forget. We'll kill all these features and go back to business as usual." Instead, the company wanted to understand, "We thought this was going to work, but it didn't. Let's learn from it." Of those features I mentioned, obviously podcasts but video as well, in the time since I left Spotify, some are no longer in the product, but some have become massive businesses. Spotify persevered and wanted to learn the lessons instead of sweeping them under the rug and pretending it never happened.
You mentioned retrospectives. One point you made in the webinar presentation that I saw you do was the idea of doing retrospectives on everything to remove the stigma of the "you've been called in because you're in trouble" retrospective. Can you think of a scenario where the launch of features or some project was deemed a success, but the team could still look and say, "Are there things that could have been better? What did we learn?" Tell us more about that.
I’ve continued this practice that I learned from Spotify and other companies, but there was another good Spotify example. One of the things that impressed me was right after I joined the company in 2013, especially back then, there were big artists that didn't want to be part of streaming, the kind that people want to hear on their music platform. I don't think it was The Beatles. That was later. I forget who it was. It might have been AC/DC or some other artists. I don't remember right now.
Was it Metallica because they were so anti-Napster?
Metallica was later. Metallica was while I was there. It might have been Led Zeppelin. We did this launch for them. A band like that is a huge event for a company, especially back in 2013, before Spotify was as massive as it is now. It happened right after I joined. A lot of that work happened before I had even got there. I just helped finish the project with my team. We had a big party because it went great and everyone was happy. I was surprised that a day later, I got this four-hour meeting on my calendar: "Let's go through all the mistakes of the project."
As far as anybody could tell, it was a massive success. But once we spent the time, it turned out there were a ton of things we could have done better. "This was great, but this took too long. We could have done this differently. How would we do this next time?" Over the next several years at Spotify, we would have more of these artist launches: Metallica, The Beatles, these kinds of things. We took the lessons from that first one, and it was much smoother.
Each time, we got much smoother at it, not just because we kept doing it but because we actively learned the lessons from the previous time. That made an incredible impression on me. Having been in the industry for a bunch of years at that point, I was used to these meetings happening only when something goes wrong, and then it's a blame exercise. Who screwed up? Was it the test team? Was it the engineering team? Whose fault was it? It was about pointing fingers. To have this very different approach blew my mind.
I wanted to give a shout-out to Ward Vuillemot, who was a guest in Episode 195. I learned about Ward's work and read an article where you were quoted. We know a lot of people in common, but that quote struck me, so I reached out to you. You said, "Figuring out how to fail effectively is a superpower in organizations versus others that haven't and are still punishing failure. It destroys all innovation." That's so well said. That's powerful.
You talked about the blame game. That would be one example of punishing. Let me ask the question this way. It might make sense to some people to say, “If there are problems, I need to punish them. Otherwise, that's opening the door to more problems.” If somebody believed that, is there any hope of turning them around, or do you need to put yourself in a situation where people don't believe that?
I've certainly worked with people who very much have that attitude. Accountability is a word we use a lot in management. Accountability means you're accountable for your successes, but also for your failures and what happens. I've certainly worked with a lot of people who would say, "No, you must punish failure because otherwise, you don't learn." In my own experience, having worked in organizations that very much live this way, what it tends to do is make people very risk-averse. The easiest way to avoid failing is to not do anything I'm not certain I'll be successful with.
The organizations I've been in that operated this way tend to have a significant problem with innovation and don't understand why they're being beaten by competitors who are out-innovating them. It's because anytime you make a mistake, you get beaten down. You stop doing anything that you don't know with 100% certainty you'll succeed at.
That happens in so many environments. Even if you're not trying to be on the frontier of innovation, that dynamic will stifle the smallest of continuous improvement efforts. People will keep their heads down and the primary objective is not drawing attention and not getting in trouble. That's such a risk-averse environment.
Innovation requires taking risks. By definition, you're trying to do something that hasn't been done before, and you can't do anything interesting perfectly the first time. You have to learn and grow. I don't remember the exact number, but getting to the final production version of the light bulb took something like 14,000 experiments.
There's that Edison line about failing his way to success.
There were 14,000 failures to get to success. The companies I've worked at and talked to that innovate exceptionally well are companies where people are still held accountable, but you hold them accountable for failing well, which means, "I make a mistake, I learn from the mistake, and I do something different." If I make a mistake and then keep doing the same thing without learning anything, that's an accountability problem. So is taking a risk that is completely outsized and unnecessary. It's this idea of making small experiments, learning quickly, and failing in small ways, versus, "I'm going to bet the whole company on this idea. I'm going to take a big swing. When I miss, I take the company down."
When you make a mistake, learn from it and do something different. Don’t keep doing the same thing without learning anything.
There are usually checks and balances if it's a reasonably psychologically safe environment. Nobody's doing anything alone. If you have an idea or a hypothesis and want to run an experiment, other people may speak up if they see problems or risks, and you can counter that. Probably the biggest problem would be in a culture where the CEO can never be wrong and fires anybody who challenges them.
I'm sure there are a lot of case studies, and we may be living through some of them now, where the CEO places a huge bet and there are so many people saying, "If he or she would've only listened," and they didn't. Normally, there are checks and balances. I've heard leaders in healthcare say they're afraid: "What if my people have bad ideas?" If it's a "bad idea," you talk it through with the team.
You might do a small test of change. You could be wrong that it's a bad idea. It comes down to risk and reward. You're not going to be reckless. At the same time, you can't create an environment where people are so risk-averse, especially one where people are afraid to disagree with the boss. I'm getting a little sidetracked there.
You're right. There's a point like, “I'm going to have bad ideas.” Everybody's going to have bad ideas, but until I have a bad idea and try it and go, “This was a bad idea, but I understand why it's a bad idea now,” unless I do that, I'm never going to have a good idea because all I'm going to be filled with is all these bad ideas I never got to do.
Before we wrap up here, here's one thing I want to explore with Kevin Goldsmith. In the tech community, you're using the word failure. I latch onto the word mistake. There are a lot of similarities thematically in what we're talking about, but how would you compare those words?
I latch onto the word failure because it implies that there's a negative consequence to me more so than a mistake. Honestly, I made a mistake in that I saw an unexpected result and didn't question it, but the outcome was a failure for the company. Maybe that's how I would differentiate it. A mistake led to a failure. I also think I latched onto that word specifically in tech because we talk about fails. I grew up in my career at a time when we were trying to build fail-safe software. At one point at Microsoft, I talked to this team that was mathematically proving that code was correct in a way to completely avoid any possibility of failure.
For one, it was very challenging and difficult. It was a research team, and it was appropriate for a research team, but it never made it to production because it was nearly impossible to do. What we've learned from early in my professional career in the ‘90s to now is that rather than saying we're going to prevent failure from happening, we've become good at saying, “Failure's going to happen all the time, and we're going to get good at anticipating it, making the systems robust so that when they fail, it won't impact anything significant.” That's the lesson. If you look at the software industry from the ‘50s to the 2020s, that's been the big lesson.
You've shared so many lessons with us, Kevin, from what you've written and what else is out there. I encourage people to go and check that all out. I was going to ask you to briefly tell us a little bit more about DistroKid the business. Here's a question that maybe merits its own episode: coming in and trying to create a culture of learning from failure, learning from mistakes, and handling failure well. That's not easy to answer succinctly.
One of the things that attracted me to DistroKid was that since I left Spotify, I'd worked in a lot of other industries, and I missed working in music. It's clearly something that's important to me personally. As jobs go, it's fun to work in music and build products for creative people. Another thing that attracted me specifically to DistroKid was that I didn't need to bring this culture to the company. This culture already existed there. The executive team, the CEO, Philip, and the COO, are very much in the mindset of Lean, doing experiments, and learning.
One of the product managers that I loved working with at Spotify, who's very experienced in this as well, is the head of product. It isn't so much that I need to convince people. In other companies, I’ve had to educate. Here, it's more that I can help make how we do it more efficient and a little more streamlined. For folks like us who have to spend a lot of time convincing people that this is a good way of working, it is very nice to go somewhere where people already know it. They just want to do it better.
I made the mistake, but thank you for correcting me. I was assuming your opportunity was to come and create the culture. It sounds more like it's a matter of strengthening, reinforcing, and improving.
Also, helping build the processes to make it easier to do that and to make it better. For me, since Spotify, it's been very much about coming into a company, talking about this, and trying to build those processes and that way of working. Now it's nice to go somewhere where it's like, “You don't need to sell us. We're already on board. We want to do it better.” That's exciting.
Kevin, thank you so much for sharing your story, your reflections, and the lessons learned. It's powerful and interesting. There's a lot to take away from that. We've been joined by Kevin Goldsmith, CTO at DistroKid. Kevin's website is KevinGoldsmith.com. I enjoyed this, Kevin. Thank you so much for being here.
Thanks so much, Mark. I enjoyed it too. This has been great.
As always, I want to thank you for reading. I hope this show inspires you to reflect on your own mistakes and how you can learn from them or turn them into positives. I’ve had readers tell me they started being more open and honest about mistakes in their work, and that they're trying to create a workplace culture where it's safe to speak up about problems. That leads to more improvement and better business results. If you have feedback or a story to share, you can email me at MyFavoriteMistakePodcast@gmail.com. Our website is MyFavoriteMistakePodcast.com.
If you've enjoyed this show, you'll love my upcoming book, The Mistakes That Make Us: How Getting Things Wrong Can Make It Right For Leaders and Organizations. You'll read about examples and lessons from companies ranging from Toyota to the technology company KaiNexus to award-winning distilleries; they all have cultures of learning from mistakes. This leads to more improvement, more innovation, and greater success. Leaders can build this culture, and The Mistakes That Make Us will show you how. Learn more and register to receive more information about the launch of the book at www.MistakesBook.com.
- I Want to Write My Book
- Ward Vuillemot – Past episode