My guest for Episode #49 of “the My Favorite Mistake” podcast is Neil Daswani, the author of the newly-released book, Big Breaches: Cybersecurity Lessons for Everyone. Scroll down to see where you can enter to win a free signed copy!
Neil is, among other things, a Co-Director of the Stanford Advanced Security Program, and he earned a PhD in computer science from Stanford. You can learn more about him on his website and his LinkedIn page. He's previously worked for companies including Google, Twitter, LifeLock, and Symantec.
In today's episode, Neil shares a “favorite mistake” story from his time in grad school, when he hacked into a grading system… not to change his grades, but because he could. Hear Neil describe the mistake in what he did AFTER getting into the system. Neil also shares knowledge and lessons about ethical hacking and responsible disclosure, the recent SolarWinds hack (its impact and its lessons learned, or that should be learned), and the difference between “white hat” and “black hat” hackers.
We also get his thoughts on the classic hacking-themed movie from my childhood, “War Games” — should we stream it, or skip it??
Scroll down to find:
- Audio player
- Video player
- How to subscribe
- Full transcript
You can listen to or watch the episode below. A transcript also follows lower on this page. Please subscribe, rate, and review via Apple Podcasts or Podchaser! You can now sign up to get new episodes via email, to make sure you don't miss an episode. This podcast is part of the Lean Communicators network.
Subscribe, Follow, Support, Rate, and Review!
Please subscribe, rate, and review the podcast — that helps others find this content and you'll be sure to get future episodes as they are released weekly. You can also become a financial supporter of the show through Anchor.fm.
Other Ways to Subscribe — Apps & Email
Automated Transcript (Likely Contains Mistakes)
Mark Graban (0s):
Episode #49: Neil Daswani, author of the book “Big Breaches: Cybersecurity Lessons for Everyone.”
Neil Daswani, PhD (11s):
Effectively, I hacked into his system and would be able to change people's grades, change their assignments, do all kinds of things.
Mark Graban (22s):
I'm Mark Graban. This is My Favorite Mistake. In this podcast, you'll hear business leaders and other really interesting people talking about their favorite mistakes, because we all make mistakes, but what matters is learning from our mistakes instead of repeating them over and over again. So this is the place for honest reflection and conversation, personal growth and professional success. Visit our website at myfavoritemistakepodcast.com. For show notes, links, and a chance to win a signed copy of Neil's book, go to MarkGraban.com/mistake49. Please subscribe, rate, and review, and now on with the show.
Mark Graban (1m 7s):
Hi, welcome to My Favorite Mistake. I'm Mark Graban. We're joined today by Neil Daswani. Among other things, he is the co-director of the Stanford Advanced Security Program. He has a PhD in computer science from Stanford, and he has a book that's going to be released at the end of February called “Big Breaches: Cybersecurity Lessons for Everyone.” So, Neil, thank you for joining us today. How are you?
Neil Daswani, PhD (1m 31s):
Hi, Mark. I'm doing quite well. Thank you for having me.
Mark Graban (1m 35s):
Yeah, well, you're, you know, eminently qualified to talk about the subject of cybersecurity, and I assume, as the book does, our conversation about that will be understandable for all of us.
Neil Daswani, PhD (1m 46s):
I will do my best to speak in English all the time. And if you see me veering off track, just let me know.
Mark Graban (1m 52s):
Okay, sure thing. It would be a mistake, I guess, if there was too much jargon that we didn't understand. But, you know, as we normally do, Neil, before we get into some of those other topics here and have conversations about all of that, thinking back to work you've done, what would you say is your favorite mistake?
Neil Daswani, PhD (2m 14s):
Sure. Well, I'd say that my favorite mistake was back when I was in graduate school. I was taking a database course, and one of my professors had set up a new grading system that I think another student had coded. And I, being of course in computer security and very naturally curious, effectively hacked into his system. I identified that it had a particular kind of vulnerability where, if you log in just after the administrator logs in, then you were given administrator privileges and would be able to change people's grades, change their assignments, do all kinds of things.
Neil Daswani, PhD (2m 59s):
And so the mistake was that, you know, once I got in as an administrator, I changed the administrator's password to the word “cracked” to prove that I had broken into the system. That was the mistake. That was my favorite mistake. It was my favorite mistake because I learned a lot from it. The professor was furious, absolutely furious.
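The bug Neil describes, where whoever logs in right after the administrator inherits admin privileges, can be sketched roughly like this. This is a hypothetical, simplified toy in Python, not the actual grading system's code: the flaw modeled here is one shared role variable that carries over from the previous login.

```python
# Sketch of the session bug Neil describes (hypothetical, simplified):
# the server keeps the *previous* login's role in a single shared slot,
# so a student who logs in right after the admin inherits admin rights.

class BuggyGradingServer:
    def __init__(self, passwords):
        self.passwords = passwords      # username -> password
        self.last_role = "student"      # one shared slot: the bug

    def login(self, user, password):
        """Return the role granted to this login, or None on failure."""
        if self.passwords.get(user) != password:
            return None
        # Correct code would look the role up per user. Instead the server
        # hands out whatever role the previous successful login set.
        granted = self.last_role
        self.last_role = "admin" if user == "admin" else "student"
        return granted


server = BuggyGradingServer({"admin": "s3cret", "alice": "pw"})
server.login("admin", "s3cret")
print(server.login("alice", "pw"))   # alice is granted "admin"
```

The fix is equally simple in the sketch: derive the role from the authenticated user on every request instead of from shared state.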
Mark Graban (3m 28s):
What was the aftermath of this? But go on.
Neil Daswani, PhD (3m 31s):
And I was afraid I was going to get suspended or expelled. You know, I had CC'd my advisor when I had emailed the professor about the vulnerability in the system, and my advisor came to my defense. Hector Garcia-Molina, who had the digital library project and was the guy that bought Larry and Sergey their disks, you know, thanks to him for coming to my defense and allowing me to complete my PhD. But the mistake was that I had changed the administrator's password. If I had simply alerted him as to the vulnerability, and maybe had a meeting with him and showed him how the vulnerability worked, but didn't make any changes to the system, I don't think he would have been as upset.
Neil Daswani, PhD (4m 15s):
So that was my favorite mistake.
Mark Graban (4m 17s):
So what, did you just get kind of a scolding, or some threats, or just a little bit of anger thrown at you? Was that really the worst?
Neil Daswani, PhD (4m 25s):
The worst of it was I got a slap on the wrist and was told not to do it again. And I learned from it. I learned that when you find security vulnerabilities in real systems, even if it's something as innocuous as a grading system, people can get very upset. And so how one manages the situation after that, in such a way that you don't make any changes but still demonstrate the vulnerability and how to fix it, was just very important.
Mark Graban (4m 58s):
Is there, you know, I guess amongst quote-unquote hackers or security professionals, is there kind of a code of how to go about these things?
Neil Daswani, PhD (5m 12s):
So over the years, a concept called responsible disclosure has arisen. The way that responsible disclosure works is that when you find such a vulnerability, you let the organization that runs the system know about it. First of all, of course, you don't make any changes to their systems, explicitly or inadvertently. And you give them appropriate time. The two parties should come to agreement as to when is a reasonable time to have it fixed, and when is a reasonable time to talk about the vulnerability after it has been fixed.
Neil Daswani, PhD (5m 56s):
And so that's what, that's what responsible disclosure is all about.
Mark Graban (6m 2s):
And so, you know, we're able to talk about this because that responsible disclosure took place. And I want to ask a follow-up question in this case here with the grading system: A) did they update the system, and B) did you check to see, you know, how long it took, if it indeed did get fixed?
Neil Daswani, PhD (6m 21s):
So, yes, they did fix the system. They fixed it pretty fast. And I did check, but I went ahead and let the professor know, “Hey, I'm going to check, so if it trips any alarms or whatnot, it's because I'm checking.” And no, they didn't say not to check. I think the other thing that's important is that when you do check, you seek permission and you get authorization. And I would also say that if you're a security researcher looking for vulnerabilities, only do these things on test systems and not the actual production, real-life systems, because you just don't know what havoc you can possibly wreak.
Mark Graban (7m 10s):
Yeah. Wow. I've heard the terminology “white hat” versus “black hat.” Can you explain those terms a little bit? You were coming at this as a quote-unquote white hat hacker. You were just curious; you weren't trying to do any harm. Or you could have, I guess, right?
Neil Daswani, PhD (7m 32s):
That's correct. I was coming in as a white hat hacker, and white hats are typically the good guys that are trying to defend systems. Black hats are the bad guys that are trying to find the vulnerabilities and break in and then do malicious things. And so I have always been a white hat, and I've learned how to become a better white hat over time. I think, given some of the escapades in my career, whether it's having progressed from being a security researcher, to a security product developer, to founding CEO of a startup, to chief information security officer of a public company, I've certainly gotten better about how to be a white hat.
Neil Daswani, PhD (8m 24s):
But I also believe it's important to embrace the habit of continuous learning and, in all of this, to keep becoming better.
Mark Graban (8m 34s):
Yeah. Now that you're on the professor side, having completed your PhD, and you are teaching, are any of the, you know, quote-unquote white hat tactics taught formally in any classes? Or is this just sort of part of quote-unquote hacker culture, things that people learn from each other as kids or students, or what have you?
Neil Daswani, PhD (8m 59s):
So, as a co-director of Stanford's Advanced Security Program, I do teach classes. I'm not a full university professor or tenured or anything like that. And one of the things that I love about the Stanford Advanced Security Program is that I co-direct that program together with John Mitchell and Dan Boneh. Mitchell, currently the head of the computer science department, is a world-renowned luminary in computer security. And, you know, I bring the industry influence to that program. One of the great things about doing that is there has been more and more instruction over time on teaching people how to think like the black hats, but operate always as a white hat, in our courses.
Neil Daswani, PhD (9m 54s):
And in our labs, we do teach people how to break into systems. We give them all the appropriate advice, only do this on test systems and whatnot, but we do teach them the techniques on how to break in, whether it be, how do you exploit a SQL injection vulnerability? How do you exploit a cross-site scripting vulnerability? And I realize I just stopped speaking in English.
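For a flavor of the kind of lab exercise Neil mentions, here is a minimal SQL injection sketch. This is a hypothetical toy login check against a throwaway in-memory database, not the Stanford course material, shown next to the parameterized fix:

```python
import sqlite3

# Throwaway in-memory database with one user (toy example).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, pw TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def vulnerable_login(name, pw):
    # String concatenation lets attacker-controlled input rewrite the query.
    query = f"SELECT * FROM users WHERE name = '{name}' AND pw = '{pw}'"
    return db.execute(query).fetchone() is not None

def safe_login(name, pw):
    # Parameterized query: input is treated as data, never as SQL.
    query = "SELECT * FROM users WHERE name = ? AND pw = ?"
    return db.execute(query, (name, pw)).fetchone() is not None

# The classic payload turns the WHERE clause into
#   (name = 'alice' AND pw = '') OR '1'='1'
# which is always true, bypassing the password check entirely.
payload = "' OR '1'='1"
print(vulnerable_login("alice", payload))  # True: injection succeeds
print(safe_login("alice", payload))        # False: treated as a literal
```

The fix is the one lesson these labs drive home: never splice untrusted input into a query string; always bind it as a parameter.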
Mark Graban (10m 16s):
I mean, this topic and everything may have attracted an audience who knows exactly what you're saying, but…
Neil Daswani, PhD (10m 23s):
And bottom line, I think it is important to teach people how to be a good white hat. And by the way, you know, there are some companies that, for instance, hire people that have been black hat hackers in the past. At the various companies that I've worked at over time, I've worked at Google, I've worked at Twitter, I've worked at LifeLock, I've worked at Symantec, I've worked at a whole variety of organizations, but my view on that was always very influenced by some of the people that I worked with at Google, who had the mindset that if somebody has, in the past, been on the black hat side, and there are questions about their ethics and their morals, you just may never know when they're going to switch sides again.
Neil Daswani, PhD (11m 18s):
Now, I think there are some counterexamples to that. I think, you know, Kevin Mitnick, who's one of the most well-known hackers of all time…
Mark Graban (11m 28s):
I even know the name.
Neil Daswani, PhD (11m 35s):
Oh, you even know the name? Good, very good. There are certainly counterexamples. But, you know, for instance, when you look at various companies like HackerOne or Bugcrowd or Synack, these are companies that run bug bounty programs, where they work together with companies and they hire a whole bunch of ethical hackers. They hire people from all kinds of backgrounds, and because they're coming from all kinds of backgrounds, some of the best such companies will make sure to do all kinds of psychological analysis on prospective hackers that they invite to the platform, because you just never want to be in a situation where someone finds a vulnerability and they just don't tell you about it.
Neil Daswani, PhD (12m 18s):
They kind of stockpile it, and they can use it later; they could sell the vulnerability, they could use the vulnerability. So I think the ethics and morals around this sort of stuff are just really important.
Mark Graban (12m 32s):
Yeah. Is this where the phrase, I mean, I've heard the phrase “zero day vulnerability.” Does that mean something has just been discovered? It's day zero, it's still exploitable, it needs to be fixed?
Neil Daswani, PhD (12m 42s):
Yes, that's correct. In the world of security, there are known vulnerabilities, where somebody has found a vulnerability and reported it, and it's now part of the National Vulnerability Database, and there's likely a patch available to fix the vulnerability. Those are known vulnerabilities. There are also unknown vulnerabilities, which have not been discovered yet by anybody. Right? All software has bugs. There's a certain class of bugs that can result in security vulnerabilities. And we may not always know what all the bugs are. We may not always know what the security bugs are.
Neil Daswani, PhD (13m 23s):
So those are the unknown vulnerabilities. Then there's the situation that you described, where somebody discovers a vulnerability for the first time and perhaps, you know, informs the entire world about it, including the organization that has the vulnerability. And it is day zero. Nobody has any time to react, to come up with a patch, and that vulnerability could be exploited at will. So those are indeed zero day vulnerabilities.
Mark Graban (14m 2s):
So somebody, you know, discovering this and then bragging about it online in some form would seem like it does not fall under that responsible disclosure code and set of guidelines.
Neil Daswani, PhD (14m 15s):
That is exactly right. The way that responsible disclosure would work is, when the security researcher finds the vulnerability, they inform the organization, and the organization and the individual agree upon an amount of time within which the vulnerability should be fixed. And then the researcher only talks about the vulnerability afterwards. So even if the vulnerability is found, and it kind of might be a zero day, right, the company didn't know about it, nobody else knew about it, there is time to fix it, and that zero day isn't available to the entire world to exploit.
Neil Daswani, PhD (14m 58s):
So yes, the right protocol to follow is the responsible disclosure protocol. Now, there are some researchers that decide to go directly to the press when they find a vulnerability, which gives the organization zero days to react. It, by the way, also puts important, sensitive consumer data at risk and puts consumers at risk, because you could have a real attacker, a nation state attacker, a cybercriminal, that could also then just start using the vulnerability right then and there.
Mark Graban (15m 34s):
Building on this idea of responsible disclosure, how much of a responsibility is it… let's say in the case of Stanford. Clearly, what you had hacked into, they didn't hold it against you. You got your PhD, you're working there. And I imagine they're not going to be upset with you telling the story after the fact, because what was there has been patched, but it's still kind of embarrassing to say, well, this happened. How much ethical responsibility is it for a department or a university or a company to share this vulnerability with others who might be also at risk?
Neil Daswani, PhD (16m 14s):
So I think it is an important responsibility, depending upon the nature of the vulnerability. In this particular case, I mean, this was a vulnerability that I think I had found pre-2004, and it was just in one system. It wasn't the main Stanford University grading system; it was a system that was being used by just one professor for his class, you know? And so the ramifications were not as significant. Many, many organizations, Stanford included, have seen much, much worse hacks, compromises, breaches. But, you know, this particular story I talk about in the Foundations of Security book that I published back in 2007 when I was working at Google.
Neil Daswani, PhD (17m 7s):
So yeah, there's no reason for the university to get upset about this particular thing; they've been through much more significant things. But I think that whenever a researcher finds a vulnerability, it is important to, let's say you find a vulnerability in an open source software package, it is important to let the developer of that open source software package know about it so that they can fix it, because chances are there might be a lot of organizations that are using that open source software, and there could be a lot of people at risk. In fact, if we look at what happened in 2017 with the Equifax breach, there was a vulnerability in a piece of software called Apache Struts, and Equifax, as well as many, many organizations, used Apache Struts as part of their software development.
Neil Daswani, PhD (18m 4s):
And there was a vulnerability which basically would allow an attacker coming in from anywhere on the internet to issue commands of their choice to a system that was running Apache Struts. And, you know, there was a patch made available. And it is really important for every organization that uses that software to patch that vulnerability as soon as possible. Yeah.
Mark Graban (18m 36s):
So, I mean, most recently in the news, the SolarWinds breach is probably the highest profile one that's occurred here recently. Can you talk about that situation? What's known about causes of the vulnerability, how difficult that's going to be to fix? Or, I don't even know, maybe the first question is: what is this SolarWinds software that so many were using, in the government, I believe the military, and also in the private sector?
Neil Daswani, PhD (19m 6s):
Sure, I'd be happy to. So SolarWinds is a company that makes available many products for information technology purposes, and also security purposes. And SolarWinds has a product called Orion that is used by, you know, 300,000 customers to monitor the performance of certain information technology systems. What happened is that, as we're all very well aware, software systems need to be updated from time to time. What happened in this particular case is that a foreign nation state actor was able to inject malicious code into one of those software updates and into particular versions of Orion. Out of the 300,000 customers, there were about 17 or 18,000 that were using the vulnerable version of the Orion performance monitoring software that had this malicious code injected, and that then allowed the attacker to pretty much take over, you know, that system as well as others.
Neil Daswani, PhD (20m 19s):
There is a lot of interesting technical detail and a lot of interesting aspects of how it happened. You know, one of the interesting things was that some of the components of SolarWinds' software, some of the software libraries that made up their software package, were in the past getting flagged by antivirus companies as potentially being a virus, potentially being malware. But of course it was legitimate software. So SolarWinds, I think, worked with the antivirus community, and a number of antivirus providers whitelisted, meaning they gave a free pass to, that library that was getting flagged.
Neil Daswani, PhD (21m 0s):
They gave it a free pass.
Mark Graban (21m 3s):
So those warnings were rationalized away, or “it's a false positive” was their assessment.
Neil Daswani, PhD (21m 10s):
Yes, that's correct. And, you know, unfortunately, it was given a free pass. So when the nation state actor decided they wanted to inject malicious code into that library, they knew that their malicious code was going to get through, as that component was given a free pass. So I think there are many, many learnings from the SolarWinds attack, but that was one: that you should not casually give out these kinds of free passes.
Mark Graban (21m 40s):
Wow. I didn't know that aspect of the story. Gosh.
Neil Daswani, PhD (21m 45s):
Yeah. There are a lot of interesting aspects of the story. You know, on one hand, SolarWinds has been called the digital Pearl Harbor, but I think it's important to keep in mind that Pearl Harbor was a complete surprise. This, however, was a third party supply chain compromise, right? Because many organizations use SolarWinds as a third party, and the initial vulnerability was in this third party software. But SolarWinds is definitely not the first third party supply chain compromise. If we think back to some of the biggest breaches that occurred, if we look at the Target breach in 2013, where over 40 million credit card numbers got stolen…
Neil Daswani, PhD (22m 33s):
Target got compromised because they had a third party supplier by the name of Fazio Mechanical Services. It was a supplier that controlled the heating and air conditioning in all of their retail stores, and Fazio Mechanical Services had some of their network credentials stolen. And that is where the breach started. Because the Target network and the Fazio Mechanical Services network were tied together, the hackers were able to breach Fazio Mechanical Services and then go on to pivot, pivot, pivot, and breach Target. So that was one example.
Neil Daswani, PhD (23m 14s):
And then just the very next year, in 2014, JP Morgan Chase had a breach. I mean, they were spending over 250 million annually on security. They got raided because of one of their third-party suppliers, by the name of Simmco Data Systems, that was running a website that was responsible for their nonprofit charitable marathons. So there certainly have been third-party compromises in the private sector in the past. But even beyond that, in 2015, the government's Office of Personnel Management got breached; 20 million government employee identities were stolen in that breach. And that was due to a third party by the name of KeyPoint Government Solutions that helped the Office of Personnel Management do some of their background checking.
Neil Daswani, PhD (24m 6s):
So third-party compromises are not new at all. What is new here, though, is the scale and the scope of the third party compromise. And so while there have been a whole bunch of indications of the kind of seriousness of attack that could occur, it's expected that the number of organizations affected here is just much larger than in the past due to the third party compromise. Yeah.
Mark Graban (24m 35s):
And you mentioned Target and Equifax. I think I've got lifetime free credit monitoring, I think, because I know I was part of those two. And the Office of Personnel Management, I think that also affected anybody who was in the database for a security clearance, and even people who had been interviewed as part of that. So that's just a short list of things I think I've been caught up in, thankfully to no known ill effect, but it's not great to receive that notification.
Neil Daswani, PhD (25m 8s):
Yeah. And it's great that you got credit monitoring for things like the Target and Equifax breaches. You know, credit monitoring was also given out in the Office of Personnel Management breach, and, like you said, it did affect more than just employees: anybody that got background checked and, say, perhaps wasn't hired yet, or didn't get hired, whatever, they were also part of that dataset. But I'll tell you, I was surprised that credit monitoring was the tool that was given out to people, for, I don't know, a year or two, because the threat is much more significant with all of the identity data that was stolen. Effectively, when people apply for certain government jobs, they fill out these SF-86 forms, and it has not just your name and your social security number and your address, but all that information for all your family, friends, and neighbors that get interviewed as part of that background check.
Neil Daswani, PhD (26m 8s):
It has things like the results of psychological analyses. It has information on, you know, drug history, where you've lived, whatever. And so if you think about it, if a foreign nation state wanted to mint spies in this country, right, or have people in the country and have them get jobs in government agencies, that is exactly the data set that you want to steal. And credit monitoring, by the way, only watches your credit. It does not protect your assets. It does not monitor, you know, your cell phone account, your bank accounts.
Neil Daswani, PhD (26m 50s):
And so identity theft protection would have been a much better thing to give all 20 million government employees. But given the scope of that breach, the question is, what are we doing to make sure that spies are not getting minted? And so, you know, in the congressional hearings that took place afterwards, there were statements made saying that this is the biggest blow to counterintelligence efforts that had ever taken place, and that it'll take generations to recover. What's done is done; there's no mending what happened here in the past. We've got to figure out how to look forward.
Mark Graban (27m 36s):
So one follow-up question: that vulnerability of injecting code into updates, is that something that exists elsewhere? How much does that same vulnerability apply to other software systems? Maybe that's unknowable?
Neil Daswani, PhD (27m 53s):
Yeah. This applies to many, many systems. As we know, there are many software systems that regularly have to be updated. And I think that for organizations that are affected by SolarWinds, it's important to note, by the way, I don't necessarily know that we should be casting blame on SolarWinds, right? Because this is an issue that every software vendor has to deal with. And, you know, once things get looked into, it'll be interesting to see what gets learned about SolarWinds specifically. But in general, any software organization that is sending out code updates, especially when their software is running with administrative privileges, there are, I guess, tens of thousands of software vendors that could be susceptible to the same sort of thing.
Mark Graban (28m 57s):
A couple of quick questions before we wrap up. And again, our guest is Neil Daswani. His book is “Big Breaches: Cybersecurity Lessons for Everyone.” Kind of rapid-fire questions here. So I use the Chrome browser, and I use the password manager, and I get these notifications that have started popping up saying, you know, your passwords on 34 different accounts have been compromised. Is that a mistake to ignore those? Should I, right after this recording, go look at that list and update passwords where I can right away?
Neil Daswani, PhD (29m 31s):
I think it would be good to go ahead and update your passwords. What's been happening is that a lot of databases of passwords get stolen, and they get put up on the dark web, and the attackers have these huge corpuses and repositories of stolen passwords. And the first thing that they do is try to use them. What I would also recommend is to use two-factor authentication, to use two-step verification, for any and every online account that offers it, especially with things like banking websites. So when you log in, you don't just supply a password; a six or eight digit code is sent to your cell phone.
Neil Daswani, PhD (30m 16s):
And pretty much you have to enter that code as well. So just because the attackers stole your passwords off of the dark web doesn't allow them to log in to your accounts. They also need to then compromise your cell phone, too. Right.
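The six-digit codes Neil describes are typically generated with the TOTP scheme (RFC 6238), which authenticator apps implement. A minimal sketch, assuming a base32-encoded shared secret like the ones apps enroll at setup (the example secret below is made up for illustration):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    # Counter = number of 30-second windows since the Unix epoch.
    counter = int((at if at is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and phone share the secret once at enrollment; afterwards both
# derive the same short-lived code from the current time window, so a
# stolen password alone is not enough to log in.
print(totp("JBSWY3DPEHPK3PXP"))  # example secret; code changes every 30s
```

This also illustrates Neil's caveat: the code is derived from time plus a shared secret, so a phishing site that captures a code within its 30-second window can still replay it, which is why he recommends hardware security keys as the stronger defense.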
Mark Graban (30m 32s):
And I do use, on a number of accounts, the Google Authenticator tool, which I think provides a deeper level of security. To your point, Neil, they would have to also somehow clone or steal my phone and that authenticator app. I'll tell you, here's a mistake I've made, and I've done it twice, because I don't update my iPhone that often: if you wipe your old phone and then you install Google Authenticator on the new device, that creates problems. Like, there's a particular way you've got to sort of port over permissions. So it's good security, but if, as a user, you make a mistake about how you manage that, you spend a while rebuilding your two factor authentication.
Mark Graban (31m 13s):
So that's one of my mistakes.
Neil Daswani, PhD (31m 14s):
Yeah, that's right. But, you know, I think as technologists and as an industry, we need to make this easier for users, right? And, by the way, the reason that problem existed is because there was a secret that was only known to the old phone, and if it's done right, that secret should not be portable to the new phone. Even using authenticators in general, though, when it gives you the six digit codes or whatnot, you still have to be aware that you can be susceptible to a phishing attack, right? Because just like an imposter website can ask you for your username and password, an imposter website could also ask you for the six digit code.
Neil Daswani, PhD (31m 57s):
And then the imposter website can send that information to the real website and log in on your behalf. So the best defense is to use what's called a security key, a piece of hardware that sticks into your laptop or your phone. There are many folks that manufacture these things; Yubico is one of them. And what it does is it requires that you have this piece of hardware, and you can keep it on your key ring or whatever, but you plug it in when you're logging in, and then it authenticates you based on a secret in that piece of hardware that is not phishable. You can't type the wrong thing into an imposter website. And, by the way, you can also use your phone these days as your security key.
Neil Daswani, PhD (32m 44s):
So Google has made available an advanced feature where you can enable two-step verification and have your cell phone be the security key that allows you to log in.
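Why can't a phishing site relay a security-key response the way it can relay a six-digit code? Because the key's response is bound to the website origin the browser reports, not just to a code the user types. The toy model below sketches that idea (the hypothetical `key_sign` function stands in for the real FIDO2/WebAuthn signature, and the origins are made-up examples):

```python
import hashlib
import hmac

def key_sign(device_key, origin, challenge):
    """Toy stand-in for a security key's response: the signature covers
    the site origin the browser saw, not just the server's challenge."""
    return hmac.new(device_key, origin.encode() + challenge,
                    hashlib.sha256).hexdigest()

device_key = b"secret-sealed-in-hardware"   # never leaves the key
challenge = b"nonce-from-real-site"         # relayed by the phishing page

# A response minted on the phishing page embeds the attacker's origin,
# so the real site rejects it even though the challenge matches.
phished_response = key_sign(device_key, "https://evil.example", challenge)
expected_response = key_sign(device_key, "https://real.example", challenge)
```

A relayed TOTP code verifies because the server only compares digits; here the relay fails because `phished_response` never matches what the real origin expects.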
Mark Graban (32m 59s):
Good tips, Neil. One final question is kind of a silly one to end on. When I was a kid, “War Games” was a very popular movie, and I remember liking that movie. Is it a mistake to watch that movie, do you think, or are you a fan?
Neil Daswani, PhD (33m 18s):
“War Games” was an awesome, awesome movie; it's definitely not a mistake to watch “War Games.” When my kids are just a little bit older, I want them to watch that movie, because, I think it came out in the '80s, it was one of the movies that made it so apparent what can go wrong by taking humans out of the loop and just relying on automated computer systems to be able to launch nuclear missiles. I'll tell you, having a PhD in computer science, sometimes, if it's between a computer spitting out an answer with an algorithm and listening to a human, sometimes you want to listen to the human.
Neil Daswani, PhD (34m 2s):
So I think it's definitely not a mistake to watch “War Games.” I think there's another great lesson in “War Games” in that, even today, artificial intelligence systems are still learning a whole bunch of things, and there may be some things that AI algorithms may not be able to learn. At the end of “War Games,” it was just great that the computer was able to learn that global thermonuclear war was not a game that you could win, you know, it was like tic-tac-toe. So it's definitely not a mistake; it's a great movie to watch.
Mark Graban (34m 46s):
Okay, so maybe I'll rewatch that sometime soon. Neil, thank you so much for telling your story about your mistake of hacking into that system and the way you went about it. Thank you for sharing that, and thank you for sharing some other really interesting thoughts about mistakes that organizations or individuals might make related to cybersecurity. The book, again, by Neil Daswani is Big Breaches: Cybersecurity Lessons for Everyone. Neil, thank you so much for being a guest.
Neil Daswani, PhD (35m 17s):
You're very welcome. Thank you so much for having me, Mark. I really enjoyed the time.
Mark Graban (35m 20s):
So again, I want to thank Neil for being such a great guest today, and I want to thank you for listening. Again, for show notes, links, and a chance to win a copy of Neil's book, go to markgraban.com/mistake49. One mistake I've probably been making is not using this opportunity at the end to tell you about upcoming guests, so I'm going to try to correct that going forward. Our next two guests are Phyllis Quinlan, a nurse executive with a health system and a leadership consultant, and Lenny Walls, who played cornerback in the NFL for a couple of teams back in the 2000s.
Mark Graban (36m 1s):
So we're going to talk about, of course, favorite mistakes from each of them, and we're going to talk about mistakes from their fields. There's a great variety of guests coming up from so many different industries, and I hope you will keep listening. And again, thanks for doing so, and thanks for subscribing if you've already done so. Please rate and review us, if you have the chance, on your favorite app of choice. I hope this podcast inspires you to reflect on your own mistakes and how you can learn from them or turn them into a positive. I've had listeners tell me they've started being more open and honest about mistakes in their work, and they're trying to create a workplace culture where it's safe to speak up about problems, because that leads to more improvement and better business results.
Mark Graban (36m 47s):
If you have feedback or a story to share, you can email me at email@example.com. And again, our website is myfavoritemistakepodcast.com.