The Monica Talks Cyber Show

The Big Bang of Emerging Technologies

November 10, 2020 Monica Verma Season 1 Episode 13

In this episode, Monica Verma talks with Carsten Maartmann-Moe, CEO, Transcendent Group, about hacking, the big bang of the internet, connected devices, Internet of Things and emerging technologies. 

They talk about the evolution and the threat landscape, risks and consequences of both legacy and emerging tech, the ethics behind it, use of data and more.

Looking to become an influential and effective security leader? Don't know where to start or how to go about it? Follow Monica Verma (LinkedIn) and Monica Talks Cyber (Youtube) for more content on cybersecurity, technology, leadership and innovation, and 10x your career. Subscribe to The 10x Circle newsletter at https://www.monicatalkscyber.com.

Intro 0:00  
You're tuning in to the podcast series We Talk Cyber with Monica, your platform for engaging discussions and expert opinions on all things cyber. For more information, check out MonicaTalksCyber.com. And let's hop right into today's episode. 

Monica Verma  0:17  
Hi, everyone, and welcome to today's episode of We Talk Cyber with Monica. This is your host Monica Verma and I'm back with yet another fantastic episode. Today we'll be talking about a hacker's view on emerging technologies and the threat landscape, and we'll be talking with Carsten from Transcendent Group. Hi, Carsten, how are you doing? Welcome to the show.

Carsten Maartmann-Moe  0:35  
Hi Monica. I'm good. How are you? Thanks for having me.

Monica Verma  0:39  
Lovely to have you. Would you like to say a few words about yourself?

Carsten Maartmann-Moe  0:42  
Sure. So my name is Carsten Maartmann-Moe. I am the CEO of Transcendent Group in Norway; we are a small GRC consultancy firm. And I have a background as an ethical hacker, which is what I've worked with for most of my career. I'm really looking forward to talking with you about all this exciting new technology that we surround ourselves with, and my perspective on that technology as a hacker. It is a really wide subject, so I'm really looking forward to it.

Monica Verma  1:11  
Before we start, let's just talk about defining the term hacker and differentiating between an ethical hacker and a non-ethical hacker.

Carsten Maartmann-Moe  1:21  
Um, yeah. So from a security perspective, hacker has always been well-known old-school internet jargon. It was more of a term for somebody who worked within technical IT infrastructure; it could be a sysadmin, it could be a coder. In modern terminology, it's often used to refer to people who actually break into software or hardware, and that's more where the ethical hacker comes in. I have a background in the ethical hacking part of the business. But when we talk about emerging tech, I think it makes sense to talk about hackers in the broader sense, actually in the old sense of the word, you know, people working with technology. And the reason why is that a lot of the emerging tech is actually made by people who I would assume identify themselves as hackers. You know, Mark Zuckerberg of Facebook is one of them. So I think we could talk about it in the broader sense, but my experience is mostly from the ethical hacking side, and I think that will probably color some of the stuff I'll talk about.

Monica Verma  2:26  
Fantastic. So we've talked a bit about hackers now, and you've defined what a hacker is, in a broad sense. Let's take it from there. What are the biggest challenges, in your opinion, with the emerging technologies that we have, and with the way the threat landscape is changing?

Carsten Maartmann-Moe  2:44  
Well, I think if you look at all the tech that has been happening around us now from the perspective of a hacker, you're one part excited and one part scared, because it's moving at an immense tempo and has been since the internet was conceived. And it's basically everywhere. And it's more powerful than before: we collect more data on people, there's more tech around us. The tech that we're surrounding ourselves with is going through sort of a Cambrian explosion, which is a term from biology for when the fossil record shows an extreme explosion of more advanced life forms on Earth. And tech is sort of going through the same thing. Because we're going from a world where software existed in the beginning offline, and then online, and it was mostly, you know, webpages and apps and ports and services. Now it's robots, it's AI, it's no-code applications, it's cloud, all these technologies. And it's IoT devices in our homes. And we're enabling all this tech to actually sense and sort of reason about the world around it. That is definitely a change from the old-school use of tech, where tech was mostly applications that we used. Now, we suddenly have tech that moves around us and interacts with us, not only on the digital plane but also on the physical plane. From a hacking perspective, that's very interesting, because as a hacker, or ethical hacker at least, one of the things we're interested in when we try to break into stuff, whether it's an application or a car, is mapping out its attack surface, which is basically the ways we can interact with the application or car or whatever it is we're trying to break into. All of those points where we can interact with it are potential places where vulnerabilities may exist. And in the old days, you know, when I was young and before the internet, that interaction was floppy disks. 
Hence, vulnerabilities and viruses existed on floppy disks. The first time I got a computer virus was through a floppy disk that I put into a public terminal at a library where I was living. Then, when the internet came, suddenly viruses and worms existed online. And now we're moving towards a phase where not only is the internet there, and there's been a massive explosion in internet services, but we also have a physical plane where we can interact with technology. That combined makes the attack surface a lot larger and different than it used to be. So it's very interesting from a hacker's perspective, because what you're normally trying to do is inject errors into the technology to make it do something it's not designed to do. That's basically what you're trying to do as an ethical hacker. And suddenly, you can also try to manipulate the physical environment around a robot to try to make it do stuff that it's not supposed to do. That is a different type of potential hacking than what we've seen thus far. So it's very, very interesting from a hacker's perspective. But also, because I'm not pessimistic, I think that we shouldn't panic. I think, by and large, the tech explosion that we're seeing around us is positive for security. I mean, a lot of the larger breaches of the last 10 years have not been because of emerging tech; it's usually because of legacy tech, to be honest, tech that has been forgotten, not patched, not updated. It's not always like that, but a lot of the stuff that we see when we do penetration tests, meaning when our clients hire us to break into stuff, is due to misconfigurations, or legacy software just coexisting with the new stuff. So I think in general, new stuff is good for security, but the problem is the explosion in attack surface, and everything.

Monica Verma  7:11  
So a very important point you make here is that a lot of the problems we're facing today are also because of legacy tech. We haven't fixed the problems that we have in the legacy. But at the same time, we are now coming with more emerging new technologies, and they are now colliding with each other, right, making it even more complex, because we haven't fixed what we have screwed up so far, if I can say it so. So, how have you seen the threat landscape change over the last, let's say, decade?

Carsten Maartmann-Moe  7:46  
I'd say the threats themselves have not changed. I think it's been more of an awakening for everybody, seeing them manifest themselves in real breaches. I think we have had some more transparency into the big breaches that we've seen, but I don't think the threats, at least not the threat actors, have changed a lot. They've always been nation states, they've always been criminal gangs, they've always been, you know, 16-year-old boys and girls in their bedrooms hacking away at some piece of tech that they've seen online. So I don't think that has changed a lot. But what's changed is the potential impact of those breaches. Because we now have very large tech companies that control a lot of the stuff that we do online, and if you breach Twitter, then suddenly you do have access to Barack Obama's DMs. And if you breach Facebook, then suddenly you can potentially breach a lot of people. So the scale has gone up. I don't think the threat actors have changed. Of course, the threat actors are always exploiting the technology that is there, so if we surround ourselves with more technology, you can bet that the threat actors will look at that technology as well. But I don't think the actors have changed a lot. It's just that they may have been slightly professionalized, and we've gotten marginally better at exposing them and attributing stuff to them. But we're still a long way from where we should be when it comes to attribution, for instance, which is a great problem in international cybersecurity at least: people are not held accountable for breaches, and therefore breaches happen.

Monica Verma  9:53  
Right. So from a business perspective, what recommendations can you give to businesses? Where should they start looking? What are the most important questions they should be asking to better protect themselves from these adversaries, to better secure the technologies they're using for the business, for the employees, for the customers, and to be better positioned at defending themselves from cyber attacks? What would be your recommendations?

Carsten Maartmann-Moe  10:33  
Well, I think that if you're in the business of creating a product, or exposing something online, which I think most businesses are, you should ask yourself: how can this piece of tech that I'm introducing be misused? How can it be broken? Which is basically the same thing that we often ask ourselves when we do ethical hacking engagements. And if you don't have that competency yourself, you should get somebody in to have a look at it from an attacker's perspective, because I think a lot of businesses are counting on somebody else having done that. So if you're building an IoT device, you're counting on the people that shipped you the hardware and base OS having done their security job. But let me tell you, they haven't done the security job unless you ask them and check with them. Because they are incentivized to build the hardware as cheaply as possible, and if you buy it from somewhere else, you can bet that they've spent as little money as possible on security. That is often our viewpoint, at least, when we look at, for instance, hardware components. So without a proper view of your value chain, and the actual security that you yourself are developing on top of that value chain, you put yourself in a higher-risk position when it comes to breaches. The other thing I think is important, and this is a larger question, is that we as security professionals should engage ourselves in policymaking, because a lot of the policies and regulations that are out there are often somewhat misguided, and they're not really helping security researchers and others work to create better security out there. And I think that we as a community should engage ourselves. And it's maybe not the most sexy part of security. 
I mean, security regulations and, you know, governance. But it's an important part, because that's how changes happen on a larger scale. I don't think you can expect the vendors to take greater security responsibility without regulation, because it's just not profitable. And businesses, at least the public ones, are there to maximize profits for their shareholders, and security in that context does not make sense. And the same goes for people, by the way. I mean, security for people, you know, teaching them to be resistant to social engineering, is an uphill battle. Because it doesn't make sense for me as a person to go through all these security hoops; I'm hired to do my job, not to do security. And so it's difficult, and it's a difficult realization for a lot of cyber people: that yes, I think we actually have to regulate, and then work with the international community to regulate security in all the tech that we surround ourselves with. And I have a last point as well. I think that in general, businesses can be a lot smarter when it comes to security. I see a lot of businesses just hiring manpower, whether it's consultants or, you know, hires, to tackle problems that could be tackled with a few smart people with coding and scripting skills and some good tools. So I think that, in general, we should try to push as much responsibility as possible towards the coders and the maintainers out there. And in the new world, everything is DevOps anyway, so it's the same team; they should have responsibility for security, not some sort of large-scale security team. And I think a lot of businesses would actually save money on this, because it makes the people responsible for security as part of the development job as well. So that was three points, I think, and you asked for one, but...

Monica Verma  15:26  
The more the better. You touched on a very good point with the policies. Because with the emerging technologies, with the legacy that we already have, with the aspects of hacking and checking what attack surfaces there are, and, as you said, testing these products when you develop new products and security products, and so on, it's still difficult for businesses sometimes to understand, and, as you say, they don't want to invest too much money in security. But from a CEO perspective, from a business owner perspective, that makes sense: I will want to use the least money possible to get the best results, right? I would not want to spend more money than is actually required at all. And then there comes the point of risk management and assessing what the risks are and where the risk lies. So from, let's say, a security governance perspective, of a security leader, because the CEO is not necessarily equipped to understand security to the extent that the security leader does, right? And I think it's also probably unfair to expect that of CEOs. What are the key ways in which a security leader of an organization can help the CEO make such decisions, to ensure that they're better postured, while still not completely bankrupting the company?

Carsten Maartmann-Moe  16:50  
Yeah. That's a very good question. And I think it's probably a bit unfair to the CEOs out there to say that they only want to maximize the profits of their companies, because I think in reality most CEOs, at least the responsible ones, would like to spend enough money on security, enough meaning, you know, enough money to keep you out of harm's way, but still not more than exactly that amount. So I think most CEOs would be happy to spend money on security. I think the biggest problem for the CISO, for instance, is to explain how much that amount is. If you're a security leader in a company, you want to do enough, and it is often difficult to define that exact moment when you feel like, yeah, I think we have pretty good control. But it is an important state to get to, because it's not sustainable to add on more and more security people and more and more security tools in the long run. Even though the decision makers that are maybe above you may not understand the core tenets of security, they do understand economics. And at some point, they will challenge you, and if you can't come up with a good answer to why you are spending all this money on security, you're not in a good place. So I think you should try to be rational as a security leader. I mean, it always starts with the risk assessment; it's pretty boring, but it always starts there. Try to be a bit smart about what you secure first. And if you're producing something, I mean, your product is what's generating money for the company, so maybe you start there. And it's continuous; working in security is continuous work. You can't stop there, but you can continuously improve both your internal and your product security stance over time. But I think a lot of people think that they have to have a lot of people. 
They think, you know, that a company must be good at security because it has X security people compared to the number of people employed. And I don't think that's a good metric at all. I think we should probably talk more about coverage, in terms of where we're comfortable with the security in the company. It could be business divisions, or it could be parts of software. And it could also be highlighting places where we're uncertain about our security stance. That would be a good way to communicate where you need investments and where you're comfortable. But it's difficult. I mean, the security leader role is extremely difficult, because you have to balance the financial and cost perspective with security. And traditionally, security has not been prioritized. I feel like that's changed a bit now; the problem is more, you know, how do we do this in a smart way? I think emerging tech and AI can play a huge role in that. And that's the positive side of emerging tech: you have a lot of interesting AI and machine learning and automation software coming up that tries to solve different sorts of security problems, everything from passwords to security monitoring. They're not quite there yet.

Monica Verma  20:37  
So what are your predictions for the next five years, as you're saying, in terms of emerging technology and AI and all these things that can help in terms of security?

Carsten Maartmann-Moe  20:48  
Yeah, it's always difficult. But I think that maybe we tend to overestimate the short-term impact of emerging tech and underestimate the long-term impact. I think the long-term impact is going to be profound, not only for cyber but for society in general, and hence cyber will play a larger role in upcoming decisions, not only tech decisions but political decisions. I mean, we see pre-stages of that already, with everything from the Cambridge Analytica case, with influencing the US election, and other stuff, where we have a very interesting sort of perfect storm of a growing attack surface and a lot of technology optimism, but also some, I would say, warning shots about the potential dark side of, let's say, technology, where technology plays a role in propaganda and in manipulation and in fake news. So it's difficult to say, but I think that within 10 years we will have a whole new field of cybersecurity coupled with policy and AI, robotics, and maybe ethics thrown into the bunch as well, because I think we're on our way to a world where automation and robots will be a large part of our day-to-day work, and a large part of society. And that will be interesting. If I'm thinking out of the box, and I'm not sure it's going to be like this, but if you think about it, we're now trying to replace parts of what we're doing as humans with robots and AI and algorithms. So although removing some of that human weakness is good for security, the sheer scale and speed of it all creates new challenges. Because suddenly, I mean, a lot of the big breaches in banks, for instance, that you don't hear about have been prevented because there is slowness in the financial system. Money does not go out of your account immediately. I mean, it looks like that in your online banking, but it does not go out immediately. 
There are checks and balances to stuff. But as you automate, and create, let's say, automated AI-powered, machine-learning-powered insurance claims, for instance, then you take those human checks and balances out of the equation, and things suddenly move at a greater speed. So the difference between a machine and a human in this case is that if you fool a human being into transferring money to you, for instance, or if you fool an insurance company into paying out some insurance sum to you, you fool them once. But once you learn how to fool a machine or an algorithm, you could potentially fool it a lot of times really quickly, and affect a lot of people really quickly.

Monica Verma  24:23  
On a larger scale. 

Carsten Maartmann-Moe  24:25  
Yeah, a whole other scale. So, as I said, security might be improved by automation, and in general it is improving. I think we will probably see maybe fewer incidents, I don't know about that, but maybe fewer incidents, but at least larger ones in terms of impact, both because of the speed and the size of everything, but also because of the fact that these algorithms are suddenly interacting with us in the physical world, not only online. So that will be very, very interesting to see. But my prediction is that if the people working with automated driving or unsupervised production units and stuff like that succeed, we'll have a whole new range of cybersecurity issues to tackle. And that will be very, very interesting.

Monica Verma  25:31  
Right. And there is the other side of it as well: automation will also be used by the adversaries to be better at their game. I mean, it's always an arms race, right? And then the other aspect, as you said, of ethics. I totally agree; I think the ethics part will be even more important, even more critical. I mean, it is an important part even today, but it will be more challenging in the years to come, and we will have to be discussing ethics at the table, in much clearer terms, and what these mean. We have already started discussions about ethics and AI. But as you said, the rules and regulations framework, to ensure that these are not violated, will be quite critical in the years to come.

Carsten Maartmann-Moe  26:13  
And it's very difficult, I mean, to discuss ethics within AI. AI in itself is a very complex subject, which I am in no way an expert on, by the way. And I hear myself sometimes talking about AI like I know how it works. I'm fairly technical. So the challenge there is to make policymakers understand what AI is and isn't. And there's a lot of confusion out there about AI and machine learning and unsupervised learning and all these terms that the community is using. People don't understand, in reality, how, for instance, machine learning works at a good level. And that's a problem in itself. Because how can you make good policies if you don't understand how the stuff works? So we have a lot of learning to do.

Monica Verma  27:16  
Lot of work to do.

Carsten Maartmann-Moe  27:18  
Yeah.

Monica Verma  27:19  
Fantastic. I think we will have interesting roles in the future as well. So it's not getting any less interesting any time soon. Yeah, fantastic, Carsten. Would you like to sum up by giving some recommended reading for the audience?

Carsten Maartmann-Moe  27:38  
Sure. In an extension of what we were talking about just now, one book I would recommend is a book called Weapons of Math Destruction. It's a very interesting book if you're concerned, or even if you're not concerned, about the challenges with data, automation and algorithms making decisions on behalf of humans. You may disagree or agree with the conclusions, but it's sort of an eye-opener, I think, when it comes to what can be done with data. I think it's very, very interesting to see how data can be used, and how data quality and bias in that data can sometimes produce decisions that are maybe not wanted, or unethical, one may say, depending on what sort of moral and ethical framework you operate within. So that's a very interesting book. There's a lot of good hacker books out there as well. The first one I read was The Cuckoo's Egg by Clifford Stoll, which is very interesting if you want to read a mix of sysadmin stories from before the millennium, and how he tracked down hackers using some fairly old-school methods that are still relevant, which is very interesting to me at least. So those are some books I would recommend. Get off the internet and read a book.

Monica Verma  29:20  
Yeah. Lovely. Thank you so much, Carsten. It was lovely having you on the podcast episode today. An amazing conversation and discussion, and I can fairly certainly say that, yes, we will be needing ethical hackers in the years to come, to help us with the emerging technologies and, well, society in general.

Carsten Maartmann-Moe  29:39  
Thank you, Monica. Thanks for having me here.

Monica Verma  29:41  
Thank you. So I'll be signing off now. That was today's episode of We Talk Cyber with Monica. This is your host Monica and I'll be back with more amazing episodes, amazing discussions and fantastic guests. So, continue tuning in and until then take care and stay safe. 

Outro 29:56  
Thanks for tuning in to We Talk Cyber with Monica. Do not forget to subscribe to We Talk Cyber in your favorite podcast app and YouTube channel Monica Talks Cyber. Take care and continue tuning in.