Robert Scanlon (SecureClick) in conversation with Gopal Padinjaruveetil, Chief Information Security Officer of the Auto Club Group.

OK, first of all, thank you for participating in this interview! I know you’re not a big fan of the term “Security awareness.” Can you explain why?

We are aware of a lot of things. For example, we know that we’re polluting the planet. We know that we have to eat healthily. But our behaviour is not always aligned with what we’re aware of. Take phishing and social engineering: it’s rare for a user to genuinely be able to say, “I don’t know what phishing is,” or, “I don’t know what social engineering is.” Most people involved in technology know what phishing and social engineering are, but their behaviour is not aligned with their awareness. That is why I don’t like calling it an awareness issue, because it’s not an awareness issue. It’s a different issue.

So, what would you like to call it?

Well, I like to use the term Raising Information Security Effectiveness, or RISE for short. This term is more aligned with behaviour change than with awareness.
 
As a CISO, how many users do you look after?

Around 15,000 users.
 
So, what challenges does modifying security behaviour pose with user numbers at that scale?
 
It’s making sure that everyone understands that they have skin in the game. They have a role to play. A lot of people think security is the security team’s job or the CISO’s job. That was probably true ten or twenty years ago. But with everything now being digital, and attacks now happening at the human level, it’s important that every employee, every human, knows that they have a role to play. I come from an industrial background, where safety was a big issue. Collectively, we got better at safety. We need to look at what the manufacturing sector did for safety and align it to “cyber safety”, or cyber hygiene. Safety needs to become a habit. For example, during COVID-19, people would say, “Wash your hands.” It’s a habit. We have to make IT security a habit.

We have to emphasise to users that the internet is not a very safe place. It might look innocuous, but it is actually a very dangerous place. There are criminals. There are scammers. There are a lot of predators. But, on the surface, it does not look that way. People are complacent. People don’t realise that the internet is a very bad neighbourhood. Take New York, London, or Dublin: you wouldn’t go to a shady neighbourhood at 1 am if you knew that place was not safe. You’d take some precautions, right? Online, there are good people and bad people roaming about, and you don’t always see them. That is a challenge.

How do you get users to realise that there is actually skin in the game for them?

It’s for their benefit. Even in your personal life, this is important. It’s the communication, or messaging, that we’re trying to master here. For example, if you’re in a big city like New York, you’re not just going to step out and cross the road; you’re going to look both ways. Because you’re aware of the danger, your behaviour automatically changes.

So, in terms of cybersecurity, how do we drive home to employees that they have skin in the game?
 
Well, two things. First, there has to be commitment at the top, not just platitudes that security is important. The tone at the top has to be absolutely committed to this. From the top to the bottom of the organisation, we should have people saying, “This is an important risk for us.” The communication needs to be synchronised, with everybody saying the same thing all across the organisation. This also requires some level of cultural change. Second, security risk sometimes has to be translated into dollar values. People need to understand that if something happens, the brand’s image is going to get damaged, or the company could experience a direct financial impact. We’re trying to convert the risk into real-life impacts that different people can understand. Humans are subject to what is known in psychology as “the telescoping effect.” For example, we’re witnessing a war right now, but because we’re sitting so far away, we assume it’s not going to affect us. Once you remove that bias and realise that such an event can happen to you too, your perception of risk changes.

The same applied in the early days of COVID-19: people would say, “So-and-so got it, but I won’t get it.” But, in reality, all humans are susceptible to the virus. That is why users must understand the telescoping effect. Anyone can be a victim of cybercrime. If cybercrime were a country, it would be the third-richest country in the world in terms of GDP. The cybercrime “market” is worth over 10 trillion dollars, and every second, 20,000 dollars is lost to cybercrime. It has become more attractive than the drug trade: the drug trade carries physical risk, but with cyber it’s a faceless enemy. The threat is real. Once users get that message, things will automatically change. We don’t want to scare people, because once you go down that route people become paranoid. We want them to use their intelligence to apply the right level of scrutiny and observe what is happening.

The reason people fail here is not a lack of awareness; it’s often that people become distracted. We have done surveys on this. When somebody fails in this area, we want to know: why did you click on this link? We want to learn from the experience. The responses overwhelmingly fall into two buckets. One is, “I was distracted,” or, “I was multitasking, I was not paying attention.” And that’s a very human thing; we do get distracted. The second is, “I felt manipulated. The link or attachment really looked like the real thing. I trusted it and fell for it.” Humans are curious. It’s these subtle, underlying aspects of security that we’re trying to solve.

There will always be a cohort of employees who say that making employees responsible is just offloading the responsibility for IT security from management onto staff. What would you say to that sentiment?

If something happens, we’re all in this together. Gopal might lose his job! We’ve seen CISOs get fired too. Every company that has suffered a cyberattack or breach has experienced financial impacts. We’re not trying to offload. We’re not asking employees to do the security work. We’re not asking them to build defence layers. All we’re asking is that they do their part to keep the organisation safe, their customers’ and partners’ data safe, and themselves safe. You have a role; we’re not offloading anything. We’re doing whatever we can to keep the organisation safe, and we’re putting safety nets in place in case people fail. So, you have firewalls, identity and access technology, SIEMs and SOCs. A lot of good technology is being implemented, and users don’t even have to know what it is.

Yet in spite of all these technological controls, an intelligent hacker or adversary will try to attack the human rather than the technology. There was a time when hackers attacked infrastructure, servers, and networks. That still happens, but most attacks now happen at the human layer. They’re trying to make you click on a link, for example, and download malware. Then things start going south. It all starts with a human action. If you look at the natural world, there is the concept of “patient zero.” We saw that with COVID-19 and the wet market in Wuhan. We are seeing the exact same thing in the cyberworld: it starts with one human, and then things start cascading. So, as for the argument that we’re offloading these things onto the user: users absolutely have a role to play in the cyberworld.

So, you were saying you have experience in the manufacturing sector. Industrial accidents are way down compared to, let’s say, fifty years ago. What did the manufacturing sector do to achieve that?

Well, every accident made the industry safer. As with the airline industry, whenever there was a safety incident, everyone knew about it and the system itself became better. They tried to eliminate human error through training. They tried to eliminate system errors through redundancy, so single points of failure were avoided. Companies like Alcoa gave amazing training. Every company realised that man-hours lost are production lost; there was a bottom-line impact. There were also a lot of very good messaging campaigns on billboards, like “Speed thrills but kills”: amazingly simple-to-understand campaigns. Another thing they did was run initiatives like “X days passed without an accident”, and they even used to encourage employees with a small bonus. It promoted a mindset of constant improvement. So, every incident made the industry better as a whole. There was better knowledge sharing. Every employee was trained extensively to understand what they needed to do to stay safe. And lastly, the concept of reward and punishment was instituted. For example, I used to work in a plant where we were obliged to wear a helmet because things could fall on your head. If you were caught walking around without a helmet, you would be written up.

And would you recommend a similar approach for IT security?

Well, it worked! Nothing is perfect, but how often do you hear today about major industrial accidents at factories, nuclear power plants, or airlines? Safety has improved tremendously, and it’s not just technology that is enabling this. Alexander [Stein] and I, who study this, agree that you must have a multidisciplinary approach: people from psychology, people from safety. We need to handle this as a human problem, not just a technology problem. When you bring a multidisciplinary approach to solving these complex problems, you will see successes. If you look at cars, they’ve become safer because they use airbags and are subject to crash tests. I’m not saying that we copy that, but we know how to make people safe. Regulations, of course, have played a big role here too.

We hear of organisations that do extensive phishing simulation training and then, bam, one of their employees gets phished via a channel like Skype, for example, that no one even considered. So, how do organisations prepare for these curveball cyber events?

That’s a great question, Robert. You can’t train everybody for every known situation; sometimes we have to give them broad-brushstroke instructions. Safety is actually like a feeling: you can feel safe, and you can feel unsafe. When you’re walking into an isolated area, you sometimes get that feeling. We have that sixth sense because safety is about survival. So, we hope that, given enough information and training, people will get that sixth sense. They will get the sense that, “This does not feel safe.” Your sixth sense should kick in, and we’ve seen that happen all the time. You get that feeling that something doesn’t look right here. You cannot train for every single incident. There are going to be curveballs. You give people as much knowledge and information as you can and hope that they will sense, “This is not good. I’m not going to do this.”

And what about the multitasking employee who gets caught out at 4 pm on a Thursday afternoon?

Well, Daniel Kahneman has written a wonderful book called Thinking, Fast and Slow, where he describes how the human brain uses System 1 and System 2. System 1 is quick and reflexive; System 2 is slower and handles deliberate decision-making. In a distracted world, we are relying more and more on System 1. There are situations where we need System 2 to take over instead of System 1. We know how the mind works. We need to slow down a little bit.
 
So, looking after 15,000 users, what are some of the other challenges when promoting information security effectiveness at this scale?

Let me answer this question slightly differently. We really need to start cybersecurity education in elementary school, not in the workplace. By the time people come to the workplace and I have to train them, it’s probably too late! I hope that society will change so that by the time employees enter the workplace they’re basically well trained to handle these sorts of problems, and then we can augment that, instead of me starting from kindergarten.

When talking about training at scale, another thing we need to do is decentralise training. You need change champions within the organisation. Instead of the CISO taking a centralised approach, you have change champions or cyber defenders across departments. We’re not asking them to be technically qualified; it’s just about keeping cybersecurity at the forefront of people’s minds. We’re actually trying to use the same techniques that marketing uses. Advertising, for example, is all about reinforcement: they’re bombarding you with thirty-second ads, but they’re doing it many times. You’re watching TV, you see the ad; you go to a website, you see the same ad again, and again. You then start associating with that product. So, what I’m saying is: give users information in chunked, bite-sized segments, but do it frequently across the organisation, and decentralise it. If you look at nature, it is complex but decentralised. Blockchain is decentralised. The internet is another example, a technology that has grown so much in the last twenty-five years thanks to the beauty of decentralisation. This is not Gopal standing at a pulpit and preaching cybersecurity awareness; that’s not what this is about. We have to decentralise cybersecurity effectiveness across the organisation, and sometimes that’s the only way you can scale. That, along with starting early, is how to solve this problem.
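To make that delivery pattern concrete, here is a minimal sketch in Python of the “bite-sized, frequent, decentralised” approach described above: short lessons rotating across departmental champions, week by week. The department names, champion addresses, and lesson titles are hypothetical illustrations, not anything specified in the interview.

```python
from itertools import cycle

# Minimal sketch of chunked, frequent, decentralised delivery:
# short lessons rotate across departmental champions week by week.
# All names, addresses, and lesson titles below are hypothetical.

LESSONS = [
    "Spotting a spoofed sender address",
    "Hovering before you click",
    "Reporting a suspicious message",
]

CHAMPIONS = {
    "Finance": "finance.champion@example.com",
    "Claims": "claims.champion@example.com",
    "Marketing": "marketing.champion@example.com",
}

def schedule(weeks: int):
    """Yield (week, department, champion, lesson) assignments so every
    department sees every lesson, a little at a time, repeatedly."""
    lesson_cycle = cycle(LESSONS)
    for week in range(1, weeks + 1):
        lesson = next(lesson_cycle)
        for dept, champion in CHAMPIONS.items():
            yield week, dept, champion, lesson

for week, dept, champion, lesson in schedule(weeks=6):
    print(f"Week {week}: {dept} ({champion}) -> '{lesson}'")
```

The point of the rotation is reinforcement: each lesson comes back around on a short cycle, delivered locally by a champion rather than centrally by the CISO.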

In one of your LinkedIn articles, you mentioned a novella called Flatland. What can a novella from the 19th century teach us about cybersecurity awareness?

Well, in that novella the characters live in a two-dimensional world; they are not aware of the third dimension. Once you see the world from a three-dimensional perspective, you begin to see things in a different light. There are hidden dimensions to the cybersecurity problem; in fact, there are many dimensions to this problem. Once people start to understand the other dimensions, they will gain a deeper understanding of the problem, and this starts to be reflected in their IT security behaviours.
 
In your articles, you’ve also mentioned the need for partnering with your employees. How can this make cybersecurity programs more effective?

You don’t want your employees to feel that the CISO is the guy who says “no” to everything. There has to be a relationship based on trust, and that trust has to be built and earned. Then your users are likely to listen to you. You listen to people you trust; if you don’t trust somebody, you’re not likely to listen to them.

A lot of IT managers believe that if they subscribe their users to a cybersecurity awareness SaaS platform, it will change employees’ security behaviours. What do you think of that approach?

This is not about technology. This is about understanding humans and why they behave in a certain way. You need to understand the constraints they’re under. Relying on a SaaS application alone to train employees and get them to do certain exercises is wilful blindness. You know, it’s like that classic definition of insanity: “doing the same things again and again and expecting a different result.” We’ve been talking about awareness for the last ten or fifteen years and nothing has changed; it’s actually getting worse. Something is wrong. Of course, technology can be an enabler, and you can subscribe to a SaaS and get your users to watch videos, but it’s not going to get to the root of the problem. You need to understand what struggles users have, why they behave as they do, and how you can help them. We need to take a different approach to solve this problem.
 
How does the IT manager test the effectiveness of their program? Do metrics hold the answer?

Metrics can give you a sense of the direction of your program, but, for example, the number of people failing a phishing test can be very contextual. Given the right threshold, anybody can fail. You need a baseline metric, but don’t be fixated on the metrics. For example, if I give users a difficult test, we see more people failing; when you give them an easy test, fewer users fail. Exercises like these give you an indication of how to test, where to test, and what to test, but don’t be fixated on the numbers. You can measure the success of your program by the number of incidents that are reported. Asking questions like, “How many employees are actually reporting incidents?” can be a good indicator that people are now observing and taking cybersecurity seriously.
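To illustrate the contrast Gopal draws, here is a minimal sketch in Python, using hypothetical numbers rather than real survey data, of why a failure rate swings with test difficulty while a reporting rate can be the steadier signal.

```python
# Minimal sketch: comparing a phishing-simulation failure rate with a
# reporting rate. All numbers are hypothetical illustrations, not real data.

def failure_rate(clicked: int, targeted: int) -> float:
    """Share of targeted users who clicked the simulated phish."""
    return clicked / targeted

def reporting_rate(reported: int, targeted: int) -> float:
    """Share of targeted users who reported the simulated phish."""
    return reported / targeted

# Two simulations against the same 1,000 users: an easy lure and a hard one.
easy = {"targeted": 1000, "clicked": 40, "reported": 350}
hard = {"targeted": 1000, "clicked": 220, "reported": 360}

for name, run in (("easy lure", easy), ("hard lure", hard)):
    print(
        f"{name}: failure {failure_rate(run['clicked'], run['targeted']):.1%}, "
        f"reporting {reporting_rate(run['reported'], run['targeted']):.1%}"
    )

# The output shows the point made above: the failure rate swings with
# difficulty (4.0% vs 22.0%), while the reporting rate stays comparatively
# stable, making it a steadier signal of whether people are paying attention.
```

In these hypothetical runs, difficulty moves the failure rate by a factor of five while the reporting rate barely changes, which is why fixating on failure counts alone can mislead.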

Gopal, thank you for those profound and very interesting insights!
