Cloud security via deep observability

Paul Hooper, Board Member at Gigamon. Image courtesy of Gigamon.

Cybersecurity isn’t just about measures like encryption, firewalls, or multi-factor authentication. The level of observability in an infrastructure also matters.

Why? Because being able to monitor the entire network in real time will help make security more consistent overall, which is essential in the current business landscape.

Gigamon, a visibility and traffic monitoring technology vendor, takes this concept further by pushing for “deep” observability to mitigate security and compliance risk, especially for organisations migrating to the cloud.

To find out more about this level of system awareness, we sat down virtually with Paul Hooper, Board Member at Gigamon, and talked about a variety of things, like the connection between cybersecurity and deep observability, what the organisation learned during the pandemic, and what future tech they’re working on.

Hooper was CEO at Gigamon for nearly a decade before stepping down recently to serve as a member of the organisation’s board of directors.

Digital transformation and remote work have grown rapidly since the start of COVID, but that growth has come with cybersecurity risks. Meanwhile, Gigamon promises deep observability for the hybrid cloud. What role does deep observability play in mitigating these cybersecurity risks?

As we think about digital transformation, particularly against the backdrop of COVID – which, in my opinion, whilst it has had some major implications for the globe, has obviously also had some pretty significant benefits – it forced an acceleration of digital transformation in ways that we really didn’t contemplate going into COVID. But certainly coming out of COVID, we’re seeing some uptake and some new innovations that are really required to live in the world that we’re in today.

Digital transformation has certainly driven a new focus around cybersecurity, because so much of our lives today now operates in the broad and expanded world rather than in the constrained office world we used to be in. With that, how people think about protecting and securing their environments has had to evolve just as rapidly, particularly as transformation drives us into the cloud rather than the physical data centre, because of the distribution of your clients and your employees.

All of a sudden, security takes on new meaning, because you can no longer trust in the four walls or the hard outer shell of your office, or assume your data centre is secure in that landscape. You need transparent security that extends from the physical data centre through to, and including, the public cloud. In other words, hybrid architectures.

We saw this need to take our visibility solution and provide a deep observability pipeline for the hybrid environment. In other words, being able to provide real-time network intelligence of traffic in motion, whether it be in a physical, virtual, multi-cloud, or third-party environment. So wherever you’re trying to manage, secure, and monitor, you’re able to get access to that information, (and) be able to use it to maintain the security posture that you require as an organisation.

What were some of Gigamon’s biggest lessons of 2021? And what are your organisation’s top business priorities for 2022?

2021 was a learning (experience) for a lot of us. When the World Health Organization declared a pandemic on March 11, 2020, many organisations – including ours – switched from an almost exclusively in-office environment to an almost exclusively at-home environment.

As a CEO, I was really concerned about whether efficiency was going to decay inside the organisation, because we would no longer be in offices together, having team meetings, and we’d be interacting in a different way. And people were operating from home, where a whole new set of distractions existed, because kids weren’t at school, and mums and dads became school teachers. All of a sudden, these distractions that none of us had prior to March 2020 became a reality almost overnight.

I was really worried about the efficiency of the organisation. With no disrespect to any of our employees, I thought a number of them were going to think, ‘Yeah, this is a four-week lockdown, that’s no problem. I’ll just take an extended vacation, life will be good.’

It turned out to be a lot longer than four weeks; it turned out to be the best part of two years. And I was very pleasantly surprised: the efficiency of our organisation, I believe, marginally increased when people worked at home, because we wound up with more time, and more employee time. They were spending less time commuting, less time getting distracted by all of the other things going on around the office, and at home you just focus on closing out the day job.

So a big learning in 2021 was the efficiency of our organisation. I believe (it) actually improved with the work from home, which is driving us to think about how we return to the office and what that really looks like in a very different way.

The other big learning in 2021 was that, for many organisations, the move into the cloud went from a five-year roadmap to almost a five-week roadmap. I’m exaggerating a bit for effect, but it really was a huge acceleration of people moving into the cloud. And that caused people to have to reposition very quickly the way they think about cybersecurity, management, and monitoring across the enterprise.

The other big learning for me – a pleasant surprise for the organisation – was how broadband networks, 4G, LTE, and the physical infrastructure really survived the massive pressure test they got in March (2020), when all of a sudden the office local area network became a wide area network overnight. People were able to connect, and businesses continued to be able to operate. So broadband and all of the service providers – the 4G, LTE, and 5G infrastructure – stood up to a big test.

So 2021 was a lot of interesting learning for us, and it really provides a backdrop for how we think about going into 2022. There’s definitely a continued focus on the cloud: we have taken a technology set that five years ago was very hardware-centric and based in the data centre, and turned it into something that is incredibly software-centric.

Today, we’re pitching and selling more into the cloud than we do into the data centre, and people are starting to use visibility – which we’ve evolved into deep observability – to do much more than just manage environments. It’s being used to manage, secure, monitor, operationalise, productionalise, to bring value to, (and) to drive revenue from. It really is the crown jewels of so many enterprises. And that’s what’s driving us in ’22.

What are some of the most exciting developments in Gigamon labs, specifically in the emerging technologies that you plan to use going forward?

A little bit of background on me: I’m an ex-engineer and an ex-CIO, so I can geek out really quickly on technology. And at this whiteboard (motions to a nearby whiteboard), I could spend the next hour explaining to you some of the really cool stuff that we’re doing.

We have a worldwide R&D organisation that is continuously focused on evolving our deep observability platform. But if I were to pick maybe two areas, let me pick both ends of the scale – there’s a spectrum between them. On one end, the team is working on the challenges associated with what’s called scale-out architecture.

In the cloud, as you think about an instance of some application, when it comes under load or under stress – basically suffering because it’s handling too much traffic – it can clone itself; you can start scaling out. But when you go to a scale-out world, all of a sudden these are very discrete instances, and traffic may be going through any number of them as they scale out to take the load. How do you maintain continuity across all of those to make sure the information is understood? Take instance A and instance Z: you have to make sure both of those – and everything in between – stay in communication with each other, so that when they see information going through, they know when they’re seeing duplicates and when it’s unique.

So the team’s been working on how to operate in a massively scaled-out memory architecture, and it’s not easy. We’ve got a number of patents already filed around it, and we’ve got more coming – one of the things we’ve always focused on is patent-protecting the work that we do. But how do you scale out that architecture and maintain continuity of your traffic? That’s one end of the scale: massively software-centric, massively scaled out, and massively cloud.
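To make that continuity problem concrete, here is a minimal, hypothetical sketch – not Gigamon’s actual implementation, which the interview doesn’t describe – of how scaled-out monitoring instances might fingerprint packets and consult a shared store so that instance A and instance Z agree on what is a duplicate and what is unique:

```python
import hashlib


class SharedFingerprintStore:
    """Stands in for a distributed store that every scaled-out instance can
    reach. In a real deployment this would be something like a shared cache
    or a gossip protocol; here it is just an in-memory set for illustration."""

    def __init__(self):
        self._seen = set()

    def check_and_record(self, fingerprint: str) -> bool:
        """Return True if any instance has already recorded this fingerprint."""
        if fingerprint in self._seen:
            return True
        self._seen.add(fingerprint)
        return False


class MonitoringInstance:
    """One clone in the scale-out tier. Each instance fingerprints the packets
    it sees and asks the shared store whether the same packet has already been
    processed elsewhere."""

    def __init__(self, name: str, store: SharedFingerprintStore):
        self.name = name
        self.store = store

    def handle_packet(self, payload: bytes) -> str:
        fingerprint = hashlib.sha256(payload).hexdigest()
        if self.store.check_and_record(fingerprint):
            return f"{self.name}: duplicate, dropping"
        return f"{self.name}: unique, forwarding to tools"


if __name__ == "__main__":
    store = SharedFingerprintStore()
    instance_a = MonitoringInstance("instance A", store)
    instance_z = MonitoringInstance("instance Z", store)

    packet = b"GET /login HTTP/1.1"
    print(instance_a.handle_packet(packet))  # unique, forwarding to tools
    print(instance_z.handle_packet(packet))  # duplicate, dropping
```

The hard part Hooper alludes to is doing this at line rate across many instances without the shared state itself becoming the bottleneck; the sketch above glosses over that entirely.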

Right at the other end of the scale, we’ve just shipped a 400-gig product. When I started in this industry, one meg was pretty quick, 10 meg was pretty amazing, and one gig hadn’t even been dreamt of. And here we are at 400 gig. The platform is 1RU with 24 ports of 400 gig – the box is a multi-terabit system.

When we launched new products over the course of the last few years, I would always send a note out to the company saying, ‘Let me internalise this for you – let me tell you how many DVDs per second you can get through this box, or something that a human can relate to.’ Because ‘400 gig’ on its own doesn’t mean much. So what? It’s four times faster than 100 gig. So what? But when you try to internalise the capacity of a 400-gig box, it’s incredible: you could, for example, put all of the telephone calls around the globe through that box, and they would all traverse it within a matter of about four seconds.
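As a rough back-of-envelope illustration of that ‘something a human can relate to’ framing (my arithmetic, not a figure from the interview; the DVD size is the standard 4.7 GB single-layer capacity):

```python
# Back-of-envelope throughput arithmetic for a 24-port, 400-gig box.
ports = 24
port_rate_gbps = 400                        # gigabits per second per port
total_gbps = ports * port_rate_gbps         # 9,600 Gbps = 9.6 Tbps aggregate
total_gb_per_sec = total_gbps / 8           # 1,200 gigabytes per second

dvd_gb = 4.7                                # single-layer DVD capacity in GB
dvds_per_second = total_gb_per_sec / dvd_gb
print(f"{total_gbps / 1000:.1f} Tbps aggregate, ~{dvds_per_second:.0f} DVDs per second")
# -> 9.6 Tbps aggregate, ~255 DVDs per second
```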

You think to yourself, ‘Wow.’ And that’s going to be at the top of a rack in a data centre. So we’re working on both ends of the scale: one is very innovative software technologies, the other is latest-generation hardware technologies with 400 gig, and we’re starting to look at terabit connectivity. So 400 gig is almost passé now; terabit is on the way. What do you end up with at terabit? What’s the interface going to look like? How are you going to think about connecting all this stuff? We’re researching in all areas, and as I say, those are the two ends of the spectrum – trust me, there are multiple other tracks in between.

If you’re a technologist at heart, you know that one of the biggest challenges in this industry with hardware is not the performance, the capacity, or the throughput. It’s actually the heat generation. The thermals of these boxes are a real problem. Keeping them cool, because of the amount of performance inside of these things, is absolutely the biggest problem that most industries face. We face it.

So cooling is probably the number one design criterion to think about, but it’s really exciting stuff to see. I love going out to our engineering labs and seeing these boxes running, with two fans blowing down on this tiny little chip in the middle that’s doing something pretty magical.

How do you envision deep observability and its impact on cybersecurity within the next three to five years?

I firmly believe that deep observability is going to become a prerequisite to architecture and infrastructure designs, wherever they may be. And if you believe some of the latest reports, 92% of enterprises will be deploying hybrid, potentially multi-cloud architectures.

In that world, deep observability is not going to be a nice-to-have; it’s going to be an essential component that needs to be designed into infrastructure. So over the course of the next three to five years, I believe deep observability takes the next step forward. You can certainly gain observability and access to information in motion wherever it may be. But once you have that access, what do you want to do with it? Now you can observe the information – how do you use that observability to really advance the requirements of the enterprise? That’s the area that I think is really exciting for us.

So as we start moving forward through the next three-to-five-year journey, how do we bring more and more insight to that information? Because the world of yesterday was quite passive, and we need to start moving into a much more active world, where people think about cybersecurity as a living beast rather than a passive ‘I’ve done security, now moving on.’ It’s much more transactional.

The area I talk about frequently is this – I’m sure you’ve heard about zero-trust architectures – and how people think about zero trust. I believe the industry is being positioned to think that zero trust is a technology; it’s not. Zero trust is a psychology, it’s a policy, and then it’s a technology. So as we go forward over the next three to five years, it’s about how you take that psychology of security and really embed it into your infrastructure design, leveraging deep observability to change the way you think, the policies you apply, and the technology you use to deliver security into the future.