December 26, 2024
EP 68 – Cloud Security, Collaboration and Futuring in the Now
In this episode, Trust Issues host David Puner wraps up 2024 with a conversation with Red Hat’s Field CTO Ambassador E.G. Nadhan about the future of cybersecurity. They discuss the importance of cloud security principles, the impact of emerging technologies like AI and quantum computing, and the challenges of managing machine identities. Nadhan emphasizes the need for organizations to prepare for future security challenges by understanding the attacker mindset and taking proactive steps today to protect for tomorrow. The conversation also touches on collaboration within the open source community and the role of Red Hat’s Field CTO organization in driving innovation and addressing market opportunities.
Today’s show, our final release of 2024, is a fitting cap to the year in that it’s got some reflections, but it’s more focused on the importance of staying ahead of the curve and preparing for future cyber challenges together—being cyber ready for whatever the future may deliver. Because years end, but reality does not. It evolves and morphs at varying haphazard speeds.
Our guest today is Red Hat’s Field CTO Ambassador, E.G. Nadhan. Or, for those of you into the whole brevity thing, Nadhan, as he prefers it. In our conversation, Nadhan talks about the importance of cloud security principles, the impact of emerging technologies like AI and quantum computing, and the challenges of managing machine identities. He also discusses collaboration within the open source community and the need for organizations to prepare for future security challenges by understanding the attacker mindset and taking proactive steps today to protect for tomorrow, rather than waiting until after a breach happens.
Thanks for spending time with us in 2024. We wish you a happy new year and a great now. Here’s my conversation with Nadhan.
[00:02:00] David Puner: Nadhan, Field CTO Ambassador with Red Hat. Welcome to Trust Issues. Thanks so much for coming on to the podcast.
[00:02:06] E.G. Nadhan: Great to be here, Dave. Honored to be here.
[00:02:10] David Puner: Excellent. Well, thank you so much. I know the holidays are coming up. We’re getting toward the end of 2024. So, to start things out, what is the Field CTO organization at Red Hat, and what does your role as the Field CTO Ambassador entail?
[00:02:30] E.G. Nadhan: Well, let me start by wishing a great new year to everyone. And, in that vein, the Field CTO organization—like any other software firm, in fact, many other firms—I would say we have the sales organization and then the engineering organization: global engineering, working with the product teams. So, building and evolving products, and then selling them, making the case for them. Two fundamental notions ever since the first lemonade stand, let’s say.
The Field CTO organization is really at that intersection. We have Field CTOs, technology strategists, and chief architects. We strategically engage globally—we’re a global team—with customers to identify market opportunities. So, it’s not just about making the case for the products on the truck, but also about identifying what markets we’re not addressing, what opportunities are out there. Then, our architects and strategists work with customers to put repeatable solutions together to address those areas.
At the same time, by meeting with customers and partners, we learn what’s missing and how our portfolio of products and services should be augmented. What is the white space? So, it’s an exciting intersection because we get to look ahead—not just at the products on the truck but also at what’s coming around the corner, what technology should be coming around the corner, and how we can be the catalyst for customers to apply it while continuing to work with our engineering and product teams to evolve what is eventually sold. That’s the Field CTO organization, Dave.
[00:04:00] David Puner: Really interesting, and thank you for that. I should note you are U.S.-based, correct? Where are you located?
[00:04:10] E.G. Nadhan: I’m based out of Chicago, but like I said earlier, our team is global.
[00:04:15] David Puner: Okay. So, you’ve been with Red Hat for a little over nine years at this point. How has your role at Red Hat, and what you do at Red Hat, evolved in those nine years?
[00:04:30] E.G. Nadhan: Great question. Around this time of year, you tend to look back and see how we got to where we are, and then look ahead to the year ahead. So, it’s a well-timed question. I joined Red Hat nine years ago, as you said. I started in sales as the Chief Technology Strategist for the central region. Account teams tend to focus on selling software or widgets—making the case for what’s available today. But my role was more strategic: to understand what the customer is looking for, what outcomes they’re targeting, and how Red Hat could partner in their journey.
That role grew from covering the central region, essentially the central time zone, to covering all of North America. I became the Chief Architect and Strategist for North America. That evolved into a global role where I became the role leader for the Chief Architect position at Red Hat, focusing on synthesizing and collaborating on strategies globally.
A couple of years ago, the Field CTO role was introduced, acting as an extension of the CTO role at Red Hat. Field CTOs have geographical coverage, extending and localizing the CTO’s message for their regions. Chief Architects moved into the Field CTO organization under global engineering, and my current role as Field CTO Ambassador was born. It’s about amplifying messaging, fostering dialogue with industry leaders and decision-makers, and co-engineering innovative solutions with customers and partners.
The role is new, just a couple of months old. Talk to me a year from now, and I’ll have more to share about what’s been accomplished!
[00:08:00] David Puner: So, this is somewhat theater of the mind for our audience, most of whom are consuming this episode in audio. But you are, in fact, wearing a red fedora, signifying Red Hat. When were you given that hat? Was it when you became the Field CTO Ambassador, or how did that come about?
[00:08:20] E.G. Nadhan: Good question! I should probably ask my leader for a hat proclaiming the Field CTO Ambassador role—point noted! This fedora, though, is something I take great pride in. I got it during my new hire orientation. Only Red Hat employees receive this particular hat, and it’s distinct from replicas given out at conferences.
During COVID, I started wearing it at home as a signal to my family that I was working and “off limits.” It also became a handy visual on Zoom calls—no need to introduce who I work for; it’s blatantly obvious, and I wear it with pride.
[00:09:00] David Puner: So, with the hat comes efficiency as well. I like it.
[00:09:05] E.G. Nadhan: Yes, I would say more the mindset, as in what does being a Red Hatter mean? It’s not just about open source—it’s about collaboration, innovation, open culture, and sharing. All of those things come together when you’re a Red Hatter.
[00:09:20] David Puner: How did you come to be focused on cloud security and emerging technologies like AI and quantum computing?
[00:09:30] E.G. Nadhan: When I was in sales, the focus was more on what customers could do with the products available today—support, certifications, and services around those products. But the open source community is always experimenting. At Red Hat, we’re the largest enterprise software company using an open source development model. We don’t even have a traditional R&D lab. The open source ecosystem itself is our R&D lab, running 24/7.
[00:12:00] E.G. Nadhan: The open source ecosystem is pretty much our R&D lab. By that, I mean, we have paid Red Hat employees who are contributors and leads in the open source community. But we collaborate with our partners, our competitors, and contributors. It’s a community where participants do what they want to do, not just what they have to do.
In addition to passion, there’s co-engineering and co-innovation that happens. Some projects see the light of day, and some don’t. Our model is to bring the ones gaining traction to the forefront and productize them. We give them lifecycle management, a roadmap, and stand behind them. That’s what makes it to our platform.
Because of that pipeline, roles like Chief Architects and Field CTOs require us to stay ahead of the game and look toward what’s coming next. When the curious customer poses the question, “This is great, but what will we see in two years? Not just from the product, but what technologies do we see emerging?” we need to be ready. That’s why we strive to stay ahead of the curve and, in some cases, drive what should come next, taking the lead.
As we were absorbed into global engineering and the Field CTO organization, staying ahead of the curve became par for the course during the workday. That’s how I got into emerging technologies overall. We also have a peer team focused exclusively on experimentation with different open source projects, and we are very closely tied to them.
You asked specifically about AI and quantum computing. Who isn’t into AI, you know, in some shape or form?
[00:13:00] David Puner: Fair point.
[00:13:02] E.G. Nadhan: Yes. That’s just being part of the digital world we live in. The question wasn’t whether we should be part of it, but rather, “What is our role? What’s our value-add? How can we help with the massive shift the industry is experiencing with AI?”
My foray into quantum computing is very different. Red Hat is part of IBM as a wholly owned subsidiary, but we retain our independence, which I say with great pride. When we were acquired, people asked if we’d be “blue washed” or start wearing purple hats. None of that happened, even symbolically.
IBM reached out to some Red Hatters, including me, to go through their training and certification program to become IBM Quantum Ambassadors. I’m now one of those ambassadors, and I initiate conversations with customers and prospects who want to explore what’s possible with quantum computing. I also lead the IBM Quantum Ambassadors in the central region as a Quantum Senior Ambassador.
[00:14:00] David Puner: Is there some sort of formal training you need to go through to be a Quantum Ambassador?
[00:14:05] E.G. Nadhan: Yes, there’s training, an interview process, and ongoing engagements to maintain the certification. It’s a continuous evolution and certification process.
[00:14:15] David Puner: I do want to get back to quantum computing a little bit later, but first, what are the fundamental principles of security that organizations should adhere to regardless of whether they are operating in the cloud, on-prem, or hybrid environments?
[00:14:30] E.G. Nadhan: I remember when we started talking about the cloud, just like we’ve been talking about AI recently. The question back then was, “Can you really be secure in the cloud? Are you taking on additional risks?” There were multiple schools of thought.
The position I took was this: if you are secure as an enterprise, if you enforce and adhere to security standards and fundamental principles with proper governance, you will be secure no matter where you are.
[00:15:00] David Puner: What are the key elements that constitute a strong security foundation for organizations moving to the cloud?
[00:15:05] E.G. Nadhan: Absolutely. First, being secure by design is critical. Security cannot be an afterthought or a box to check at the end of the architecture process. It must be integrated into the design process.
For example, shipping with a default password of "1234" is not secure. When end users install software, they often click past the custom settings to start using it quickly. That makes it essential to be highly sensitive to secure default configurations.
Another element is separation of duties. There should never be a single person with full control or access. If that individual turns malicious, they could cause significant harm. Spreading responsibility across multiple people or systems mitigates this risk.
Privilege management is another key factor. Giving everyone root or superuser access may seem easier, but it’s incredibly risky. Users should have access only to the resources necessary for their tasks.
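The least-privilege idea Nadhan describes can be sketched as a deny-by-default permission check. This is a hedged, minimal illustration; the role names and permission strings are invented, and real systems would use an enterprise RBAC or policy engine rather than a dictionary:

```python
# Deny-by-default access control: each role is granted only the
# permissions its task requires; everything else is refused.
ROLE_PERMISSIONS = {
    "deploy-bot": {"read:manifests", "write:staging"},
    "auditor": {"read:logs", "read:manifests"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Allow only explicitly granted permissions; unknown roles get nothing."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("auditor", "read:logs"))      # True
print(is_allowed("auditor", "write:staging"))  # False
```

The important design choice is the default: an unlisted role or permission falls through to "no", so forgetting to grant access fails safe rather than open.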
Transparency is also vital. The open source principle of transparency should extend to security—applied to data, designs, and algorithms.
Finally, understanding the threat is essential. Security is like a game of chess. You need to think ahead and anticipate what could go wrong. This includes adopting the attacker mindset and considering, “What would a hacker or bad actor do?” Attackers collaborate effectively, and we must mimic their strategies to stay ahead.
[00:17:00] David Puner: Right. So, you’re talking about the attacker mindset and getting into it.
[00:18:00] E.G. Nadhan: Yes, exactly. There's also a lot said about defense in depth. I'll mention it, but you're not secure just because you have defense in depth. You still need to do everything else, because every one of those layers can be penetrated as well.
[00:18:30] David Puner: So then, those are the elements. How can organizations ensure that they have the right practices and governance in place to maintain security in the cloud?
[00:18:40] E.G. Nadhan: First off, there needs to be collaboration. If there is shadow IT, then the IT organization overall—from the CIO down—should be respectful of how that came about. Instead of leaving them alone or not allowing them to bring their initiatives into the fold, ask: “What is missing? Why did you have to create this? How can we do this better at an enterprise level?”
Because the moment you become an enterprise with multiple units, different teams, different business units, and different projects doing their own thing, you are exposing yourself to vulnerabilities. Instead, governance should respect those efforts but bring everybody into the fold.
“Okay, we are all part of the same enterprise. There is a main door, but there are multiple entry points—the patio doors, the roof, the ceiling, and so on. Let us collaborate on how to secure all these potential entry points. Let’s collaborate on tracking what kinds of intrusions are happening.”
We may have our own needs for functionality, business outcomes, and security in different domains, but it should be aligned at an enterprise level. Starting early, rather than making security an afterthought, is critical.
[00:20:00] David Puner: Red Hat, as you’ve mentioned, is known for its open source development model. In the context of open source, what are the key steps for organizations to establish a trusted software supply chain? What role do DevSecOps practices play, and what best practices do you recommend for managing, automating, and securing hybrid cloud environments?
[00:20:30] E.G. Nadhan: There’s a cycle from a software supply chain standpoint. In our case, it starts upstream, with the open source community and community leadership. When there are packages—whatever tool is being used, whether it’s Bugzilla, Jira, or others—we ensure the package is reviewed and tracked for early release and inclusion into Red Hat Enterprise Linux.
Next comes security scanning of what is actually going out. This includes using compiler flags set for hardening and security, followed by extensive quality engineering testing per release. We also ensure that all packages are digitally signed before distribution, with continuous security updates.
Each step is essential to ensure not only a secure supply chain but also preparedness for any necessary mitigation efforts should something go wrong.
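The verify-before-distribute step Nadhan mentions can be shown in miniature with a digest check. This is only a hedged sketch of the shape of the idea; real distributions like Red Hat Enterprise Linux use asymmetric signatures (e.g., GPG keys), not bare hashes, and the package bytes here are invented:

```python
import hashlib
import hmac

def verify_package(data: bytes, published_digest: str) -> bool:
    """Recompute the SHA-256 digest of the downloaded bytes and compare it,
    in constant time, against the digest published by the distributor."""
    actual = hashlib.sha256(data).hexdigest()
    return hmac.compare_digest(actual, published_digest)

pkg = b"pretend this is a downloaded package"
good = hashlib.sha256(pkg).hexdigest()

print(verify_package(pkg, good))                # True
print(verify_package(pkg + b"tampered", good))  # False
```

Any tampering between build and install changes the digest, so the consumer can refuse to install anything that no longer matches what was signed off.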
In addition, from a Field CTO organization standpoint, we emphasize validated patterns. Let me explain what that means.
There’s an overused term: reference architectures. It’s often vague—”We need a reference architecture for this use case.” What does that mean? We prefer the term portfolio architectures. We start by identifying the portfolio of capabilities needed for a given use case. Then we determine the technologies that enable those capabilities.
Let’s say we have 20 to 25 architectures. We analyze these to identify common patterns across multiple architectures. Those patterns are then brought to life using Red Hat and partner technologies. This includes architecture as code and single-click deployment of said patterns.
[00:22:00] The key is validating these patterns. When there are new product releases—whether from us or our partners—we ensure that the integration of all the capabilities with different products continues to work. These patterns go through quality engineering to maintain their integrity.
The reason this matters is that enterprises often expose themselves during product upgrades or version changes, especially when these span firewalls. Hackers look for weak links like these. So, it’s not enough to secure a single product. The overall pattern—the integrated solution that enables the capabilities—needs to be secure.
[00:23:00] David Puner: So then, how can organizations balance the need for security with the need for agility in hybrid cloud environments?
[00:24:00] E.G. Nadhan: The idea, again, is to make sure that it’s not taken for granted. In the cloud, when you are deploying containers, image scanning is something very basic that needs to be done. You want to make sure that you have open platforms, and whenever there is a new workload being deployed, ensure there is proper scanning.
There’s this notion that because it’s a container, it’s like a fireproof chest and nothing can penetrate it. That’s completely untrue. You cannot trust what is in the container, so you have to go through image scanning. Absolutely. That is one step. Then there’s certification, as I mentioned earlier, and software signing, which absolutely needs to be done for each deployment.
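The admission decision behind "scan and sign before you deploy" can be sketched as a gate that a workload must pass. This is a toy, hedged model; real scanners such as Clair or Trivy inspect image contents in depth, and the digests and deny-list here are made up:

```python
# Deployment gate: admit an image only if it is signed AND its digest
# is not on a deny-list of known-bad images (digests are fabricated).
KNOWN_BAD_DIGESTS = {"sha256:deadbeef", "sha256:cafebabe"}

def admit(image_digest: str, signed: bool) -> bool:
    """Admit only signed images that are not known to be bad."""
    return signed and image_digest not in KNOWN_BAD_DIGESTS

print(admit("sha256:0123abcd", signed=True))   # True
print(admit("sha256:deadbeef", signed=True))   # False: known-bad image
print(admit("sha256:0123abcd", signed=False))  # False: unsigned image
```

In a real cluster this check would run as an admission controller, so the gate applies automatically every time a workload is deployed, not just the first time.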
Looking at it holistically, it’s not just about securing a single workload but ensuring the combination of capabilities being deployed continues to work securely through quality engineering.
We also live in a world where it’s not “one cloud fits all.” There are multiple cloud environments—it could be a virtualization environment in the data center, or it could be the edge. Just because a workload is secure in one environment, with one cloud provider, doesn’t mean it’s secure in another. You must ensure that the workload you’re responsible for is secure wherever it is deployed.
Let me use a metaphor. When we make a left turn in the United States, if there’s a car ahead of us and we see that driver turned left because of the green arrow, that doesn’t necessarily mean you have the green signal to follow. The environment is different. Time changes things. Just because something is secure in one environment doesn’t mean it’s secure in another. You need to go through the certification and validation needed for wherever the workload is being deployed.
[00:25:00] David Puner: So then, shifting over to machine identities, what are some of the challenges and solutions related to managing machine identities in the context of AI?
[00:25:10] E.G. Nadhan: Before I get to AI, let’s talk about secrets management. It always helps to have a single system of record with enterprise-grade software that can be relied upon. This is critical because credential management can become a vulnerability. Even for human identities, you need a reliable platform for identity management overall.
Enterprises often focus on human identities because bad actors are usually associated with disgruntled employees or someone making a mistake. “To err is human,” as they say. But when it comes to machine identities, you need to start by asking: What constitutes a machine?
It could be an algorithm. It could be an API. It could be an edge device. It could even be a pacemaker in the medical field or a camera in a retail store. These are all examples of what we consider “machines.” And very quickly, the scale adds up. A rough metric I’ve heard is that for every human, there are at least 45 corresponding machine actors.
[00:27:00] That means we’re talking about significant scale. Just because you have a solution for secrets management and a single source of truth for human identities doesn’t mean it will automatically scale to accommodate machine identities. That’s the challenge.
This is where the right software providers come in—ones that can address both human and machine identities at scale. Standardization and automation are essential, especially given the volume and proliferation of machine identities. For every one human identity, you need to be able to manage 45 machine identities with the same level of security and oversight.
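One way the standardization and automation Nadhan calls for shows up in practice is short-lived, automatically rotated machine credentials, so secrets never live long enough to be worth stealing at scale. The sketch below is hypothetical (the TTL, field names, and machine ID are invented for illustration):

```python
import secrets
import time

TTL_SECONDS = 3600  # hypothetical policy: rotate machine credentials hourly

def issue_credential(machine_id: str) -> dict:
    """Mint a random, short-lived token for one machine identity."""
    return {
        "machine_id": machine_id,
        "token": secrets.token_urlsafe(32),
        "expires_at": time.time() + TTL_SECONDS,
    }

def is_valid(cred: dict) -> bool:
    """A credential is only honored before its expiry time."""
    return time.time() < cred["expires_at"]

cred = issue_credential("edge-camera-0042")
print(is_valid(cred))  # True until the TTL elapses
```

Because issuance is automated, the same mechanism covers one identity or forty-five per human without any manual secret handling.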
[00:28:00] David Puner: As promised, I wanted to shift back to quantum computing, particularly because you’re an IBM Quantum Senior Ambassador, as you mentioned earlier. Quantum computing is fascinating, fast-moving, and has the potential to change everything. So, at this point in time, how should organizations prepare for the security challenges posed by quantum computing?
[00:28:30] E.G. Nadhan: From a security standpoint, cryptography is what comes to mind.
[00:28:35] David Puner: And when you say cryptography, you mean the practice of securing information by transforming it into a format that only authorized parties can understand?
[00:28:45] E.G. Nadhan: Yes. That being said, the way you framed it is very interesting. Let me start by saying that everything we do on a computer for good reasons uses cryptography. This is why even bad actors—especially beginners—cannot read emails, access medical records, or post from social media accounts. That’s where cryptography plays a crucial role.
[00:29:00] David Puner: So it boils down basically to secure transmission or storage, right?
[00:30:00] E.G. Nadhan: Yes, absolutely. And the fact is, cryptography today is so good that when a data or systems breach occurs, it’s usually not because the encryption key or algorithm was broken. It’s not as if attackers suddenly gain access to data that looked like gobbledygook and it now reads in plain English.
That’s not usually the reason why breaches happen. Breaches typically occur because someone used a weak password like “1234,” didn’t follow the rules to implement robust security practices like two-factor authentication, or made a human error. For example, someone might have a great password, but it’s written on a Post-it note stuck to their laptop. That’s how breaches happen.
Modern encryption methods—like 2048-bit public keys—are like the sturdiest walls we’ve had for many years. But quantum computing can do things that classical computing cannot even fathom. It opens up a whole new domain of problems, both good and bad. Like any new technology, it can be used by both good actors and bad actors. If quantum computing falls into the wrong hands, it could be used to decrypt what is encrypted today.
[00:31:00] Suddenly, what seemed impossible could become possible if quantum computing is applied for the wrong reasons. A term that is loosely used to describe this risk is Y2Q.
[00:31:10] David Puner: Y2Q. I remember Y2K, but I’m not familiar with Y2Q. What is that?
[00:31:15] E.G. Nadhan: It’s a term we use at IBM. The question often comes up: “When is this going to happen?” I wish I could give you a precise answer, like predicting the landfall of a hurricane, or the exact timing of Y2K: December 31st at midnight. But we can’t say the same for Y2Q.
It will happen. There are predictions—some say a few years out—but the point is that we need to act on it today. Like they say, “Live today as if it’s your last day on Earth.” Enterprises need to prepare as though Y2Q could happen tomorrow. What if it does? Are we ready?
That’s where enterprises must take steps today to protect themselves. It’s not about using quantum computing just yet, but about ensuring that their classical assets—data, systems, and so on—are quantum-safe. Essentially, it’s about rebuilding the cryptographic vault to make it secure against both quantum and classical attacks.
The worst-case scenario is realizing this need the day after Y2Q—that’s too late. Enterprises should act now. IBM offers services to help enterprises become quantum-safe. At a fundamental level, this involves rebuilding the cryptographic vault and ensuring it’s secure against quantum and classical attacks.
[00:33:00] The good news is that there are concrete steps enterprises can take today. It’s not a matter of figuring out what to do; the steps are already available.
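One of the concrete first steps commonly recommended for quantum readiness is a cryptographic inventory: cataloging where quantum-vulnerable public-key algorithms (RSA, elliptic curves, Diffie-Hellman) are in use. The sketch below is illustrative only; the asset list and algorithm labels are invented, and a real inventory would come from scanning certificates, keystores, and code:

```python
# Algorithms a large quantum computer running Shor's algorithm could break.
# Symmetric ciphers like AES-256 are generally considered far more resistant.
QUANTUM_VULNERABLE = {"RSA-2048", "RSA-4096", "ECDSA-P256", "DH-2048"}

assets = [
    {"name": "vpn-gateway", "algorithm": "RSA-2048"},
    {"name": "backup-archive", "algorithm": "AES-256"},
    {"name": "code-signing", "algorithm": "ECDSA-P256"},
]

def flag_y2q_risks(assets: list) -> list:
    """Return the assets relying on quantum-vulnerable algorithms."""
    return [a["name"] for a in assets if a["algorithm"] in QUANTUM_VULNERABLE]

print(flag_y2q_risks(assets))  # ['vpn-gateway', 'code-signing']
```

With that inventory in hand, an enterprise can prioritize which systems to migrate to post-quantum algorithms first, rather than scrambling after Y2Q.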
[00:33:10] David Puner: If you were a betting man—and I’m not sure if you are—I see you smiling, so maybe you are—when would you predict quantum computing will be accessible to the masses?
[00:33:20] E.G. Nadhan: I can’t give you a specific year, but I can definitely see it happening in this century, possibly even before 2050.
[00:33:30] David Puner: Okay. Then we’ll have you back in a year to see if your prediction changes. When quantum computing finally arrives, will it bring about a shift similar to what we’ve seen with AI, or will it be an even bigger change?
[00:33:50] E.G. Nadhan: First, it could happen anytime before 2050. I only put that date out there as a marker of how far off it could be. Where quantum computing is today, there’s a lot of research and experimentation happening. Quantum computing is based on probability, unlike classical computing.
In classical computing, a bit is either 0 or 1—like heads or tails on a coin. But in quantum computing, when you toss the coin, depending on how high it’s tossed, there could be 100 different states the coin could be in. The probability of the coin being in a particular state is the type of metric quantum computing uses.
[00:35:00] Because of this, quantum bits, or qubits, can represent far more information than classical bits: a register of n qubits encodes a superposition over 2^n states, while n classical bits hold just one of those states at a time. This opens up incredible possibilities but also introduces challenges like error correction. There’s experimentation happening, and success in some areas, while others still require further exploration. There’s tremendous curiosity and promise surrounding quantum computing.
As for AI, tools like ChatGPT have made AI ubiquitous. My mom, for example, recently asked me about ChatGPT—which I didn’t expect! AI has penetrated so far that almost everyone is aware of it or using it.
Quantum computing is not there yet. The implementation behind the scenes isn’t as easily accessible. For example, asking a quantum computer to solve a complicated math problem isn’t as straightforward as using AI to draft an email. Quantum computing’s adoption will likely start at an enterprise level, where businesses gain competitive advantages through new models and products.
[00:37:00] However, end-users probably won’t directly use quantum computing to do everyday things like buying milk. Instead, enterprises will leverage quantum computing to differentiate their services, which could eventually benefit end-users indirectly.
[00:38:00] David Puner: So then, looking into 2025 and going into the new year, what trends or topics are top of mind for you?
[00:38:10] E.G. Nadhan: I would say being safe with AI is going to be a major focus, especially given the discussions around AI, what it’s yielding, and the type of information it’s providing. The need for open source AI is going to be felt more and more. It’s not just about the models—you’re kind of held hostage to the type of data served by the models if it’s proprietary.
By going open source, you can be more cognizant of what data is available, ensure the use of the right data, and enable transparency across the models. Open source fosters transparency by providing accessible models and data to promote accountability, as well as ethical and fair AI innovation without dominance by a few entities. These are all things that come to mind when I think about AI safety. That’s one area.
The other is compliance. How much risk are enterprises willing to take? Security, compliance, and overall enterprise risk management are deeply interconnected. I can see regulations for AI coming through, if they haven’t already. The EU has already started working on some frameworks. Companies need to be more cognizant of what types of regulatory frameworks they need to adhere to from an AI perspective and take the appropriate steps.
[00:39:00] I would also double down on machine identities again, especially with the rise of AI. AI models can function as logical machines, if you will. When I mentioned devices, APIs, algorithms, and so on, AI models fit within that scope. We need to ensure we’re clear on what the model is doing, what outcomes it’s producing, and what actions or insights are coming from it. Machine identities will play a critical role in this space as well.
Those are the three key areas that come to mind for 2025: AI safety, compliance, and machine identities.
[00:40:00] David Puner: Thanks, Nadhan. I know you’re active on LinkedIn, putting out a lot of what we call in the business “thought leadership.” If folks want to catch more of your insights, they can check you out there. Is there anything else people should know? Where can they find you?
[00:40:15] E.G. Nadhan: LinkedIn is a good start. I have multiple thought leadership avenues, but following me on LinkedIn would be a great way to begin.
[00:40:25] David Puner: All right. Well, Nadhan, Field CTO Ambassador at Red Hat, thank you so much for coming on the podcast. I really appreciate it, and Happy New Year.
[00:40:35] E.G. Nadhan: Happy New Year to you as well. The honor is mine.
[00:40:40] David Puner: Thanks for listening to Trust Issues. If you liked this episode, please check out our back catalog for more conversations with cyber defenders and protectors. And don’t miss new episodes—make sure you’re following us wherever you get your podcasts.
[00:41:00] Oh yeah, drop us a line if you feel so inclined—questions, comments, suggestions (which, come to think of it, are kind of like comments). Our email address is trustissues, all one word, at cyberark.com. See you next time.