Industry Insights
Feross Aboukhadijeh
June 23, 2025
Terry O’Daniel didn’t set out to work in security. But after stints leading production engineering teams at Yahoo, building security risk programs at Salesforce, and leading security orgs at Netflix, Instacart, and Amplitude, he’s come to love the work, especially the messy, cross-functional, storytelling-heavy parts of the job. I sat down with Terry to talk about measuring the right things, building trust with engineers, and how AI might finally give defenders a leg up.
I never wanted to work in security. I started out as a network engineer, and I always perceived security as very fussy and not very understanding of the pace of operations. I had to worry about things like keeping the website up, or getting it back up when it fell over, and security always felt like an afterthought. That's because I worked in very small startups. I was incredibly lucky that, after university, I moved to San Francisco and just stumbled into roles basically running all of technology for these small startups.
The interesting thing is I am not a software developer by training. I'm really an infrastructure operations person, and now a security person. So I found there was a real niche in doing all the things that devs don't like to do, and it gave me an intense pragmatism. But it also taught me a lot about how software is built, where it can go off the rails, and what the right patterns are when you're thinking about security engineering, particularly the question of "What is the burden I'm putting on developers?" How do I make it easy for them to understand the context of what we're doing here and create those feedback loops?
I’ve never seen a security team fail on technical merits. I'm sure it's possible. Maybe I live in the rarefied air of Silicon Valley and it's not so much a problem here. But what I really see, especially in software and tech, is that security teams fail for one of two reasons:
I don't think I've ever led a security team at a place where we had a really good understanding of risk. Risk sometimes sounds like a compliance-y word, but in my mind it's really about having a limited amount of resources. It could be budget, it could be headcount, what have you, and we have to figure out how to apply those resources most effectively. How can you do that unless you understand what business we're in? What are the really existential threats that would cause this whole company to go away tomorrow if an attack were realized?
Once we identify those crown jewels, you have to start tying the threads back out to the possible attack chains. You have to understand what is likely, what the blast radius of various attacks would be if they succeeded, and how to decide what to work on first.
If you just start chipping away at things that have worked for you in the past, then you're missing an opportunity in that early phase, when you come with a beginner's mind and fresh eyes, to really ask core questions of the business, not just engineering but product, even finance. How do we make money? What would be the worst thing to happen here? How could that be realized? And then you have to do your own hard work of tying it out to attack chains and coming back to the table and saying, "Thank you for your time. Given all that, here's my top five list of things we have to fix."
Curiosity. First and foremost, I look for people who aren't interested in shuffling papers around on the desk, but rather people who really want to dive a little deeper and understand the why and the how. Thinking outside the box: what could we do differently? I think all of that is really rooted in curiosity.
And then you need a strong sense of ownership. Folks need to feel like once I've handed a project over to them, they own it from end to end. Part of that's on me: making sure they really do feel empowered, and building in enough room for failure, because you have to allow people to fail a few times. But failure is usually a very bad thing in security, so there's a real balancing act there.
Finally, I like to hire folks who are constant learners, not just curious, but who have a growth mindset and are always thinking to themselves, “I want to get better. I want to improve my skills. I want to dive deeper into this technology, not just because I'm curious about it, but because I know that it will make me a better professional at my job.”
Measuring the wrong things — or measuring in a way that doesn’t help. I once joined a company where the main metric was “open Dependabot alerts.” Just a single number. No context, no severity, no breakout by team. It just… went up. That kind of number isn’t useful to anyone.
Another example is the number of open vulnerabilities. Sure, it's a number, and it's important to track it, broken out by criticals versus highs and so on. But at the end of the day, we are asking people to do work that is not going to get them promoted. It's not going to get them recognized. It is seen as a necessary piece of housekeeping at best.
So I have to be thoughtful about how much heat I'm putting on developers by throwing these big numbers up there that might make their way to leadership. And suddenly I'm doing the opposite of what I just said, which is being empathetic to how my partners work and giving them the tools so that they can be successful in the framework by which they're evaluated.
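To make that difference concrete, here's a minimal sketch of a flat count versus a contextualized breakout. The `Alert` fields and team names are hypothetical, not pulled from Dependabot or any particular scanner.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical alert record; real scanners expose far richer fields.
@dataclass(frozen=True)
class Alert:
    team: str       # owning team, e.g. from CODEOWNERS or a service catalog
    severity: str   # "critical", "high", "medium", "low"

def flat_count(alerts: list[Alert]) -> int:
    """The single number that 'just went up': no context, no owner."""
    return len(alerts)

def breakout(alerts: list[Alert]) -> Counter:
    """The same data, counted by (team, severity), so each team sees its own queue."""
    return Counter((a.team, a.severity) for a in alerts)

alerts = [
    Alert("payments", "critical"),
    Alert("payments", "low"),
    Alert("web", "high"),
    Alert("web", "high"),
]

print(flat_count(alerts))              # 4 -- useful to no one
for (team, sev), n in sorted(breakout(alerts).items()):
    print(f"{team:<10} {sev:<9} {n}")  # an actionable per-team, per-severity view
```

The point of the breakout is exactly the empathy Terry describes: each team sees only the slice it owns, ranked by severity, instead of being blamed for a global number.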
One of the first things I look for is not that you have the solution I would have thought to build or even a solution that sounds feasible to me. If you come to the table and I have a really strong sense that you understand the problem space, I am willing to listen more and to hear more about how you're trying to solve it.
Then hopefully you understand a little bit about my value system, my structure, to whom I report and how frequently. Some tools will hit the mark in a major way in terms of helping someone on my team be successful, but they haven't necessarily cracked the other half of the equation, which is: every quarter I have to go to the board. I have to explain to them what the existential risks to our company are and what I'm doing about them. So you've got to make it usable and quick for my engineers, but also make it easy for me to talk to leadership and prove that my investments are driving down that risk.
I am more of an optimistic person, and part of that comes from having lived through the dark times of a few of these technical revolutions. I think the rate of change is only going to continue with this AI revolution, and security is very much in danger of being left behind. Maybe that's the pessimistic piece, but I'm optimistic about the fact that so many of the problems we have in security, which frequently come down to resource constraints, are more solvable with machine learning, LLMs, and agentic AI.
I love the fact that AI is rushing in to help security operations, to build the funnel that winnows a thousand alerts down to the five I should actually care about, and then to the one I should work on first. Everyone gets a promotion. Everyone on the team is now, or soon will be, a manager of their own set of AI agents. We're in the lumpy, ugly period of it, but I am optimistic that we're going to come out the other end either with a whole bunch of Terminators or, more likely, in an era where we are radically reducing the speed from thought to action.
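As a back-of-the-envelope sketch of that funnel: score every alert, keep the handful worth a human's attention, and hand over one to start on. The fields and weights below are illustrative assumptions for the sketch, not any vendor's actual model.

```python
# Illustrative triage funnel: 1,000 alerts -> 5 worth attention -> 1 to start on.
SEVERITY_WEIGHT = {"critical": 1.0, "high": 0.7, "medium": 0.4, "low": 0.1}

def score(alert: dict) -> float:
    # Weight severity by exposure and exploitability (assumed fields).
    exposure = 1.0 if alert["internet_facing"] else 0.3
    return SEVERITY_WEIGHT[alert["severity"]] * exposure * alert["exploit_likelihood"]

def funnel(alerts: list[dict], keep: int = 5) -> list[dict]:
    """Winnow the full queue down to the `keep` highest-scoring alerts."""
    return sorted(alerts, key=score, reverse=True)[:keep]

# Synthetic queue of a thousand alerts.
alerts = [
    {"id": i,
     "severity": ["critical", "high", "medium", "low"][i % 4],
     "internet_facing": i % 2 == 0,
     "exploit_likelihood": (i % 10) / 10}
    for i in range(1000)
]

top_five = funnel(alerts)      # the five I should actually care about
work_on_first = top_five[0]    # the one I should work on first
```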
Founders who have lived the problem. I want to hear why they picked this space, not just why it’s big or growing, but why it matters to them. The best founders speak with clarity about what’s broken and have real experience trying to fix it.
Honestly, I feel lucky to have come up through production and infrastructure engineering. It gave me a strong foundation. But today, I think security pros need more exposure to legal and business risk, especially with privacy laws and regulation evolving fast.
We also need to do a better job giving junior folks early exposure to what leadership actually looks like. It’s not fair to promote someone and expect them to succeed at a completely different set of responsibilities without training. We’ve got to build more intentional growth paths.
I actually tell people: Socket.
Risk. We talk a lot about risk and we're doing nothing with it. We're still using red, yellow, green, and dumbing everything down. Until risk is machine readable, we're just kidding ourselves.
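One possible reading of "machine readable" is risk expressed as numbers a program can sort and sum instead of a traffic light. The fields and the expected-loss arithmetic below are illustrative, loosely in the spirit of quantitative models like FAIR, not a standard.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    asset: str
    threat: str
    annual_likelihood: float  # estimated probability of occurring in a year, 0..1
    impact_usd: float         # estimated loss if the attack is realized

    @property
    def expected_annual_loss(self) -> float:
        return self.annual_likelihood * self.impact_usd

# Hypothetical entries for the sketch.
risks = [
    Risk("customer-db", "credential stuffing", 0.30, 2_000_000),
    Risk("build-pipeline", "dependency compromise", 0.05, 10_000_000),
]

# "What do we fix first?" becomes a query, not a color.
for r in sorted(risks, key=lambda r: r.expected_annual_loss, reverse=True):
    print(f"{r.asset:<15} {r.threat:<24} ${r.expected_annual_loss:,.0f}/yr")
```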
Cold calls. Never in the history of calling and phones has it ever worked.
Patience and persistence.
* This interview was edited for clarity and length.