Recent Hub Casts

Hacker Conversations: Kevin O’Connor, From Childhood Hacker to NSA Operative

In this edition of Hacker Conversations, SecurityWeek talks to Kevin O’Connor, a high school hacker who went on to work for the NSA. He is now director of threat research at Adlumin.
Nature or Nurture? It’s a psychological – almost philosophical – question that will never be adequately answered. Take the hacker. Hacking is innate to most hackers: it is nature. But the boundary between blackhat and whitehat owes more to personal opportunity and moral compass: that is nurture. The origin and type of hacker may well be a nature/nurture composite – it’s a large part of what this series seeks to understand.
Kevin O’Connor knew he was a hacker by the time he was in Middle School. “A hacker is somebody who looks at things they aren’t supposed to look at; or likes to use things in ways they’re not designed or expected to be used.” ‘Repurposing’ legitimate tools is a common part of the hacker conversation. But notice that there is also no concept of morality or immorality involved; pure hacking is completely amoral. “Hacking is simply the practical application of knowledge,” he continued.

O’Connor’s involvement with hacking began in middle school after he pestered his parents for a laptop, to help with his schoolwork. “I became a geek around the sixth grade. I just wanted to be cool. By the seventh grade I had figured out how to bypass the log-in for Windows. I was using the computers at school to show off to my friends – a way for a geek and nerd to show off, to gain popularity.”

As his skills progressed, his hacking was never done for serious immoral intent – more the lulz than anything else. “I was just deleting things like cached passwords and doing stuff to get access to the systems – I had a couple of bypasses I could use – after accessing the command prompt. I used to do it just to mess with the systems and things like that.”
Was he ever tempted to go deeper into the shady side of hacking? “Oh, for sure,” he said, “especially in my younger years. I was on the early internet, trying to scan up [gain access to] systems in foreign countries and access whatever data I could. I remember being in a chat room with Adrian Lamo at one time – I was still trying to be the cool kid.”
While in Middle School, like almost all curiosity hackers, he was busted. “I got caught,” he explains. “I accidentally bricked the five computers used by my language arts teacher. She was young, but close to the vice principal for my grade.”

He was caught but not punished. Instead, the school enrolled him in the MOUSE program. MOUSE stands for Media, Outreach, and Science Education, and mouse.org still operates today. It describes itself as “a nonprofit organization that empowers students to create with technology, solve real problems, and make meaningful change in our world.”
It would be pure conjecture to consider this a turning point in O’Connor’s development. But if he hadn’t been caught, what direction would his hacking have followed? If he had been put on formal probation, would he have become resentful? If he had been sent to a juvenile detention center, who would he have mixed with?
Instead, MOUSE taught kids how to use computers and run computer networks. “And they sent me and three other kids to Columbia University for a couple of weekends to learn how to help out and manage the school’s computer systems. I met this guy called Storm – a real hacker-type. When I got back to Middle School, I was like a second tier tech support, going around the classrooms and fixing computers.”
O’Connor has little doubt about the importance of the MOUSE intervention to his future approach to computing. “Sending a young kid to Columbia University to learn how to use computers at such a young age – that’s transformational.”
However, it would probably not have been enough, on its own, to tip the balance firmly away from the shady side of hacking. This almost invariably comes from the concept of the moral compass. Where the moral compass comes from is not easy to understand except that it is almost entirely down to nurture – or how one is raised. It may be easier with O’Connor: “My dad was a cop, my grandfather was a cop, both in New York City.”

After school, he went to Penn State University. His older brother was very successful academically, and he both wanted and was expected to follow in his footsteps. Originally, he planned to study biomedical engineering or aerospace – cybersecurity was not the mainstream career opportunity it is today.
“It wasn’t until my junior year that I really found my computer science passion. I started studying something that was a part of my personality, like hacking was a part of me, with the intent to make it my professional objective.”
He majored in security and risk analysis. “The program at Penn State,” he added, “was almost a feeder program for analysts at agencies like the CIA, so a lot of it was focused on early intelligence analysis. We had folks like General Hayden come and talk to us about cases the agencies had worked on.” O’Connor went on to work as an intern with the NSA while still at Penn State.
At one point, three NSA staff visited the university to talk about the NSA and what it does. “I recognized one of these guys was a hacker through and through,” said O’Connor. “We use the word grok – we spoke the same language. He showed me a business card from the late Kevin Mitnick. It was metal and had lockpicks that you could poke out of the business card. I thought that was cool, and the hacker and I really hit it off.”
He thinks this relationship was partly why the NSA came back to him after university. “I started working there. The first summer wasn’t the most exciting, but it was a great foot in the door – I worked on things like Common Criteria. After that I joined a development program to rotate throughout the agency. I got to go to the Naval Postgraduate School, and then focused on cybersecurity defense and secure architectures. I got to develop tablets for senior executives in the US government, and did a lot of work on developing systems to handle and transmit top secret data.”
Then he got back to his roots. “In one of my rotations I got back to my real passion – the hacking stuff. I worked for the Computer Network Operations group. These are the folks who go out and collect foreign intelligence for the government – we were the US Government hacker team.”
O’Connor is proud of being an NSA hacker. “I don’t care what people’s opinions are on the intelligence community or the NSA – the people working there are some of the most talented and dedicated individuals you’ll meet.”
O’Connor’s hacking days had come full circle, from starting as a child hacker doing it for the lulz to a professional hacker doing it for his country.

So, what does Kevin O’Connor’s history tell us about ‘nature versus nurture’ in the development of the hacker? Almost all hackers say they were born hackers, and O’Connor is no different – hacking is part of his psyche. It was not something he wanted to be, but something he realized he was.
One complicating factor is neurodivergence. Statistically, a large proportion of hackers are neurodivergent. O’Connor is neurodivergent, with ADHD. He believes the rapid multitasking symptom, ‘the mind of a butterfly’, associated with ADHD is a boon to hacking. But again, is neurodivergence innate or developed? Many believe it is nature that can only be ameliorated or worsened by nurture (the parent calling a divergent child ‘lazy’ is no help; the prescription of medication can be a help).
It would be a stretch to say that neurodivergence creates hackers, but it is probably a factor in their development. Possibly more so with the second major neurodivergence (short of full autism): ASD, formerly known as Asperger’s syndrome. ASD comes with two common symptoms: social difficulties and an ability to hyperfocus on a subject or problem.
An innate hacker with a computer and the ability to hyperfocus, but with limited social skills and a tendency to be alone, is the meme of the blackhat. If both the hacking tendency and the neurodivergence are innate, should we say that blackhats are the product of nature?
It’s tempting, but from birth onward, nurture becomes more important to developmental progress. It is nurture that develops a person’s moral compass, and it is the personal moral compass that guides hackers away from immoral toward moral hacking. O’Connor gained his moral compass from his family. He readily owns that 9/11 had a deep effect on him — he realized he wanted to use his skills to fight ‘evil’ rather than for any personal gain.
“Terrorist events like 9/11 had a dramatic impact on me. I think later joining the NSA was my way of supporting our country in the global war on terror — using the skills and the passion that I had developed when I was younger.”
He also benefited from another nurturing event: positive intervention. The MOUSE program was a beneficial alternative to punishment for the child hacker. In his own words, it was ‘transformational’.
Hackers exist. They will probably always exist. But they need not become blackhats. Encouraging the development of a strong moral compass, through family, schools, and social workers, and supporting this through positive intervention, can help make the hacker a power for good rather than bad.
But beware of generalizations. All hackers and their histories are different. You could almost say it takes a hacker to know a hacker – they have this ability to recognize each other even on first meeting. They grok each other.
Kevin O’Connor started off a little shady. Positive intervention and the emergence of his moral compass shifted his direction. He is now director of threat research at Adlumin.
Related: Hacker Conversations: Inside the Mind of Daniel Kelley, ex-Blackhat
Related: Hacker Conversations: Cris Thomas (AKA Space Rogue) From L0pht Heavy Industries
Related: Hacker Conversations: Casey Ellis, Hacker and Ringmaster at Bugcrowd
Related: Hacker Conversations: Youssef Sammouda, Bug Bounty Hunter


Ransomware Group Starts Leaking Data Allegedly Stolen From Change Healthcare

The RansomHub ransomware group has started publishing data allegedly stolen from healthcare transactions processor Change Healthcare in a February attack.
The incident, which disrupted Change Healthcare’s operations and caused healthcare system outages across the US, was mounted by an affiliate of the Alphv/BlackCat ransomware-as-a-service (RaaS) known by the moniker ‘Notchy’.
BlackCat pulled an exit scam in early March, and Notchy claimed they had not received their share of the $22 million ransom that Change Healthcare had paid, and that they were still in possession of 4TB of data stolen from the company.
Last week, RansomHub added Change Healthcare to its Tor-based leak site, claiming possession of the stolen data and threatening to publish it unless a ransom was paid. The group said that many BlackCat affiliates had joined it, which would explain how it came by the data.
On Monday, the ransomware group published several screenshots depicting agreements with various insurance providers, medical claims information, invoice information, patient information, and other types of data.
According to the ransomware group, it is in possession of processing files that contain personally identifiable information and protected health information from multiple insurance providers.
The data set, the group claims, contains vast amounts of financial, medical, and personal information.
RansomHub is threatening to publish all the stolen data on Friday, unless Change Healthcare pays a ransom.
In the meantime, Change Healthcare parent company UnitedHealth Group is focusing on mitigating the attack’s impact on customers. The healthcare insurance giant says it has advanced over $5 billion to providers in need.
UnitedHealth Group never confirmed paying the $22 million ransom to BlackCat, but it would not be surprising if it gave in to the second extortion attempt, considering the vast impact the incident has had on the US healthcare system.
Related: US Offering $10 Million Reward for Information on Change Healthcare Hackers
Related: Omni Hotels Says Personal Information Stolen in Ransomware Attack
Related: LockBit Ransomware Affiliate Sentenced to Prison in Canada


You Against the World: The Offender’s Dilemma

In February, we saw a very large and interesting data leak from I-S00N, a Chinese company offering adversarial services for clients including the Chinese Ministry of Public Security, Ministry of State Security, and People’s Liberation Army. 
The leak offered details of compromises within at least 14 governments, custom hardware snooping devices, iPhone “remote access” capabilities, and even a Twitter disinformation platform able to distribute information en masse to new and/or compromised accounts simultaneously and conduct extensive monitoring. While the individual tools, services and activities are interesting, the overall profile reveals a concerning truth for global organizations.
Enterprises have a range of options to mimic certain attacker behaviors or hunt for the same vulnerabilities on which attackers will prey. But what the I-S00N revelation demonstrates is that we’re fighting with one hand tied behind our backs, as other countries are weaponizing their private sectors in ways we can’t and won’t. We’ve all heard of the “Defender’s Dilemma”: the good guys need to be right every time, while the bad guys only need to be right once. As more, and more fragmented, offensive security options enter the market for US companies, this gives us an interesting look at what could be considered an “Offender’s Dilemma”. Foreign attackers have many more toolsets at their disposal, so we need to be selective about our modeling, our preparation, and how we assess and fortify ourselves. This article looks at four pillars of an offensive playbook – Red Teams, penetration testing, automation and AI, and vulnerability assessment – and, for each, the approaches that give an offensive security program the visibility and reach to make the greatest impact.
Red Teams need to be about Every Team
When most people think of Red Teaming, they envision a team of security experts playing out an attack scenario – either digitally or physically – to see if they can evade detection and achieve a goal by compromising a target asset or assets. While not wrong, that perception is incomplete. Whether for a Red Team or a threat actor, the “attack” is neither the beginning nor the end of contact with a potential victim, and is thus too narrow an activity for an organization to determine the full extent of its own vulnerability and risk.
Prior to emulating an attack scenario, it is absolutely necessary to assess what intelligence you are providing the outside world to inform an attacker, and what human or procedural weaknesses may provide an open door through which an attack can begin. Company communications, media coverage and even social media can be a treasure trove for a threat actor, so engage a Red Team to collect that Open Source Intelligence (OSINT) first. Additionally, many companies conduct “security awareness” training separate from Red Team activities, which only provides an assessment against generic scenarios. A Red Team can conduct social engineering campaigns using live OSINT, following scenarios that give a real-world perspective on how an attack is likely to begin, and how effective it can be.
So what then? Understanding how an attack may play out is valuable, but unless you also assess how the organization and its stakeholders are oriented to respond, you have no understanding of the extent of the damage a successful attack can create, or of how effective you can be at minimizing its impact. For this, incident response tabletop exercises provide that full assessment of readiness, as well as a blueprint for improvement.
Finally, realize that even the security team and defensive technologies themselves are assets representing potential vulnerability, and should be assessed in kind. Which brings us to the next pillar.
Silos are no-gos
There are no “air gaps” in an enterprise. Every asset, be it an application, a device, an office footprint, or a cloud, is interconnected. So, while testing an individual application or device is important for understanding its vulnerability, no asset exists in a vacuum. The connections and attributes shared across multiple applications and devices in an environment can represent additional vulnerability, either on their own, or as a pathway from a vulnerability in an upstream asset.
It’s for that very reason that an organizational attack surface needs to be understood at both the macro and micro level. We need to be testing individual applications and the overall ecosystem in which they exist. But we can’t stop there. There is a third dimension to testing: time.
Just as no asset is an island, neither is a point in time. Applications are constantly being updated, added or deleted. New employees, business units or even whole companies via M&A are being added. And if a constantly evolving infrastructure weren’t complex enough, new threats and classes of vulnerabilities are being discovered every day – some in new assets, some that have existed in assets for months or even years. For this reason, assessment and testing must be not only comprehensive but also continuous. The reality of this level of change would be overwhelming, if not paralyzing, for any organization without the benefit of automation. But as with anything in security and life, there are benefits and pitfalls.
AI and automation must not be autonomous
Automation in all aspects of technology and systems is what drives growth and lets businesses scale. Technology is an amplifier, but it can amplify noise as well as signal. That makes it critically important to be able to discern not just signal, but the right signals – and that requires human intelligence, intuition and, most importantly, validation.
We are also witnessing a step change in technology amplification with advances in Large Language Models (LLMs) and Artificial Intelligence (AI). The ability of AI to develop content, aid in programming, manage high-level processes or detect anomalies in data and systems is astounding. It is, of course, a reality not lost on malicious actors, who are also testing AI’s capabilities, from deepfakes to malware development.
However, no technology, particularly one as nascent as AI, is perfect. The age-old maxim of Garbage In, Garbage Out remains true: AI models and their output are only as valid and effective as the data sources they draw from, and the people who maintain them. We at Bishop Fox like to think of enabling technologies as much like an Iron Man suit with Jarvis. It can supercharge what an analyst or an operator can do, but it still needs a human to see new patterns, determine outliers that break from models and, just as importantly, confirm that the output results in a positive outcome. And even with that validation and extra set of eyes, sometimes we need the help of friends.
A crowd needs leadership
One of the greatest strengths of the security industry is its community. From open-source tools to industry events and resources, strength in numbers is an important asset as new threats are rapidly discovered and weaponized. In this respect, bug bounties are a critically important tool. Whether filling the gaps or finding the needles, community contributions to finding vulnerabilities and developing mitigations are invaluable. And while standardized disclosure processes and rewards are crucial in driving efficiency and communication, a bug bounty program in the wrong environment can present the same overwhelming noise and lack of prioritization as automation. That’s why, in many ways, a bug bounty program at scale needs a strong internal team and vulnerability assessment infrastructure to support it.
