Are Insider Attacks a Bigger and Harder Threat?

A new trend is apparently emerging to defend against data loss and data damage caused by external hacking: some organizations have been removing their sensitive data from the Internet or isolating such data on “private” networks. This trend reminds me of Willie Sutton’s answer when he was asked, “Why do you rob banks?” His answer was, “Because that’s where the money is.” If this trend is real and if it spreads further, expect the hackers to simply move from external hacking to insider hacking. If you were to ask a hacker, “Why do you rob computer systems?”, he or she would likely answer, “Because that’s where the data is.” I doubt hackers care whether it requires an external hack or an insider hack. As you will see later, removing or isolating sensitive data from the Internet has apparently not stopped the CIA from being either the hacker or the hackee.

The problem with isolating or removing sensitive data from the Internet is that insider threats are actually more difficult to deter than external threats. That’s because dealing with insider threats means dealing with humans, and that’s always going to be problematic. There are a lot of clever ways hackers can gain access to data if they have help from insiders, or if the insiders actually are the hackers. The consequence of trying to defend against hacking attacks by isolating or removing sensitive data from the Internet is that next-generation, trustworthy systems will have to deal effectively with threats from both external hackers and malicious insiders, and the latter is very tricky.

The biggest problem we face today in improving computer security is way too much complexity. Kevin Fu, in a recent Ubiquity blog entitled “Dealing with Infrastructure Disruption: IoT Security,” noted that “… we’ve been slowly boiling a frog.” Complexity in giant monolithic systems based on ancient technology like Windows and *nix has slowly evolved over the last 40 or so years to a point that it has become effectively impossible to prevent data breaches, insider data theft, and computerized fraud (e.g., payments fraud and election fraud) in these bloated systems. The systems have too many bugs (and thus too many security vulnerabilities), and their complexity makes them too difficult to understand. We can’t secure what we don’t understand.

The news is not all bad, however. There is at least one solution that will ultimately defeat the external hackers and the malicious insiders, and I am going to give you a rough outline of that solution in this blog.

Out with the Old, in with the New

We need to build a new generation of trustworthy systems if we expect to end the data theft and data damage by both external hackers and malicious insiders. Consider these recent quotes from computer security experts:

“… security must be built into IoT devices, rather than bolted on after deployment.”

[Dealing with Infrastructure Disruption: IoT Security]

“… the next generation of technology must have security built in from the very start.”

[Why We’re So Vulnerable]

“Security does not happen by accident. Things like safety and reliability need to be engineered in from the beginning.”

[New approach needed to IT, says NIST’s top cyber scientist]

We understand the Why (see the next section). We are starting to form a consensus on the What (see the three articles above). Now we need to focus on the How, and that is the main purpose of this blog.

Why Have We Failed at Computer Security? 

The primary reason we are plagued today with data theft and data damage by hackers is that our systems are too complex. They are too complex because they have slowly evolved over the last 40 or so years into giant monolithic systems. Because they are monolithic monstrosities, they can’t be formally verified to be trustworthy (secure). If we can develop the discipline and tools to formally verify systems, we have a very good chance of making computer systems a lot more trustworthy and secure than they are today.

A system based on the formally-verified seL4 microkernel was recently used in DARPA’s HACMS program to demonstrate a trustworthy system that could not be penetrated by a team of professional hackers. When the HACMS program began about four years ago, most people thought it would be a flop and that formal methods would never be useful. Most people turned out to be wrong. The DoD and the intelligence community have a very different view of formal methods today as a result of the success of the HACMS program and seL4.

How Should We Build Next-Generation, Trustworthy Systems?

There is a much better approach than monolithic giants for building future trustworthy systems: microservices architectures. From the Wikipedia article on microservices we learn that “Services in a microservices architecture are processes that communicate with each other over a network in order to fulfill a goal. A central microservices architecture property is that the microservices (subcomponents) should be independently deployable. The benefit of distributing different responsibilities of the system into smaller services is that it enhances cohesion and decreases coupling. This makes it easier to change and add functions and qualities to the system at any time.”
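To make the idea concrete, here is a minimal sketch of a single-purpose microservice, written in Python purely for illustration (the service name, port, and endpoint are my own invention, not taken from any particular system):

```python
# A minimal, single-purpose microservice: it does exactly one thing
# (health-status reporting) and talks to its peers only over the network.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class StatusService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"service": "status", "healthy": True}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Independently deployable: the service runs as its own process and
    # exposes a single network endpoint, which is its entire interface.
    HTTPServer(("127.0.0.1", 8080), StatusService).serve_forever()
```

The boundary is the point: the only way into or out of the microservice is its network interface, which is exactly what makes the subcomponents independently deployable and independently analyzable.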

Perhaps the most important property of a microservices architecture is its potential to make practical the use of formal methods in the verification of trustworthy systems. Computer scientists have used divide-and-conquer algorithms and decomposition for many years to manage complexity. Decomposition in computer science is the “breaking down of a complex problem or system into parts [microservices, in this case] that are easier to conceive, understand, program, maintain,” and (I would add) formally verify. Monolithic systems have traditionally been far too complex to verify using formal methods. The individual microservices of a well-decomposed system, by contrast, should each be simple enough to verify using formal methods. The formally verified microservices can then be combined into a complete architecture that is itself more easily verified, since all of its parts have already been formally verified.
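Formal verification proper is done with theorem provers and machine-checked proofs, as with seL4, but the intuition can be sketched with a contract-style example. The following Python fragment is hypothetical and only illustrative: the runtime assertions stand in for properties a formal-methods tool would prove hold for all possible inputs, not merely check on the inputs that happen to occur.

```python
# Illustration only: runtime assertions standing in for machine-checked
# proofs. A verification tool would prove these hold for ALL inputs.

def redact(record: dict, allowed_fields: frozenset) -> dict:
    """Single-purpose microservice logic: return only permitted fields."""
    # Precondition (a prover would require this at every call site).
    assert isinstance(record, dict)

    result = {k: v for k, v in record.items() if k in allowed_fields}

    # Postcondition: no field outside the allow-list ever escapes.
    assert set(result) <= allowed_fields
    return result

# Because the microservice is this small, the property "no disallowed
# field is ever returned" is tractable to verify formally; the same
# claim about a million-line monolith is hopeless.
print(redact({"name": "Ada", "ssn": "000-00-0000"}, frozenset({"name"})))
```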

Figure 1 shows an example of a microservices architecture based on an seL4 hypervisor.

[Figure 1. A microservices architecture based on an seL4 hypervisor.]

Most next-generation, trustworthy systems will contain untrusted (legacy) software that should be encapsulated in its own microservice for purposes of compartmentalization. It should be assumed that systems with legacy software will be compromised. Fault-tolerance methodology may have to be used to ensure that hackers cannot penetrate beyond the boundaries of the legacy microservice. For example, in the HACMS project the image-processing software running on Linux inside a virtual machine is legacy software. If this legacy software is compromised, the ULB (Unmanned Little Bird, the autonomous helicopter used in HACMS) could lose its camera feed. Although this might have a negative impact on a mission, the loss of the camera feed won’t necessarily impact navigation, because the ULB can use sensor-fusion techniques to detect when some navigation inputs are at odds with other inputs (e.g., GPS, inertial sensors, etc.). This provides graceful degradation (fault tolerance), and thus the mission can be completed even if legacy software is compromised.
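As a purely hypothetical illustration of that fault-tolerance idea (the sensor names and threshold below are invented for the example, not taken from HACMS), a navigation microservice can cross-check one input against an independent estimate and discard the input when they disagree:

```python
import math

# Hypothetical sensor-fusion cross-check: if a possibly compromised input
# disagrees too much with an independent estimate, discard it and degrade
# gracefully rather than trusting it.

DIVERGENCE_LIMIT_M = 25.0  # invented threshold, for illustration only

def fuse_position(gps, inertial):
    """Each argument is an (x, y) position estimate in meters."""
    if math.dist(gps, inertial) > DIVERGENCE_LIMIT_M:
        # GPS is at odds with dead reckoning: treat it as untrusted and
        # fall back to the inertial estimate (graceful degradation).
        return inertial, "gps-rejected"
    # Otherwise blend the two estimates.
    return ((gps[0] + inertial[0]) / 2, (gps[1] + inertial[1]) / 2), "ok"

print(fuse_position((100.0, 200.0), (102.0, 198.0)))  # agree -> blended
print(fuse_position((500.0, 900.0), (102.0, 198.0)))  # outlier -> rejected
```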

Ensuring security in the presence of legacy microservices boils down to stating (formally) the overall security properties one needs for minimally secure operation, and then proving those properties from the known properties of the trustworthy components. The formalism should also allow system designers to assess the worst-case damage a misbehaving legacy microservice can do. The state of the art does not yet allow such whole-system proofs, but the seL4 team is actively researching this area.
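In notation, the goal is a compositional rule of roughly the following shape, where each M_i is a microservice, φ_i is its individually verified property, and Φ is the overall security property required for minimally secure operation (this is my simplified sketch, not the seL4 team's actual formalism):

```latex
\frac{M_1 \models \phi_1 \quad \cdots \quad M_n \models \phi_n
      \qquad \phi_1 \wedge \cdots \wedge \phi_n \Rightarrow \Phi}
     {M_1 \parallel \cdots \parallel M_n \models \Phi}
```

The catch, and one reason such whole-system proofs remain research rather than practice, is that composed services can interfere with one another through shared resources, so the naive conjunction step in the premise does not hold in general.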

Are Organizations Attempting to Protect Sensitive Data by Isolating (or Removing) It from the Internet?

In a communication to me from a member of the intelligence community, I was told that “… important databases in the intelligence community are isolated from public networks and are unlikely to be exposed to malware directly.” At the same time, I learned that the intelligence community is much more concerned with insider threats than with threats from malware insertion by external hackers.

The intelligence community’s concerns about insider threats have proven to be prophetic, as we all know, because the CIA recently experienced what was apparently the mother of all insider thefts of sensitive data. It is difficult to know exactly what is happening inside the CIA, but news reports have said that the leaked information came from “private CIA servers that are isolated from the Internet.” If the “private CIA servers that are isolated from the Internet” refers to the technology described in The Details about the CIA’s Deal With Amazon, one would have to say that the CIA’s $600 million “private” security strategy is not working out so well.

From another news article comes this quote (my emphasis): “One of the revelations about the U.S. consulate in Frankfurt, Germany, that has come from the WikiLeaks release of CIA files is that American spies can use the facility for hacking databases that are *not connected to the Internet*.”

The above leads me to pose the question: Is isolation (or removal) of sensitive data from the Internet the new method of choice to prevent external hackers from accessing sensitive data?

If the answer to this question is YES, Willie Sutton would tell us that insider threats are going to become more of a concern in the future than threats from external hackers, “because that’s where the data is.” This implies at least two conclusions:

  1. As General B.W. Chidlaw said in 1954, “If you want security, you must be prepared for inconvenience.” Removing or isolating sensitive data from the Internet seems like a very inconvenient solution. This could actually be good news, however, because it might mean that organizations with sensitive data are preparing themselves for a lot more inconvenience in protecting their sensitive data.
  2. We’re going to have to find better ways of protecting sensitive data from malicious insiders.

Eliminating Insider Threats Is a More Difficult Problem to Solve than Eliminating Threats from External Hackers

I know I am repeating myself, but it is important to say it again: Insider threats are more difficult to deter than external threats. That’s because dealing with insider threats means dealing with humans, and that’s always going to be problematic. There are a lot of clever ways hackers can gain access to data if they have help from insiders, or if the insiders actually are the hackers.

The only way to prevent successful insider theft is to make it very inconvenient to extract sensitive data. That’s going to mean a big change for most users of trustworthy systems in the future, because computers have brought a lot of convenience into our lives and most people don’t like to be inconvenienced. General Chidlaw was right, however, and, like it or not, we are going to have to embrace his point of view if we want systems that are secure, especially from malicious insiders. Expect to be inconvenienced if you potentially have access to sensitive data, and the more sensitive the data, the more you are going to be inconvenienced.

Trustworthy systems of the future will be microservices architectures, built on a trusted computing base such as an seL4 hypervisor, that are both malware-secure and insider-secure. A system is defined to be malware-secure if and only if malware cannot be externally installed into the system (equivalently, the system cannot be externally infected by malware); this property should be formally verifiable. A system is defined to be insider-secure if it is very effective at preventing theft of sensitive data by malicious insiders. Most insider threats can ultimately be thwarted, but formally specifying and verifying that a system is secure from insider threats may not be possible. Instead, insider threats might have to be countered by a cleverly designed set of practical security rules.

10 Rules for Building Trustworthy Microservices Architectures

These rules are intended to be a general guide for developing next-generation, trustworthy systems that will be both malware-secure and insider-secure. A genuinely trustworthy system must eliminate threats from both external hackers and malicious insiders.

Rule Number 1: Trustworthy systems must be structured as a microservices architecture that can be decomposed into microservices that are simple enough that they can be individually verified using formal methods.

Rule Number 2: Each microservice must have only one purpose (or goal or function). This aids in the development of relatively simple microservices.

Rule Number 3: A trustworthy system must guarantee (by formal verification, if possible) that it is immune to external infection by malware.

Rule Number 4: User authentication (e.g., user “log-ins”) must rely on multi-factor biometrics.
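As a deliberately abstract sketch of what Rule 4 might look like (the matchers below are fail-closed placeholders; real biometric matching needs specialized sensors and libraries, which I am not depicting here):

```python
# Abstract sketch of multi-factor biometric authentication: every factor
# must independently clear its threshold before access is granted.
MATCH_THRESHOLD = 0.95  # invented score threshold, for illustration

def fingerprint_score(sample, enrolled) -> float:
    return 0.0  # placeholder: fails closed until a real matcher is wired in

def face_score(sample, enrolled) -> float:
    return 0.0  # placeholder: fails closed until a real matcher is wired in

def authenticate(fp_sample, face_sample, enrolled) -> bool:
    # Multi-factor means ALL factors must pass; spoofing one of them
    # (or stealing a password) is not enough to get in.
    return all(score >= MATCH_THRESHOLD for score in (
        fingerprint_score(fp_sample, enrolled["fingerprint"]),
        face_score(face_sample, enrolled["face"]),
    ))
```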

Rule Number 5: The difficulty and inconvenience of retrieving sensitive data from a trustworthy system should be directly proportional to the amount of sensitive data that could be retrieved.

Rule Number 6: The number of humans (at least two, even for the least-sensitive data) required to simultaneously submit a retrieval request for any sensitive data in a trustworthy system should be directly proportional to the sensitivity level of the data. This is similar to the Two-Man Rule, which requires two humans to take simultaneous action in order to launch a nuclear weapon. Inconvenient, but necessary.
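Here is one hypothetical way Rules 5 and 6 might combine in code; the sensitivity levels and thresholds are invented for the example, and a real system would additionally authenticate each approver (per Rule 4) and require the approvals within a short time window:

```python
# Sketch of Rules 5 and 6: retrieval needs multiple distinct humans, and
# the required number grows with the sensitivity of the data.
REQUIRED_APPROVERS = {"low": 2, "secret": 3, "top-secret": 5}

class RetrievalRequest:
    def __init__(self, dataset: str, sensitivity: str):
        self.dataset = dataset
        self.needed = REQUIRED_APPROVERS[sensitivity]
        self.approvers = set()

    def approve(self, user_id: str) -> None:
        # A set ignores duplicates: one insider approving twice gains nothing.
        self.approvers.add(user_id)

    def granted(self) -> bool:
        return len(self.approvers) >= self.needed

req = RetrievalRequest("payroll-db", "secret")
req.approve("alice"); req.approve("alice")  # replay by a single insider
print(req.granted())                        # False: still only one approver
req.approve("bob"); req.approve("carol")
print(req.granted())                        # True: three distinct humans
```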

Rule Number 7: Security (access controls) must be automated as much as possible, and it should be extremely difficult for humans to disable or relax security protections.

Rule Number 8: Manual security (access-control) configuration by humans should be kept to a minimum or, if possible, eliminated entirely.

Rule Number 9: Software updates should replace a microservice in its entirety, each update must be delivered securely (authenticated and integrity-protected), and updates should be accomplished remotely “over the air,” as is done with smartphones today.
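To make the “secure” part of Rule 9 concrete, here is a hypothetical sketch using the third-party Python cryptography package; key distribution, rollback protection, and atomic activation are real problems that the sketch deliberately omits:

```python
# Sketch of Rule 9: install a whole-microservice image only if its
# Ed25519 signature verifies against a pinned vendor public key.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

def activate_microservice(image: bytes) -> None:
    """Hypothetical deployment hook: atomically swap in the new image."""
    ...

def install_update(image: bytes, signature: bytes, vendor_key_bytes: bytes) -> bool:
    vendor_key = Ed25519PublicKey.from_public_bytes(vendor_key_bytes)
    try:
        vendor_key.verify(signature, image)  # raises if image was tampered with
    except InvalidSignature:
        return False                         # reject the update outright
    # The microservice is replaced as one unit, never patched piecemeal.
    activate_microservice(image)
    return True
```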

Rule Number 10: Network communication (over a path that contains insecure computers) between microservices that exist on two physically different computers in the network must be encrypted.
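Rule 10 needs no new technology; ordinary TLS fits. A minimal client-side sketch using Python's built-in ssl module follows (the peer's host name and port are hypothetical, and a production deployment would also use mutual TLS, i.e., client certificates, so each microservice authenticates the other):

```python
# Sketch of Rule 10: open a TLS-encrypted channel to a microservice on
# another machine, so intermediate hosts see only ciphertext.
import socket
import ssl

PEER_HOST, PEER_PORT = "inventory.internal.example", 8443  # hypothetical

context = ssl.create_default_context()  # verifies the peer's certificate
with socket.create_connection((PEER_HOST, PEER_PORT)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=PEER_HOST) as tls_sock:
        tls_sock.sendall(b"GET /stock/42 HTTP/1.1\r\nHost: inventory\r\n\r\n")
        print(tls_sock.recv(4096))
```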

Conclusions

We need to be much more disciplined in how we build trustworthy systems, and we need to manage complexity so it does not get out of control. Using the level of discipline exhibited in the building of bridges, for example, together with the 10 rules described above, we can build microservices architectures that cannot be externally infected by malware and that are resistant to theft and malware infection by malicious insiders. If a system cannot be externally infected by malware and is resistant to internal theft and malware infection, it will be extremely difficult for either external hackers or malicious insiders to modify its behavior for nefarious purposes or to steal its data. And a system that cannot be modified for nefarious purposes and is resistant to insider theft is, by definition, a secure, trustworthy system.

Jim Morris is a software developer, computer systems architect, businessman, and serial entrepreneur with over 40 years of experience, most recently at FullSecurity Corporation researching solutions for microservices architectures and secure data-storage systems that are immune to external infection by malware and resistant to theft by malicious insiders. He started and sold a successful technology company, did very early research (1970s) in object-oriented programming languages at Los Alamos National Laboratory, and was an associate professor of computer science at Purdue. His areas of interest are cryptography, software development, operating systems, and hacking methodology. He has a Ph.D. in Computer Science and a B.S. in Electrical Engineering, both from the University of Texas at Austin. Contact Jim at jim.morris@datasecurityadvisorygroup.org.

DOI: 10.1145/3081882

Copyright held by author.