
Building a CENIC Security Strategy

· CA Community Colleges, CSU, Cultural & Scientific, University of California, Healthcare, K-12, Libraries, Performing Arts, Private Sector, Private Universities & NPS, RENS & NRENS
Tags: security, network security, network, CENIC, CalREN
REGIONS: California


Numerous headlines in recent years have drawn our attention to successful hacks: breaches of information believed secure and access to systems believed safe. This year alone, hospitals have paid ransom to regain access to their own records, and political parties have been publicly embarrassed to the point of replacing their own leaders.

Those headlines raise security questions for any organization involved in the digital economy, and CENIC is no different.

The 20 million people who rely on CENIC’s network to connect with each other and the world—who are part of California’s research and educational institutions, hospitals and academic health systems, K-12 districts and libraries, and a community college system that is the largest in the nation—want to know that their network is reliable.

At the center of CENIC’s work is its commitment to supporting the research and educational missions of its members, and doing so in an open and transparent way. This focus is driving Chief Cybersecurity Strategist Sean Peisert (SP) and Vice President and Chief Technology Officer John Dundas (JD) to develop a robust security strategy for the network and its members.

Why is security such an important issue for CENIC now?

SP: Increased recognition of the issue has raised both the visibility of security and the perception of liability surrounding it. That perceived liability and perceived risk are placing IT directors and networks in a difficult position. The reality is that there are more talented threat actors out there than there have been before. However, our systems are better than they used to be as well. Financial information, social security numbers, human resources, scientific data, email, supercomputing systems, scientific instruments, building controls—much of this has been plugged into computer networks for a number of years. Only recently has there been a greater awareness because of events such as the Home Depot, Sony, Office of Personnel Management, Anthem, and Democratic National Committee breaches.

JD: Our systems have increased in complexity in such a way that no one person understands how any one system is interconnected with an unknown number of other systems. We’re now learning what has been and is connected to legacy systems in a variety of ways. There was a time when there were a limited number of computers connected to any one network. Today every person walking onto a California campus or library arrives with a smartphone that can connect to its network with unknown apps and programs. Now we’re learning how to manage those things.

Are the risks simply greater today?

SP: Health care is one area where the recognition of security needs is great. With the advent of the Affordable Care Act, requiring medical records to be accessible to patients online, everyone rushed to get medical records online to meet the deadlines. Unfortunately, there were a number of different systems promoted without time to assess how well they did or did not interact with existing systems. There wasn’t the time to test and assure the security of those records in some places. And this is the type of information that requires a higher level of security. We’ve had certain events in recent years that have opened our eyes. For example, I think that people working in government with security clearances were surprised that the government placed as much personal information as they did online. I don’t think there was an awareness that those who took their computers home to do work could unintentionally provide access to personal information through the use of unsecure networks. Now the awareness is higher, and that will increase security.

JD: The complexity issue has to do with software systems, too, and not just the network. As we build those software systems, we’re making a great deal of assumptions.

SP: There’s a wonderful quote by software developer and technology podcaster John Siracusa, who said, “Software is the most complex thing made by humans…. Programming [software] is like having to assemble a bridge starting from subatomic particles, and you're not allowed to use the current laws of physics as a reference.” You’re building very complex systems where there are no apparent connections between pieces and parts. We have no way to dependably evaluate the reliability of those systems. We don’t even have the ability to do that for a program like Microsoft Word. Now scale this up to large scientific systems like the Large Hadron Collider or large telescopes, and you can begin to see the issue.

How do you approach security from a network perspective?

SP: Essentially, the idea of taking a risk-based approach to security means we measure the risk based on certain assets—an asset-oriented risk approach.

Let me give you a concrete example. If you’re CENIC and the most important part of your mission is to keep a network running, then you look at the assets that are key to keeping the network running and you spend your time and resources making sure you’re focused on those risks. It is easy to get lost in the weeds of complexity in this field, so this is a helpful way to prioritize security.
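The asset-oriented prioritization described above can be sketched in a few lines. This is a hypothetical illustration, not CENIC's actual methodology: the asset names and the 1–5 scoring scale for mission impact and compromise likelihood are invented for the example.

```python
# Hypothetical asset-oriented risk ranking: score each asset by mission
# impact and compromise likelihood, then rank so limited security effort
# goes to the biggest risks first. All names and scores are illustrative.

def risk_score(impact, likelihood):
    """Combine impact (1-5) and likelihood (1-5) into one risk value."""
    return impact * likelihood

# Invented example assets; a real inventory would come from the mission.
assets = [
    {"name": "core routers", "impact": 5, "likelihood": 3},
    {"name": "DNS infrastructure", "impact": 5, "likelihood": 4},
    {"name": "public web site", "impact": 2, "likelihood": 4},
    {"name": "internal wiki", "impact": 1, "likelihood": 2},
]

# Sort descending by risk so the top of the list is where effort goes.
ranked = sorted(
    assets,
    key=lambda a: risk_score(a["impact"], a["likelihood"]),
    reverse=True,
)

for a in ranked:
    print(a["name"], risk_score(a["impact"], a["likelihood"]))
```

The point of the exercise is less the arithmetic than the discipline: the ranking forces an explicit statement of which assets the mission actually depends on.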

What is the network’s role in addressing privacy and security issues for its users?

SP: I believe there is universal agreement to do whatever we can to keep the network running to allow CENIC Associates to communicate with each other and the outside world. That includes security and robustness—which are good engineering practices. We are still working through how best to approach other actions we can take.

Let’s talk about privacy. In general, you can’t have privacy without security. I don’t hold the view that privacy requires a trade-off with security, or vice versa. As an example, if you’re trying to send encrypted email to me, there’s no way to guarantee it will remain private unless our computers and software are secure. I view providing a robust network as one cog in the wheel of securing the network as a whole.

Is security a service function for a network? Should it be?

SP: I absolutely think there is value in doing that, and it’s something we should discuss. The closest analogy is public health. If you have three people come into a doctor’s office with symptoms, you might not think anything of it. But if you get a number of those same symptoms cropping up in other places, then you might notice a pattern and can try to take steps to mitigate the impact of the outbreak. By having a broader perspective of security via the network’s members and within a circle of trust, CENIC is able to do a better job than some of those associates can do on their own.

JD: Here’s another way of looking at it. The security of each CENIC member is partially impacted by the security of the other CENIC members. For example, if one CENIC member is a particularly attractive hacking target, that member's systems or infrastructure could be compromised in ways that deteriorate the integrity of the network and thus negatively impact other members.

SP: We have been discussing development of a privacy policy, recognizing that there are operational reasons for why CENIC needs to look at certain data to keep things up and running. For example, unless California’s Department of Transportation counts the number of cars on a road, it doesn’t know whether to put in another lane. In much the same way, many organizations need to know how assets are being used. CENIC needs to know how its assets are used in order to ensure it has the capacity needed by its members. The need for transparency is there.
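The "counting cars" analogy above is, in network terms, aggregating utilization samples per link and flagging links that exceed a planning threshold. A minimal sketch, assuming invented sample values and an assumed 80% planning threshold (not a CENIC figure):

```python
# Minimal capacity-planning sketch: average per-link throughput samples
# and compare against a planning threshold, the network analogue of
# counting cars to decide whether a road needs another lane.

def needs_capacity(samples_bps, capacity_bps, threshold=0.8):
    """True if average utilization exceeds the planning threshold."""
    avg = sum(samples_bps) / len(samples_bps)
    return avg / capacity_bps > threshold

# Hypothetical 5-minute average throughput samples, in bits per second,
# both on an assumed 10 Gbps link.
backbone = [8.5e9, 9.1e9, 8.8e9]
access = [1.2e9, 0.9e9, 1.1e9]

print(needs_capacity(backbone, 10e9))  # True: consider adding capacity
print(needs_capacity(access, 10e9))   # False
```

Note that this kind of measurement needs only aggregate counts, not packet contents, which is the distinction a privacy policy can make explicit.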

Can you share any examples of the impact of network threats?

JD: The creativity of hackers is impressive. With our help, a major research university was able to trace regular spikes in network activity to pirated videos that had been downloaded to campus networked copiers with unused memory. These are purely embedded systems where there’s little anyone can do to secure them.

SP: There’s also the rapid growth of mobile devices over the past fifteen years. It used to be that you could think of security in an organization as having a perimeter of sorts. There was one way in and out, and that was the network. Now there are so many ways in and out—the iPhones, the tablets, the USB sticks. With distributed systems, we can assume that at any given time some systems have been compromised. A risk-management approach assumes there is a certain amount of compromise in any network and focuses on the areas of biggest risk.