Information Security, a New Science
 

In this age of hacking, viruses, and cyber attacks, information security is rapidly emerging as a formal discipline.  Borrowing tools from economics and game theory, researchers are discovering new (and sometimes old) ways of securing networks and systems.  As systems and machines become more interdependent, there is a growing need to establish responsibility and security beyond the individual component level.  Sound security practice also reaches beyond software and hardware: the people who manage networks affect vulnerability as well.  Systems need to be designed with security built in, rather than added after the fact.  This article surveys the directions being taken by researchers in information security, provides examples of where good ideas have gone bad, and offers suggestions on which approaches have been found to work best.

Information security lends itself to analysis via modeling and evolutionary game theory.  These tools are emerging as vehicles for exposing weaknesses in networks, whether they are collections of interdependent programs or complex networks like the Internet.  Researchers are using them to address strategic and policy issues affecting information security, running multi-round simulations to study how attackers and defenders interact.  They have found, and real-world examples bear this out, that ring architectures like file-sharing systems tend to be more fragile than clique architectures like terrorist cells: a security leak, once exploited, can be used to reach other nodes more easily in a ring architecture.
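To make the fragility contrast concrete, here is a minimal simulation sketch.  It is my own illustration, not a model taken from the research cited, and the network size, loss count, and trial count are arbitrary; it estimates how often each topology stays connected after a few nodes are knocked out:

```python
import random

def ring(n):
    # each node is linked only to its two neighbours
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def clique(n):
    # every node is linked to every other node
    return {i: set(range(n)) - {i} for i in range(n)}

def connected(graph, alive):
    # breadth-first search restricted to the surviving nodes
    if not alive:
        return True
    seen, frontier = set(), [next(iter(alive))]
    while frontier:
        node = frontier.pop()
        if node in seen:
            continue
        seen.add(node)
        frontier.extend((graph[node] & alive) - seen)
    return seen == alive

def survival_rate(graph, losses, trials=2000):
    """Fraction of trials in which the network stays connected
    after `losses` randomly chosen nodes are knocked out."""
    nodes = set(graph)
    hits = 0
    for _ in range(trials):
        dead = set(random.sample(sorted(nodes), losses))
        if connected(graph, nodes - dead):
            hits += 1
    return hits / trials

n, losses = 12, 3
print("ring  :", survival_rate(ring(n), losses))   # usually partitions
print("clique:", survival_rate(clique(n), losses)) # always survives
```

A clique keeps many redundant paths between any two nodes, so losing a few nodes never partitions it, while the ring almost always splits apart.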

Market and communitarian environments are two extremes of network organization.  Market environments rely on an independent enforcement and monitoring agency; the individuals participating in transactions are not visible to each other.  Communitarian environments have built-in monitoring, since all the participants interact with each other on a regular basis.  The stock exchange, overseen by the SEC, is a prime example of a market environment.  Although file-sharing networks at first glance appear communitarian, their large user base and the anonymity of their transactions make them function more like a market environment without any controls: there is no agency overseeing the transactions.

Consider how data flows through a typical system.  Data is entered and edited for validity; once validated, it is accepted by the system and used for further processing.  The system “trusts” that the data will not crash it, which, of course, sometimes happens.  Incorporating further edits throughout the system reduces the chance of a failure deeper in the system.  Over time, as the system changes, the integrity of the data deteriorates because the way the data is used changes, and the system becomes especially vulnerable to historical data that was validated under old rules.  In a communitarian approach to system design, therefore, each component is responsible for guarding against corrupt data, an idea that directly contradicts classical systems design methodologies.
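As a sketch of this communitarian style of design (the Payment record and the validation rules below are hypothetical, invented purely for illustration), each component re-validates its input instead of trusting the component upstream:

```python
from dataclasses import dataclass

@dataclass
class Payment:
    account: str
    amount: float

def validate(p: Payment) -> Payment:
    # hypothetical rules; the point is that EVERY component applies them
    if not p.account.isdigit():
        raise ValueError(f"bad account: {p.account!r}")
    if not 0 < p.amount <= 1_000_000:
        raise ValueError(f"amount out of range: {p.amount}")
    return p

def post_to_ledger(p: Payment) -> None:
    p = validate(p)  # guards itself even though intake already validated
    print("posted", p)

def intake(raw: dict) -> None:
    p = validate(Payment(**raw))  # first edit at the point of entry
    post_to_ledger(p)

intake({"account": "12345", "amount": 250.0})
```

The redundant checks cost a little performance, but a rule change, or a batch of historical records that slips past one component, no longer compromises everything downstream.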

Shifting away from a market-based security strategy decreases the likelihood of a disruption.  Communitarian networks depend on repeated transactions among the nodes; this creates “trust” and strengthens the network.  Since market and communitarian environments are extremes, companies balancing security and performance look for a “happy medium” in which security is high without placing too much of a burden on performance.

Attacks on a system or network take two general forms: intentional and accidental.  Intentional, or malicious, attacks are perpetrated by individuals or organizations intent on disrupting the network or accessing confidential information; in recent years hackers have been known to steal offsite backup tapes, sometimes while they were in transit.  These attackers all have two things in common: they do not want to get caught, and they want to achieve a goal.  Security can therefore be increased by raising the likelihood of detection and raising the cost of exposure (i.e., punishment).  Companies should, for example, retrieve and review their router logs; this information can be used to locate repeat attackers and suspicious network activity.  One key area of vulnerability is error handling: an attack may be masked to look like an error.  While this does not necessarily reduce the risk of detection, it can reduce the risk of punishment.
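As a simple illustration of the log-review idea (the log file name, the line format, and the threshold below are my assumptions, not any real router's conventions), a short script can surface repeat attackers:

```python
import re
from collections import Counter

# assumed line format: "<timestamp> DENY src=<ip> dst=<ip> port=<n>"
DENY = re.compile(r"DENY src=(\d{1,3}(?:\.\d{1,3}){3})")

def repeat_offenders(lines, threshold=5):
    """Count denied connections per source address and return the
    addresses that appear at least `threshold` times."""
    hits = Counter()
    for line in lines:
        match = DENY.search(line)
        if match:
            hits[match.group(1)] += 1
    return {ip: n for ip, n in hits.items() if n >= threshold}

with open("router.log") as log:  # hypothetical log file
    offenders = repeat_offenders(log)
for ip, count in sorted(offenders.items(), key=lambda kv: -kv[1]):
    print(f"{ip}: {count} denied attempts")
```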

The second type of attack is the accident: a user deletes a file by mistake, or a wrong connection is made, and the network is compromised in that its service levels are greatly reduced.  Every disruption can be viewed as an attack, regardless of how it originated, since the results are the same.  Back in the 1980s, a San Francisco Bay Area firm had a policy of firing someone whenever the online transaction processing system crashed.  The system did not crash for a year and a half, an impressive record.  When it finally did crash, it was because a contract (union) electrician threw a circuit breaker; since the blame did not rest with any employee directly, no one was fired.
Over the past few years organizations have come to realize that security failures can come from a totally unexpected direction: the staff itself.  The people guarding the information do not suffer personally if there is a failure, so they have no incentive to provide that extra measure of security.  U.S. and British banks treat fraud differently, due in large part to legal and perceptual differences.  U.S. banks are liable for credit card fraud costs, whereas British banks assume the customer is wrong or lying.  UK banks spend more on security than American banks, yet suffer more fraud: UK bank staff became lax in their security efforts, and the amount of fraud skyrocketed.  This example illustrates one of the fundamental principles of the economic analysis of liability: it should be assigned to the party that can do the best job of managing the risk.

A June 1, 2000 article in The New York Times stated: “If a particular user's computer is taken over, should he have liability for the cost of the attack on someone else? The average user is essentially clueless about how to prevent his computer from being taken over, so assigning liability to him would be pointless. Assigning liability to the network operator would make more sense.  A typical security analysis involves identifying weak points in a system and indicating who might be in a position to fix them. But security analysts should go one step further and examine the incentives of those responsible for the system. Such an analysis could be used to assign liability so that those who are best positioned to control the risks have appropriate incentives to do so.”  In many organizations, though, it is enough to make sure the blame for a security failure rests with someone else, typically the vendor's software.
The anti-virus industry offers another example.  Consumers might be willing to spend $20 to protect their own hard drives but take no interest in denial-of-service attacks waged against their ISPs.  A few months ago hackers took over some computers at a university and used them to shut down Yahoo and other web sites.  The university considered the takeover of its systems a “nuisance,” since it did not have to bear the brunt of the outage or the cost.

Software vendors develop systems that not only operate on the more popular platforms but also interoperate with other software, creating a “network” of software products.  Security can be an obstacle to vendors' goals: under pressure to get the product to the marketplace, they often add security after the fact.  This network is only as strong as its weakest link, since each software product (or node) trusts every other product it interacts with to provide a “safe haven” for the information being shared.

New systems are being developed that apply modeling theory to systems design and can analyze a design to expose its weaknesses.  One example is the Alloy language developed at MIT.  Alloy has been used to find flaws in the published designs of several systems and has been taught at over a dozen universities.

Ross Anderson and Tyler Moore of the Computer Laboratory, University of Cambridge, in an article published in a recent issue of Science, explore the “recent results and live research challenges in the economics of information security.”  It is difficult to assess security risks in software when users cannot distinguish between secure and insecure products, which allows insecure software to dominate the marketplace.  Organizations should evaluate the security levels of the software they purchase, giving special consideration to how information is protected as it passes from one system to another.

Economic theory can be applied directly to computer security issues.  Danezis and Anderson created what they called the “Red-Blue” model.  Each node in a network (e.g., routers, workstations, servers, even users) is assigned a preference and a “defense budget” that measures its ability to resist an attack.  The “game” proceeds in rounds that test each node's ability to defend itself against an attack that goes against its preference.  They found that a diversity of preferences, as opposed to having the same preferential “rules” on all the nodes, enabled the system to defend itself better.
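The published model is richer than this, but a toy rendering of the diversity result might look like the following; the two colours, the single budget, and the all-nodes-down failure criterion are my own simplifications, not Danezis and Anderson's actual rules:

```python
import random

def failure_rate(preferences, rounds=10_000, budget=0.5):
    """Each round an attack of a random colour and strength arrives.
    A node resists if the attack matches its preference or its defense
    budget covers the strength; the system fails in any round where
    every node is knocked out at once."""
    failures = 0
    for _ in range(rounds):
        colour, strength = random.choice("RB"), random.random()
        survivors = sum(1 for pref in preferences
                        if pref == colour or budget >= strength)
        if survivors == 0:
            failures += 1
    return failures / rounds

uniform = ["R"] * 20         # every node tuned the same way
diverse = ["R", "B"] * 10    # preferences split across the nodes
print("uniform:", failure_rate(uniform))  # whole system drops together
print("diverse:", failure_rate(diverse))  # some node always survives
```

With identical preferences, every strong “B” attack downs all the nodes at once; with mixed preferences, some node always matches the attack colour and stays up.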

System reliability is not always a function of the sum of the individual efforts.  How well a system performs can follow a minimum-effort model for developers, while testing can follow a maximum-effort model: a lazy programmer can inadvertently introduce coding errors, while a super-tester can carry the weight of an entire team.  The model suggests that there should be fewer, more competent developers and more testers (see the sketch after this section).

Researchers are also considering the impact of interdependent risk, where the actions of one party affect another.  In one example involving the airline industry, customer baggage is searched at the point of entry, but when it is transferred to another airline it is assumed to be secure because it was searched by a trusted associate airline.  Likewise, the developers and testers of a system are assigned non-overlapping tasks, and each member is “trusted” to do their part, so the strength of the system depends on the weakest link.  Changes to one part of a system or network typically inspire testing of just the changes and nearby components, that is, components directly related to the changed one.  A better rule: change one thing, test everything.  Although this strategy may not seem cost-effective, it exposes hidden weaknesses.  Many companies focus their attention on development, since it provides visible returns and directly supports the business model in place at the time; quality assurance groups and departments are dropped or greatly reduced in size and scope, shifting the balance toward the weakest link.
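The contrast between the effort models is easy to state numerically (the effort scores below are invented for illustration):

```python
# Three ways individual efforts combine into overall quality:
# average (total) effort, weakest link, and best shot.
efforts = {"dev_a": 0.9, "dev_b": 0.4, "tester": 0.95}

average_effort = sum(efforts.values()) / len(efforts)
weakest_link   = min(efforts.values())  # development: one careless
                                        # programmer drags it down
best_shot      = max(efforts.values())  # testing: one super-tester
                                        # carries the whole team

print(f"average effort: {average_effort:.2f}")
print(f"weakest link  : {weakest_link:.2f}")
print(f"best shot     : {best_shot:.2f}")
```

Since development quality tracks the minimum and testing quality tracks the maximum, each added tester can raise the maximum cheaply, while each added developer risks lowering the minimum, which is the intuition behind the fewer-developers, more-testers recommendation.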

In the software industry, researchers have asked whether frequent patches and vulnerability disclosures are a good thing.  The argument against frequent disclosure is that publication focuses attention on vulnerabilities and inspires more attacks.  Ozment and Schechter studied the OpenBSD operating system and found that although attacks increased, patching reduced the vulnerabilities over time, even though the patching process itself gave rise to attacks.  Revealing vulnerabilities in core software (e.g., Windows) has been known to inspire allied vendors to improve their own software's security.  Software vendors decide how much security to incorporate, and more security does imply more cost; it also implies that the customer will be the party to discover, or be the victim of, the remaining vulnerabilities.  Some companies, such as iDefense Labs and TippingPoint, specialize in vulnerability analysis and reporting.

In a recent trend, and partly to avoid government regulation, organizations have created certification programs for both software and web sites.  TRUSTe is a non-profit certification authority.  BBBOnLine, sponsored by the Better Business Bureau, provides a database of more than 34,000 “reliable” web sites tied to BBB customer-experience feedback.  The aim was to improve customer perception.  Unfortunately, the most likely applicants for these certificates were the dubious companies the programs were trying to warn against: although about 3% of web sites overall are malicious, 8% of web sites with certificates are.  Similarly, 2.73% of web sites ranked at the top of web searches were bad, but 4.44% of web sites that bought advertising from the search engines were.  Hence Benjamin Edelman's (Harvard University) conclusion: “don't click on ads.”

Information security is emerging as a field in its own right.  While it is still in its infancy, with only about 100 researchers active worldwide, it is able to draw on the tools of other disciplines, and some of the results already obtained go against long-standing attitudes.  Building a system's functionality first and adding features like security later creates weak systems; designing robustness in from the beginning causes fewer problems in the long run.  New tools are becoming available to help developers include security measures in the initial implementation of new systems.  For example, IBM's new set of security software is intended to let developers and application providers more easily engineer security into the initial software design.  IBM's Secure Shell Library for Java is a lightweight implementation of the Internet Engineering Task Force (IETF) Secure Shell (SSH-2) protocol; it automatically encrypts data transmitted from one computer to another, including passwords and information stored in e-commerce or financial files.
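IBM's library targets Java and its exact API is not shown here; as an illustrative stand-in, the widely used Python library paramiko speaks the same SSH-2 protocol.  The host name, user name, and key path below are placeholders:

```python
import os
import paramiko

# Open an SSH-2 connection; the transport encrypts everything that
# follows, including authentication and all command input and output.
client = paramiko.SSHClient()
client.load_system_host_keys()
client.connect("backup.example.com", username="deploy",
               key_filename=os.path.expanduser("~/.ssh/id_rsa"))

stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()
```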

More and more researchers are treating information security as a distinct discipline.  Using tools borrowed from other areas of study, such as game theory, social theory, and network analysis, they are studying the infrastructure that facilitates the movement of information.  By extending the definition of a network to include the hardware, the software, and the people who manage them, researchers are analyzing the total picture, and the early results are tantalizing.  Information security personnel, software developers, and even users of at-risk networks who look beyond their immediate responsibilities strengthen their networks.  Repeated interaction builds trust, whether the relationship is among routers or users.  Localizing the security requirements of individual nodes reduces the chance of a break-in spreading to the entire network.  Organizations need to include security as an integral part of their systems and policies.  The future holds much promise: new techniques and solutions are being published every day.

 
Online Networking Inc.   dennis@onlinenetworking.org