NDSS Symposium 1993

The First Privacy and Security Research Group (PSRG) Workshop on Network and Distributed System Security was held 10–12 February 1993 in San Diego, California.

The PSRG met in San Diego on February 8-10, 1993. PSRG members continued work on an Internet Security Architecture document. An initial, very rough draft of the document was reviewed in detail and many revisions were proposed. A format was adopted for additional sections of the document, and members volunteered to write several of these sections. Rob Shirey, who was acting as editor of the document, would be making revisions and accepting additional inputs in preparation for producing the next draft of the document.

Following the PSRG meeting, on February 10-12, the first PSRG Workshop on Network and Distributed System Security took place in San Diego. The workshop, co-sponsored by Lawrence Livermore National Laboratory and the Internet Society, was organized by Dan Nessett (LLNL). Dan and the rest of the PSRG members acted as the program committee for this workshop, selecting 12 papers from over 20 submissions. Over 160 copies of the proceedings were provided to attendees, and a second printing of the proceedings is being undertaken to satisfy additional demand. Plans are underway to make this an annual event, under Internet Society sponsorship. Dan Nessett will serve as chair for next year’s event, with Rob Shirey (MITRE) and Russ Housley (Xerox) as program committee chairs.

The next PSRG meeting was scheduled for July 7-9, 1993, in Cambridge (UK), immediately preceding the IETF meeting in Amsterdam.

Program

Sessions and accepted papers.


Proceedings of the First PSRG Workshop on Network and Distributed System Security

Dan Nessett
PSRG Workshop Chairman
Lawrence Livermore National Laboratory

On February 11-12, 1993, the Privacy and Security Research Group (PSRG) of the Internet Research Task Force hosted a Workshop on Network and Distributed System Security in San Diego. The workshop was co-sponsored by the Internet Society and Lawrence Livermore National Laboratory. Attracting 163 participants, the workshop was a resounding success, far surpassing the PSRG’s initial expectations. In fact, the response was so good that the PSRG has proposed it become an annual Internet Society meeting called the “ISOC Symposium on Network and Distributed System Security.”

The first session of the workshop was called “Privacy for Large Networks.” It was chaired by Robert Shirey (MITRE) and consisted of two paper presentations.

Dennis Branstad (NIST) and Robert Aiken (LLNL/DOE) authored an invited paper entitled “NREN Security Issues: Policies and Technologies,” which was presented by Aiken. After discussing the results of a NIST study on NREN security policy, Aiken outlined some of the implementation plans for security in the NREN, both in the near and medium term. The highest priority security service will be user authentication, with access control, application security and security management also receiving significant attention. Aiken then discussed some impediments to implementing security services within the NREN and gave a summary of available and future technologies that could be used to provide the desired security services.

The second paper in the session was authored by Seow-Hiong Goh, Yeow Meng Chee and Michael Yap (National Computer Board, Singapore) and presented by Goh. The paper was entitled “Security and Management in IT2000,” and covered the security aspects of a project to turn Singapore into an “Intelligent Island” by the year 2000. After discussing some of the objectives of IT2000, Goh described the interaction between authentication, authorization and the IT2000 directory service. IT2000 will use a combination of public-key and secret-key encryption to provide authentication and authorization security services, relying on smart card or magnetic card technology for the storage of personal private/secret keys. Certificates stored in the IT2000 directory service will be used to implement the authorization service. IT2000 supports three categories of agent: 1) proxy agents, 2) principal agents, and 3) public agents. Proxy agents are used by service providers to accept service requests and hand them off for servicing. They represent the contact point for what may in fact be a distributed implementation of the offered service. This arrangement also relieves the service provider of any concern with authentication, authorization and accounting, although, at their discretion, services may also perform their own authentication and authorization. Each service customer acts in the role of either a principal agent or a public agent. Principal agents represent the actions of an authenticated individual, while public agents represent the actions of an unauthenticated individual. An example of the latter occurs when an individual uses a public service, such as a public access terminal that provides information about the movies being screened at a particular location. Goh used this architectural formulation to elaborate the protocols that will be used in IT2000 to implement its authentication and authorization services.
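The agent taxonomy above can be sketched as a small dispatch routine. The class and function names below are illustrative only, not part of the IT2000 design:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class AgentRole(Enum):
    """The three IT2000 agent categories described in the paper."""
    PROXY = auto()      # service provider's contact point; hands requests off
    PRINCIPAL = auto()  # acts for an authenticated individual
    PUBLIC = auto()     # acts for an unauthenticated individual

@dataclass
class ServiceRequest:
    role: AgentRole
    user: Optional[str] = None  # None for public (anonymous) requests

def handle_request(req: ServiceRequest) -> str:
    """Hypothetical dispatch: principals must carry an authenticated
    identity; public requests need no identity at all; proxy requests
    are simply forwarded for servicing."""
    if req.role is AgentRole.PUBLIC:
        return "anonymous access granted"
    if req.role is AgentRole.PRINCIPAL:
        if req.user is None:
            raise PermissionError("principal agents must be authenticated")
        return f"authenticated access granted to {req.user}"
    return "request forwarded to back-end service"  # PROXY
```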

The second session of the workshop was a panel entitled “Layer Wars: Options for Placement of Security in the OSI Reference Model.” The session was chaired by Russ Housley (Xerox) and consisted of L. Kirk Barker (Datotek), Paul Lambert (Motorola) and James Zmuda (Hughes). Barker argued that some security services, such as thwarting low-level traffic flow analysis, can only be placed at layer 2. He argued for placing security services at the layer in which they are the most effective. Lambert argued against placing security services at the application level because there are existing applications that must be protected and which cannot be modified. He agreed with Barker that security services should be placed in the layer where they best meet the protection requirements. He also pointed out that another advantage of placing security services at layer 2 is that they are not installed in an otherwise untrusted operating system. This raises the probability that they will not be defeated by attackers taking advantage of OS vulnerabilities. Zmuda pointed out that the transport layer is the first (looking from the bottom up) to support true process-to-process communications. Since users may not trust protection mechanisms “in the middle of the network,” the transport level is a natural location to place security services. Finally, Linn argued that applications are closest to the user and also are easier to retrofit than lower layer functionality, which requires widespread agreement among a large and sometimes uncooperative group of designers and implementers. Consequently, he argued that the application layer is the optimal site for installing security services. At the end of the session, all participants seemed to agree that security services are required in more than one layer; they disagreed only about the optimal distribution of services among the layers.

After lunch, John Linn chaired a session on “Electronic Documents.” Two papers were presented.

The first, entitled “Electronic Commission Management,” by Vesna Ristic and Peter Lipp (Technische Universitaet Graz, Austria), was presented by Ristic. It described a scheme for managing a formal electronic conference of commissioners, such as those involved in administering regulations. The main topic presented was how to implement a secure voting facility, allowing commissioners to make and record formal decisions. The requirements for the normal voting scheme were (quoting from the paper): “1. Only legitimate voters are allowed to vote and each of them only once. 2. The voting authority can read the votes and publish them to other voters during the voting phase. 3. Only the voter and the voting authority know which strategy any given voter adopted. 4. After publishing the vote, a voter can check if her vote has been properly counted.” The voting scheme is based on an application of RSA public-key technology and takes advantage of certain properties of exponentiation. Another scheme for secret voting was also described, satisfying the following requirements (again, quoting from the paper): “1. Only legitimate voters may vote, and each of them only once. 2. Only the voter knows her voting strategy. 3. After publishing the outcome of the election, a voter may check if her vote has been properly counted. If not, she can complain without jeopardizing the ballot secrecy. 4. (Optionally) Each voter can change her mind (cancel and recast her vote), also without jeopardizing the ballot secrecy.”

The second paper of the “Electronic Documents” session was entitled “Workflow.2000 – Electronic Document Authorization in Practice,” by Addison Fischer (Fischer International Systems). This paper covered the use of electronic document authorization to form contracts. Fischer gave an overview of Workflow.2000, discussing many of its attributes.

The final session of the first workshop day was devoted to “Privacy Enhanced Mail.” It was chaired by Steve Kent (BBN) and consisted of four papers.

The first was entitled “Security Issues of a UNIX PEM Implementation,” by James Galvin et al. (Trusted Information Systems) and was presented by Galvin. The presentation covered the TIS/PEM implementation, which is a reference implementation of privacy enhanced mail. Several of its features were notable. TIS/PEM isolates much of its crypto functionality in a local key manager (LKM), which is a set of software and an associated database. The LKM controls access to its data according to three rules: 1) some objects are public and may be retrieved by any system user, irrespective of whether the user is an authorized PEM user; 2) some objects may only be modified or deleted by authorized PEM users; and 3) some objects are accessible only by a certificate administrator. An example of the first class of objects is a certificate revocation list. A user’s private key is an example of the second class of objects. The third class of objects is system-wide security data, such as a CA certificate. The LKM database is kept as a set of UNIX files. This allows multiple systems that are NFS capable to use the same database. Finally, Galvin described the way TIS/PEM validates certificates.
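The three LKM rules can be restated as a small access-control predicate. The object-class labels and flag names below are illustrative, not the TIS/PEM interface:

```python
def lkm_may_access(obj_class: str, operation: str, *,
                   is_pem_user: bool = False,
                   is_cert_admin: bool = False) -> bool:
    """Sketch of the three LKM access rules (names are hypothetical):
    - "public" objects (e.g. a CRL) may be read by any system user;
      modification is assumed to require an authorized PEM user.
    - "pem_user" objects (e.g. a user's private key) may be modified
      or deleted only by authorized PEM users.
    - "system" objects (e.g. a CA certificate) are reachable only by
      a certificate administrator.
    """
    if obj_class == "public":
        return operation == "read" or is_pem_user
    if obj_class == "pem_user":
        return is_pem_user
    if obj_class == "system":
        return is_cert_admin
    return False  # unknown object classes are denied by default
```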

The second paper was “Implementing Privacy Enhanced Mail on VMS” by Michael Taylor (DEC). Taylor described four PEM prototypes he developed over the course of three years. Each identified several problems with the implementation strategy employed and with PEM service definitions. Early on it became clear that key management was an absolute necessity for the successful deployment of any privacy enhanced mail system. The second prototype solved this and other problems by implementing an early version of PEM, defined according to RFC-1040. However, this prototype was not well integrated with the normal system mail facility. Consequently, a third prototype was built. Taylor discussed several problems with integrating PEM into an existing mail user interface. The final prototype followed the revised PEM specification found in RFC-1113, RFC-1114 and RFC-1115. Taylor also described several problems he encountered when trying to deploy PEM and some of the lessons that he learned during the prototype development.

The third paper was “Distributed Public Key Certificate Management” by Charles Gardiner (BBN). Gardiner described the SafeKeyper Certificate Signing Unit (CSU) developed at BBN Communications to support the key management infrastructure required for PEM. The SafeKeyper CSU is a finely engineered crypto-key management device whose basic function is the signing of X.509 (PEM) certificates. It also signs Certificate Revocation Lists for subsequent distribution to PEM customers. Gardiner discussed the operation of the unit, including its authorization, audit and configuration management functions.

The final paper of this session was “Protecting the Integrity of Privacy-Enhanced Electronic Mail with DES-based Authentication Codes” by Stuart Stubblebine (USC-ISI) and Virgil Gligor (Univ. of Maryland). The paper was presented by Stubblebine and discussed a security flaw in the original symmetric-key-distribution part of the PEM specification. This flaw was found using techniques developed by the authors to analyze cryptographic protocols. The attack centers on the DES-MAC checksums used when the PEM symmetric key distribution scheme is employed (as opposed to the technique used in the reference implementation based on X.509 certificates). The flaw, since corrected, would have allowed an intruder to send a message to another principal and make it appear to have come from a third principal.

The second day of the workshop began with a panel session entitled “Exportable Algorithms: Promise or Pandora.” The panel was chaired by Steve Kent (BBN) and consisted of Steve Dusse (RSA Data Security, Inc.), Ilene Rosenthal (Software Publishers Association) and Robert Rosenthal (NIST). Rob Rosenthal delineated the position of NIST on the issue of exportable algorithms. NIST does not set policy in this area, since it is not a regulatory agency. Rather, NIST sets security standards that follow established governmental policy. Ilene Rosenthal argued that customers want encryption services in their consumer products. She pointed to recently recorded mobile telephone conversations of Prince Charles and Lady Diana as an example where encryption services in consumer products would have saved customers from significant embarrassment. In addition, while national defense objectives may be served by the availability of clear-text traffic of our adversaries, national security also depends on a strong economic base. If the U.S. government prevents U.S. business from selling products that include standard-strength encryption technology, foreign competitors will develop and capture this market, which will contribute to a less vibrant economy. In fact, foreign competitors already have products that incorporate encryption technology. Dusse reported on his company’s experience with encryption export controls. The RC2 and RC4 encryption algorithms are exportable, but only if their key size is limited and only if the algorithms are protected by non-disclosure agreements. One member of the audience, Rob Shirey, commented that the panel was not well balanced and that NSA’s side of the story was not properly presented. Shirey claimed that there are good reasons to control the export of crypto technology and suggested that next year’s symposium should provide NSA with the opportunity to make their case.

The second session, called “Distributed Systems,” contained three papers and was chaired by Clifford Neuman (USC-ISI).

The first paper, “Practical Authorization in Large Heterogeneous Distributed Systems,” by John Fletcher and Dan Nessett (LLNL), described an authorization technique that allows remote execution without requiring that the remote execution server run as root. The paper was presented by Fletcher. Fletcher presented seven requirements for access control in practical scientific and engineering computing environments and then critiqued existing access control mechanisms in their light. Two kinds of authorization were described, layered and server-centric, and the advantages and disadvantages of each were discussed. Fletcher then presented a layered authorization scheme that, unlike others of this kind, does not require root privilege. The central idea is to keep an individual’s password encrypted on the server machine, but keep the password’s encryption key, super-encrypted by a master key, on the client machine. This means that an intruder must break into both the server and client machine file systems to obtain useful authorization information. The client presents the encrypted password key to the server in order to be authorized to use the associated user context on that machine.
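The split-secret idea can be illustrated with a toy cipher. The paper does not specify an algorithm; the XOR keystream below merely stands in for whatever cipher a real deployment would use, and the key-presentation step reflects one plausible reading of the protocol:

```python
import hashlib

def toy_cipher(key: bytes, data: bytes) -> bytes:
    """Illustrative symmetric cipher: XOR against a SHA-256-derived
    keystream. Applying it twice with the same key decrypts."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

# Setup: the secret material is split between the two machines.
password = b"user-password"
K = b"password-encryption-key"
M = b"client-master-key"
server_file = toy_cipher(K, password)  # server holds the encrypted password
client_file = toy_cipher(M, K)         # client holds K, super-encrypted by M

# Authorization: the client recovers K with its master key and presents
# it; the server can then decrypt and verify the stored password. An
# intruder needs BOTH files (plus M) before either record is useful.
presented = toy_cipher(M, client_file)
assert toy_cipher(presented, server_file) == password
```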

The second paper was “Extending the OSF DCE Authorization System to Support Practical Delegation,” by Marlena Erdos and Joseph Pato (Hewlett-Packard). The paper, presented by Pato, discussed a proposed mechanism that supports rights delegation in DCE. This scheme has the following three important properties: 1) an intermediary may operate on objects in a way that includes the initiator’s identity as an access control parameter; 2) target servers can make use of the distinction between initiators and intermediaries; and 3) clients may place restrictions on the operations that intermediaries perform on their behalf. The scheme extends DCE’s Privilege Attribute Certificates (PACs), placing within them delegation information to achieve the above three goals. Pato discussed how these extended PACs would be used and how their scheme compares with other approaches to rights delegation.
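One way to picture the extended PAC is as a credential carrying the initiator, the chain of intermediaries, and the client-imposed restrictions. The field and function names below are illustrative sketches, not the actual DCE structures:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class ExtendedPAC:
    initiator: str                # property 1: initiator identity is preserved
    intermediaries: tuple = ()    # property 2: distinguishable from initiator
    allowed_ops: tuple = ("*",)   # property 3: client-imposed restrictions

    def delegate(self, intermediary: str, allowed_ops=None) -> "ExtendedPAC":
        """Hand an intermediary a (possibly narrowed) credential."""
        return ExtendedPAC(
            initiator=self.initiator,
            intermediaries=self.intermediaries + (intermediary,),
            allowed_ops=tuple(allowed_ops) if allowed_ops else self.allowed_ops,
        )

def target_permits(pac: ExtendedPAC, operation: str,
                   acl_allows: Callable[[str, tuple, str], bool]) -> bool:
    """A target server honors the delegation restrictions first, then
    consults its own ACL with both the initiator and the intermediaries."""
    if operation not in pac.allowed_ops and "*" not in pac.allowed_ops:
        return False
    return acl_allows(pac.initiator, pac.intermediaries, operation)
```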

The final paper of this session was “TRUFFLES – A Secure Service for Widespread File Sharing” by Peter Reiher et al. (UCLA), Jeff Cook and Stephen Crocker (Trusted Information Systems). The paper was presented by Reiher. TRUFFLES stands for TRUsted Ficus FiLE System and is an enhancement of the Ficus file system developed at UCLA. TRUFFLES allows file sharing to be established between autonomous administrative domains through the use of PEM, which sets up the sharing relationship by distributing keys for authentication and encryption of file traffic. Reiher described the Ficus file system on which TRUFFLES is based and compared their work with other file system service and security mechanisms.

The session after lunch was a panel session on “Network Security Using Smart Cards,” chaired by Dave Balenson (Trusted Information Systems) and consisting of Jeff Schiller (MIT), Marjan Krajewski (MITRE) and Jim Dray (NIST). Schiller discussed several issues affecting the security of smart cards and argued that some smart cards don’t have the functionality required to support secure applications when using DSS as the underlying crypto algorithm. He also pointed out some shortcomings of DSS when compared to RSA. Krajewski described some of the work being done at MITRE to integrate smart card technology with Kerberos. The smart card holds the user’s private key encrypted by a key derived from his password. Finally, Jim Dray described the Advanced Smartcard Access Control System (ASACS), which is under development at NIST, Datakey and Trusted Information Systems. Dray gave an overview of the system and described how it was integrated into several applications, including PEM.

The final session of the workshop was a panel session discussing the issue “Should Security be Legislated?” The panel was chaired by Cliff Neuman (USC-ISI) and consisted of Steve Kent (BBN), Rob Rosenthal (NIST) and Jeff Schiller (MIT).