Separation of mechanism and policy

The separation of mechanism and policy[1] is a design principle in computer science. It states that mechanisms (those parts of a system implementation that control the authorization of operations and the allocation of resources) should not dictate (or overly restrict) the policies according to which decisions are made about which operations to authorize, and which resources to allocate. While most commonly discussed in the context of security mechanisms (authentication and authorization), separation of mechanism and policy is applicable to a range of resource allocation problems (e.g. CPU scheduling, memory allocation, quality of service) as well as the design of software abstractions.[citation needed]

Per Brinch Hansen introduced the concept of separation of policy and mechanism in operating systems in the RC 4000 multiprogramming system.[2] Artsy and Livny, in a 1987 paper, discussed an approach for an operating system design having an "extreme separation of mechanism and policy".[3][4] In a 2000 article, Chervenak et al. described the principles of mechanism neutrality and policy neutrality.[5]

Rationale and implications

The separation of mechanism and policy is the fundamental design approach that distinguishes a microkernel from a monolithic kernel. In a microkernel, the majority of operating system services are provided by user-level server processes.[6] It is important for an operating system to have the flexibility to provide mechanisms that can support the broadest possible spectrum of real-world security policies.[7]

It is almost impossible to envision all of the different ways in which a system might be used by different types of users over the life of the product. Any hard-coded policy is therefore likely to be inadequate or inappropriate for some (or perhaps even most) potential users. Decoupling the mechanism implementations from the policy specifications makes it possible for different applications to use the same mechanism implementations with different policies, which means that those mechanisms are likely to better meet the needs of a wider range of users for a longer period of time.

If it is possible to enable new policies without changing the implementing mechanisms, the costs and risks of such policy changes can be greatly reduced. In the first instance, this can be accomplished merely by segregating mechanisms and their policies into distinct modules: replacing the module that dictates a policy (e.g. the CPU scheduling policy) without changing the module that executes this policy (e.g. the scheduling mechanism) changes the behaviour of the system. Further, in cases where a wide or variable range of policies is anticipated depending on applications' needs, it makes sense to create some non-code means of specifying policies, i.e. policies are not hardcoded into executable code but can be specified as an independent description. For instance, file protection policies (e.g. Unix's user/group/other read/write/execute permissions) might be parametrized. Alternatively, an implementing mechanism could be designed to include an interpreter for a policy specification language. In both cases, the system is usually accompanied by a deferred binding mechanism (e.g. late binding of configuration options via configuration files, or runtime programmability via APIs) that permits policy specifications to be incorporated into the system, or replaced by others, after it has been delivered to the customer.
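
The sketch below is a minimal illustration of this idea (it is not drawn from any of the cited systems; the names Scheduler, fifo_policy and shortest_job_policy are purely illustrative). The scheduling mechanism maintains the ready queue and dispatches tasks, while the decision about which task runs next is delegated to an interchangeable policy function.

    # Illustrative mechanism/policy separation (hypothetical names throughout).
    from collections import namedtuple

    Task = namedtuple("Task", ["name", "arrival", "duration"])

    class Scheduler:
        # Mechanism: keeps the ready queue and dispatches tasks. It never
        # decides *which* task to run; that choice is delegated to the
        # policy function supplied at construction time.
        def __init__(self, policy):
            self.policy = policy      # swappable without touching this class
            self.ready = []

        def submit(self, task):
            self.ready.append(task)

        def run(self):
            order = []
            while self.ready:
                task = self.policy(self.ready)   # policy makes the decision
                self.ready.remove(task)          # mechanism carries it out
                order.append(task.name)
            return order

    # Policies: pure decision functions with no knowledge of the mechanism.
    def fifo_policy(ready):
        return min(ready, key=lambda t: t.arrival)

    def shortest_job_policy(ready):
        return min(ready, key=lambda t: t.duration)

    tasks = [Task("a", 0, 9), Task("b", 1, 2), Task("c", 2, 5)]
    for policy in (fifo_policy, shortest_job_policy):
        scheduler = Scheduler(policy)
        for t in tasks:
            scheduler.submit(t)
        print(policy.__name__, scheduler.run())

In this style, the policy in use could itself be selected by name from a configuration file or swapped at run time through an API, without modifying the Scheduler class; this is the deferred binding described above.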

An everyday example of mechanism/policy separation is the use of card keys to gain access to locked doors. The mechanisms (magnetic card readers, remote controlled locks, connections to a security server) do not impose any limitations on entrance policy (which people should be allowed to enter which doors, at which times). These decisions are made by a centralized security server, which (in turn) probably makes its decisions by consulting a database of room access rules. Specific authorization decisions can be changed by updating a room access database. If the rule schema of that database proved too limiting, the entire security server could be replaced while leaving the fundamental mechanisms (readers, locks, and connections) unchanged.

Contrast this with issuing physical keys: to change who can open a door, new keys must be issued and the lock must be changed. This intertwines the unlocking mechanism with the access policy. For a hotel, this is significantly less practical than using key cards.
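
The card-key scenario can be sketched in the same way (the rule table, the policy_allows function and the DoorLock class below are hypothetical, not a description of any real access-control product). The lock-and-reader mechanism defers every authorization decision to a policy expressed as data, so access rights are changed by editing the rule table rather than the mechanism.

    from datetime import time

    # Policy, expressed as data: which card opens which door, and when.
    # Editing this table changes the policy; the mechanism is untouched.
    ACCESS_RULES = {
        ("alice", "lab-101"): (time(8, 0), time(18, 0)),
        ("bob",   "lab-101"): (time(0, 0), time(23, 59)),
    }

    def policy_allows(card_id, door_id, now):
        # Policy decision: consult the rule table and nothing else.
        window = ACCESS_RULES.get((card_id, door_id))
        return window is not None and window[0] <= now <= window[1]

    class DoorLock:
        # Mechanism: reads cards and actuates the lock, deferring every
        # authorization decision to the policy function it is given.
        def __init__(self, door_id, decide=policy_allows):
            self.door_id = door_id
            self.decide = decide

        def swipe(self, card_id, now):
            return "unlocked" if self.decide(card_id, self.door_id, now) else "denied"

    lock = DoorLock("lab-101")
    print(lock.swipe("alice", time(12, 0)))   # unlocked
    print(lock.swipe("alice", time(22, 0)))   # denied (outside alice's hours)
    print(lock.swipe("carol", time(12, 0)))   # denied (no rule for carol)

Replacing the in-memory rule table with queries to a central security server, as in the scenario above, would change only policy_allows; the DoorLock mechanism would stay the same.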

Notes

  1. Butler W. Lampson and Howard E. Sturgis. "Reflections on an Operating System Design". Communications of the ACM 19(5):251–265 (May 1976).
  2. "Per Brinch Hansen • IEEE Computer Society". http://www.computer.org/web/awards/pioneer-per-hansen. 
  3. Miller, M. S., & Drexler, K. E. (1988). "Markets and computation: Agoric open systems". In Huberman, B. A. (Ed.), The Ecology of Computation, pp. 133–176. North-Holland.
  4. Artsy, Yeshayahu et al., 1987.
  5. Chervenak 2000, p. 2.
  6. Raphael Finkel, Michael L. Scott, Y. Artsy and H. Chang. "Experience with Charlotte: simplicity and function in a distributed operating system" (www.cs.rochester.edu/u/scott/papers/1989_IEEETSE_Charlotte.pdf). IEEE Trans. Software Engng 15:676–685; 1989. Extended abstract presented at the IEEE Workshop on Design Principles for Experimental Distributed Systems, Purdue University; 1986.
  7. R. Spencer, S. Smalley, P. Loscocco, M. Hibler, D. Andersen, and J. Lepreau. "The Flask Security Architecture: System Support for Diverse Security Policies". In Proceedings of the Eighth USENIX Security Symposium, pages 123–139, Aug. 1999.
