Which of the following is not a change control principle of the Clark-Wilson model?

Information Gathering

Craig Wright, in The IT Regulatory and Standards Compliance Handbook, 2008

Biba and Clark Wilson

The Biba Model or Biba Integrity Model is a formal state transition system of data security policies designed to express a set of access control rules in order to ensure data integrity. Data and subjects are grouped into ordered levels of integrity. Biba is designed so that a subject cannot corrupt data at a level ranked higher than the subject's, and cannot be corrupted by data from a level lower than the subject's.

The Biba model was created to address a weakness in the Bell-LaPadula Model: the Bell-LaPadula model only addresses data confidentiality, not integrity.

The Clark-Wilson integrity model presents a methodology to specify and analyze an integrity policy for a data system. The chief concern of this model is the formalizing of a notion of information integrity through the prevention of data corruption in a system as a result of either faults or malicious purposes. An integrity policy depicts the method to be used by the data items in the system in order to remain valid as they are transitioned from one system state to another. The model stipulates the capabilities of those principals deployed within the system and the model delineates certification and enforcement rules.
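The enforcement side of Clark-Wilson can be sketched in a few lines (a hedged illustration; the user, procedure, and data-item names are invented, not from the chapter): subjects may manipulate constrained data items (CDIs) only through transformation procedures (TPs), and only when the (user, TP, CDI) triple has been certified.

```python
# Illustrative Clark-Wilson enforcement: access is mediated by certified
# (user, transformation procedure, constrained data item) triples.

CERTIFIED_TRIPLES = {
    ("alice", "post_payment", "accounts"),
    ("bob", "audit_log_append", "audit_log"),
}

def may_execute(user: str, tp: str, cdi: str) -> bool:
    """Enforcement rule: a user may run a TP on a CDI only if the
    triple has been certified."""
    return (user, tp, cdi) in CERTIFIED_TRIPLES

assert may_execute("alice", "post_payment", "accounts")
assert not may_execute("alice", "audit_log_append", "audit_log")
```

The check captures the model's key point: there is no direct (user, data) permission at all; every access is bound to a certified procedure.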

URL: https://www.sciencedirect.com/science/article/pii/B9781597492669000059

An analysis of security access control on healthcare records in the cloud

P. Chinnasamy, ... K. Shankar, in Intelligent Data Security Solutions for e-Health Applications, 2020

Mandatory access control (MAC) for EHR

Mandatory access control (MAC) provides security through a centralized, authoritative mechanism configured by a designated security administrator. All users are treated uniformly by the access policy, and no super-user exists as in DAC. The need for a MAC mechanism arises when a system's security policy dictates that protection decisions must not be left to the owner of an object; instead, the data protection choices must be enforced by the system itself. Mandatory access control is typical of military-style security, and security-label mechanisms are commonly used to mediate access under MAC. MAC is closely associated with multilevel security (MLS). Its most significant features are the integrity and confidentiality it provides for the system as a whole: it can raise security to the highest level across an entire system, but it possesses low adaptability. Even so, this kind of system-wide security does not guarantee absolute protection; it is used in government and military systems because of these characteristics. MAC does not take the relationships among users into account, nor does it rely on individual users to enforce policy: users and processes must hold suitable clearance for an object's classification before they can interact with objects in that class.

Multilevel Security (MLS): In this system, access to data is governed by labeling both the subjects and the objects. Before any interaction between the two is permitted, the subject's label is evaluated against the object's label. A good understanding of MLS is not complete without understanding its origins and the problems it was meant to solve: the U.S. military and intelligence communities have traditionally segregated information on the basis of its security classification.

Bell-LaPadula: In 1976, the U.S. Department of Defense published the Bell-LaPadula model to formalize its secrecy requirements. The model defines two mandatory access control rules (Fig. 5). It is built on the concept of a state machine, a set of permissible states in a computer system: a mathematical model of computation used to design both computer programs and sequential logic circuits.

Fig. 5. The Bell-La-Padula model for MAC.

No-Read Up: This means that a subject at a given security level cannot read an object classified at a higher level.

No-Write Down: This means that a subject at a given security level cannot write to an object classified at a lower level.
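The two rules above can be sketched as follows, with security levels modeled as ordered integers (higher = more sensitive); the names are illustrative, not from the chapter.

```python
# Hedged sketch of the two Bell-LaPadula mandatory rules.

def may_read(subject_level: int, object_level: int) -> bool:
    # No-Read Up: a subject may only read objects at or below its level.
    return subject_level >= object_level

def may_write(subject_level: int, object_level: int) -> bool:
    # No-Write Down: a subject may only write objects at or above its level.
    return subject_level <= object_level

NURSE, DOCTOR = 1, 2
assert may_read(DOCTOR, NURSE)       # the doctor can read nurse-level records
assert not may_write(DOCTOR, NURSE)  # but cannot write them down
```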

In a medical setting, for example, a doctor is given a higher security level than a nurse. This model keeps the doctor from writing a medical record labeled at the nurse's security level, even when this is urgently required; to share the data during emergency surgery, the doctor would have to violate the security policy.

Biba model

The Biba model describes a type of scheme that guarantees data integrity. In the Biba model, integrity labels are assigned to processes and objects. Low-integrity processes cannot write to objects of high integrity, while high-integrity processes cannot read objects of low integrity.

A variant known as low watermarking improves the flexibility of the Biba model: a high-integrity process may read low-integrity objects, but the process is then demoted to the integrity level of the object until a manual reconfiguration restores it.
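The low-watermark behavior just described can be sketched as follows (a hedged illustration with integer integrity levels): reads of lower-integrity objects succeed but demote the subject, and the strict no-write-up rule then applies at the demoted level.

```python
# Illustrative subject low-watermark variant of the Biba model.

class Subject:
    def __init__(self, integrity: int):
        self.integrity = integrity

    def read(self, object_integrity: int) -> None:
        # Low-watermark: the read is permitted, but the subject's
        # integrity drops to the minimum of the two levels.
        self.integrity = min(self.integrity, object_integrity)

    def may_write(self, object_integrity: int) -> bool:
        # Strict Biba write rule still applies: no write up.
        return self.integrity >= object_integrity

s = Subject(integrity=3)
s.read(object_integrity=1)   # reads a low-integrity object ...
assert s.integrity == 1      # ... and is demoted to its level
assert not s.may_write(3)    # can no longer write high-integrity data
```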

In EHR, the Biba model addresses the integrity issue discussed earlier by prohibiting the nurse from writing incorrect data upward.

The model, however, introduces a serious degree of inflexibility, because the doctor cannot read the nurse's notes. In an EHR setting, mandatory access control schemes are likely to be challenging to apply, given the large number of users involved in these schemes, the wide variety of data types, and a readiness to give patients their due ownership and (partial) control over their medical records.

However, the use of some kind of MAC approach in an EHR scheme cannot be avoided, as medical officials ultimately need to assign the access rights.

URL: https://www.sciencedirect.com/science/article/pii/B9780128195116000066

Domain 3

Eric Conrad, ... Joshua Feldman, in Eleventh Hour CISSP® (Third Edition), 2017

Biba model

While many governments are primarily concerned with confidentiality, most businesses desire to ensure that the integrity of the information is protected at the highest level. Biba is the model of choice when integrity protection is vital.

Fast Facts

The Biba model has two primary rules: the Simple Integrity Axiom and the * Integrity Axiom.

Simple Integrity Axiom: “No read down”; a subject at a specific clearance level cannot read data at a lower classification. This prevents subjects from accessing information at a lower integrity level. This protects integrity by preventing bad information from moving up from lower integrity levels.

* Integrity Axiom: “No write up”; a subject at a specific clearance level cannot write data to a higher classification. This prevents subjects from passing information up to a higher integrity level than they have clearance to change. This protects integrity by preventing bad information from moving up to higher integrity levels.

Biba is often used where integrity is more important than confidentiality. Examples include time and location-based information.

Did You Know?

Biba takes the Bell-LaPadula rules and reverses them, showing how confidentiality and integrity are often at odds. If you understand Bell-LaPadula (no read up; no write down), you can extrapolate Biba by reversing the rules: “no read down”; “no write up.”

URL: https://www.sciencedirect.com/science/article/pii/B9780128112489000036

Frustration Strategies

Timothy J. Shimeall, Jonathan M. Spring, in Introduction to Information Security, 2014

Integrity Models

Integrity is the property that data has not been changed, destroyed, or lost in an unauthorized or accidental manner [1]. This implies both that control is exercised over the content of information in the system and over modifications made to that information. The primary formal model of integrity, the Biba model that extends the Bell–LaPadula model, incorporates both of these aspects.

The Biba model focuses on mandatory integrity policies [2]. It specifies integrity labels on each object in the system, which cannot be modified by any operation on the data (although a new copy of the object with a different integrity label is possible). Each subject has an integrity class (maximum level of integrity) and an effective integrity rating (taken from the integrity of information sources that have been read, and no higher than the integrity class), shown in Figure 6.8. In contrast to Bell–LaPadula, most Biba applications have had only a small number of integrity levels (e.g., just “user” and “administrator”). The model then defines a simple integrity policy that a subject may not read sources of lower integrity than his or her effective integrity rating, and a * property that a subject may only write objects that are of his or her effective integrity rating or lower.

Figure 6.8. Biba model properties.

URL: https://www.sciencedirect.com/science/article/pii/B9781597499699000067

Federal Information Security Fundamentals

Stephen D. Gantz, Daniel R. Philpott, in FISMA and the Risk Management Framework, 2013

Brief History of Information Security

Contemporary information security traces its legacy to initial research in computer security conducted during the 1960s, driven by the need to protect classified information in DoD computer systems providing access and shared resources to multiple users. Working from a well-defined context in which users access, store, and process classified information using computer systems, initial research and administrative and technical recommendations focused on robust access control mechanisms and protection against unauthorized disclosure of information [13]. Subsequent information flow control models developed for the military, such as the well-known Bell-LaPadula and Biba multi-level security models, implement mandatory access control policies well suited to environments that use formal classification of information assets and users. The Bell-LaPadula model [14] assigns classification levels to information objects and the subjects (human or system) that seek access to those objects. Subjects must have a classification level equal to or higher than the objects they view and cannot disclose information to a classification level below their own. These rules—often expressed simply as “no read up” and “no write down”—are intended to prevent the unauthorized disclosure of information, and thus preserve its confidentiality. The Biba model [15] is structurally similar to Bell-LaPadula but is intended to safeguard the integrity of information, rather than its confidentiality. The Biba model assigns integrity classifications to subjects and objects as an indication of reliability or trustworthiness. Subjects must have an integrity level equal to or lower than the objects they view, and cannot write information to an object at a higher integrity level. The Biba rules—sometimes simplified to “no read down” and “no write up”—guard against the corruption or loss of integrity of relatively more trusted information by preventing exposure to less trusted information.
Access control is an essential aspect of security relevant to protecting both confidentiality and integrity, but these models are less applicable to commercial or non-military government organizations that favor role-based access control models or other approaches that emphasize authorization based on what a user wants to do or how information will be used. For transactional systems or technology-enabled business processes that place as much or more importance on protecting information integrity as on maintaining confidentiality, government organizations may find different security models more applicable, including those developed outside military or other government domains [16].

The collective lesson learned from working with different early secure computing approaches is that context matters and theoretical models that attempted to provide uniform methods for protecting confidentiality, integrity, and availability without considering contextual factors could not successfully be applied across the diverse range of government information systems. The National Security Telecommunications and Information Systems Security Committee, responsible for information security policy and guidance for national security systems in the federal government, adopted a multi-dimensional information systems security model developed in 1991 by John McCumber and incorporated in official instructions establishing information security training standards [17]. This model addresses confidentiality, integrity, and availability requirements for information in different states (storage, processing, and transmission) from the perspectives of technology, policy and practice, and education, training, and awareness [4]. The separate consideration of confidentiality, integrity, and availability needs of information systems in specific operational contexts remains a central feature of federal security standards and guidance for identifying system security requirements and determining the appropriate controls to satisfy those requirements.

Current procedures for security categorization, control selection, and certification and accreditation used by federal agencies in all sectors share many common aspects introduced in early secure computing models. One key difference between current information security practices and the original models from which those practices derive is the recent emphasis on risk-based decision making. The number and variety of recommended security controls and level of detail provided in implementation guidance for those controls has grown substantially with the development and enhancement of system certification and accreditation process and the application of those processes to a larger number of federal information systems. Where earlier approaches emphasized the application of entire models to every relevant information system [18], even controls included in recommended baselines for meeting minimum security requirements are subject to tailoring based on system scope, the availability of alternative or compensating controls, and organization-specific parameters, including risk tolerance [19]. Although determinations of security control applicability—at individual information system or organization-wide levels—are at the discretion of each organization, government agencies operating in similar sectors or working with particular types of sensitive information often adopt similar approaches to security control selection.

URL: https://www.sciencedirect.com/science/article/pii/B9781597496414000023

Policy-Driven System Management

Henrik Plate, ... Stefano Paraboschi, in Computer and Information Security Handbook (Third Edition), 2013

Access Control Models

AC commonly denotes the combination of the authentication, authorization, and auditing services. Each of the three services is organized into two parts: a policy directing how the system resources should be accessed, and a mechanism responsible for implementing the policy. Of the six components, the one that most characterizes the AC system is the authorization policy.

The authorization policy, given an access request produced by a user with the intention of executing a specific action on a resource, has the goal of specifying whether the request has to be authorized. Many proposals have emerged for representing the authorization policy. The design of the authorization policy has to balance the requirements of expressivity carefully with the strict efficiency constraints that limit the amount of computational resources that can be dedicated to processing each access request. There are a few long-term trends, clearly visible in the evolution of IT technology, that have a specific impact. The evolution of computer systems is significantly increasing the amount of computational resources available to process an access request; the application of AC increasingly occurs in distributed systems where the delay caused by network access can support a more complex evaluation; and application requirements are becoming more complex and require the evaluation of sophisticated conditions. These trends lead to a progressive increase in the amount of resources available to process each access request and to a corresponding enrichment of the AC models used to represent the authorization policies. We summarize the evolution of these models, describing their main features. Each model typically extends previous proposals and offers specific new functionalities.

The classical AC model, which is still the basis of most operating systems and databases, is the discretionary AC (DAC) model, which starts from the assumptions that users have the ability to manage the information they access without restrictions, and users are able to specify the access policy for the resources they create. The access policy can be represented by (subject, resource, action) triples that describe each permitted action. The elements of the policy can be organized by resource (offering access control lists) or subject (offering capabilities). This AC model presents a number of variants; for instance, a user can transfer the privileges he received; also, programs can be configured to give to users who invoke the program the privileges of the owner of the program.
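As a hedged sketch of the triple-based DAC policy just described (all names are invented for illustration), the same set of (subject, resource, action) triples can be organized by resource into access control lists or by subject into capabilities:

```python
# Illustrative DAC policy as (subject, resource, action) triples,
# viewed as ACLs (grouped by resource) or capabilities (grouped by subject).
from collections import defaultdict

policy = {
    ("alice", "report.txt", "read"),
    ("alice", "report.txt", "write"),
    ("bob", "report.txt", "read"),
}

def as_acl(triples):
    acl = defaultdict(set)
    for subject, resource, action in triples:
        acl[resource].add((subject, action))
    return dict(acl)

def as_capabilities(triples):
    caps = defaultdict(set)
    for subject, resource, action in triples:
        caps[subject].add((resource, action))
    return dict(caps)

assert as_acl(policy)["report.txt"] == {
    ("alice", "read"), ("alice", "write"), ("bob", "read")}
assert as_capabilities(policy)["bob"] == {("report.txt", "read")}
```

The two views carry identical information; the choice between them is a question of which lookups the system needs to make efficient.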

A first evolution of the DAC model was represented by the mandatory AC (MAC) models, which establish policies that restrict the set of operations that users can apply over the resources they are authorized to access. The most famous of these models is the Bell–LaPadula model [24], which was designed with inspiration from the approaches used to protect information in military and intelligence environments, classifying resources according to their secrecy level, and giving users clearance to access resources up to a certain level; restrictions are then imposed on the flow of information. The restrictions guarantee that information will not flow from a high level to a low level, independent of the actions of the users. Alternative MAC models have been defined supporting integrity (the Biba model[25]) and protection of conflict of interest requirements (the Chinese Wall model [26]). When applied to real systems, all of these models showed significant shortcomings, mostly owing to their rigidity; the presence of covert channels in real systems limited the robustness of the approach. The idea of establishing restrictions on the operations of a resource that depend on properties of resources and users is an advanced aspect that characterizes AC solutions.

The RBAC model [27] assumes that the assignment of privileges to users is mediated by the specification of roles. Roles represent functions in an organization, which require a collection of privileges to execute their tasks. Users are assigned a role depending on the organizational function that is given to them. The model is particularly interesting for large enterprises, in which this structure of the access policy greatly facilitates the management of the security requirements. The evolution of the access policy can be better controlled, with a clear strategy to support the evolution of the required resource privileges and of the organization. The use of roles in general requires two kinds of authorization: system authorization specifying the privileges a role should acquire and role authorization specifying the subjects that can enact the defined roles. This increase in structural complexity is in most cases mitigated by a significant reduction in the size of the policy and by easier management. The RBAC model has been successful and has been adopted in many environments, with dedicated support in the Structured Query Language 3 standard and modern operating systems.
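A minimal sketch of the two kinds of authorization mentioned above, with invented users, roles, and privileges: role authorizations map users to the roles they may enact, and system authorizations map roles to privileges.

```python
# Illustrative RBAC: access is mediated by roles, never granted directly.

user_roles = {"alice": {"teller"}, "carol": {"teller", "branch_director"}}
role_privs = {
    "teller": {("account", "read"), ("account", "deposit")},
    "branch_director": {("account", "approve_large_withdrawal")},
}

def authorized(user: str, resource: str, action: str) -> bool:
    # A request succeeds if any of the user's roles carries the privilege.
    return any((resource, action) in role_privs.get(role, set())
               for role in user_roles.get(user, set()))

assert authorized("alice", "account", "deposit")
assert not authorized("alice", "account", "approve_large_withdrawal")
assert authorized("carol", "account", "approve_large_withdrawal")
```

Changing what a job function may do means editing one role, not one entry per user, which is the management advantage the text describes.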

A modern family of solutions is represented by attribute-based AC (ABAC) models. These models assume that authorizations can express conditions on properties of the resource and the subject. For instance, assuming that each resource has an attribute that denotes the identifier of the subject that created it, a single attribute-based authorization can grant the ownership privilege to the creator of every resource; without attributes, this would require either an ad hoc mechanism in the AC module or the creation of a distinct authorization for every resource. The advantages of the ABAC model are evident in terms of flexibility and expressive power. The main obstacle to its adoption in real systems has always been worry about the performance impact of such a solution. In the scenario of AC for cooperating Web services, this worry is mitigated by the high cost of each access, which makes acceptable the cost required to evaluate predicates on resource and user properties. The XACML proposal is a notable example of a language that supports an ABAC model. The evolution of AC solutions is expected to lead to wider adoption of ABAC models.
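The ownership example above can be sketched as a single attribute-based rule (the resource attributes and names are assumptions for illustration):

```python
# Illustrative ABAC: one rule over attributes replaces per-resource grants.

resources = {
    "doc1": {"creator": "alice"},
    "doc2": {"creator": "bob"},
}

def is_owner(subject: str, resource_id: str) -> bool:
    # Single rule: the subject owns a resource iff subject == resource.creator,
    # evaluated at request time against the resource's attributes.
    return resources[resource_id]["creator"] == subject

assert is_owner("alice", "doc1")
assert not is_owner("alice", "doc2")
```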

Another family of authorization models focuses on specification of the constraint that different users have to be involved in executing a task. This is consistent with the “separation of privilege” principle presented earlier, which urges the design of secure systems to be robust against breaches of trust, by making critical actions executable only with the cooperation of a defined number of distinct users. The classical example is the withdrawal at a bank of an amount larger than a given threshold, which requires the cooperation of a bank teller and the supervision of the branch director. The damage that can be done by a corrupt or blackmailed bank employee is then limited.

Several models have been proposed for representation of these AC restrictions, commonly called Separation of Duty (SoD) constraints. The design of such a model typically extends a role-based model, specifying different privileges for the roles required in the execution of a given process, and then imposing that the same subject cannot enact two conflicting roles. There are two variants of SoD conflicts: static and dynamic. Static SoD assumes the roles to be rigidly assigned to users, who will always enact the same role in every action. Dynamic SoD assumes that the conflict has to be evaluated within the domain of a specific action instance, where distinct users have to enact conflicting roles. The same user may enact conflicting roles as long as they are enacted in different action instances. Static SoD is the model receiving greater support, typically at the level of policy definition, with the availability of techniques that permit the identification of assignments of privileges to users that may violate the constraint. SAP AC, for instance, supports the detection of static SoD conflicts during role design and provisioning. Dynamic SoD requires robust support from the execution system, and this remains a critical requirement: the system must keep track of the history of access within an action. An extensive analysis of these issues appears in Chapter 59.
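Static SoD checking of the kind described above can be sketched as follows (the roles and conflict pairs are invented for illustration): given role assignments and pairs of mutually exclusive roles, flag any user holding both roles of a pair.

```python
# Illustrative static Separation of Duty conflict detection.

user_roles = {
    "alice": {"teller"},
    "bob": {"teller", "branch_director"},   # violates static SoD
}
sod_conflicts = [("teller", "branch_director")]

def sod_violations(user_roles, conflicts):
    # A user is in violation if assigned both roles of any conflicting pair.
    return {user
            for user, roles in user_roles.items()
            for a, b in conflicts
            if a in roles and b in roles}

assert sod_violations(user_roles, sod_conflicts) == {"bob"}
```

This is exactly the kind of provisioning-time check the text attributes to static SoD support; dynamic SoD would additionally need a history of which user enacted which role in each action instance.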

We note again that PBM can offer the opportunity to use a sophisticated high-level policy model for the design of the policy, taking responsibility for mapping the high-level representation to the concrete AC model offered by the system responsible for implementation. This is one advantage of using such an approach. The PoSecCo project, described later in the chapter, offers an abstract, flexible policy language with support for RBAC and ABAC models. This policy can be translated into a collection of policies for the low-level systems that are used in the IT infrastructure, each with its own restrictions. This represents an interesting approach to applying modern AC models in a scenario where common network protocols, operating systems, database management systems, Web servers, and application servers are used.

To conclude this analysis, we want to quickly present the concept of obligation, which has specific features that clearly distinguish it from the concept of authorization presented earlier. Obligations are ECA (event-condition-action) policies that determine future actions a subject has to perform (on a target) as a reaction to specific events. As noted by Sloman [28], obligation policies are enforced at the subject, whereas AC policies are enforced at the resource side. They also differ from the provisional AC model proposed by Jajodia [29], which extends traditional yes–no AC answers by requiring that subjects cause a set of conditions to evaluate to true “prior” to authorizing a request. Obligations are used in many management fields such as quality of service (increase resources if a service-level agreement is not satisfied), privacy management (delete user information after 6 months), auditing (raise an alarm if some events occur), dynamic network reconfiguration based on security or network events (activate backup links in case of distributed denial of service (DDoS) attacks), and activity scheduling (periodic backups). Complex events can be specified from basic events using event expressions, also managed using external monitoring or event services and reused in many policies. Because of their operative and temporal nature, obligations are used not only in security-relevant policies but also for workflow management. Unlike AC policies, obligation policies are often unenforceable; that is, the system cannot ensure that each obligation will actually be fulfilled by the subject [30]. There is a lack of security controls able to enforce obligations. Even though ad hoc agents have sometimes been proposed [31] and some tools are available, to be practically used, obligation policies are often mapped to other enforceable authorization mechanisms [32].
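An event-condition-action obligation of the kind described here can be sketched as follows (the event names and conditions are invented; this is not how any particular policy engine represents them):

```python
# Illustrative ECA obligations: on an event, if the condition holds,
# the subject is obliged to perform the named action.

obligations = [
    {"event": "sla_violation",
     "condition": lambda ctx: ctx["load"] > 0.9,
     "action": "increase_resources"},
    {"event": "record_expired",
     "condition": lambda ctx: True,
     "action": "delete_user_data"},
]

def triggered_actions(event: str, ctx: dict) -> list:
    # Return the actions the subject is now obliged to perform.
    return [o["action"] for o in obligations
            if o["event"] == event and o["condition"](ctx)]

assert triggered_actions("sla_violation", {"load": 0.95}) == ["increase_resources"]
assert triggered_actions("sla_violation", {"load": 0.5}) == []
```

Note that, as the text observes, returning the obliged action is the easy part; nothing in this sketch (or in most real systems) can force the subject to actually carry it out.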

URL: https://www.sciencedirect.com/science/article/pii/B9780128038437000260

A Comprehensive Survey on Attacks, Security Issues and Blockchain Solutions for IoT and IIoT

Jayasree Sengupta, ... Sipra Das Bit, in Journal of Network and Computer Applications, 2020

4.3 Case Study on Applications of IIoT

The application of IoT to bring about digitization in various industries is rapidly evolving and growing. IoT applications are being deployed and/or developed in various sectors including smart factories, smart healthcare, smart grids, and smart transportation. In this context, we choose two of these sectors and, in the following subsections, perform a case study on the current research trends in these areas, elaborating on research relating to security issues and privacy concerns.

4.3.1 Smart Factory

Smart Factory is a huge leap from traditional automation in industries to a fully automated, connected, and flexible system. A smart factory integrates data from system-wide physical, operational, and human assets to drive manufacturing, maintenance, inventory tracking, digitization of operations, and other activities across the entire manufacturing network (Deloitte). Apart from the benefits that smart factories have brought to production, certain security issues have also come to the forefront. Here, we highlight some existing work that deals with the potential problems prevalent in smart factories.

Wan et al. (2019) have introduced Blockchain technology into the existing IIoT architecture to construct a more secure, reliable, and distributed architecture suitable for industrial environments. The improved architecture adopts the Bitcoin design to form a multi-center, easily expandable, partially distributed architecture. They have also incorporated the Bell-LaPadula (BLP) model along with the Biba model to address the three major security requirements of confidentiality, integrity, and availability (CIA), further strengthening the security of the architecture.

The work (Yao et al., 2019) has introduced fog computing as an intermediate layer in the IIoT architecture to meet certain application-specific requirements of smart digital manufacturing. However, the introduction of a fog layer has raised several other security issues. In this context, the authors have proposed an Attribute Credential based Public Key Cryptography (AC-PKC) scheme to meet the authentication and privacy-preserving access control requirements of this new architecture. The scheme uses two-level verification: a fog node generates a signature for each command it issues to an actuator, and the actuator, on receiving the command, performs the two-level verification to confirm that it was indeed issued by a trusted fog node. Similarly, access control is imposed through a fuzzy authentication technique that verifies whether a data processing service should be provided to the requester. Moreover, the shared data is encrypted with a protected key to avoid data leakage.

4.3.2 Smart Grid

Smart Grid refers to an improved electricity supply chain that runs from a major power plant all the way to our homes (Mazza, 2007). The basic concept of the Smart Grid is to add monitoring, analysis, control, and communication capabilities to the national electrical delivery system to maximize the throughput of the system while reducing energy consumption (Mazza, 2007). The concept of IoT has been introduced toward implementing the Smart Grid. Alongside the advantages it offers over traditional systems, the smart grid has also been exposed to several security vulnerabilities and threats. We highlight some of the ongoing research that deals with these issues here.

Traditional TCP/IP based network communications are not suitable for secure communication in IIoT based Smart Grids. Therefore, the work (Chaudhary et al., 2018) proposes a Software Defined Networking (SDN) enabled multi attribute secure communication model for IIoT based Smart Grid environment. The communication model is designed using a cuckoo-filter based forwarding scheme at SDN control plane. To secure the data generated at IIoT devices and stored at cloud servers, an Attribute Based Encryption (ABE) scheme is designed. Finally, to allow users to verify the integrity and authenticity of their data stored at cloud servers, a peer entity authentication scheme using Kerberos is also presented.

Electricity consumption data from a Smart Grid is widely used for big data analysis. This creates a need to preserve data utility while protecting each individual user's privacy. To achieve this objective, Liu et al. (2019a) have proposed a practical privacy-preserving data aggregation scheme that does not rely on a Trusted Third Party (TTP). In the proposed scheme, trusted users construct a virtual aggregation area to mask their own data.

URL: https://www.sciencedirect.com/science/article/pii/S1084804519303418

Which of the following principles ensures no unnecessary access to data exists by regulating members so they can perform only the minimum data manipulation needed?

The principle that ensures no unnecessary access to data exists, by regulating members so they can perform only the minimum data manipulation necessary, is minimal access: limiting users' access privileges to the specific information required to perform their assigned tasks.

Which of the following is the original purpose of ISO IEC 17799?

ISO/IEC 17799 was originally intended to give recommendations for information security management for use by those who are responsible for initiating, implementing, or maintaining security in their organization.

What is required of the separation of duties principle quizlet?

Separation of duties requires that significant tasks be divided so that no single member of the organization can complete a critical process alone. (By contrast, lattice-based access control specifies the level of access each subject has to each object, if any.)

What is the information security principle that requires significant tasks to be split up so that more than 1 individual is required to complete them?

The principle of separation of duties says that no user should have all the privileges necessary to complete a critical business function by themselves. Instead, the critical business function should be divided into discrete tasks and the appropriate privilege granted to different users.