Which term is used for the degree to which a system performs its intended function?

Discrete Ageing Concepts

N. Unnikrishnan Nair, ... N. Balakrishnan, in Reliability Modelling and Analysis in Discrete Time, 2018

4.1 Introduction

The reliability of a unit or system is thought of as the probability that it carries out its intended function over a specific period of time. In this context, the age of the unit is reckoned as the time during which it functions in this manner before failing to do so. By the term ageing, we mean the phenomenon by which the life remaining to the unit is affected by its current age in some probabilistic sense. Generally, ageing is classified into positive ageing, negative ageing and no ageing. If the residual lifetime decreases as the age increases, we say that the unit is ageing positively. For instance, much equipment in common use, and mechanical systems in particular, loses efficiency due to wear and tear as a result of prolonged use. Naturally, the remaining lifespan decreases as the time for which they are used increases. Thus, in short, the effect of positive ageing is to decrease the reliability. On the other hand, there are situations in which the performance of a unit improves with increasing age. A classical example is that of human beings, whose remaining lifetime increases once they pass infancy. The same is true of equipment that undergoes efficient preventive maintenance. In all such cases, we say that the unit has negative ageing, which in technical terms means that the residual lifetime increases as the age increases. In contrast to the above two ageing categories, there are situations in which the residual lifetime remains the same at all ages. This is described as the no-ageing property. We have seen in Chapter 2 that only the geometric distribution possesses such a property among all discrete lifetime models. Basic reliability concepts such as the survival (reliability) function, hazard rate, mean residual life and others form the fabric upon which the ageing characteristics are built.
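
As a small numerical illustration of the no-ageing property (an assumed sketch in Python, not taken from the chapter; the function name and parameter values are hypothetical), the following snippet computes the mean residual life of a geometric lifetime at several ages and shows that it is the same at every age. A positively ageing unit would instead show this quantity decreasing with age, and a negatively ageing unit would show it increasing.

# Assumed illustration: geometric lifetime with P(X = k) = (1 - q) * q**k, k = 0, 1, 2, ...
# Under one common discrete convention, the mean residual life at age x is
# E[X - x | X >= x] = sum over k > x of S(k) / S(x), where S(k) = P(X >= k) = q**k.
def geometric_mean_residual_life(q: float, age: int, horizon: int = 10000) -> float:
    survival_at_age = q ** age
    return sum(q ** k for k in range(age + 1, horizon)) / survival_at_age

for age in (0, 5, 20):
    print(age, round(geometric_mean_residual_life(0.8, age), 6))
# every age prints the same value, q / (1 - q) = 4.0, i.e. no ageing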

The various ageing criteria discussed below play a fundamental role in the development of reliability theory and practice. To begin with, they give us an indication of the pattern of failure exhibited by the unit and the behaviour of the lifetime as the unit undergoes ageing. Secondly, life distributions can be classified according to the ageing concepts, which makes the choice of an appropriate model easier. For example, if the equipment exhibits positive ageing, one needs to consider only the subclass of distributions possessing that characteristic, instead of the entire set of models that could be proposed as candidates. Further, information about the particular category of ageing can be used as an additional input in non-parametric inference. Many of the ageing criteria considered here can be expressed in terms of certain geometric properties such as convexity, star-shapedness and stochastic orders. This serves as additional information to be employed in modelling and analysis. Finally, there are various properties possessed by the ageing classes that can be successfully employed in analysis and prediction.


URL: https://www.sciencedirect.com/science/article/pii/B978012801913900004X

Basic Reliability Concepts

N. Unnikrishnan Nair, ... N. Balakrishnan, in Reliability Modelling and Analysis in Discrete Time, 2018

2.1 Introduction

As mentioned in the last chapter, the reliability of a device is the probability that the device performs its intended function for a given period of time under conditions specified for its operation. When the device does not perform its function satisfactorily, we say that it has failed. When the random variable X represents the lifetime of a device, the observation on X is realized as the time of failure. The primary concern in reliability theory is then to understand the pattern in which failures occur for different devices and under varying operating environments. This is often done by analyzing the observed failure times or ages at failure with the help of a model that satisfactorily represents the predominant features of the data. One direct method is to find a probability distribution that provides a reasonable fit to the observations. Sometimes, there may exist more than one distribution that can pass an appropriate goodness-of-fit test. In any case, it is more desirable to find a probability model that manifests certain physical properties of the failure mechanism. In reliability theory, some basic concepts that help in the study of failure patterns have been developed. The objective of this chapter is to define such concepts and discuss their properties and inter-relationships. Two important aspects that necessitate the study of these concepts are (a) the various functions considered in this context determine the life distribution uniquely, so that knowledge of their functional form is equivalent to that of the distribution itself, and (b) it is generally easier to deal with these functions than with the distribution function or probability density function of the corresponding distributions.
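
As a small sketch of point (a) (assumed notation and hypothetical function names, not taken verbatim from the chapter), the discrete hazard rate h(k) = P(X = k | X >= k) determines the life distribution uniquely: the survival function can be recovered as S(x) = P(X >= x) = product over k < x of (1 - h(k)), and the probability mass as P(X = x) = h(x)S(x). The Python snippet below reconstructs the geometric distribution from its constant hazard rate.

from math import prod

# Assumed illustration: recover the distribution from its hazard rate sequence.
def survival_from_hazard(hazard, x: int) -> float:
    # S(x) = P(X >= x) = prod over k < x of (1 - h(k))
    return prod(1.0 - hazard(k) for k in range(x))

def pmf_from_hazard(hazard, x: int) -> float:
    # P(X = x) = h(x) * S(x)
    return hazard(x) * survival_from_hazard(hazard, x)

p = 0.3
geometric_hazard = lambda k: p  # a constant hazard rate corresponds to the geometric law
for x in range(4):
    print(x, round(pmf_from_hazard(geometric_hazard, x), 6), round((1 - p) ** x * p, 6))
# both columns agree, so knowing h(k) is equivalent to knowing the distribution itself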


URL: https://www.sciencedirect.com/science/article/pii/B9780128019139000026

Electromagnetic Compatibility

J.F. Dawson, ... C.A. Marshman, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

V.A.3 European EMC Regulations—EMC Directive (89/336/EEC)

EMC regulations apply throughout Europe and have had a major impact on the development of EMC regulations throughout the world.

The European regulations result from European Commission Directive 89/336/EEC, which affects all electrical or electronic systems or products sold throughout the European Economic Area (EEA). It also encompasses all electromagnetic phenomena. As a “new approach directive,” the technical requirements are defined by European standards. The new approach directives were designed to remove technical barriers to trade within the European community.

The essential protection requirements of the EMC Directive are as follows:

Equipment should be constructed so that it will not affect broadcast services or the intended function of other equipment—the emission aspect.

Equipment should have an inherent immunity to externally generated electromagnetic disturbances—the immunity aspect.

Note that the FCC regulations and Japanese Voluntary Council for the Control of Interference (VCCI) requirements do not have an equivalent requirement for immunity.

The EMC Directive specifies the routes available to manufacturers to show that their product complies with these protection requirements.

The simplest route is to demonstrate compliance with an appropriate European standard. This is a standard whose reference number has been published in the Official Journal of the European Communities (OJEC) and is a CENELEC (the European electrical standards body) Euro Norm (EN) that has been transposed into a national standard. An example is EN 55022 (the same as CISPR 22), the emission standard for information technology equipment; the transposed UK standard is BS EN 55022 and the transposed German standard is DIN EN 55022. The standards define emission limits, immunity levels, and the tests that should be performed on equipment to show that it meets these limits and levels. While the European regulations do not explicitly require a product to be tested in order to demonstrate compliance with the protection requirements, it must be demonstrated that the product complies with the standard and therefore by implication must be tested in accordance with the standard. Testing may be performed by the manufacturer or by a third party. There is no requirement for the testing laboratory to have accreditation; however, the use of an accredited laboratory will provide a manufacturer with an assurance that the testing has been performed to the standard correctly and the manufacturer can obtain an accredited test certificate.

When standards are not available to a manufacturer, or the equipment has features that mean that a standard can only be partly applied, then the manufacturer must use the “technical construction file” (TCF) route to compliance. Essentially, the manufacturer assembles the technical information demonstrating that the product meets the protection requirements. These data, which are likely to include test results, must be reviewed by a competent body appointed by the national authorities. The requirements for a competent body are laid down in Annex II to the EMC Directive. A competent body must demonstrate that it has the appropriate expertise, operates systems that ensure client confidentiality, and has the independence to make an impartial judgement. Such systems are usually ensured by quality assurance to standards such as ISO 9002 and EN45011.

The essential features of a TCF are:

Part I: Description of the apparatus

a. Identification of the apparatus

b. A technical description

Part II: Procedures used to ensure conformity of the apparatus to the protection requirements:

a. Technical rationale

b. Detail of significant design aspects

c. Test data

Part III: Report (or certificate) from a competent body

For radio transmission equipment (including transceivers) compliance with the Radio and Telecommunications Terminal Equipment (R&TTE) Directive is required except in the case of air traffic management equipment, which is required to conform with the EMC Directive by the “type examination” route. This means that the equipment must be submitted to a notified body (NB; an organization which has been notified to the European Commission by the national competent authority). The NB will require a type examination to be performed. This may be carried out by the NB or one of the NB's approved test laboratories.

When conformance with the protection requirements of the EMC Directive has been demonstrated by one of these three methods, a Declaration of Conformity is issued by the manufacturer and the European Community mark, the CE marking, affixed to the product or its packaging. It should be noted that the CE marking implies that the product complies with all of the new approach directives applicable to it (e.g., machinery safety).

The Australian EMC Framework follows broadly the same pattern as the European regulations, while the U.S. FCC regulations are much more specific and apply only to the emission aspects of “digital” and “industrial, scientific, and medical” equipment, covered by Parts 15 and 18, respectively, of the Code of Federal Regulations (CFR) 47 (see Section V.A.1).


URL: https://www.sciencedirect.com/science/article/pii/B0122274105002106

System Performance Evaluation Tool Selection and Use

Paul J. Fortier, Howard E. Michel, in Computer Systems Performance Evaluation and Prediction, 2003

11.3 Conducting experiments

Given that we have selected a tool, constructed our model, and validated the model, we must next develop our experiments to perform the initially intended function for the performance study. To develop our experiments, we must have an idea as to what the performance metrics will be (performance variables) for the study. We saw previously, in Chapter 8, that to develop a set of performance variables we must begin by developing a list of services to be offered by the system under study. Given that we have done this selection and definition of services, we next must determine all possible outcomes for the service. For example, each service can have a request for service pending, be in service, be completing service, or be rejecting a service request. The results of the service request are to accept the request for future service, perform the service either correctly or incorrectly, or simply reject the request as not being possible. For example, the lock manager for a database system can accept a request for a lock and either grant it, delay it, perform the request erroneously, or refuse the lock request altogether.

If the system performs the request correctly, the performance is measured as the time taken to perform the service, the rate at which the service is performed, and the resources consumed while performing the requested service. These three metrics relate to the measures of responsiveness, productivity, and utilization—all important components of any computer system's performance study. These measures have also been altered to show speed, reliability, and availability. For example, the responsiveness of a transaction processing system is measured by its response time. This consists of the time between a transaction's request for service and the response to the transaction from the server. The transaction processing system's productivity is measured by its throughput. The throughput consists of the number of transactions performed completely during some prescribed unit of time (e.g., per minute or second). The third measure, utilization, provides a measure of a resource's busyness. In the transaction processing example, we could see what percentage of time the server is busy serving transactions versus the time it is idle during the same interval of time as the throughput measure. Using such information we can begin to isolate problems within our system. For example, the service that is the most highly utilized may represent the system's bottleneck.
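
The following Python sketch (a hypothetical transaction log and function name, not taken from the book) computes the three measures described above for a single-server transaction processing system: mean response time for responsiveness, completed transactions per unit time for productivity, and the fraction of the observation window during which the server is busy for utilization.

# Assumed illustration: each transaction is (request_time, service_start, service_end) in seconds,
# served by a single server with non-overlapping service periods.
def summarize(transactions, observation_window: float):
    response_times = [end - request for request, _start, end in transactions]
    busy_time = sum(end - start for _request, start, end in transactions)
    return {
        "mean_response_time_s": sum(response_times) / len(response_times),
        "throughput_per_s": len(transactions) / observation_window,
        "utilization": busy_time / observation_window,
    }

log = [(0.0, 0.0, 0.4), (0.5, 0.6, 1.1), (1.0, 1.2, 1.5)]
print(summarize(log, observation_window=2.0))
# roughly: mean response time 0.5 s, throughput 1.5 per s, utilization 0.6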

If a service is done erroneously, we also wish to capture this information. Error detection and measurement are important in determining a service's resiliency and tolerance to errors. For example, in the transaction processing system, we may want to know what caused the errors and the failure of transactions being processed. It may be important to know that an error occurred due to a hardware failure or a software failure, or was caused by contention or a poor transaction design. Each will tell us something about the product we are evaluating. If a resource cannot perform the service function at all, it may imply it is down, or 100 percent utilized. Once again, knowing what state the resource is in will aid in the determination of its overall performance.


URL: https://www.sciencedirect.com/science/article/pii/B9781555582609500114

Values in Engineering Design

Ibo van de Poel, in Philosophy of Technology and Engineering Sciences, 2009

5.1 Efficiency and effectiveness

A first-order approach to optimal design is to consider a design to be optimal if it results in an artifact that optimally fulfills the desired function. But how do we know — or measure — whether a design optimally fulfills its function? Two measures come to the fore: effectiveness and efficiency. Effectiveness can be defined as the degree to which an artifact fulfills its function. Efficiency could be defined as the ratio between the degree to which an artifact fulfills its function and the effort required to achieve that effect. As is pointed out by Alexander (this Volume, Part V), efficiency in the modern sense is usually construed as an output/input ratio. The energetic efficiency of a coal plant may thus be defined as the ratio between the energy contained in the power produced and the thermal energy contained in the coal burned.
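
For a toy illustration of this output/input notion with hypothetical figures (not taken from the text): a plant that delivers 450 MWh of electrical energy while the coal it burns contains 1,250 MWh of thermal energy has an energetic efficiency of 450/1,250 = 0.36, or 36 per cent.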

Historically, there seems to be a close connection between the rise of the modern notion of efficiency and optimal design. Frederick Taylor, for example, believed that production should be based on “the one best way” which, according to him, was simply the most efficient way [Taylor, 1911].

Two things need to be noted with respect to effectiveness and efficiency. Firstly, effectiveness and efficiency are different values that may well conflict. The design that most effectively fulfills its intended function may not necessarily be the most efficient one. A very effective vacuum cleaner that removes more dust than a less effective one may nevertheless be less energy-efficient, that is to say, it may use more energy per unit of dust removed than the less effective vacuum cleaner. So, we may be faced with a conflict between effectiveness and efficiency. A well-defined notion of optimal design requires a solution to this potential conflict. Secondly, effectiveness and efficiency are often very difficult to measure. Although this is partly a practical problem, this difficulty is often based on the more fundamental problem that often neither the function of an artifact (i.e. its output) nor the input can be uniformly formulated. This is witnessed, for example, by the fact that the desirable function of an artifact is often expressed in terms of a range of functional requirements, which may conflict. The following quote from Petroski about the design of paper clips illustrates this point:

Among the imperfect things about the Gem [the classic paper clip, IvdP] that many a recent inventor has discovered or rediscovered when reflecting upon how the “perfected” paper clip is used to clip papers together are the following:

1. It goes only one way. Half the time, the user has to turn the clip around before applying.

2. It does not just slip on. The user first has to spread the loops apart.

3. It does not always stay on. The clip gets snagged on papers or other objects and gets pulled off.

4. It tears the papers. The sharp ends of the clip dig into the papers when it is removed.

5. It does not hold many papers well. The clip either twists badly out of shape or flies off the pile.

6. It bulks up stacks of papers. A lot of file space can be taken up by paper clips.

When a design removes one of the annoyances, it more likely than not fails to address some others or adds a new one. … All design involves conflicting objectives and hence compromise and the best designs will always be those that come up with the best compromise. Finding a way to bend a piece of wire into a form that satisfies each and every objective of a paper clip is no easy task, but that does not mean that people do not try. [Petroski, 1996, p. 29-30]

This quote illustrates two points. One is that the ideal of optimal design or what Petroski calls “perfected” design is an important source of inspiration for designers. As long as the perfect paper clip does not exist, people will try to design it. The other is that in practice this ideal will probably never be achieved: the best is always the best compromise. The crucial question then is how to determine what the best compromise is. This requires trade-offs between the different requirements and it is unclear how we can make these trade-offs in a justified way (see also Kroes et al., this Volume, Part III).

The actual situation is, however, even worse. Up until now, we have conceived of optimal design as design that optimally fulfills its intended function, or — put differently — as that which maximizes the (expected) utility value of the design. However, as argued in Section 3, the value of technological artifacts is not restricted to their utility value. The question that then arises is: what would it mean to try to maximize the overall value of technological artifacts during design, and what would optimal design in such a broader sense amount to? Engineers have, in fact, dealt with this problem and have developed a number of approaches to the issue. Two such approaches will be briefly discussed below: cost-benefit analysis and multiple criteria design analysis.


URL: https://www.sciencedirect.com/science/article/pii/B9780444516671500409

Risk Management Framework Steps 3 & 4

Stephen D. Gantz, Daniel R. Philpott, in FISMA and the Risk Management Framework, 2013

Operational Controls

Implementing and assessing operational controls often requires explicit system-specific or organizational capabilities to be funded, established, staffed, and managed on an ongoing basis. Many operational controls have formally documented processes and procedures to explain their intended function and guide the execution of the services, functions, or activities they provide, but assessment procedures for these controls often require evidence of operational capabilities, including the results of tests or exercises. Security control assessors may choose to interview personnel with responsibility for performing processes and activities specified in operational controls [8], so the key roles involved in implementing and assessing these controls include the individuals who operate or administer the relevant capabilities for the system (or for the organization in the case of common controls). The breadth of controls within the operational family demands a wide range of implementation skills and abilities, particularly when requirements warrant operational control resources dedicated to the information system. Agencies often implement certain types of operational controls at the organizational level, including security awareness and training, contingency planning, incident response, and continuous monitoring. Other operational controls may be prescribed by the organization’s information security program but implemented at an information system level, such as configuration management, system maintenance, personnel security, and many aspects of system and information integrity.


URL: https://www.sciencedirect.com/science/article/pii/B9781597496414000084

Philosophy of Architecture

Christian Illies, Nicholas Ray, in Philosophy of Technology and Engineering Sciences, 2009

3.2.2 Function and use of the building

What is the building designed for — or what is it used for presently? From a moral point of view, not all usages are equally good. We will, for example, rightly criticize the function of the former Berlin Wall, and expect architects to refuse to be involved in its design. Other cases are more ambivalent: architects are often involved in the design of supermarkets, and when these arrive on the fringe of a small town, local traders are affected and campaigns might be mounted to prevent the new buildings from gaining permission. Architects are party to this process (they might be involved on either side).

More fundamentally, we expect buildings to function well — to be fit for their purpose — and architects who design buildings which look good but do not work are behaving more like sculptors. But we have to recognise that the intended function is not necessarily the function which the building realizes; Don Ihde rightly calls this mistaken conclusion the “designer fallacy” [Ihde, 2008]. It might be that over a period of time the original function is no longer required (as in many castles), or that it has changed its use contingently. This can even be the case in a very short time period, however. Zaha Hadid was commissioned to provide a Fire Station at Weil am Rhein in 1993 by the Vitra group, but the building's purpose was swiftly made obsolete by the construction of another building and it has been used ever since for the display of furniture; it performs this function in addition to enhancing the “brand” for which it was designed, which was equally important for her clients. Evaluating the function can therefore have two objects, that originally intended and the actual realised use of a building.


URL: https://www.sciencedirect.com/science/article/pii/B9780444516671500471

Production capability management

In Practical E-Manufacturing and Supply Chain Management, 2004

Reliability-centered maintenance

Reliability-centered maintenance (RCM) is a process used to determine why maintenance is needed and what type of maintenance strategy is required for equipment, based on its level of criticality and its role in maintaining operational function at the best achievable reliability. Reliability is a very broad term that focuses on the ability of a product to perform its intended function, and can be defined as the probability that an item will perform its intended function without failure for a specified period of time under specific conditions.
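
As a small worked sketch of this definition (an assumed illustration with hypothetical figures, not taken from the chapter), if an item fails at a constant rate lambda = 1/MTBF, its reliability over a mission of length t is R(t) = exp(-lambda * t). The Python lines below evaluate this for an item with a mean time between failures of 1,000 hours over a 100-hour mission.

from math import exp

# Assumed illustration: constant failure rate model, lambda = 1 / MTBF (hypothetical figures).
mtbf_hours = 1000.0
mission_hours = 100.0
failure_rate = 1.0 / mtbf_hours
reliability = exp(-failure_rate * mission_hours)
print(round(reliability, 3))  # 0.905: about a 90.5% chance of completing the mission without failure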

Performing a reliability analysis on a product or system can actually include a number of different analyses to determine how reliable the product or system is. Once these reliability and maintainability (RAM) analyses have been made, it is possible to anticipate the effects of design changes and corrections in order to improve reliability. The different RAM analyses are all related, and examine the reliability of the product or system from different perspectives, in order to determine possible problems and assist in analysing corrections and improvements.


URL: https://www.sciencedirect.com/science/article/pii/B9780750662727500117

Usability of Biometric Systems

Mary Frances Theofanos, Brian Stanton, in Usability in Government Systems, 2012

System characteristics

The design and characteristics of the overall system also affect the system's performance, especially the throughput. Research has shown that the following three factors are key considerations:

Design-in affordance — as much as possible the biometric device should encourage its use and convey its intended function. Many people are not familiar with biometric devices; their only exposure is what they have seen on TV. Unfortunately, this does not give them an accurate picture of biometrics. Biometric devices must have good affordance to help overcome these misperceptions.

Avoid negative affordance — an affordance that is not in congruence with the actual use. Many times it is in violation of a population stereotype. Consider, for example, a fingerprint platen that is glowing red. This glow may be perceived as hot or stop, don't touch. Other biometric devices may require a user to insert his/her arm in an enclosure — again, this is perceived as a risk to his/her well-being.

Provide instructional materials — if the system does not have good affordance, instructional materials are necessary. These materials require careful development, taking into account cultural differences, languages, and method of presentation.


URL: https://www.sciencedirect.com/science/article/pii/B978012391063900047X

Security Guidance for Operating Systems and Terminal Services

Tariq Bin Azad, in Securing Citrix Presentation Server in the Enterprise, 2008

Implementing Basic Host-Level Security

If you are using Windows XP, Internet Connection Firewall is a great option to consider for host-level access control. If you are using Windows 2000, you can use IPSec filters to block access to dangerous ports. The following script should be modified based on your server's intended function:

@echo off
REM NSOFTINCIPSec.bat
REM You need to install IpSecPol.exe from the URL listed next
REM http://www.microsoft.com/windows2000/techinfo/reskit/tools/existing/ipsecpol-o.asp
REM This batch file uses ipsecpol.exe to block inbound services not required
REM ICMP will be blocked too
REM You should modify this based on the requirements of your TS
ipsecpol -x -w REG -p "SpecOps3389" -r "block139" -n BLOCK -f *=0:139:TCP
ipsecpol -x -w REG -p "SpecOps3389" -r "block445" -n BLOCK -f *=0:445:TCP
ipsecpol -x -w REG -p "SpecOps3389" -r "block1433" -n BLOCK -f *=0:1433:TCP
ipsecpol -x -w REG -p "SpecOps3389" -r "block80" -n BLOCK -f *=0:80:TCP
ipsecpol -x -w REG -p "SpecOps3389" -r "block443" -n BLOCK -f *=0:443:TCP
ipsecpol -x -w REG -p "SpecOps3389" -r "blockUDP1434" -n BLOCK -f *=0:1434:UDP

If still not convinced, use this script to implement IPSec filters and then scan the server with SL.exe (www.foundstone.com). If you specify the basic options, you should receive no reply from the host. This time use SL.exe with the -g option set to 88 (Kerberos) and scan again. You should be able to see all the ports that are blocked by IPSec filters.

How can you stop this? You need to set the NoDefaultExempt key. This can be configured by setting the NoDefaultExempt Name Value to DWORD=1 in the Registry at HKLM\SYSTEM\CurrentControlSet\Services\IPSEC.
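
If you prefer to script that registry change rather than set it by hand, the following Python sketch (an assumed alternative using the standard winreg module, not taken from the book; it must be run with administrative privileges) writes the value described above.

import winreg

# Assumed illustration: open/create the IPSEC service key and set NoDefaultExempt = 1 (REG_DWORD).
key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE,
                         r"SYSTEM\CurrentControlSet\Services\IPSEC",
                         0, winreg.KEY_SET_VALUE)
winreg.SetValueEx(key, "NoDefaultExempt", 0, winreg.REG_DWORD, 1)
winreg.CloseKey(key)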


URL: https://www.sciencedirect.com/science/article/pii/B9781597492812000020

Which term is used for the degree to which a system performs its intended function: functionality, reliability, validity, or maintainability?

Reliability is defined as the probability that a product, system, or service will perform its intended function adequately for a specified period of time, or will operate in a defined environment without failure.

What term is used for the ability of a product or service to perform as expected under?

Reliability is the ability of a product or service to perform as expected under normal conditions.

Which term is used when the project's processes and products meet written specifications?

Conformance to requirements: The project's processes and products meet written specifications.

What is the term used for any instance where the product or service fails to meet customer requirements?

A defect is any instance where a product or service fails to meet agreed-to customer requirements.