What are the three types of constraints typically captured in object oriented design?

Residential Security System Example Using the Object-Oriented Systems Engineering Method

Sanford Friedenthal, ... Rick Steiner, in A Practical Guide to SysML (Third Edition), 2015

Identify system design constraints

Design constraints are constraints imposed on the design solution, which in this example refers to the ESS design. These constraints are typically imposed by the customer, by the development organization, or by external regulations. They may apply to the hardware, software, data, operational procedures, interfaces, or any other part of the system. Examples include requirements that the system use predefined COTS hardware or software, implement a particular algorithm, or support a specific interface protocol. For the ESS, the design is constrained to include the legacy central monitoring station hardware as well as the communications network between the central monitoring station and the site installations.

Design constraints can have a significant impact on the design and should be validated prior to imposing them on the solution. A straightforward approach to address design constraints is to categorize the type of constraints (e.g., hardware, software, procedure, algorithm), identify the specific constraints for each category, and capture them as system requirements in the Requirements package along with the corresponding rationale. The design constraints are then integrated into the physical architecture, as discussed in Section 17.3.5.
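The categorize-and-capture approach described above can be sketched as a small data model. All class and field names below are illustrative, not part of SysML or the OOSEM method:

```python
from dataclasses import dataclass, field

@dataclass
class DesignConstraint:
    category: str   # e.g., "hardware", "software", "procedure", "algorithm"
    text: str       # the constraint stated as a requirement
    rationale: str  # why the constraint is imposed
    source: str     # who imposed it: customer, developer, regulation

@dataclass
class RequirementsPackage:
    constraints: list = field(default_factory=list)

    def add(self, c: DesignConstraint) -> None:
        self.constraints.append(c)

    def by_category(self, category: str) -> list:
        return [c for c in self.constraints if c.category == category]

pkg = RequirementsPackage()
pkg.add(DesignConstraint(
    category="hardware",
    text="The ESS shall reuse the legacy central monitoring station hardware.",
    rationale="Customer mandate to preserve existing investment.",
    source="customer"))
```

Capturing the rationale next to each constraint keeps the justification available when the constraint is later validated or challenged.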


URL: https://www.sciencedirect.com/science/article/pii/B9780128002025000175

Design Constraints and Optimization

R.C. Cofer, Benjamin F. Harding, in Rapid System Prototyping with FPGAs, 2006

9.2.1 Avoiding Design Over-Constraint

Effective design constraint requires analysis and restraint to develop and maintain the correct constraint balance. Over-constraining a design forces the tools to work harder to resolve conflicting or unreasonable requirements with limited resources. Design over-constraint can occur in several ways. The most common include simply assigning too many constraints, constraining noncritical portions of the design, and setting constraints beyond the required level of performance. For example, over-constraint may occur when path-specific timing constraints have been set to a minimum path delay value far exceeding the required circuit performance. The principle "if a little is good, then more must be better" is seldom an appropriate philosophy when constraining an FPGA design.


Over-constraining a design can significantly increase the time required to place, route, and analyze a design, lengthening the implementation phase. Since implementation potentially occurs many times during a design cycle, this can have a significant impact on design efficiency. A more serious consequence of over-constraint occurs when the place-and-route process can no longer successfully implement the design within the specified FPGA architecture. This may force an upgrade to a larger or faster speed-grade FPGA component if the over-constraint conditions are not adjusted.

To avoid design over-constraint, a few simple guidelines should be followed. Start by constraining only the highest-performance circuits, then add constraints as required in an iterative approach. Additionally, leave significant margin within area constraints and avoid constraining lower-performance circuits unnecessarily. A more detailed design optimization flow is presented later in this chapter.
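The iterative guideline above can be sketched as a toy loop. The path names, delay values, and margin policy are hypothetical; a real flow relies on vendor timing-analysis tools rather than this simplified model:

```python
def meets_timing(path_delay_ns, required_ns):
    """A path meets timing when its achieved delay is within the requirement."""
    return path_delay_ns <= required_ns

def constrain_iteratively(paths, required_ns, margin_ns=0.5):
    """paths: dict of path name -> achieved delay (ns).
    Start unconstrained; add a constraint only for paths that miss the
    requirement, and tighten only slightly past it to avoid over-constraint."""
    constraints = {}
    for name, delay in paths.items():
        if not meets_timing(delay, required_ns):
            constraints[name] = required_ns - margin_ns
    return constraints

# hypothetical paths and a 10 ns requirement: only the failing path is constrained
paths = {"clk_gen": 3.2, "uart_rx": 9.8, "dsp_mac": 11.4}
c = constrain_iteratively(paths, required_ns=10.0)
```

Note that `uart_rx` already meets timing and therefore receives no constraint, mirroring the advice to leave noncritical circuits unconstrained.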


URL: https://www.sciencedirect.com/science/article/pii/B978075067866750010X

Proceedings of the 8th International Conference on Foundations of Computer-Aided Process Design

Diane Hildebrandt, James A. Fox, in Computer Aided Chemical Engineering, 2014

5 Conclusions

Given the design constraint that the process must be adiabatic overall, a carbon efficiency of 104% represents the theoretical thermodynamic limit for this synthesis gas production. In addition, the adiabatic process could be targeted to produce 55.7 kJ/mol CH4 of work.

The flowsheet (Fig. 6 and Fig. 7) represents an ideal of what a good, feasible high-level flowsheet might look like. It achieves the targeted 104% carbon efficiency by implementing CO2 and H2O recycles. The operating pressures are set so as to allow work recovery and thereby improve reversibility. Furthermore, heat and work flows are set up to allow heat and work integration, increasing work recovery. The flowsheet recovers around 13 kJ/mol CH4 of work, which is close to 25% of the available work. It provides a basis from which new processes may be developed and existing processes improved.

The GH-space is a tool that allows the designer to interpret processes and identify major losses. This provides the designer with a valuable indicator of the modifications that should be made to improve the efficiency of the process under scrutiny.

However, syngas production is typically only one step within a larger process (the syngas being fed into Fischer-Tropsch or methanol synthesis, or the hydrogen being used in ammonia synthesis), and it would be better to optimize the overall process. We have chosen the simplified process to illustrate the power of the method.

The technique demonstrated in this paper is a departure from the traditional method of designing and optimizing a process, one unit at a time. Instead we have proposed that it may be better to consider a process in terms of the interactions of all the units together, so that it can be designed and optimized as a whole.


URL: https://www.sciencedirect.com/science/article/pii/B9780444634337500146

Introduction

Jacob Murray, ... Behrooz Shirazi, in Sustainable Wireless Network-on-Chip Architectures, 2016

Abstract

Power and wire design constraints are forcing the adoption of new design methodologies for system-on-chip, namely, those that incorporate modularity and explicit parallelism. Researchers have recently pursued scalable communication-centric interconnect fabrics, such as Networks-on-Chip (NoCs), which possess many features that are particularly attractive. These communication-centric interconnect fabrics are characterized by varying trade-offs with regard to latency, throughput, energy dissipation, and silicon area requirements. This chapter focuses on traditional NoC topologies, routing strategies, and interconnect backbone.


URL: https://www.sciencedirect.com/science/article/pii/B978012803625900008X

Spacecraft Structures

Tetsuo Yasaka, Junjiro Onoda, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

II.A Mechanical Environment

The most severe design constraints on satellite structures derive from the mechanical environment during the launch phase. Each launcher provides environment data in terms of steady-state acceleration, low-frequency vibration, acoustic vibration, and shock. These data are specified at the satellite/launch vehicle interface plane, such that the probability that the actual load does not exceed the specified values is 99%, or the 3σ level.

The steady-state acceleration is defined in both longitudinal and lateral directions. Longitudinal acceleration is induced by the launch vehicle thrust and therefore tends to increase with depletion of propellant, reaching its maximum toward the end of each stage burn. Lateral acceleration results from the lateral motion of the vehicle, usually coupled with lateral thrust components and the angle of attack during atmospheric flight. Dynamic responses of the vehicle may be induced by transient loads such as gusts, thrust build-up or tail-off, and structure-propulsion coupling. Accelerations induced by these dynamic responses are clearly time-varying, but the maximum response accelerations of the low-frequency components are superimposed on the steady-state acceleration, and the combined load factors are defined for design purposes (Table I).

TABLE I. Flight Load Factors

Ariane 5: quasi-static loads (QSL) applied at the payload center of gravity

                                        Longitudinal         Lateral
Acceleration (g)                        Static    Dynamic    Static + dynamic
Lift-off                                -1.7      ±1.5       ±2
Maximum dynamic pressure                -2.7      ±0.5       ±2
SRB end of flight                       -4.55     ±1.45      ±1
Main core thrust tail-off               -0.2      ±1.4       ±0.25
Max. tension case: SRB jettisoning      +2.5      ±0.9

H-IIA: maximum loads at the top of the payload adapter

                                        Longitudinal         Lateral
Acceleration (g)                        Static    Dynamic    Static + dynamic
Lift-off (max. compression)             -1.7      ±1.5       ±1.8
Lift-off (max. tension)                 -0.1      ±1.8
Main engine cut-off                     -4.0      ±0.5
Main engine cut-off transient           +1.0      ±1.0
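As a numerical sketch of how the static and dynamic contributions combine into a load-factor envelope, using the Ariane 5 lift-off longitudinal values from Table I (static -1.7 g, dynamic ±1.5 g):

```python
def combined_load_factor(static_g, dynamic_g):
    """Return the (min, max) load-factor envelope in g by superimposing
    the dynamic amplitude on the steady-state value. Compression is
    negative, following the sign convention of Table I."""
    return (static_g - dynamic_g, static_g + dynamic_g)

# Ariane 5 lift-off, longitudinal: -1.7 g static, ±1.5 g dynamic
lo, hi = combined_load_factor(-1.7, 1.5)
# envelope spans -3.2 g (maximum compression) to -0.2 g
```

This simple superposition is exactly the "combined load factors" idea stated above; actual design criteria also account for lateral loads acting simultaneously.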

The low-frequency vibration represents the sinusoidal component of the accelerations, induced by the vehicle's dynamic response to unsteady forces such as engine thrust, gusts, attitude-control limit cycles, and engine-structure coupling. Most commonly, it is defined in the frequency range of 5–100 Hz, for both the longitudinal and lateral directions (Table II). Higher-frequency components are covered by the random vibration or acoustic vibration environment. High-frequency vibration is induced by acoustic noise emitted from the rocket exhaust jet and by aerodynamic turbulence. The vibration excitation of the vehicle structure and the acoustic excitation of the air within the vehicle shroud determine the high-frequency random vibration environment of the spacecraft. Acoustic excitation tests have recently been found to be more representative of the actual behavior of the spacecraft structure during flight than random vibration tests, and the random vibration environment is therefore often omitted from the description of the launch environment. The acoustic environment is described in terms of sound pressure level, as shown in Fig. 8.

TABLE II. Sinusoidal Vibration Environment

Launcher    Direction       Frequency (Hz)    Amplitude (g, 0–peak)
Ariane 5    Longitudinal    5–100             1.0
            Lateral         2–25              0.8
            Lateral         25–100            0.6
H-IIA       Longitudinal    5–30              1.0
            Longitudinal    30–100            0.8
            Lateral         5–18              0.7
            Lateral         18–100            0.6


FIGURE 8. Acoustic environment: sound pressure level (H-IIA).

The shock environment is induced by a number of transient phenomena during the flight of the launch vehicle. The most prominent of these transients are the pyrotechnic shocks experienced at stage separation, shroud jettison, and spacecraft separation. The shock environment is defined by the shock spectrum; Figure 9 shows the shock spectrum at the time of separation. Actual pyrotechnic actuation may replace the reproduction of the shock environment during ground tests.


FIGURE 9. Shock spectrum (H-IIA).


URL: https://www.sciencedirect.com/science/article/pii/B0122274105008991

Tools For Chemical Product Design

R. Gani, ... S. Cignitti, in Computer Aided Chemical Engineering, 2016

4.2 Step 2: Computer-Aided Molecular Design Constraint Selection

In this step, the CAMD constraints are selected based on the problem definition. The constraints needed include the objective function and the structural, property, thermodynamic, and process models (see Section 2).

Step 2.1

The structural model is selected here, including the desired groups and, if needed, a backbone structure. Structural models describe how groups can be connected to form feasible molecules. The adjacency matrix can be used as a model here if differentiation between isomers and higher prediction accuracy for primary properties is needed.

The target molecular groups are selected from the 220 Marrero and Gani groups, together with the allowed occurrence of molecular and functional groups (the 220 first-order groups are listed in Appendix A). In the backbone structure, a fixed part of the molecule can be defined, such as a benzene ring, and the generated molecules will be built on that structure. In addition, for the generation of polymer monomer units, two bonding sites should be left free. If a pure molecule is to be designed within a mixture, the mixture also needs to be defined. This is necessary when calculating mixture properties, where the application process of the designed solvents needs to be evaluated.

Step 2.2

Thermodynamic model selection is necessary when a phase equilibrium model or equation of state is to be utilized for the prediction of mixture miscibility, azeotropic composition, phase behavior, and other problem-specific properties. These types of models are readily available; for example, solvent design problems require the estimation of liquid phase activity coefficients for use in phase equilibrium (vapor–liquid, liquid–liquid, or solid–liquid systems). Examples of models of this type are UNIFAC (Fredenslund et al., 1975) for liquid phase activity coefficients and/or the Soave-Redlich-Kwong (SRK) equation of state (Soave, 1972) for vapor phase fugacity coefficients.
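As a minimal sketch of the SRK equation of state cited above, the vapor-phase compressibility factor can be computed from the standard textbook form of the model; the methane inputs below are illustrative, and real CAMD tools use full thermodynamic libraries rather than this single-component routine:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def srk_Z_vapor(T, P, Tc, Pc, omega):
    """Vapor-phase compressibility factor from the SRK cubic equation,
    found by Newton's method starting from the ideal-gas root Z = 1."""
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc)))**2
    a = 0.42748 * R**2 * Tc**2 / Pc * alpha
    b = 0.08664 * R * Tc / Pc
    A = a * P / (R * T)**2
    B = b * P / (R * T)
    # SRK cubic in Z: Z^3 - Z^2 + (A - B - B^2) Z - A B = 0
    f = lambda Z: Z**3 - Z**2 + (A - B - B**2) * Z - A * B
    df = lambda Z: 3 * Z**2 - 2 * Z + (A - B - B**2)
    Z = 1.0
    for _ in range(50):
        Z -= f(Z) / df(Z)
    return Z

# Methane vapor at 300 K and 1 bar (Tc = 190.6 K, Pc = 45.99e5 Pa, omega = 0.012)
Z = srk_Z_vapor(300.0, 1e5, 190.6, 45.99e5, 0.012)
# near-ideal conditions, so Z is expected to be close to 1
```

From Z, the vapor fugacity coefficient follows by the usual SRK departure expression, which is what the phase-equilibrium calculations described above actually consume.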

Property models must be selected based on the identified target properties for a specific product design problem. Depending on the type of property (primary, secondary, functional, and mixture), relevant models are stored in the model library of the framework, which has been implemented in the VPPD-Lab (product design simulator). For example, if it is necessary to estimate the critical temperature of the generated molecules, the appropriate model plus group parameters are retrieved from the model library of the framework. The framework has a large collection of property models (Kalakul et al., 2016).
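A group-contribution property model of the kind retrieved from such a library can be sketched as follows. For transparency this uses the widely published Joback correlation for critical temperature rather than the Marrero-Gani models the framework actually employs; the group increments shown are the standard Joback values:

```python
# Joback group increments for critical temperature (dimensionless)
JOBACK_DTC = {"-CH3": 0.0141, "-CH2-": 0.0189}

def estimate_tc(groups, tb_K):
    """Joback-style critical temperature estimate (K) from a dictionary of
    group counts and the normal boiling point tb_K (K):
    Tc = Tb / (0.584 + 0.965*S - S^2), with S the summed group increments."""
    s = sum(JOBACK_DTC[g] * n for g, n in groups.items())
    return tb_K / (0.584 + 0.965 * s - s**2)

# Ethane consists of two -CH3 groups; Tb = 184.6 K
tc_ethane = estimate_tc({"-CH3": 2}, 184.6)
# the experimental critical temperature of ethane is about 305 K
```

The point of the sketch is the lookup-and-evaluate pattern: once a molecule is generated as a set of groups, the appropriate model plus group parameters are retrieved and applied.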

Step 2.3

The process model, if applicable, is introduced in this step; whether it is depends on whether process issues are included in the product design problem. In the case of refrigerant design, the availability of a process operation model makes it possible to simultaneously design the molecule and optimize the operation of the application process. Likewise, in the case of solvent design, a model of application process performance may be included. The process variables that affect the design problem's objective function need to be clearly identified; the process model equations are usually introduced as equality constraints, while process specifications are given as inequality constraints. The process variables should ideally also affect the chemical product properties; for example, the temperature of operation determines process operation feasibility as well as the functional properties of the chemical product. In a refrigerant design problem, for instance, a set of target properties may be specified together with an objective function that depends on process variables affecting the operation of the thermodynamic cycle.

Step 2.4

The objective function is defined here. Depending on the specific design problem, the selected objective function is optimized subject to the process operation constraints and the chemical product properties, with or without other specifications (cost, environmental impact, etc.). It is also possible not to select an objective function, in which case a set of unranked feasible solutions is obtained.
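A toy version of this step, assuming a brute-force search over a tiny hypothetical candidate set (all molecule names, property values, and bounds are invented for illustration; real CAMD problems are solved as mathematical programs):

```python
candidates = [
    # name, normal boiling point (K), toxicity index, cost index
    ("solvent_A", 340.0, 0.2, 1.0),
    ("solvent_B", 355.0, 0.6, 0.7),
    ("solvent_C", 372.0, 0.1, 1.4),
]

def feasible(tb, tox):
    # inequality constraints: a boiling-point window and a toxicity cap
    return 330.0 <= tb <= 380.0 and tox <= 0.3

# feasible set: candidates satisfying the property constraints
feas = [(name, cost) for (name, tb, tox, cost) in candidates if feasible(tb, tox)]

# objective: minimize cost over the feasible set
best = min(feas, key=lambda x: x[1])
```

Dropping the `min` call and returning `feas` directly corresponds to the no-objective case described above, where a set of unranked feasible solutions is obtained.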


URL: https://www.sciencedirect.com/science/article/pii/B978044463683600006X

An Overview of Architecture-Level Power- and Energy-Efficient Design Techniques

Ivan Ratković, ... Veljko Milutinović, in Advances in Computers, 2015

Abstract

Power dissipation and energy consumption have become the primary design constraints for almost all computer systems over the last 15 years. Both computer architects and circuit designers aim to reduce power and energy (without performance degradation) at all design levels, as power is currently the main obstacle to further scaling according to Moore's law. The aim of this survey is to provide a comprehensive overview of state-of-the-art power- and energy-efficient techniques. We classify techniques by the component to which they apply, which is the most natural way from a designer's point of view. We further divide the techniques by the component of power/energy they optimize (static or dynamic), covering in that way the complete low-power design flow at the architectural level. We conclude that only a holistic approach that assumes optimizations at all design levels can lead to significant savings.


URL: https://www.sciencedirect.com/science/article/pii/S0065245815000303

13th International Symposium on Process Systems Engineering (PSE 2018)

Anjan K. Tula, ... Rafiqul Gani, in Computer Aided Chemical Engineering, 2018

2 Methodology and Dataflow

The developed synthesis, design, analysis, and improvement method is based on the three-stage approach (Babi et al., 2015) for innovation, as shown in Figure 1.


Figure 1. Three stage approach for sustainable synthesis, design, analysis of flowsheets

In stage 1, process synthesis is performed, and all feasible alternatives are generated for a given synthesis problem. This superstructure of alternatives is solved to identify the best processing route based on process constraints and performance criteria. In stage 2, additional design details are added to the process tasks-operations so that simulations with rigorous process models can be performed. The simulation results serve as input for equipment sizing and utility requirement calculations, which in turn serve as input for process analysis (economics, sustainability factors, LCA factors, etc.). Based on this analysis, process hot-spots are identified and translated into design targets that, if satisfied, eliminate the hot-spots and generate sustainable flowsheet alternatives. In stage 3, the optimal processing route is further improved by applying various synthesis-intensification methods to determine process alternatives that match the targets for improvement. Any match of the design targets corresponds to a more sustainable process alternative. In this paper, only stage 1 is discussed in detail, as the methodology has been extended for this stage.

2.1 Synthesis Stage

The synthesis method consists of 6 steps: (1) problem definition; (2) problem analysis; (3) process-groups selection; (4) generation of flowsheets; (5) superstructure generation; and (6) selection of optimal flowsheet.

2.1.1 Problem Definition:

The objective here is to define the synthesis problem, the design constraints, and the performance criteria against which the generated alternatives are to be benchmarked. The structural description of the synthesis problem is defined through data on raw materials (input streams), desired products (output streams), and process specifications (for example, product purity, product recovery, etc.).

2.1.2 Problem Analysis:

The objective of this very important step is to generate the information required for solving the synthesis problem. Analysis is performed to further define the synthesis problem using knowledge bases and physical-insight-based methods. First, reaction analysis is performed to identify the reaction tasks needed to produce the desired product defined in the previous step. A database search is performed to find the reaction mechanisms yielding the desired product. Next, pure-component and mixture property analysis is performed for all the chemical species listed in the problem to generate data that can be used to identify separation tasks and the corresponding feasible separation techniques. Twenty-two pure-component properties are analyzed for the system chemicals. Based on this analysis, a table of property ratios is generated for all binary pairs of chemicals. Mixture property analysis identifies the existence of possible azeotropes and eutectic points and the available driving forces for all identified binary pairs. Based on the mixture analysis and the property ratios, feasible separation techniques are identified using the extended physical-insights-based method of Jaksland et al. (1995).
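The binary-pair property-ratio table can be sketched as follows. The chemicals and boiling-point values are invented for illustration, and a real analysis covers many more properties than the single one shown:

```python
from itertools import combinations

# illustrative normal boiling points (K) for three hypothetical chemicals
props = {"A": 337.0, "B": 351.0, "C": 391.0}

def ratio_table(props):
    """For each binary pair, the larger-to-smaller boiling-point ratio.
    A ratio well above 1 indicates a usable driving force for distillation;
    a ratio near 1 suggests another separation technique is needed."""
    table = {}
    for (i, pi), (j, pj) in combinations(sorted(props.items()), 2):
        table[(i, j)] = max(pi, pj) / min(pi, pj)
    return table

t = ratio_table(props)
```

In the actual method, one such ratio is computed per property per pair, and the pattern of ratios across properties is what selects the feasible separation techniques.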

2.1.3 Process-Groups Selection:

Here, all the process-groups that are applicable for the synthesis problem are selected and initialized based on the problem definition and analysis steps. Information from the problem definition step is used to select and initialize the inlet and outlet process-groups with corresponding components. From the problem analysis step, corresponding reaction process-groups and separation process-groups are selected and initialized.

2.1.4 Generation of Flowsheets:

In this step, the initialized process-groups are combined according to a set of rules and specifications to generate feasible flowsheet structures. This is achieved through the flowsheet generation method based on a PG-combinatorial algorithm, which is governed by a set of connectivity and logical rules that ensure the generation of only feasible solutions, thereby avoiding any combinatorial explosion.
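A toy illustration of rule-governed combination, assuming invented process-groups and two simple connectivity rules (the actual PG-combinatorial algorithm and its rule set are far richer):

```python
from itertools import permutations

groups = ["inlet", "reactor", "separator", "outlet"]

def feasible(seq):
    # connectivity and logical rules: a flowsheet must begin at an inlet,
    # end at an outlet, and perform the reaction task before separation
    return (seq[0] == "inlet" and seq[-1] == "outlet"
            and seq.index("reactor") < seq.index("separator"))

# enumerate orderings and keep only those satisfying the rules
flowsheets = [seq for seq in permutations(groups) if feasible(seq)]
# only 1 of the 24 orderings survives these rules
```

This is the sense in which rules prevent combinatorial explosion: infeasible structures are rejected during generation rather than enumerated and filtered afterward (the real algorithm prunes during construction, not after, which is what makes it scale).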

2.1.5 Superstructure Generation:

Based on all the generated process alternatives, a superstructure is generated and represented in the form of a Processing Step-Interval Network (PSIN). Figure 2 illustrates a superstructure linking raw materials and products through process intervals. Here, process steps, represented as columns, correspond to the different sequences of processing tasks needed to convert raw materials to products; process intervals represent different techniques for achieving a particular processing task. The main advantage of representing the superstructure as a PSIN is that a generic model (Quaglia et al., 2015) for the process interval can be used to perform optimization on the superstructure. The optimal processing route determined this way also provides mass and energy balance results for each processing step. Through the generic model, each process interval of the superstructure is represented by a sequence of tasks (Figure 3): (1) mixing, (2) reaction, (3) waste separation, and (4) product separation. The model allows all the interval blocks to be defined by the same set of task equations, enabling a simplified data flow and time-effective formulation of large problems. The model is sufficiently flexible to allow multiple inlets to and outlets from an interval, including recycle streams from downstream intervals and bypasses.


Figure 2. Generic superstructure representation (PSIN)


Figure 3. Generic model for process interval

The model parameters required are given by the definition of the process-groups. The raw material compositions are obtained from the inlet process-groups, the reaction process-groups provide the conversion factors, and the separation split factors are obtained from the separation process-groups. For instance, in the case of the distillation process-group, the recovery of components lighter than the light key is set to 100% in the overhead product and the recovery of components heavier than the heavy key is set to 100% in the bottom product. The recovery of the key components is greater than or equal to 99.5%. Similarly, separation factors based on the different driving forces available for each process-group are defined.
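The distillation split-factor rule just quoted can be sketched directly. The component names and feed flows are invented, and the sketch assumes the two keys are adjacent in the volatility order:

```python
def split_overhead(feed, light_key, heavy_key, key_recovery=0.995):
    """feed: list of (name, mol/s) ordered lightest to heaviest, with
    light_key and heavy_key adjacent. Returns overhead molar flows:
    everything lighter than the light key goes fully overhead, everything
    heavier than the heavy key goes fully to the bottoms, and the keys
    are recovered at key_recovery (here 99.5%)."""
    names = [n for n, _ in feed]
    lk, hk = names.index(light_key), names.index(heavy_key)
    top = {}
    for i, (name, f) in enumerate(feed):
        if i < lk:                       # lighter than the light key
            top[name] = f
        elif i == lk:                    # light key: recovered overhead
            top[name] = key_recovery * f
        elif i == hk:                    # heavy key: recovered in bottoms
            top[name] = (1.0 - key_recovery) * f
        else:                            # heavier than the heavy key
            top[name] = 0.0
    return top

# hypothetical butane-splitter feed (mol/s), lightest first
feed = [("C3", 10.0), ("iC4", 40.0), ("nC4", 30.0), ("C5", 20.0)]
top = split_overhead(feed, light_key="iC4", heavy_key="nC4")
```

Bottoms flows follow by difference from the feed, which is how such split factors feed the interval mass balances.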

2.1.6 Selection of Optimal Flowsheet:

The optimization problem formulated in step 5 is solved by a direct approach (that is, all the equations are solved simultaneously). The problem is formulated as an MI(N)LP, which is solved with an appropriate solver (using GAMS). The solution gives the optimal process alternative for converting the given raw materials to products.


URL: https://www.sciencedirect.com/science/article/pii/B9780444642417500744

Reliable and power-aware architectures

A. Vega, ... R.F. DeMara, in Rugged Embedded Systems, 2017

7.1 Overview

In a microbenchmark generation framework, flexibility and generality are the main design constraints, since the range of situations in which microbenchmarks can be useful is vast. We want the user to fully control the code being generated at the assembly level. In addition, we want the user to be able to specify high-level properties (e.g., loads per instruction) or dynamic properties (e.g., instructions-per-cycle ratio) that the microbenchmark should have. Moreover, we want the user to be able to quickly search the design space when looking for a solution that meets the specifications.

In a microbenchmark generation process, microbenchmarks have both static and dynamic properties. Microbenchmarks that fulfill a set of static properties can be generated directly, since static properties do not depend on the environment in which the microbenchmark is deployed. Such properties include instruction distribution, code and data footprint, dependency distance, branch patterns, and data access patterns. In contrast, generating microbenchmarks with a given set of dynamic properties is a complex task. Dynamic properties are affected both by the static microbenchmark properties and by the architecture on which the microbenchmark runs. Examples of dynamic properties include instructions per cycle, memory hit/miss ratios, power, and temperature.

In general, it is hard to statically ensure the dynamic properties of a microbenchmark. In some situations, however, deep knowledge of the underlying architecture and the assumption of a constrained execution environment allow the dynamic properties to be ensured statically. Otherwise, checking whether the dynamic properties are satisfied requires simulation on a simulator or measurement on a real setup. In that scenario, since the user can only control the static properties of a microbenchmark, a search of the design space is needed to find a solution.

Fig. 9 shows a high-level picture of a microbenchmark generation process. In the first step, the user provides a set of properties. In the second step, if the properties are abstract (e.g., integer unit at 70% utilization), they are translated into architectural properties by the property driver; otherwise, they are forwarded directly to the next step, the microbenchmark synthesizer. The synthesizer takes the properties and generates an abstract representation of the microbenchmark with the properties that can be statically defined. Other parameters required to generate the microbenchmark are assigned using the models implemented in the architecture back-end. In this step, the call flow graph and basic blocks are created, and instructions, dependencies, memory patterns, and branch patterns are assigned. The architecture back-end consists of three components: (a) a definition of the instruction set architecture via an opcode syntax table, together with a high-level parametric definition of the processor microarchitecture; (b) an analytical reference model that can calculate (precisely or within specified bounds) the performance and unit-level utilization of a candidate loop microbenchmark; and (c) a (micro)architecture translator segment responsible for the final (micro)architecture-specific consolidation and integration of the microbenchmark program. In the fourth step, the property evaluator checks whether the microbenchmark fulfills the required properties; for that purpose, the framework can rely on a simulator, real execution on a machine, or the analytical models provided by the architecture back-end. If the microbenchmark fulfills the target properties, the final code is generated (step 6). Otherwise, the property evaluator modifies the input parameters of the code generator, and the iterative process continues until the search finds the desired solution (step 5).
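The feedback loop between the property evaluator and the generator can be sketched as a toy search. The "load fraction" knob and the linear measurement model are invented stand-ins for a real static property and for simulation or execution on hardware:

```python
def measure_ipc(load_fraction):
    # stand-in measurement model: more memory loads -> lower IPC
    return 2.0 - 1.5 * load_fraction

def generate_and_test(target_ipc, tol=0.2, max_iters=100):
    """Adjust a static generation knob until the measured dynamic
    property (IPC) lands within tol of the target."""
    knob = 0.5                        # initial static-property guess
    for _ in range(max_iters):
        ipc = measure_ipc(knob)       # property-evaluator step
        if abs(ipc - target_ipc) <= tol:
            return knob, ipc          # target met: emit final code
        # evaluator feeds an adjustment back to the generator
        knob += 0.1 if ipc > target_ipc else -0.1
        knob = min(max(knob, 0.0), 1.0)
    return knob, measure_ipc(knob)

knob, ipc = generate_and_test(target_ipc=1.0)
```

Real evaluations are far noisier than this monotone model, which is why the framework supports simulators, hardware runs, and analytical models interchangeably as the measurement back-end.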


Fig. 9. High-level description of a microbenchmark generation process.

These steps outline the workflow for a general use case; different cases require different features and steps. For cases where the user requires full control of the generated code, the framework can provide an application programming interface (API) that gives access to all the abstractions and control steps defined in the workflow. Overall, the microbenchmark generation framework provides an iterative generate-and-test methodology that can quickly zoom in on an acceptable microbenchmark meeting the specified requirements.


URL: https://www.sciencedirect.com/science/article/pii/B9780128024591000026

What Is Embedded Programming?

Bruce Powel Douglass PhD, in Design Patterns for Embedded Systems in C, 2011

1.3 What Did We Learn?

This chapter discussed the special characteristics of embedded systems – working within tight design constraints while meeting functional and quality-of-service constraints. To do this, embedded developers use tools from the embedded tool chain to develop on a more powerful host computer while targeting an often smaller and less capable target computing environment. Target environments run the gamut from small eight-bit processors with a few kilobytes of memory to networked collections of 64-bit systems. At the small scale, many systems are too tightly constrained in memory, performance, or cost to permit the use of a commercial RTOS, while at the large scale, the systems may include a heterogeneous set of operating systems, middleware, and databases. Often, embedded software must be developed simultaneously with the hardware, making it difficult to test and debug since the target environments may not exist when the software is being developed.

By far, the most common language for developing embedded systems is C. C has the advantages of high availability of compilers for a wide range of target processors and a well-deserved reputation for run-time efficiency. Nevertheless, the pressures to add capabilities while simultaneously reducing costs mean that embedded developers must be continually looking for ways to improve their designs, and their ability to design efficiently. Structured methods organize the software into parallel taxonomies, one for data and the other for behavior. Object-oriented methods combine the two to improve cohesion for inherently tightly-coupled elements and encapsulation of content when loose coupling is appropriate. While C is not an object-oriented language, it can be (and has been) used to develop object-based and object-oriented embedded systems.

The next chapter will discuss the role of process and developer workflows in development. These workflows determine when and where developers employ design and design patterns in the course of creating embedded software. It then defines what design patterns are, how they are discussed and organized within this book, and how they can be applied in your designs.

Subsequent chapters will list a variety of design patterns that can be applied to optimize your designs. Chapter 3 provides a number of patterns that address access of hardware, such as keyboards, timers, memory, sensors, and actuators. Chapter 4 discusses concurrency patterns both for managing and executing concurrent threads as well as sharing resources among those threads. The last two chapters provide a number of patterns for the implementation and use of state machines in embedded systems: Chapter 5 provides basic state machine implementation-focused patterns, while Chapter 6 addresses the concerns of safety and high-reliability software.


URL: https://www.sciencedirect.com/science/article/pii/B9781856177078000017