Computing environment where an Internet server hosts and deploys applications

Software Architecture

Richard F. Schmidt, in Software Engineering, 2013

3.3 Computing environment relationships and dependencies

The computing environment involves the collection of computer machinery, data storage devices, workstations, software applications, and networks that support the processing and exchange of electronic information demanded by the software solution. The computing environment involves the following relationships and dependencies with elements of the software architecture:

1.

Technology availability (requirements baseline). The performance of the software solution is constrained by the computing environment and must be factored into software product requirements. The number of instructions that can be executed, data transfer rates, graphics resolution, and rendering rates are typical computing equipment measures that affect the subsequent performance of the software solution.

2.

Resource utilization and conservation (software product architecture). The availability of computer resources within the computing environment will constrain software product performance. Shared resource utilization models must be developed, especially for networked multi-user applications. A resource management strategy that addresses resource consumption, conservation, preservation, and recovery must be developed and incorporated into the software architecture, as sketched below.
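To make the consumption, conservation, and recovery terms concrete, here is a minimal sketch of shared-resource accounting for a networked multi-user application. It is only an illustration of the idea; the ResourcePool class, its capacity, and the connection-slot example are assumptions, not part of Schmidt's text.

# Hypothetical sketch: accounting for a shared resource (e.g., connection slots)
# so that consumption, conservation, and recovery are explicit.
class ResourcePool:
    def __init__(self, capacity):
        self.capacity = capacity      # total units the computing environment provides
        self.in_use = 0               # units currently consumed

    def acquire(self, units=1):
        """Consume units if available; otherwise refuse, so callers conserve the pool."""
        if self.in_use + units > self.capacity:
            return False              # caller must back off, queue, or degrade gracefully
        self.in_use += units
        return True

    def release(self, units=1):
        """Recover units so other users of the shared environment can proceed."""
        self.in_use = max(0, self.in_use - units)

pool = ResourcePool(capacity=100)     # e.g., 100 shared connection slots (assumed)
assert pool.acquire(25)               # one client consumes a quarter of the pool
pool.release(25)                      # and recovers it when finished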

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780124077683000033

Software Requirements Analysis Practice

Richard F. Schmidt, in Software Engineering, 2013

8.2.3 Identify the computing environment characteristics

The computing environment must be identified to establish the scope of the software product’s capacity to operate in a networked, collaborative, or multi-user environment. Computing environment characteristics should address computing mainframes, servers, workstations, data storage devices, plotters, operating systems, and other application software, such as database management systems. This information is needed as the basis for defining the development computing environment that must be instantiated to support software testing.

It is necessary to identify the computational boundary, which may include local, wide area, wireless, and telecommunication networks. Establishing the computational boundary clarifies how the software product needs to interact with the various elements of the computational environment and with other external systems. In the case of an embedded software product, the boundary could be the system it operates within. However, if the system is part of a larger “system of systems,” then the computational boundary could extend beyond the system boundary to other systems, which would indicate the need for external interfaces.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780124077683000082

Introduction to Parallel Programming

Benedict R. Gaster, ... Dana Schaa, in Heterogeneous Computing with OpenCL (Second Edition), 2013

Introduction

Today's computing environments are becoming more multifaceted, exploiting the capabilities of a range of multi-core microprocessors, central processing units (CPUs), digital signal processors, reconfigurable hardware (FPGAs), and graphics processing units (GPUs). Presented with so much heterogeneity, the process of developing efficient software for such a wide array of architectures poses a number of challenges to the programming community.

Applications possess a number of workload behaviors, ranging from control intensive (e.g., searching, sorting, and parsing) to data intensive (e.g., image processing, simulation and modeling, and data mining). Applications can also be characterized as compute intensive (e.g., iterative methods, numerical methods, and financial modeling), where the overall throughput of the application is heavily dependent on the computational efficiency of the underlying hardware. Each of these workload classes typically executes most efficiently on a specific style of hardware architecture. No single architecture is best for running all classes of workloads, and most applications possess a mix of the workload characteristics. For instance, control-intensive applications tend to run faster on superscalar CPUs, where significant die real estate has been devoted to branch prediction mechanisms, whereas data-intensive applications tend to run faster on vector architectures, where the same operation is applied to multiple data items concurrently.
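As a rough illustration of these workload classes, the sketch below contrasts a control-intensive routine (branch-heavy search) with a data-intensive one (the same operation applied uniformly to every element). It uses plain Python with NumPy purely for exposition; the functions and data are invented examples, not OpenCL code from the chapter.

import numpy as np

# Control-intensive: behavior depends on data-dependent branches (search/parse style),
# which favors CPUs with strong branch prediction.
def find_first_negative(values):
    for i, v in enumerate(values):
        if v < 0:
            return i
    return -1

# Data-intensive: one operation applied uniformly to many items (image-processing style),
# which maps naturally onto vector architectures and GPUs.
def brighten(pixels, delta):
    return np.clip(pixels + delta, 0, 255)   # same add-and-clamp on every pixel

print(find_first_negative([3, 7, -2, 9]))                          # -> 2
print(brighten(np.array([10, 120, 250, 60], dtype=np.int32), 20))  # -> [ 30 140 255  80]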

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780124058941000012

Intrusion Detection in Contemporary Environments

Tarfa Hamed, ... Stefan C. Kremer, in Computer and Information Security Handbook (Third Edition), 2017

4 Cloud Computing Models

Cloud computing environments have been constructed in different ways according to the service offered by that environment. In general, there are three different cloud computing models:

1.

Software-as-a-Service (SaaS): The cloud service provider (CSP) provides software for the user, which is running and deployed on cloud infrastructure. In this case, the user (consumer) is not responsible for managing or maintaining the cloud infrastructure, including network, servers, OSs, or any other application-related issues. The consumer just uses the software as a service on demand. Google Maps is an example of SaaS [15,41].

2.

Platform-as-a-Service (PaaS): The CSP provides a platform to the consumer to deploy consumer-created applications written in any programming language supported by the CSP. The consumer is not responsible for managing or maintaining the underlying infrastructure, such as the network, servers, OSs, or storage. However, the consumer controls the deployed applications and the hosting environment configurations. Google App Engine and Microsoft Azure are examples of PaaS [15,41].

3.

Infrastructure-as-a-Service (IaaS): The CSP provides the consumer with the processing, storage, networks, and other essential computing resources to enable the consumer to run his or her own software, which can include OSs and applications. In this model, the provider manages the physical cloud infrastructure. Amazon Web Services (AWS), Eucalyptus, and OpenNebula are examples of IaaS [15,41]. The resulting division of responsibility across the three models is sketched below.
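The division of responsibility described in these three models can be restated as data. The sketch below simply encodes who manages which layer according to the descriptions above; the three layer names are an illustrative simplification, not terminology from the chapter.

# Who manages which layer under each cloud model, per the descriptions above.
# "provider" = cloud service provider (CSP); "consumer" = user of the service.
RESPONSIBILITY = {
    "SaaS": {"application": "provider", "platform": "provider", "infrastructure": "provider"},
    "PaaS": {"application": "consumer", "platform": "provider", "infrastructure": "provider"},
    "IaaS": {"application": "consumer", "platform": "consumer", "infrastructure": "provider"},
}

def who_manages(model, layer):
    return RESPONSIBILITY[model][layer]

print(who_manages("PaaS", "application"))     # -> consumer (deployed apps and their configuration)
print(who_manages("PaaS", "infrastructure"))  # -> provider (network, servers, OSs, storage)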

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128038437000065

From Risk Management to Risk Engineering

M. Huth, ... R. Masucci, in Handbook of System Safety and Security, 2017

8.2.1 Ubiquitous Connectivity and Interoperability

Modern computing environments are characterized by their ubiquitous connectivity and interoperability among heterogeneous networks and diverse systems and devices. The number of connected devices today is extremely large. EMC Corporation estimates over 7 billion people will use 30 billion Internet-connected devices by 2020 [1], whereas Cisco and DHL predict a higher number, 50 billion Internet-connected devices by the same date [2]. Disparate computing and network domains of 15 years ago have merged into an interconnected space that supports multiple models of usage, connectivity, and access via a shared infrastructure. The diversity of connected devices is enormous, including everything from data centers and full PC platforms to tablets, industrial control systems, disposable sensors, and RFID tags. This diversity of devices is matched by the diversity of supporting networks. Ubiquitous connectivity is beneficial for the users of the technologies and for the economy, leading to new efficiencies and increased productivity and providing a platform for widespread innovation. The challenges created by this environment are well known. Universal connectivity and interoperability complicate the analysis of threats and vulnerabilities, lead to uneven levels of protection in interconnected systems and elements of infrastructure, and, in many cases, can increase attack surfaces in yet-to-be-understood ways.

The diversity of the environment makes it harder to evaluate and mitigate the risks that such ICT systems either pose as components or services of other systems or themselves face in running within and interacting with such complex environments. A major challenge is to develop a risk assessment methodology that is compositional, so that risk analysis scales up, and that can coherently examine risks pertaining to different aspects, such as safety and security, and to their interaction.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128037737000085

Advances in Combinatorial Testing

Rachel Tzoref-Brill, in Advances in Computers, 2019

5.1 IBM Systems

Enterprise computing environments require constant availability. Time lost to unavailable server resources can cost financial institutions millions of dollars. When routine maintenance is performed for system hardware upgrades, code updates, or repairs, an outage of the server is scheduled. To significantly reduce the need for scheduled outages and help preserve the constant availability of the server, IBM® POWER7® and System z® systems use features collectively referred to as concurrent maintenance features. These features perform changes, additions, and repair of the system hardware and code while the system is running live business applications. Designing and testing these features is a very complex technical challenge.

Previous experience with the concurrent maintenance features on POWER6® revealed some significant deficiencies in system test, and therefore CT was used to design the system level test cases for POWER7. Following an extremely successful experience on POWER7, CT was also introduced to system test of System z concurrent upgrade [97].

System test is challenging in general, as it concentrates on the overall stress to the SUT. Multiple applications operate simultaneously, exercising a number of different functions and features at the same time and interacting with one another as they all run together. As an indication of the complexity of system test, a single system test (a.k.a. a trial) for the concurrent upgrade of System z can take well over 8 hours. In contrast, function level testing is generally concerned with inputs and outputs of individual product features or software components. Such testing is concerned with verifying coverage of each input and output of the function, and variations are typically derived from the input and output value differences.

In the context of CT, these inherent differences between system test and function test add to the challenge of identifying parameters and values for a system level CT model, as system test lacks the formal inputs and outputs that exist in function test. Wojciak and Tzoref-Brill describe in [97] a generic methodology they developed to model the system level test spaces for POWER7 and System z. As part of the methodology, they list a series of questions that help derive parameters and their values for system level models. In the case of the concurrent maintenance features for POWER7, system test is concerned with performing the maintenance operations end to end, including the full sequence of steps. The interactions modeled are between the firmware supporting the features and other software components executing on the system while maintenance is performed. In the case of System z, system testing for concurrent upgrade represents three distinct phases of operation: a set of preconditions that hold before the upgrade, a set of conditions that occur while the firmware is managing the update, and a set of postconditions that hold after the upgrade. The system test model covers the various functions, features, system stress, system states, and errors that can occur as part of those three distinct phases.

The IBM FOCUS tool [17] was used to define the models as well as to refine the resulting test plans. Pairwise coverage was used after also experimenting with three-way coverage and concluding that it was more than necessary. In addition, the pairwise requirements were refined to remove insignificant interactions. The interactive refinement feature [67] in the IBM FOCUS tool was designed to support such cases. It allows educated decisions on what to exclude or modify in the test plan that results from CT by displaying the coverage gaps introduced by each manual modification step. It was used to reduce the appearance of lower-importance values, while making sure that no coverage gaps were introduced in the process. Parameter value appearance biasing in the recommended tests was achieved by assigning weights to parameter values [18, 38] as part of the input to the CT algorithm.
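The FOCUS tool itself is not reproduced here; the following is only a minimal greedy sketch of what pairwise (2-way) test generation does. The parameters and values are hypothetical, and none of FOCUS's interactive refinement or value-weighting features are modeled.

from itertools import combinations, product

def pairwise_tests(params):
    """Greedy sketch: keep adding full test rows until every value pair of every
    two parameters appears in at least one row (2-way coverage)."""
    names = list(params)
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in params[a] for vb in params[b]}
    tests = []
    while uncovered:
        best_row, best_gain = None, -1
        for row in product(*(params[n] for n in names)):   # brute force; fine for tiny models
            assignment = dict(zip(names, row))
            gain = sum(1 for (a, va), (b, vb) in uncovered
                       if assignment[a] == va and assignment[b] == vb)
            if gain > best_gain:
                best_row, best_gain = assignment, gain
        tests.append(best_row)
        uncovered = {((a, va), (b, vb)) for (a, va), (b, vb) in uncovered
                     if not (best_row[a] == va and best_row[b] == vb)}
    return tests

# Hypothetical system-level parameters (not the real POWER7/System z model).
model = {"stress": ["low", "high"], "phase": ["pre", "during", "post"], "error": ["none", "injected"]}
plan = pairwise_tests(model)
print(len(plan), "tests cover all pairs")   # fewer than the 12 exhaustive combinations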

Evaluation of the resulting models was done by analyzing test results, taking advantage of the fact that system test is done in waves. After each wave, the results from all test trials were analyzed and reviewed. When defects were found in trials, their root cause was analyzed and an attempt was made to correlate it to the trial parameter values. The review considered the following questions: What were the overall defect discovery quality and rate? Did all parameters seem to matter? Can some parameters be eliminated? Are new parameters necessary? The evaluation was then used to guide the next test wave.

Results from applying CT were evaluated in two dimensions: improvement in test quality, based on analysis of system test results, and improvement in the quality of the server concurrent maintenance features, based on analysis of field results. To evaluate test results, the defects per trial ratio (a.k.a. the test case effectiveness metric) was measured, and its change over time was analyzed. Fig. 8A and B show the ratio obtained in the different test cycles of each release on POWER6 and POWER7, respectively. Fig. 9A and B show the defects per trial ratio for System z concurrent upgrade on z196, prior to the introduction of CT, and on EC12, to which CT was applied, respectively. In system test of System z concurrent upgrade there was only one long test cycle, hence the analysis over time was done according to the ratio per week.


Fig. 8. Comparing POWER concurrent maintenance system test results prior to CT (POWER6) and with CT (POWER7). (A) POWER6 test outcomes per release; (B) POWER7 test outcomes per release.

Taken from P. Wojciak, R. Tzoref-Brill, System level combinatorial testing in practice—the concurrent maintenance case study, in: ICST, 2014, pp. 103–112.


Fig. 9. Comparing System z concurrent upgrade system test results prior to CT (z196) and with CT (EC12). (A) z196 defects per test trial; (B) EC12 defects per test trial.

Taken from P. Wojciak, R. Tzoref-Brill, System level combinatorial testing in practice—the concurrent maintenance case study, in: ICST, 2014, pp. 103–112.

In both cases, prior to CT there is no decreasing trend in the ratio as time progresses, while with CT, the decrease is evident. This trend indicates that test stability is reached, which may in turn indicate product stability. Furthermore, the first trials produced by CT contain higher numbers of unique interactions. As a result, the defects per trial ratio is high at first; then, as time progresses, fewer interactions remain to be tested in the CT-generated tests. New interactions still uncover new defects, but the discovery rate drops significantly. This means that with CT, more defects are found earlier during testing rather than being randomly distributed over time, and therefore stability can be reached faster.

Field results of concurrent maintenance on POWER7 were compared to those obtained on POWER6 in terms of the outcome of the concurrent maintenance operation. There are two types of possible failures of the maintenance operation, as witnessed on POWER6. The first is an abort, where the operation cannot be completed and must be suspended and retried at a later stage after consulting with the customer. The second and more severe failure type is a crash, where the entire machine comes down. As can be seen in Fig. 10, a dramatic quality improvement was experienced in the field for POWER7. While on POWER6 about 10% of the operations crashed, on POWER7 crashes were eliminated for all but a handful of cases. The percentage of aborts was halved from 20% to 10%, and, correspondingly, the success rate increased from approximately 70% to 90%. Analyzing the aborted operations revealed that only 1.6% of the overall operations failed due to firmware errors. The other failures were due to causes over which testing (and therefore CT) has no control, e.g., insufficient servicer training or defective hardware. By way of comparison, on POWER6 10% of the overall operations failed due to firmware errors. Field results of concurrent upgrade on System z EC12 were absolutely remarkable. With dozens of successful operations without any failure, these were the best field results seen to date. The authors of [97] conclude from both experiences that testing with CT enabled reaching the right set of test cases to obtain the desired improvement in product quality while meeting the affordability requirements driven by fixed industrial budgets and schedules.


Fig. 10. Power systems concurrent maintenance field outcome comparison.

Taken from P. Wojciak, R. Tzoref-Brill, System level combinatorial testing in practice—the concurrent maintenance case study, in: ICST, 2014, pp. 103–112.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/S0065245817300542

Distributed Information Resources

J.B. Lim, A.R. Hurson, in Advances in Computers, 1999

Abstract

A mobile computing environment involves accessing information through a wireless network connection. The mobile unit may be stationary, in motion, and/or intermittently connected to a fixed (wired) network. As technology advances are made in software and hardware, the feasibility of accessing information “anytime, anywhere” is becoming a reality. Furthermore, the diversity and amount of information available to a given user is increasing at a rapid rate. Current distributed and multidatabase systems are designed to allow timely and reliable access to large amounts of data from different data sources. Issues such as autonomy, heterogeneity, transaction management, concurrency control, transparency, and query resolution have been addressed by researchers for multidatabase systems. These issues are similar to many of the issues involved in accessing information in a mobile environment. However, in a mobile environment, additional complexities are introduced due to network bandwidth, processing power, energy, and display restrictions inherent in mobile devices.

This chapter discusses the fundamental issues involved with mobile data access, the physical environment of mobile systems, and currently implemented mobile solutions. Because the issues involved in accessing information in a multidatabase environment and in a mobile computing environment share similarities, we propose to superimpose a wireless-mobile computing environment on a multidatabase system in order to realize a system capable of effectively accessing a large amount of data over a wireless medium, and we show how solutions can be mapped from one environment to the other. This new system is called a mobile data access system (MDAS), and it is capable of accessing heterogeneous data sources through both fixed and wireless connections.

Within the scope of this new environment, a new hierarchical concurrency control algorithm is introduced that allows a potentially large number of users to simultaneously access the available data. Current multidatabase concurrency control schemes do not efficiently manage these accesses because they do not address the limited bandwidth and frequent disconnection associated with wireless networks. The proposed concurrency control algorithm—v-lock—uses global locking tables created with semantic information contained within the hierarchy. The locking tables are subsequently used to serialize global transactions, and detect and remove global deadlocks. The performance of the new algorithm is simulated and the results are presented.
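The v-lock algorithm relies on semantic information from the hierarchy that this abstract does not detail, so the sketch below only illustrates the generic mechanism behind detecting and removing global deadlocks: finding a cycle in the wait-for graph implied by a global locking table. The transaction names and graph are hypothetical.

# Minimal wait-for-graph cycle detection; a cycle means a global deadlock and
# yields a candidate transaction to abort (the "removal" step).
def find_deadlock(waits_for):
    """waits_for maps a transaction ID to the set of transactions it is blocked on,
    as derived from a global locking table. Returns a transaction on a cycle, or None."""
    visiting, done = set(), set()

    def visit(t):
        visiting.add(t)
        for u in waits_for.get(t, ()):
            if u in visiting:
                return u                                  # back edge: a wait cycle exists
            if u not in done and (victim := visit(u)) is not None:
                return victim
        visiting.discard(t)
        done.add(t)
        return None

    for t in list(waits_for):
        if t not in done and (victim := visit(t)) is not None:
            return victim
    return None

# T1 waits on T2, T2 on T3, T3 on T1: a global deadlock spanning three transactions.
print(find_deadlock({"T1": {"T2"}, "T2": {"T3"}, "T3": {"T1"}}))   # -> a victim, e.g., "T1"
print(find_deadlock({"T1": {"T2"}, "T2": set()}))                  # -> None (no cycle)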

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/S0065245808600194

Energy-efficient Allocation of Real-time Virtual Machines in Cloud Data Centers Using Interval-packing Techniques

Wenhong Tian, Yong Zhao, in Optimized Cloud Resource Management and Scheduling, 2015

6.2.3 Real-time VM request model

The Cloud computing environment is a suitable solution for real-time VM service because it leverages virtualization [22]. When users request execution of their real-time VMs in a Cloud data center, appropriate VMs are allocated.

Example 1

A real-time VM request can be represented as an interval vector: vmRequestID(VM typeID, start time, finish time, requested capacity). In Figure 6.2, vm1(1, 0, 6, 0.25) shows that VM request vm1 is of Type1 (corresponding to integer 1), has a start time of 0 and a finish time of 6 (i.e., it finishes at the 6th slot after the start time of 0), and requests 25% of the total capacity of a Type1 PM. Other requests can be represented in similar ways. Figure 6.2 shows the life cycles of VM allocation in a slotted-time window format using two PMs, where PM#1 hosts vm1, vm2, and vm3 while PM#2 hosts vm4, vm5, and vm6. Notice that the total capacity restriction has to be met in each interval.


Figure 6.2. VM allocations using two PMs.
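Here is a small sketch of this request model and its per-slot capacity restriction, assuming half-open time intervals [start, finish); vm1 matches Example 1, while vm2 and the placement check are illustrative assumptions rather than the actual allocations of Figure 6.2.

from dataclasses import dataclass

@dataclass
class VMRequest:
    """Interval-vector form: vmRequestID(VM typeID, start time, finish time, requested capacity)."""
    vm_id: str
    vm_type: int
    start: int
    finish: int
    capacity: float                      # fraction of the hosting PM's total capacity

def fits(pm_load, req):
    """The total capacity restriction: per-slot utilization must stay at or below 1.0."""
    return all(pm_load.get(t, 0.0) + req.capacity <= 1.0 for t in range(req.start, req.finish))

def place(pm_load, req):
    for t in range(req.start, req.finish):
        pm_load[t] = pm_load.get(t, 0.0) + req.capacity

pm1 = {}                                             # slot -> utilization of PM#1
vm1 = VMRequest("vm1", 1, 0, 6, 0.25)                # the request from Example 1
vm2 = VMRequest("vm2", 1, 2, 8, 0.50)                # hypothetical additional request
for req in (vm1, vm2):
    if fits(pm1, req):
        place(pm1, req)
print(pm1)   # per-slot utilization never exceeds 1.0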

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128014769000069

End-User Computing Concepts

Joseph B. O'Donnell, G. Lawrence Sanders, in Encyclopedia of Information Systems, 2003

II.C. Prototype Development

The EUC computing environment is geared toward a flexible and iterative systems development process. EUC is well suited for use of the prototyping systems development methodology, which is also a repetitive process. Prototyping is the process of building an experimental system for demonstration and evaluation so that the users can determine requirements. The steps in the EUC prototyping process are as follows:

1.

Users identify their basic requirements through their firsthand knowledge of the process.

2.

Users develop an EUC initial prototype (experimental model) through use of fourth-generation language software. This prototype may contain only some of the functionality of the proposed system, but it provides an overall look and feel of that system.

3.

Users interact with the prototype and suggest changes. The end user evaluates the system's functionality by comparing the operation of the prototype to the system requirements.

4.

Users revise and improve the EUC prototype. The user makes the changes to the prototype to meet the system requirements.

5.

Repeat steps 3 and 4 until users are satisfied.

The prototyping process is very different from the traditional systems development life-cycle approach, which stresses completion of planning and analysis before beginning the design process. Benefits of prototyping include facilitating the learning process, speed of development, flexibility, and a better ability to meet end-user needs. EUC prototyping allows the user to learn the requirements throughout the process of creating and testing the system, rather than through the more abstract process involved in the analysis phase of the systems development life cycle. Some researchers believe that prototyping represents a trial-and-error approach that is fundamental to the human learning process. Another benefit of the prototyping approach is rapid development: a working model with limited functionality is available early in the process. EUC prototyping is flexible, as the design of the application continually changes. Finally, researchers have found that systems built through prototyping better meet the needs of the users.

A major disadvantage of prototyping is that it may not be appropriate for large or complex systems, which may require significant amounts of coordination. However, some use of prototyping may be very useful for large projects when integrated with systems development life-cycle methodologies. Furthermore, use of EUC for large and complex projects may not be appropriate due to the limited systems analysis skills of most users. EUC and prototyping may both be best suited for small, less complex systems.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B0122272404000551

Deploying Windows 7 in an Enterprise Environment

Jorge Orchilles, in Microsoft Windows 7 Administrator's Reference, 2010

Current Environment

Document your existing computing environment, looking at your organization's structure and how it supports users. Use this assessment to determine your readiness for desktop deployment of Windows 7. The three major areas of your computing environment to assess are your hardware, software, and network.

Hardware – Do your desktop and laptop computers meet the minimum hardware requirements for Windows 7? In addition to meeting these requirements, all hardware must be compatible with Windows 7 (a minimal readiness-check sketch follows this list).

Software – Are your applications compatible with Windows 7? Make sure that all of your applications, including line-of-business (LOB) applications, work with computers running Windows 7.

Network – Document your network architecture, including topology, size, and traffic patterns. Also, determine which users need access to various applications and data, and describe how they obtain access.
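To make the hardware question concrete, the sketch below checks an invented inventory record against the commonly cited Windows 7 minimums (a 1 GHz processor; 1 GB of RAM for 32-bit or 2 GB for 64-bit; 16 GB or 20 GB of available disk space, respectively). The inventory format and fleet data are assumptions for illustration only.

# Windows 7 minimum hardware requirements (graphics requirements omitted for brevity).
WIN7_MINIMUMS = {
    "cpu_ghz": 1.0,
    "ram_gb": {"32-bit": 1, "64-bit": 2},
    "disk_gb": {"32-bit": 16, "64-bit": 20},
}

def meets_windows7_minimums(machine):
    arch = machine["arch"]                      # "32-bit" or "64-bit"
    return (machine["cpu_ghz"] >= WIN7_MINIMUMS["cpu_ghz"]
            and machine["ram_gb"] >= WIN7_MINIMUMS["ram_gb"][arch]
            and machine["free_disk_gb"] >= WIN7_MINIMUMS["disk_gb"][arch])

# Hypothetical inventory records gathered during the assessment.
fleet = [
    {"name": "desk-001", "arch": "64-bit", "cpu_ghz": 2.4, "ram_gb": 4, "free_disk_gb": 80},
    {"name": "lap-017",  "arch": "32-bit", "cpu_ghz": 0.9, "ram_gb": 1, "free_disk_gb": 40},
]
for m in fleet:
    print(m["name"], "ready" if meets_windows7_minimums(m) else "needs upgrade")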

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9781597495615000036

What is a computing environment where an Internet server hosts and deploys applications?

Cloud computing is on-demand access, via the internet, to computing resources—applications, servers (physical servers and virtual servers), data storage, development tools, networking capabilities, and more—hosted at a remote data center managed by a cloud services provider (or CSP).

Which cloud service can be described as a computing environment where an Internet server hosts and deploys applications?

Platform as a service (PaaS) is a cloud computing model where a third-party provider delivers hardware and software tools to users over the internet. Usually, these tools are needed for application development. A PaaS provider hosts the hardware and software on its own infrastructure.
