791 results for Computing clouds
Abstract:
The move towards IT outsourcing is the first step towards an environment where compute infrastructure is treated as a service. In utility computing, this IT service has to honor Service Level Agreements (SLAs) in order to meet the desired Quality of Service (QoS) guarantees. Such an environment requires reliable services in order to maximize the utilization of the resources and to decrease the Total Cost of Ownership (TCO). Such reliability cannot come at the cost of resource duplication, since duplication increases the TCO of the data center and hence the cost per compute unit. In this paper, we look into projecting the impact of hardware failures on SLAs and the techniques required to take proactive recovery steps in case of a predicted failure. By maintaining health vectors for all hardware and system resources, we predict the failure probability of resources at runtime, based on observed hardware errors/failure events. This in turn drives an availability-aware middleware to take proactive action, even before the application is affected, in case the system and the application have low recoverability. The proposed framework has been prototyped on a system running HP-UX. Our offline analysis of the prediction system on hardware error logs indicates no more than 10% false positives. To the best of our knowledge, this work is the first of its kind to perform an end-to-end analysis of the impact of a hardware fault on application SLAs in a live system.
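The abstract does not spell out the health-vector model, so the following is only a minimal illustrative sketch, not the authors' method: the `HealthVector` class, the exponential time-decay weighting of error events, and the threshold-triggered proactive action are all assumptions made for illustration.

```python
import math
import time

# Hypothetical sketch: track recent hardware error events per resource and
# map an exponentially time-weighted severity sum to a failure probability.
class HealthVector:
    def __init__(self, half_life_s=3600.0):
        self.decay = math.log(2) / half_life_s  # weight halves every half_life_s
        self.events = []                        # (timestamp, severity) pairs

    def record_error(self, severity=1.0, ts=None):
        self.events.append((ts if ts is not None else time.time(), severity))

    def failure_probability(self, now=None):
        now = now if now is not None else time.time()
        # Exponentially decayed sum of severities: recent errors matter most.
        score = sum(s * math.exp(-self.decay * (now - t)) for t, s in self.events)
        return 1.0 - math.exp(-score)           # squash the score into [0, 1)

def proactive_action_needed(hv, threshold=0.8):
    # The middleware would checkpoint/migrate the application before the
    # predicted failure manifests (the abstract's "proactive recovery").
    return hv.failure_probability() >= threshold
```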
Abstract:
Conformance testing focuses on checking whether an implementation under test (IUT) behaves according to its specification. Typically, testers are interested in performing targeted tests that exercise certain features of the IUT. This intention is formalized as a test purpose. The tester needs a "strategy" to reach the goal specified by the test purpose. Also, for a particular test case, the strategy should tell the tester whether the IUT has passed, failed, or deviated from the test purpose. In [8], Jeron and Morel show how to compute, for a given finite state machine specification and a test purpose automaton, a complete test graph (CTG) which represents all test strategies. In this paper, we consider the case when the specification is a hierarchical state machine and show how to compute a hierarchical CTG which preserves the hierarchical structure of the specification. We also propose an algorithm for an online test oracle which avoids the space overhead associated with the CTG.
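A rough sketch of the flat (non-hierarchical) construction the abstract builds on, under assumptions of my own: FSMs as plain dicts, a test purpose that self-loops on actions it does not mention, and the CTG skeleton taken as the product states from which an Accept state is co-reachable. This is not Jeron and Morel's algorithm verbatim.

```python
from collections import deque

# Build the synchronous product of a specification FSM and a test-purpose
# automaton. FSMs are dicts: state -> {action: next_state}.
def product(spec, purpose, s0, p0):
    graph, frontier, seen = {}, deque([(s0, p0)]), {(s0, p0)}
    while frontier:
        s, p = frontier.popleft()
        graph[(s, p)] = {}
        for a, s2 in spec.get(s, {}).items():
            p2 = purpose.get(p, {}).get(a, p)  # purpose self-loops elsewhere
            graph[(s, p)][a] = (s2, p2)
            if (s2, p2) not in seen:
                seen.add((s2, p2))
                frontier.append((s2, p2))
    return graph

def coreachable(graph, accept):
    # Product states from which some Accept state is reachable: these form
    # the skeleton of the complete test graph (CTG).
    rev = {}
    for q, edges in graph.items():
        for a, q2 in edges.items():
            rev.setdefault(q2, set()).add(q)
    keep, stack = set(), [q for q in graph if q[1] == accept]
    while stack:
        q = stack.pop()
        if q in keep:
            continue
        keep.add(q)
        stack.extend(rev.get(q, ()))
    return keep
```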
Abstract:
An important issue in the design of a distributed computing system (DCS) is the development of a suitable protocol. This paper presents an effort to systematize the protocol design procedure for a DCS. Protocol design and development can be divided into six phases: specification of the DCS, specification of protocol requirements, protocol design, specification and validation of the designed protocol, performance evaluation, and hardware/software implementation. This paper describes techniques for the second and third phases, while the first phase has been considered by the authors in their earlier work. Matrix-based and set-theoretic approaches are used for the specification of a DCS and for the specification of the protocol requirements. These two formal specification techniques form the basis of a simple and straightforward procedure for the design of the protocol. The applicability of this design procedure is illustrated with the example of a computing system encountered on board a spacecraft. A Petri-net based approach has been adopted to model the protocol. The methodology developed in this paper can be used in other DCS applications.
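To make the Petri-net modeling step concrete, here is a minimal generic sketch of token-game semantics; the handshake protocol in the example is invented for illustration, not the paper's spacecraft protocol.

```python
# Minimal Petri net: places hold tokens; a transition is enabled when every
# input place has a token, and firing it consumes/produces tokens.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= 1 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] = self.marking.get(p, 0) + 1

# A two-party handshake: the sender emits a request, the receiver acknowledges.
net = PetriNet({"sender_ready": 1, "receiver_ready": 1})
net.add_transition("send_req", ["sender_ready"], ["req_in_transit"])
net.add_transition("recv_req", ["req_in_transit", "receiver_ready"], ["ack_in_transit"])
net.fire("send_req")
net.fire("recv_req")
print(net.marking)
```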
Abstract:
A fuzzy system is developed using a linearized performance model of the gas turbine engine to isolate gas turbine faults from noisy measurements. By using a priori information about measurement uncertainties and through design-variable linking, the design of the fuzzy system is posed as an optimization problem with a small number of design variables, which can be solved by a genetic algorithm in very little computer time. The faults modeled are module faults in five modules: fan, low pressure compressor, high pressure compressor, high pressure turbine, and low pressure turbine. The measurements used are deviations in exhaust gas temperature, low rotor speed, high rotor speed, and fuel flow from a baseline 'good engine'. The genetic fuzzy system (GFS) allows rapid development of the rule base when the fault signatures and measurement uncertainties change, as they do across engines and airlines. In addition, the genetic fuzzy system reduces the human effort needed in the trial-and-error process used to design the fuzzy system, making the development of such a system easier and faster. A radial basis function neural network (RBFNN) is also used to preprocess the measurements before fault isolation. The RBFNN achieves significant noise reduction and, when combined with the GFS, leads to a diagnostic system that is highly robust to the presence of noise in data, demonstrating the advantage of a soft computing approach for gas turbine diagnostics.
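A toy version of the genetic-fuzzy idea, with heavy caveats: the fault-signature numbers below are made up, the membership functions are plain Gaussians, and the genetic algorithm is a bare-bones elitism-plus-mutation loop, so this only illustrates the shape of the approach, not the authors' GFS.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fault signatures (rows: fan, LPC, HPC, HPT, LPT faults;
# columns: deviations in EGT, low rotor speed, high rotor speed, fuel flow).
SIGNATURES = np.array([
    [1.0, -0.5,  0.2, 0.8],
    [0.6, -0.8,  0.1, 0.5],
    [0.9,  0.1, -0.7, 0.7],
    [1.2,  0.2, -0.3, 1.0],
    [0.8, -0.2,  0.4, 0.6],
])

def classify(meas, widths):
    # Fuzzy matching: Gaussian membership of the measured deviation vector
    # against each fault signature; the best-matching fault wins.
    d = meas - SIGNATURES
    return int(np.argmax(np.exp(-np.sum((d / widths) ** 2, axis=1))))

def fitness(widths, n=200, noise=0.1):
    faults = rng.integers(0, 5, n)
    meas = SIGNATURES[faults] + noise * rng.standard_normal((n, 4))
    return np.mean([classify(m, widths) == f for m, f in zip(meas, faults)])

# Toy genetic algorithm over the membership widths (the "design variables").
pop = rng.uniform(0.05, 1.0, (30, 4))
for gen in range(40):
    fit = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(fit)[-10:]]                    # keep the fittest
    children = parents[rng.integers(0, 10, 20)] * rng.uniform(0.8, 1.2, (20, 4))
    pop = np.vstack([parents, children])                    # elitism + mutation
print("best accuracy:", max(fitness(w) for w in pop))
```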
Abstract:
A symmetric solution X satisfying the matrix equation XA = AᵀX is called a symmetrizer of the matrix A. A general algorithm to compute a matrix symmetrizer is obtained. A new multiple-modulus residue arithmetic, called floating-point modular arithmetic, is described and used in the algorithm to compute an error-free matrix symmetrizer.
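Since XA = AᵀX is linear in X, a symmetrizer can be found as a null-space direction of that linear map restricted to symmetric matrices. The sketch below does this in ordinary floating point via an SVD; the paper's contribution is precisely the *error-free* residue-arithmetic version, which this does not reproduce.

```python
import numpy as np

def symmetrizer(A, tol=1e-10):
    """Find a nonzero symmetric X with X A = A^T X (a symmetrizer of A)."""
    n = A.shape[0]
    basis = []
    for i in range(n):
        for j in range(i, n):
            E = np.zeros((n, n))
            E[i, j] = E[j, i] = 1.0        # basis matrix of Sym(n)
            basis.append(E)
    # Each column is vec(E A - A^T E) for one symmetric basis element.
    M = np.column_stack([(E @ A - A.T @ E).ravel() for E in basis])
    _, s, Vt = np.linalg.svd(M)
    assert s[-1] < tol, "no symmetrizer found within tolerance"
    coeffs = Vt[-1]                        # null-space direction of M
    return sum(c * E for c, E in zip(coeffs, basis))

A = np.array([[1.0, 2.0], [3.0, 4.0]])
X = symmetrizer(A)
print(np.allclose(X @ A, A.T @ X))         # True: X is a symmetrizer
```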
Abstract:
An attempt is made to diagnose the dominant forcings that drive the large-scale vertical velocities over the monsoon region by computing forcings such as the diabatic heating fields, together with the large-scale vertical velocities driven by these forcings, for contrasting periods of active and break monsoon situations, in order to understand the associated rainfall variability. Computation of the diabatic heating fields shows that, among the different components of diabatic heating, convective heating dominates at mid-tropospheric levels during an active monsoon period, whereas sensible heating at the surface is most important during a break period. From the vertical velocity calculations we infer that the prime differences in the large-scale vertical velocities seen throughout the depth of the atmosphere are due to differences in the magnitude of convective heating: the maximum rate of latent heating exceeds 10 K per day during an active monsoon period, whereas during a break monsoon period it is of the order of 2 K per day at mid-tropospheric levels. At low levels of the atmosphere, the computations show large-scale ascent over a large spatial region during an active monsoon period, driven only by the dynamic forcing associated with vorticity and temperature advection. During a break monsoon period, no such large-scale spatial organization in rising motion is seen. It is speculated that these differences in the low-level large-scale ascent may cause the differences in convective heating: the weaker the low-level ascent, the lower the convective instability that produces deep cumulus clouds, and hence the smaller the associated latent heat release. The forcings due to the other components of diabatic heating, namely sensible heating and long-wave radiative cooling, do not influence the large-scale vertical velocities significantly.
Abstract:
A real or a complex symmetric matrix is defined here as an equivalent symmetric matrix for a real nonsymmetric matrix if both have the same eigenvalues. An equivalent symmetric matrix is useful in computing the eigenvalues of a real nonsymmetric matrix. A procedure to compute equivalent symmetric matrices and its mathematical foundation are presented.
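One standard construction, stated here as background rather than as the paper's exact procedure: if X is a symmetric positive-definite symmetrizer of A (XA = AᵀX), then S = X^{1/2} A X^{-1/2} is symmetric and similar to A, hence an equivalent symmetric matrix. The example matrices below are hand-picked for illustration.

```python
import numpy as np

def equivalent_symmetric(A, X):
    # X = V diag(w) V^T with w > 0, so R = X^{1/2} is easy to form.
    w, V = np.linalg.eigh(X)
    R = V @ np.diag(np.sqrt(w)) @ V.T
    Rinv = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    S = R @ A @ Rinv                       # similar to A, hence same spectrum
    return 0.5 * (S + S.T)                 # remove rounding asymmetry

A = np.array([[2.0, 1.0], [1.5, 3.0]])
X = np.array([[2.5, 0.5], [0.5, 2.0]])     # SPD and satisfies X A = Aᵀ X
S = equivalent_symmetric(A, X)
print(np.allclose(sorted(np.linalg.eigvals(S)), sorted(np.linalg.eigvals(A))))
```

The symmetry of S follows from XA = AᵀX: writing Aᵀ = XAX⁻¹ and R = X^{1/2} gives Sᵀ = R⁻¹AᵀR = R⁻¹(R²AR⁻²)R = RAR⁻¹ = S.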
Abstract:
The management and coordination of business-process collaboration is changing because of globalization, specialization, and innovation. Service-oriented computing (SOC) is a means towards business-process automation, and recently many industry standards have emerged to become part of the service-oriented architecture (SOA) stack. In a globalized world, organizations face new challenges in setting up and carrying out collaborations in semi-automated ecosystems for business services. To be efficient and effective, many companies express their services electronically in what we term business-process as a service (BPaaS). Companies then source BPaaS on the fly from third parties when they cannot create all service value in-house, for reasons such as lack of resources, lack of know-how, or cost- and time-reduction needs. Thus, a need emerges for BPaaS-HUBs that not only store service offers and requests together with information about their issuing organizations and assigned owners, but also allow an evaluation of trust and reputation in an anonymized electronic service marketplace. In this paper, we analyze the requirements, design architecture, and system behavior of such a BPaaS-HUB to enable a fast setup and enactment of business-process collaboration. Moving into a cloud-computing setting, the results of this paper allow system designers to quickly evaluate which services they need for instantiating the BPaaS-HUB architecture. Furthermore, the results show the protocol of a backbone service bus that allows communication between the services implementing the BPaaS-HUB. Finally, the paper analyzes where an instantiation must assign additional computing resources to avoid performance bottlenecks.
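A minimal sketch of the hub's core bookkeeping as the abstract describes it (offers tied to issuing organizations and owners, plus reputation in the marketplace); every class and field name here is an assumption for illustration, not the paper's design.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceOffer:
    service_id: str
    organization: str      # issuing organization
    owner: str             # assigned owner
    description: str

@dataclass
class BPaaSHub:
    offers: dict = field(default_factory=dict)
    ratings: dict = field(default_factory=dict)    # organization -> [scores]

    def publish(self, offer: ServiceOffer):
        self.offers[offer.service_id] = offer

    def rate(self, organization: str, score: float):
        # Anonymized post-collaboration rating feeds the reputation score.
        self.ratings.setdefault(organization, []).append(score)

    def reputation(self, organization: str) -> float:
        scores = self.ratings.get(organization, [])
        return sum(scores) / len(scores) if scores else 0.0

    def match(self, keyword: str, min_reputation: float = 0.0):
        # Requesters see matching offers filtered by provider reputation.
        return [o for o in self.offers.values()
                if keyword in o.description
                and self.reputation(o.organization) >= min_reputation]
```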
Abstract:
The energy input to giant molecular clouds is recalculated, using the proper linearized equations of motion, including the Coriolis force and allowing for changes in the guiding center. Perturbation theory yields a result in the limit of distant encounters and small initial epicyclic amplitudes. Direct integration of the motion equations allows the strong encounter regime to be studied. The present perturbation theory result differs by a factor of order unity from that of Jog and Ostriker (1988). The result of present numerical integrations for the 2D (planar) velocity dispersion is presented. The accretion rate for a molecular cloud in the Galactic disk is calculated.
Abstract:
A Geodesic Constant Method (GCM) is outlined which provides a common approach to ray tracing on quadric cylinders in general and yields, in closed form, all the surface ray-geometric parameters required in the UTD mutual coupling analysis of conformal antenna arrays. The approach permits the incorporation of a shaping parameter so that quadric cylindrical surfaces of the desired sharpness/flatness can be modeled with a common set of equations. The mutual admittance between slots on a general parabolic cylinder is obtained as an illustration of the applicability of the GCM.
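The GCM's closed-form parameters for general quadric cylinders are not reproduced here, but the simplest special case illustrates why closed forms exist: a circular cylinder is developable, so a surface ray (geodesic) unrolls to a straight line. A minimal sketch under that circular-cylinder assumption:

```python
import math

# Geodesic length between two points (phi, z) on a circular cylinder of
# radius a: unrolling the cylinder maps the geodesic to a straight line,
# so s = sqrt((a * dphi)^2 + dz^2). The GCM generalizes such closed-form
# ray geometry to arbitrary quadric cylinders via a shaping parameter.
def geodesic_length(a, phi1, z1, phi2, z2):
    dphi = abs(phi2 - phi1)
    dphi = min(dphi, 2 * math.pi - dphi)   # take the shorter way around
    return math.hypot(a * dphi, z2 - z1)

print(geodesic_length(1.0, 0.0, 0.0, math.pi / 2, 1.0))
```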
Abstract:
The Grad–Shafranov reconstruction is a method of estimating the orientation (invariant axis) and cross section of magnetic flux ropes using the data from a single spacecraft. It can be applied to various magnetic structures such as magnetic clouds (MCs) and flux ropes embedded in the magnetopause and in the solar wind. We develop a number of improvements of this technique and show some examples of the reconstruction procedure of interplanetary coronal mass ejections (ICMEs) observed at 1 AU by the STEREO, Wind, and ACE spacecraft during the minimum following Solar Cycle 23. The analysis is conducted not only for ideal localized ICME events but also for non-trivial cases of magnetic clouds in fast solar wind. The Grad–Shafranov reconstruction gives reasonable results for the sample events, although it possesses certain limitations, which need to be taken into account during the interpretation of the model results.
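For context, the method integrates the plane Grad–Shafranov equation for the magnetic potential A(x, y) of a quasi-static 2.5-D structure; in the form standard in the flux-rope reconstruction literature (stated from that general literature, not from this paper's notation):

```latex
\frac{\partial^2 A}{\partial x^2} + \frac{\partial^2 A}{\partial y^2}
  = -\mu_0 \,\frac{\mathrm{d}P_t(A)}{\mathrm{d}A},
\qquad
P_t(A) = p(A) + \frac{B_z^2(A)}{2\mu_0}
```

Here p is the plasma pressure and B_z the axial field; inside the flux rope both are single-valued functions of A, which is the property the single-spacecraft reconstruction exploits.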
Abstract:
We address the problem of computing the level-crossings of an analog signal from samples measured on a uniform grid. Such a problem is important, for example, in multilevel analog-to-digital (A/D) converters. The first operation in such sampling modalities is a comparator, which gives rise to a bilevel waveform. Since bilevel signals are not bandlimited, measuring the level-crossing times exactly becomes impractical within the conventional framework of Shannon sampling. In this paper, we propose a novel sub-Nyquist sampling technique for making measurements on a uniform grid and thereby exactly computing the level-crossing times from those samples. The computational complexity of the technique is low and comprises simple arithmetic operations. We also present a finite-rate-of-innovation sampling perspective of the proposed approach and show how exponential splines fit naturally into the proposed sampling framework. We also discuss some concrete practical applications of the sampling technique.
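For contrast, here is the naive baseline the paper improves on, not its sub-Nyquist/FRI technique: estimating crossing times from uniform samples by locating sign changes and linearly interpolating, which is only approximate for non-bandlimited bilevel signals.

```python
import numpy as np

# Naive baseline: estimate level-crossing times from uniform samples of x(t)
# by finding sign changes of x - level and linearly interpolating between
# the two bracketing samples.
def level_crossings(samples, level, dt):
    d = samples - level
    idx = np.where(d[:-1] * d[1:] < 0)[0]        # sign change => a crossing
    # Zero of the line through (k, d_k) and (k+1, d_{k+1}):
    return (idx + d[idx] / (d[idx] - d[idx + 1])) * dt

t = np.arange(0, 1, 1e-3)
x = np.sin(2 * np.pi * 5 * t)                    # 5 Hz test tone
print(level_crossings(x, 0.5, 1e-3)[:3])         # first three crossing times
```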