998 results for Cloud computing


Relevance: 20.00%

Abstract:

Conformance testing focuses on checking whether an implementation under test (IUT) behaves according to its specification. Typically, testers are interested in performing targeted tests that exercise certain features of the IUT. This intention is formalized as a test purpose. The tester needs a "strategy" to reach the goal specified by the test purpose. Also, for a particular test case, the strategy should tell the tester whether the IUT has passed, failed, or deviated from the test purpose. In [8], Jeron and Morel show how to compute, for a given finite state machine specification and a test purpose automaton, a complete test graph (CTG) which represents all test strategies. In this paper, we consider the case when the specification is a hierarchical state machine and show how to compute a hierarchical CTG which preserves the hierarchical structure of the specification. We also propose an algorithm for an online test oracle which avoids the space overhead associated with the CTG.
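To make the verdict structure concrete, here is a minimal sketch of an online test oracle that runs a specification FSM and a test-purpose automaton in lockstep against the IUT's observed input/output trace. The machines, state names, and coffee-machine scenario are invented for illustration; the paper's hierarchical machinery is not shown.

```python
SPEC = {            # spec FSM: state -> {input: (expected output, next state)}
    "s0": {"coin": ("ok", "s1")},
    "s1": {"button": ("coffee", "s0")},
}
PURPOSE = {         # test purpose: accept once "coffee" has been observed
    "p0": {"ok": "p0", "coffee": "accept"},
}

def online_oracle(trace, spec_state="s0", purpose_state="p0"):
    """Return 'pass', 'fail', or 'inconclusive' for an (input, output) trace."""
    for inp, out in trace:
        expected_out, spec_state = SPEC[spec_state][inp]
        if out != expected_out:
            return "fail"                      # IUT deviates from the specification
        purpose_state = PURPOSE.get(purpose_state, {}).get(out, "deviated")
        if purpose_state == "accept":
            return "pass"                      # goal of the test purpose reached
        if purpose_state == "deviated":
            return "inconclusive"              # conformant, but off the purpose
    return "inconclusive"

print(online_oracle([("coin", "ok"), ("button", "coffee")]))  # pass
print(online_oracle([("coin", "ok"), ("button", "tea")]))     # fail
```

Because the verdict is computed on the fly, no explicit test graph needs to be stored, which is the space saving the abstract alludes to.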

Relevance: 20.00%

Abstract:

The static response of thin, wrinkled membranes is studied using both a tension field approximation based on plane stress conditions and a 3D nonlinear elasticity formulation, discretized through 8-noded Cosserat point elements. While the tension field approach only identifies the wrinkled/slack regions and at best a measure of the extent of wrinkling, the 3D elasticity solution provides, in principle, the deformed shape of a wrinkled/slack membrane. However, since membranes barely resist compression, the discretized and linearized system equations obtained via both approaches are ill-conditioned, and solutions can thus be sensitive to discretization errors as well as other sources of noise/imperfections. We propose a regularized, pseudo-dynamical recursion scheme that provides a sequence of updates which are almost insensitive to the regularizing term as well as the time step size used for integrating the pseudo-dynamical form. This is borne out through several numerical examples, wherein the relative performance of the proposed recursion scheme vis-a-vis a regularized Newton strategy is compared. The pseudo-time marching strategy, when implemented using 3D Cosserat point elements, also provides a computationally cheaper, numerically accurate and simpler alternative to geometrically exact shell theories for computing large deformations of membranes in the presence of wrinkles. (C) 2010 Elsevier Ltd. All rights reserved.
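The pseudo-dynamical idea can be sketched on a toy problem: instead of solving an ill-conditioned linear system K x = f directly, march the regularized flow dx/dt = -(K x - f) - eps*x to steady state. The matrix, load, regularization eps, and step size below are illustrative choices, not the paper's membrane model.

```python
def pseudo_time_march(K, f, eps=1e-8, dt=0.1, steps=20000):
    """Explicit pseudo-time stepping toward the solution of (K + eps*I) x = f."""
    n = len(f)
    x = [0.0] * n
    for _ in range(steps):
        # residual K x - f plus a small Tikhonov-like regularizing term eps*x
        r = [sum(K[i][j] * x[j] for j in range(n)) - f[i] + eps * x[i]
             for i in range(n)]
        x = [x[i] - dt * r[i] for i in range(n)]
    return x

# A nearly singular stiffness-like matrix (condition number ~ 2e4):
K = [[1.0, 0.9999], [0.9999, 1.0]]
f = [2.0, 2.0]
x = pseudo_time_march(K, f)
print(x)  # close to the exact solution [2/1.9999, 2/1.9999]
```

Because eps only shifts the steady state by O(eps), the converged update sequence is nearly insensitive to the regularizing term, mirroring the behavior claimed in the abstract.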

Relevance: 20.00%

Abstract:

An important issue in the design of a distributed computing system (DCS) is the development of a suitable protocol. This paper presents an effort to systematize the protocol design procedure for a DCS. Protocol design and development can be divided into six phases: specification of the DCS, specification of protocol requirements, protocol design, specification and validation of the designed protocol, performance evaluation, and hardware/software implementation. This paper describes techniques for the second and third phases; the first phase was considered by the authors in their earlier work. Matrix- and set-theoretic approaches are used for the specification of a DCS and of the protocol requirements. These two formal specification techniques form the basis of a simple and straightforward procedure for designing the protocol. The applicability of this design procedure is illustrated with the example of a computing system on board a spacecraft. A Petri-net based approach is adopted to model the protocol. The methodology developed in this paper can be used in other DCS applications.
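As a flavour of the Petri-net modelling step, here is a minimal token-game sketch: places hold tokens and a transition fires when all of its input places are marked. The request/grant handshake between two DCS nodes below is a made-up example, not the paper's spacecraft protocol.

```python
# transition -> (input places, output places)
NET = {
    "send_request": (["idle_A"], ["waiting_A", "request_pending"]),
    "grant":        (["request_pending", "idle_B"], ["busy_B", "granted"]),
    "complete":     (["waiting_A", "granted", "busy_B"], ["idle_A", "idle_B"]),
}

def enabled(marking, t):
    """A transition is enabled when every input place holds at least one token."""
    return all(marking.get(p, 0) > 0 for p in NET[t][0])

def fire(marking, t):
    """Fire t: consume one token per input place, produce one per output place."""
    assert enabled(marking, t), f"{t} is not enabled"
    m = dict(marking)
    for p in NET[t][0]:
        m[p] -= 1
    for p in NET[t][1]:
        m[p] = m.get(p, 0) + 1
    return m

m = {"idle_A": 1, "idle_B": 1}
for t in ["send_request", "grant", "complete"]:
    m = fire(m, t)
print({p: n for p, n in m.items() if n})  # back to the initial marking
```

Validation questions such as deadlock-freedom reduce to reachability checks over markings generated by `fire`.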

Relevance: 20.00%

Abstract:

Preferential cleavage of active genes by DNase I has been correlated with a structurally altered conformation of DNA at the hypersensitive site in chromatin. To better understand the structural requirements for gene activation as probed by DNase I action, the digestibility by DNase I of synthetic polynucleotides able to adopt B and non-B conformations (such as the Z-form) was studied; the B-form of DNA proved markedly more digestible. A left-handed Z-form present within a natural sequence in a supercoiled plasmid also showed marked resistance to DNase I digestion. We show that alternating purine-pyrimidine sequences adopting the Z-conformation exhibit DNase I footprinting even in a protein-free system. The results indicate that 1) an altered structure like Z-DNA is not a favourable substrate for DNase I; 2) both ends of the alternating purine-pyrimidine insert show hypersensitivity; 3) the B-form, with a minor groove of 12-13 Å, is a more favourable substrate for DNase I than an altered structure; and 4) any DNA structure deviating largely from the B-form but able to flip over to it is a potential target for DNase I enzymic probing in naked DNA.

Relevance: 20.00%

Abstract:

A fuzzy system is developed using a linearized performance model of the gas turbine engine to isolate gas turbine faults from noisy measurements. By using a priori information about measurement uncertainties, and through design variable linking, the design of the fuzzy system is posed as an optimization problem with a small number of design variables, which can be solved by a genetic algorithm in relatively little computer time. The faults modelled are module faults in five modules: fan, low pressure compressor, high pressure compressor, high pressure turbine and low pressure turbine. The measurements used are deviations in exhaust gas temperature, low rotor speed, high rotor speed and fuel flow from a baseline 'good engine'. The genetic fuzzy system (GFS) allows rapid redevelopment of the rule base when the fault signatures and measurement uncertainties change, as they do across engines and airlines. In addition, the genetic fuzzy system reduces the human effort needed in the trial-and-error design of the fuzzy system, making the development of such a system easier and faster. A radial basis function neural network (RBFNN) is also used to preprocess the measurements before fault isolation. The RBFNN achieves significant noise reduction and, when combined with the GFS, leads to a diagnostic system that is highly robust to noise in the data, showing the advantage of a soft computing approach for gas turbine diagnostics.
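The isolation step can be illustrated with a toy fuzzy matcher: each module fault carries a signature of measurement deviations (dEGT, dN1, dN2, dWF), a Gaussian membership turns the distance from each signature into a degree of match, and the best match is the isolated fault. All signatures and spreads below are invented numbers, and the genetic-algorithm tuning of the spreads is omitted.

```python
import math

SIGNATURES = {           # fault -> expected measurement deviations (made up)
    "fan": [ 5.0, -1.0,  0.2, 1.0],
    "LPC": [ 8.0, -0.5, -0.3, 1.5],
    "HPT": [20.0,  0.5, -1.0, 4.0],
}

def membership(measured, signature, widths=(5.0, 0.5, 0.5, 1.0)):
    """Degree of match in [0, 1]; widths play the role of tuned fuzzy-set spreads."""
    d2 = sum(((m - s) / w) ** 2 for m, s, w in zip(measured, signature, widths))
    return math.exp(-0.5 * d2)

def isolate(measured):
    """Pick the fault whose signature the noisy measurement matches best."""
    return max(SIGNATURES, key=lambda f: membership(measured, SIGNATURES[f]))

print(isolate([19.0, 0.6, -0.9, 3.8]))  # HPT
```

In the GFS of the paper, the role taken here by the fixed `widths` would be played by membership-function parameters optimized by the genetic algorithm.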

Relevance: 20.00%

Abstract:

A symmetric solution X satisfying the matrix equation XA = A^T X is called a symmetrizer of the matrix A. A general algorithm to compute a matrix symmetrizer is obtained. A new multiple-modulus residue arithmetic, called floating-point modular arithmetic, is described and used in the algorithm to compute an error-free matrix symmetrizer.
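A small numeric check of the defining property X A = A^T X makes the definition concrete; the example matrices are chosen by hand, not produced by the paper's algorithm.

```python
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def transpose(P):
    return [list(row) for row in zip(*P)]

A = [[1.0, 2.0],
     [3.0, 4.0]]
X = [[3.0, 0.0],      # symmetric: for this A, X A = A^T X reduces to the single
     [0.0, 2.0]]      # condition 2a + 3b = 3c with X = [[a, b], [b, c]]

left, right = matmul(X, A), matmul(transpose(A), X)
print(left)   # [[3.0, 6.0], [6.0, 8.0]]
print(right)  # [[3.0, 6.0], [6.0, 8.0]] -- and X A is itself symmetric
```

Note that X A being symmetric is equivalent to the defining equation, since (X A)^T = A^T X when X is symmetric.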

Relevance: 20.00%

Abstract:

A real or a complex symmetric matrix is defined here as an equivalent symmetric matrix for a real nonsymmetric matrix if both have the same eigenvalues. An equivalent symmetric matrix is useful in computing the eigenvalues of a real nonsymmetric matrix. A procedure to compute equivalent symmetric matrices and its mathematical foundation are presented.
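A hedged illustration of the idea: the nonsymmetric matrix A below is similar, via a diagonal transform, to a symmetric matrix S with the same eigenvalues (the off-diagonal pair is replaced by its geometric mean). The paper's general procedure is more involved; this is only the simplest special case.

```python
A = [[2.0, 1.0],
     [4.0, 2.0]]
# Replace the off-diagonal pair (1, 4) by its geometric mean sqrt(1*4) = 2:
S = [[2.0, 2.0],
     [2.0, 2.0]]

def eigs_2x2(M):
    """Eigenvalues of a 2x2 matrix from its characteristic polynomial."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = (tr * tr - 4 * det) ** 0.5
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

print(eigs_2x2(A))  # [0.0, 4.0]
print(eigs_2x2(S))  # [0.0, 4.0] -- same spectrum
```

The practical payoff is that robust symmetric eigensolvers can then be applied to S in place of A.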

Relevance: 20.00%

Abstract:

The management and coordination of business-process collaboration is changing because of globalization, specialization, and innovation. Service-oriented computing (SOC) is a means towards business-process automation, and recently many industry standards have emerged to become part of the service-oriented architecture (SOA) stack. In a globalized world, organizations face new challenges in setting up and carrying out collaborations in semi-automated ecosystems for business services. To be efficient and effective, many companies express their services electronically in what we term business-process as a service (BPaaS). Companies then source BPaaS on the fly from third parties when they cannot create all service value in-house, for reasons such as lack of resources, lack of know-how, or cost- and time-reduction needs. Thus, a need emerges for BPaaS-HUBs that not only store service offers and requests together with information about their issuing organizations and assigned owners, but also allow an evaluation of trust and reputation in an anonymized electronic service marketplace. In this paper, we analyze the requirements, design architecture and system behavior of such a BPaaS-HUB to enable a fast setup and enactment of business-process collaboration. Moving into a cloud-computing setting, the results of this paper allow system designers to quickly evaluate which services they need for instantiating the BPaaS-HUB architecture. Furthermore, the results show the protocol of a backbone service bus that allows communication between the services that implement the BPaaS-HUB. Finally, the paper analyzes where an instantiation must assign additional computing resources to avoid performance bottlenecks.
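A speculative sketch of the matching core such a BPaaS-HUB might expose: a registry of anonymized service offers with reputation scores, matched against capability requests. All field names, scores, and the scoring rule are illustrative assumptions, not the paper's design.

```python
offers = [
    {"id": "offer-1", "capability": "invoice-processing", "reputation": 0.92},
    {"id": "offer-2", "capability": "invoice-processing", "reputation": 0.75},
    {"id": "offer-3", "capability": "payroll",            "reputation": 0.98},
]

def match(request_capability, min_reputation=0.8):
    """Return matching offers, best-reputed first; issuing orgs stay hidden."""
    hits = [o for o in offers
            if o["capability"] == request_capability
            and o["reputation"] >= min_reputation]
    return sorted(hits, key=lambda o: o["reputation"], reverse=True)

print([o["id"] for o in match("invoice-processing")])  # ['offer-1']
```

Only opaque offer identifiers and reputation values are returned, which is the anonymized-marketplace property the abstract calls for.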

Relevance: 20.00%

Abstract:

A Geodesic Constant Method (GCM) is outlined which provides a common approach to ray tracing on quadric cylinders in general and yields, in closed form, all the surface-ray geometric parameters required in the UTD mutual coupling analysis of conformal antenna arrays. The approach incorporates a shaping parameter which permits the modeling of quadric cylindrical surfaces of desired sharpness/flatness with a common set of equations. The mutual admittance between slots on a general parabolic cylinder is obtained as an illustration of the applicability of the GCM.
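The kind of closed-form geodesic such methods exploit is easiest to see on the circular cylinder: unrolling the surface maps a geodesic to a straight line, so its length between (phi1, z1) and (phi2, z2) on a cylinder of radius a is sqrt((a*dphi)^2 + dz^2). This is only the simplest case; the paper's GCM covers general quadric cylinders.

```python
import math

def cylinder_geodesic_length(a, phi1, z1, phi2, z2):
    """Geodesic length on a circular cylinder of radius a via development."""
    arc = a * abs(phi2 - phi1)        # unrolled circumferential run
    return math.hypot(arc, z2 - z1)   # straight line on the developed surface

# Half-turn around a unit cylinder while rising by pi:
L = cylinder_geodesic_length(1.0, 0.0, 0.0, math.pi, math.pi)
print(L)  # pi*sqrt(2) ~ 4.4429
```

In a UTD analysis, quantities such as this geodesic arc length feed directly into the surface-ray attenuation and phase terms.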

Relevance: 20.00%

Abstract:

Under the project `Seasonal Prediction of the Indian Monsoon' (SPIM), the prediction of Indian summer monsoon rainfall by five atmospheric general circulation models (AGCMs) during 1985-2004 was assessed. The project was a collaborative effort of the coordinators and scientists from the different modelling groups across the country. All the runs were made at the Centre for Development of Advanced Computing (CDAC) in Bangalore on the PARAM Padma supercomputing system. Two sets of simulations were made for this purpose. In the first set, the AGCMs were forced by the observed sea surface temperature (SST) for May-September during 1985-2004. In the second set, runs were made for 1987, 1988, 1994, 1997 and 2002, forced by SST obtained by assuming that the April anomalies persist during May-September. The results of the first set of runs show, as expected from earlier studies, that none of the models was able to simulate the correct sign of the anomaly of the Indian summer monsoon rainfall for all the years. However, among the five models, one simulated the correct sign in the largest number of years, and a second showed maximum skill in the simulation of the extremes (i.e. drought or excess rainfall years). The first set of runs showed a common bias which could arise from an excessive sensitivity of the models to the El Niño Southern Oscillation (ENSO), an inability of the models to simulate the link of the Indian monsoon rainfall to the Equatorial Indian Ocean Oscillation (EQUINOO), or both. Analysis of the second set of runs showed that with a weaker ENSO forcing, some models could simulate the link with EQUINOO, suggesting that the errors in the monsoon simulations with observed SST by these models can be attributed to unrealistically high sensitivity to ENSO.

Relevance: 20.00%

Abstract:

A state-of-the-art model of the coupled ocean-atmosphere system, the Climate Forecast System (CFS) from the National Centers for Environmental Prediction (NCEP), USA, has been ported onto the PARAM Padma parallel computing system at the Centre for Development of Advanced Computing (CDAC), Bangalore, and retrospective predictions for the summer monsoon (June-September) season of 2009 have been generated using five initial conditions for the atmosphere and one initial condition for the ocean for May 2009. Whereas a large deficit in the Indian summer monsoon rainfall (ISMR; June-September) was experienced over the Indian region (with the all-India rainfall deficient by 22% of the average), the ensemble-average prediction was for above-average rainfall during the summer monsoon. The retrospective predictions of ISMR with the CFS from NCEP for 1981-2008 have been analysed. The retrospective prediction from NCEP for the summer monsoon of 1994 and that from CDAC for 2009 have been compared with simulations for each of the seasons with the stand-alone atmospheric component of the model, the Global Forecast System (GFS), and with observations. It is shown that the simulation with the GFS for 2009 reproduced the observed rainfall deficit. The large error in the prediction for the monsoon of 2009 can be attributed to a positive Indian Ocean Dipole event seen in the prediction from July onwards, which was not present in the observations. This suggests that the error could be reduced by improving the ocean model over the equatorial Indian Ocean.

Relevance: 20.00%

Abstract:

We address the problem of computing the level-crossings of an analog signal from samples measured on a uniform grid. This problem is important, for example, in multilevel analog-to-digital (A/D) converters. The first operation in such sampling modalities is a comparator, which gives rise to a bilevel waveform. Since bilevel signals are not bandlimited, measuring the level-crossing times exactly becomes impractical within the conventional framework of Shannon sampling. In this paper, we propose a novel sub-Nyquist sampling technique for making measurements on a uniform grid and thereby exactly computing the level-crossing times from those samples. The computational complexity of the technique is low, comprising simple arithmetic operations. We also present a finite-rate-of-innovation sampling perspective of the proposed approach, show how exponential splines fit naturally into the proposed sampling framework, and discuss some concrete practical applications of the technique.
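As a simple baseline for the problem statement (deliberately not the paper's finite-rate-of-innovation technique, which recovers the crossings exactly), one can estimate level-crossing times by linear interpolation between the two uniform samples that bracket the level:

```python
import math

def level_crossings(samples, T, level):
    """Approximate times where the signal crosses `level` (samples every T s)."""
    times = []
    for n in range(len(samples) - 1):
        a, b = samples[n] - level, samples[n + 1] - level
        if a == 0.0:
            times.append(n * T)              # sample sits exactly on the level
        elif a * b < 0:                      # sign change brackets a crossing
            times.append((n + a / (a - b)) * T)
    return times

# sin(2*pi*t) crosses the level 0.5 at t = 1/12 and t = 5/12 within [0, 0.6):
T = 0.01
samples = [math.sin(2 * math.pi * n * T) for n in range(60)]
print(level_crossings(samples, T, 0.5))  # ~[0.0834, 0.4166]
```

Linear interpolation leaves a model error that grows with signal curvature between samples; eliminating that error from uniform measurements is precisely what the proposed technique achieves.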

Relevance: 20.00%

Abstract:

Floquet analysis is widely used for small-order systems (say, order M < 100) to find trim results of control inputs and periodic responses, and stability results of damping levels and frequencies. Presently, however, it is practical neither for design applications nor for comprehensive analysis models that lead to large systems (M > 100); the run time on a sequential computer is simply prohibitive. Accordingly, a massively parallel Floquet analysis is developed with emphasis on large systems, and it is implemented on two SIMD (single-instruction, multiple-data) computers with 4096 and 8192 processors. The focus of this development is a parallel shooting method with damped Newton iteration to generate trim results; the Floquet transition matrix (FTM) comes out as a byproduct. The eigenvalues and eigenvectors of the FTM are computed by a parallel QR method, and thereby stability results are generated. For illustration, flap and flap-lag stability of isolated rotors are treated by the parallel analysis and by a corresponding sequential analysis with the conventional shooting and QR methods; linear quasisteady airfoil aerodynamics and a finite-state three-dimensional wake model are used. Computational reliability is quantified by the condition numbers of the Jacobian matrices in the Newton iteration, the condition numbers of the eigenvalues, and the residual errors of the eigenpairs; reliability figures are comparable in both the parallel and sequential analyses. Compared to the sequential analysis, the parallel analysis reduces the run time of large systems dramatically, and the reduction increases with increasing system order; this finding offers considerable promise for design and comprehensive-analysis applications.
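The sequential baseline can be sketched in a few lines: integrate x' = A(t) x over one period from identity initial conditions to obtain the Floquet transition matrix, then read stability from its eigenvalues (the Floquet multipliers). The Mathieu-type system below, with parameters chosen in a stable region, is a stand-in for the paper's rotor models.

```python
import cmath, math

def A(t, delta=0.7, eps=0.2):
    """Mathieu equation x'' + (delta + eps*cos t) x = 0 as a first-order system."""
    return [[0.0, 1.0], [-(delta + eps * math.cos(t)), 0.0]]

def rk4_step(y, t, h):
    def f(t, y):
        M = A(t)
        return [M[0][0] * y[0] + M[0][1] * y[1],
                M[1][0] * y[0] + M[1][1] * y[1]]
    k1 = f(t, y)
    k2 = f(t + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
    k3 = f(t + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
    k4 = f(t + h, [y[i] + h * k3[i] for i in range(2)])
    return [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

def floquet_transition_matrix(period=2 * math.pi, steps=2000):
    cols = []
    for e in ([1.0, 0.0], [0.0, 1.0]):       # propagate each unit vector
        y, h = list(e), period / steps
        for n in range(steps):
            y = rk4_step(y, n * h, h)
        cols.append(y)
    return [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]

F = floquet_transition_matrix()
tr = F[0][0] + F[1][1]
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
mults = [(tr + s * cmath.sqrt(tr * tr - 4 * det)) / 2 for s in (1, -1)]
print([abs(m) for m in mults])  # each ~1.0 here: neutrally stable
print(det)                      # ~1.0 (Liouville: A(t) is traceless)
```

The parallel shooting method of the paper distributes exactly this kind of column-wise propagation (and the Newton trim iteration around it) across processors, which is why the speedup grows with system order.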