983 results for test cases generator
Abstract:
We present a method to enhance fault localization for software systems based on a frequent pattern mining algorithm. Our method relies on a large set of test cases for a given set of programs in which faults can be detected. The test executions are recorded as function call trees. Based on test oracles, the tests can be classified into successful and failing tests. A frequent pattern mining algorithm is used to identify frequent subtrees in successful and failing test executions. This information is used to rank functions according to their likelihood of containing a fault. The ranking suggests an order in which to examine the functions during fault analysis. We validate our approach experimentally on a subset of the Siemens benchmark programs.
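To make the ranking step concrete, here is a minimal sketch in Python, assuming a simplified suspiciousness score computed from how often each function appears in failing versus successful executions (a Tarantula-style ratio standing in for the paper's subtree-mining statistics; the function names and data layout are hypothetical):

    # Rank functions by suspiciousness from passing/failing executions.
    # `passing` and `failing` map each test id to the set of functions
    # observed in its recorded call tree.
    def rank_functions(passing, failing):
        funcs = set().union(*passing.values(), *failing.values())
        scores = {}
        for f in funcs:
            fail_ratio = sum(f in s for s in failing.values()) / max(len(failing), 1)
            pass_ratio = sum(f in s for s in passing.values()) / max(len(passing), 1)
            denom = fail_ratio + pass_ratio
            scores[f] = fail_ratio / denom if denom else 0.0
        # Highest score first: the suggested order of examination.
        return sorted(funcs, key=scores.get, reverse=True)

    ranking = rank_functions(
        passing={"t1": {"parse", "eval"}, "t2": {"parse"}},
        failing={"t3": {"parse", "eval", "emit"}},
    )
    print(ranking)  # 'emit' ranks first: it appears only in the failing run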
Abstract:
A distributed Lagrangian moving-mesh finite element method is applied to problems involving changes of phase. The algorithm uses a distributed conservation principle to determine nodal mesh velocities, which are then used to move the nodes. The nodal values are obtained from an ALE (Arbitrary Lagrangian-Eulerian) equation, which represents a generalization of the original algorithm presented in Applied Numerical Mathematics, 54:450--469 (2005). After the details of the generalized algorithm are described, it is validated on two test cases from the original paper and is then applied to one-phase and, for the first time, two-phase Stefan problems in one and two space dimensions, paying particular attention to the implementation of the interface boundary conditions. Results are presented to demonstrate the accuracy and effectiveness of the method, including comparisons against analytical solutions where available.
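For orientation, the ALE relation underlying the nodal update can be written schematically (this is the standard form; the paper's generalization of the 2005 algorithm may differ in detail):

    \left.\frac{Du}{Dt}\right|_{\text{mesh}} = \frac{\partial u}{\partial t} + \dot{\mathbf{x}} \cdot \nabla u,

where \dot{\mathbf{x}} is the nodal mesh velocity determined by the distributed conservation principle, so nodal values follow the moving nodes rather than fixed spatial points.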
Abstract:
Modelling spatial covariance is an essential part of all geostatistical methods. Traditionally, parametric semivariogram models are fit from available data. More recently, it has been suggested to use nonparametric correlograms obtained from spatially complete data fields. Here, both estimation techniques are compared. Nonparametric correlograms are shown to have a substantial negative bias. Nonetheless, when combined with the sample variance of the spatial field under consideration, they yield an estimate of the semivariogram that is unbiased for small lag distances. This justifies the use of this estimation technique in geostatistical applications. Various formulations of geostatistical combination (Kriging) methods are used here for the construction of hourly precipitation grids for Switzerland based on data from a sparse real-time network of raingauges and from a spatially complete radar composite. Two variants of Ordinary Kriging (OK) are used to interpolate the sparse gauge observations. In both OK variants, the radar data are only used to determine the semivariogram model. One variant relies on a traditional parametric semivariogram estimate, whereas the other variant uses the nonparametric correlogram. The variants are tested for three cases and the impact of the semivariogram model on the Kriging prediction is illustrated. For the three test cases, the method using nonparametric correlograms performs equally well or better than the traditional method, and at the same time offers great practical advantages. Furthermore, two variants of Kriging with external drift (KED) are tested, both of which use the radar data both to estimate nonparametric correlograms and as the external drift variable. The first KED variant has been used previously for geostatistical radar-raingauge merging in Catalonia (Spain). The second variant is newly proposed here and is an extension of the first. Both variants are evaluated for the three test cases as well as an extended evaluation period. It is found that both methods yield merged fields of better quality than the original radar field or fields obtained by OK of gauge data. The newly suggested KED formulation is shown to be beneficial, in particular in mountainous regions where the quality of the Swiss radar composite is comparatively low. An analysis of the Kriging variances shows that none of the methods tested here provides a satisfactory uncertainty estimate. A suitable variable transformation is expected to improve this.
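The link exploited here between the two estimators can be stated compactly: for a second-order stationary field with variance \sigma^2 and correlogram \rho(h), the semivariogram is

    \gamma(h) = \sigma^2 \left( 1 - \rho(h) \right),

so a nonparametric correlogram combined with the sample variance yields a semivariogram estimate directly, which is what justifies its use in the Kriging variants described above.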
Abstract:
Data assimilation aims to incorporate measured observations into a dynamical system model in order to produce accurate estimates of all the current (and future) state variables of the system. The optimal estimates minimize a variational principle and can be found using adjoint methods. The model equations are treated as strong constraints on the problem. In reality, the model does not represent the system behaviour exactly and errors arise due to lack of resolution and inaccuracies in physical parameters, boundary conditions and forcing terms. A technique for estimating systematic and time-correlated errors as part of the variational assimilation procedure is described here. The modified method determines a correction term that compensates for model error and leads to improved predictions of the system states. The technique is illustrated in two test cases. Applications to the 1-D nonlinear shallow water equations demonstrate the effectiveness of the new procedure.
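A schematic form of the modified variational principle (a generic weak-constraint formulation of the kind described, not necessarily the paper's exact notation): with model steps x_{i+1} = M_i(x_i) + \eta_i, one minimizes

    J(x_0, \eta) = \tfrac{1}{2}(x_0 - x^b)^\top B^{-1}(x_0 - x^b) + \tfrac{1}{2}\sum_i (y_i - H_i(x_i))^\top R_i^{-1}(y_i - H_i(x_i)) + \tfrac{1}{2}\sum_i \eta_i^\top Q_i^{-1}\eta_i,

so the correction terms \eta_i absorb systematic and time-correlated model error instead of treating the model equations as exact (strong) constraints.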
Abstract:
An important test of the quality of a computational model is its ability to reproduce standard test cases or benchmarks. For steady open-channel flow based on the Saint Venant equations some benchmarks exist for simple geometries from the work of Bresse, Bakhmeteff and Chow but these are tabulated in the form of standard integrals. This paper provides benchmark solutions for a wider range of cases, which may have a nonprismatic cross section, nonuniform bed slope, and transitions between subcritical and supercritical flow. This makes it possible to assess the underlying quality of computational algorithms in more difficult cases, including those with hydraulic jumps. Several new test cases are given in detail and the performance of a commercial steady flow package is evaluated against two of them. The test cases may also be used as benchmarks for both steady flow models and unsteady flow models in the steady limit.
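The benchmarks are solutions of the steady Saint Venant (gradually varied flow) equation, which for a prismatic channel reduces to the standard form

    \frac{dh}{dx} = \frac{S_0 - S_f}{1 - Fr^2},

with bed slope S_0, friction slope S_f and Froude number Fr. The cases in the paper generalize this to nonprismatic sections and nonuniform slopes, and the denominator vanishing at Fr = 1 is what makes transitional and hydraulic-jump cases the demanding ones.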
Abstract:
The arbitrarily structured C-grid, TRiSK (Thuburn, Ringler, Skamarock and Klemp, 2009, 2010), is being used in the "Model for Prediction Across Scales" (MPAS) and is being considered by the UK Met Office for their next dynamical core. However, the hexagonal C-grid supports a branch of spurious Rossby modes which lead to erroneous grid-scale oscillations of potential vorticity (PV). It is shown how these modes can be harmlessly controlled by using upwind-biased interpolation schemes for PV. A number of existing advection schemes for PV are tested, including that used in MPAS, and none are found to give adequate results for all grids and all cases. Therefore a new scheme is proposed: continuous, linear-upwind stabilised transport (CLUST), a blend between centred and linear-upwind differencing with the blend dependent on the flow direction with respect to the cell edge. A diagnostic of grid-scale oscillations is proposed which discriminates between schemes more sharply than potential enstrophy alone; indeed some schemes are found to destroy potential enstrophy while grid-scale oscillations grow. CLUST performs well on hexagonal-icosahedral grids and unrotated skipped latitude-longitude grids of the sphere for various shallow water test cases. Despite the computational modes, the hexagonal-icosahedral grid performs well since these modes are easy and harmless to filter. As a result TRiSK appears to perform better than a spectral shallow water model.
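Schematically, CLUST computes the PV value at a cell edge as a blend (the precise dependence of the blending factor on flow direction is defined in the paper):

    q_f = (1 - b)\, q_f^{\text{centred}} + b\, q_f^{\text{linear-upwind}}, \qquad 0 \le b \le 1,

with b chosen from the angle between the flow and the cell edge, so the scheme stays centred where upwinding is unnecessary and becomes upwind-biased where the spurious PV modes would otherwise grow.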
Abstract:
Quasi-uniform grids of the sphere have become popular recently since they avoid parallel scaling bottlenecks associated with the poles of latitude-longitude grids. However, quasi-uniform grids of the sphere are often non-orthogonal. A version of the C-grid for arbitrary non-orthogonal grids is presented which gives some of the mimetic properties of the orthogonal C-grid. Exact energy conservation is sacrificed for improved accuracy and the resulting scheme numerically conserves energy and potential enstrophy well. The non-orthogonal nature means that the scheme can be used on a cubed sphere. The advantage of the cubed sphere is that it does not admit the computational modes of the hexagonal or triangular C-grids. On various shallow-water test cases, the non-orthogonal scheme on a cubed sphere has accuracy less than or equal to the orthogonal scheme on an orthogonal hexagonal icosahedron. A new diamond grid is presented consisting of quasi-uniform quadrilaterals which is more nearly orthogonal than the equal-angle cubed sphere but with otherwise similar properties. It performs better than the cubed sphere in every way and should be used instead in codes which allow a flexible grid structure.
Abstract:
Steep orography can cause noisy solutions and instability in models of the atmosphere. A new technique for modelling flow over orography is introduced which guarantees curl-free gradients on arbitrary grids, implying that the pressure gradient term is not a spurious source of vorticity. This mimetic property leads to better hydrostatic balance and better energy conservation in test cases using terrain-following grids. Curl-free gradients are achieved by using the covariant components of velocity over orography rather than the usual horizontal and vertical components. In addition, gravity and acoustic waves are treated implicitly without the need for mean and perturbation variables or a hydrostatic reference profile. This enables a straightforward description of the implicit treatment of gravity waves. Results are presented for a resting atmosphere over orography, for which the curl-free pressure gradient formulation proves advantageous. Results for gravity waves over orography are insensitive to the placement of terrain-following layers. The model with implicit gravity waves is stable in strongly stratified conditions, with NΔt up to at least 10 (where N is the Brunt-Väisälä frequency). A warm bubble rising over orography is simulated and the curl-free pressure gradient formulation gives much more accurate results for this test case than a model without this mimetic property.
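The mimetic property in question is the discrete analogue of the vector-calculus identity

    \nabla \times \nabla p = 0,

that is, the discrete curl applied to the discrete pressure gradient vanishes exactly on arbitrary grids. Working with covariant velocity components (components along the coordinate directions of the terrain-following mesh) rather than horizontal and vertical components is what allows this to hold over orography.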
Abstract:
We compare five general circulation models (GCMs) which have been recently used to study hot extrasolar planet atmospheres (BOB, CAM, IGCM, MITgcm, and PEQMOD), under three test cases useful for assessing model convergence and accuracy. Such a broad, detailed intercomparison has not been performed thus far for the study of extrasolar planets. The models considered all solve the traditional primitive equations, but employ different numerical algorithms or grids (e.g., pseudospectral and finite volume, with the latter separately in longitude-latitude and ‘cubed-sphere’ grids). The test cases are chosen to cleanly address specific aspects of the behaviors typically reported in hot extrasolar planet simulations: 1) steady state, 2) nonlinearly evolving baroclinic wave, and 3) response to fast-timescale thermal relaxation. When initialized with a steady jet, all models maintain the steadiness, as they should, except MITgcm on the cubed-sphere grid. Very good agreement is obtained for a baroclinic wave evolving from an initial instability in the pseudospectral models (only). However, exact numerical convergence is still not achieved across the pseudospectral models: amplitudes and phases are observably different. When subject to a typical ‘hot-Jupiter’-like forcing, all five models show quantitatively different behavior, although qualitatively similar, time-variable, quadrupole-dominated flows are produced. Hence, as has been advocated in several past studies, specific quantitative predictions (such as the location of large vortices and hot regions) by GCMs should be viewed with caution. Overall, in the tests considered here, the pseudospectral models in pressure coordinates (PEBOB and PEQMOD) perform the best and MITgcm on the cubed-sphere grid performs the worst.
Abstract:
Increasing efforts exist in integrating different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among various possibilities). The strategy proposed in this article works for both of these scenarios, though the several applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects exactly two of the sub-networks. At each of the M connection points two unknowns are defined: the flow rate and the pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M x 2M non-linear system with an arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system until convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modeling the whole network. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems which range from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computational cost and reliability.
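A minimal sketch of the interface solve in Python, assuming M = 1 coupling point and two hypothetical black-box sub-network residual functions; scipy's matrix-free Broyden solver stands in for the paper's specific variant:

    # Strong coupling of sub-networks: solve the 2M nonlinear interface
    # equations until convergence at each time step.
    import numpy as np
    from scipy.optimize import broyden1

    def residual(u, subnets):
        # u packs the 2M interface unknowns: [P_1..P_M, Q_1..Q_M].
        M = len(u) // 2
        P, Q = u[:M], u[M:]
        # Each sub-network contributes one (non-linear) equation per
        # coupling point, giving 2M equations in total.
        return np.concatenate([net(P, Q) for net in subnets])

    # Hypothetical black-box sub-networks; in practice each call would
    # advance a 1D solver one time step with (P, Q) as boundary data.
    def left_net(P, Q):
        return P - 2.0 * Q - 1.0
    def right_net(P, Q):
        return P + Q - 4.0

    u0 = np.zeros(2)  # initial guess for (P, Q) at the single coupling point
    u = broyden1(lambda v: residual(v, [left_net, right_net]), u0, f_tol=1e-12)
    print(u)  # converged interface state (here P = 3, Q = 1)

Because the interface system is iterated to convergence, the coupled result matches the monolithic solution, which is the strong-coupling property the abstract describes.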
Abstract:
The widespread use of service-oriented architectures (SOAs) and Web services in commercial software requires the adoption of development techniques that ensure the quality of Web services. Testing techniques and tools play a critical role in achieving quality in SOA-based systems. Existing techniques and tools for traditional systems are not appropriate for these new systems, so testing techniques and tools specific to Web services are required. This article presents new testing techniques to automatically generate a set of test cases and data for Web services. The techniques presented here apply data perturbation to Web service messages based on data types, integrity, and consistency. To support these techniques, a tool (GenAutoWS) was developed and applied to real problems.
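As an illustration of the data-perturbation idea, here is a minimal sketch in Python, assuming a flat dict stands in for the Web service message and using hypothetical perturbation rules keyed on data types (the tool's actual rules also cover integrity and consistency constraints):

    # Generate test cases by perturbing one field of a valid message at a time.
    def perturb(value):
        # Yield perturbed variants of a field value based on its type.
        if isinstance(value, bool):  # check bool before int: bool subclasses int
            yield not value
        elif isinstance(value, int):
            yield from (0, -1, value + 1, 2**31 - 1)  # boundary/off-by-one values
        elif isinstance(value, str):
            yield from ("", value * 100, "<script>", None)  # empty, long, injection, null
        else:
            yield None

    def generate_test_cases(message):
        # One test case per (field, perturbation): copy the valid message, break one field.
        for field, value in message.items():
            for bad in perturb(value):
                case = dict(message)
                case[field] = bad
                yield case

    valid = {"customerId": 42, "name": "Alice", "active": True}
    for case in generate_test_cases(valid):
        print(case)  # each case would feed one Web service invocation in the harness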
Abstract:
We design and investigate a sequential discontinuous Galerkin method to approximate two-phase immiscible incompressible flows in heterogeneous porous media with discontinuous capillary pressures. The nonlinear interface conditions are enforced weakly through an adequate design of the penalties on interelement jumps of the pressure and the saturation. An accurate reconstruction of the total velocity is considered in the Raviart-Thomas(-Nédélec) finite element spaces, together with diffusivity-dependent weighted averages to cope with degeneracies in the saturation equation and with media heterogeneities. The proposed method is assessed on one-dimensional test cases exhibiting rough solutions, degeneracies, and capillary barriers. Stable and accurate solutions are obtained without limiters.
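In schematic form, the weak enforcement adds interior-penalty terms on the jumps across each interelement face F (generic notation, not the paper's exact weights):

    \sum_{F} \frac{\sigma_F}{h_F} \int_F [\![p_h]\!]\, [\![q]\!] \, ds,

with analogous terms for the saturation; at a capillary barrier the penalized jump can be measured against the nonlinear interface condition rather than plain continuity, which is one way discontinuous capillary pressures can be accommodated.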
2D QSAR and similarity studies on cruzain inhibitors aimed at improving selectivity over cathepsin L
Abstract:
Hologram quantitative structure-activity relationships (HQSAR) were applied to a data set of 41 cruzain inhibitors. The best HQSAR model (Q² = 0.77; R² = 0.90), employing Surflex-Sim as training and test set generator, was obtained using atoms, bonds, and connections as fragment distinctions and 4-7 as fragment size. This model was then used to predict the potencies of 12 test set compounds, giving a satisfactory predictive R² value of 0.88. The contribution maps obtained from the best HQSAR model are in agreement with the biological activities of the studied compounds. The Trypanosoma cruzi cruzain shares high similarity with the mammalian homolog cathepsin L. The selectivity toward cruzain was checked on a database of 123 compounds, corresponding to the 41 cruzain inhibitors used in the HQSAR model development plus 82 cathepsin L inhibitors. We screened these compounds with ROCS (Rapid Overlay of Chemical Structures), a Gaussian-shape volume overlap filter that can rapidly identify shapes that match the query molecule. Remarkably, ROCS ranked the first 37 hits as being only cruzain inhibitors. In addition, the area under the curve (AUC) obtained with ROCS was 0.96, indicating that the method was very efficient at distinguishing between cruzain and cathepsin L inhibitors.
Abstract:
The client, Senselogic, had noticed an increased demand for an e-commerce system integrated into its product, SiteVision, something that did not previously exist. Senselogic wanted to integrate a third-party system to manage e-commerce. The problem was that there were very many e-commerce solutions to choose from. In order to select the best system it was necessary to evaluate the e-commerce systems and compare them to each other. To identify the elements that have to be included in an e-commerce system, a literature study was conducted. From the parts identified in the literature study, a number of criteria were derived. Those criteria were then supplemented with additional criteria that Senselogic required of an e-commerce system before integrating it with SiteVision. Before the evaluation, a number of test cases were created to test whether the e-commerce systems fulfilled the criteria developed. These test cases were then used in the evaluation of the e-commerce systems, and a test score was recorded for each. The evaluations of the different systems were then compiled and compared to see which system best fulfilled the criteria. One system scored higher than the others in the evaluation and was chosen for integration with SiteVision.
Abstract:
The mobile operating system Android is today a fairly dominant operating system on the mobile market, partly because of its openness and partly because of its wide availability, with both cheap and expensive phones on offer. However, Android has no predefined design pattern, which means that each developer must decide for themselves what to use, sometimes leading to unnecessarily complex application code that is then hard to test and maintain. This work compares two design patterns, Passive Model View Controller (PMVC) and Model View View-Model (MVVM), to see which design pattern yields the least complex code, using metrics computed with the Cyclomatic Complexity Number (CCN). The study follows the Design & Creation methodology and aims to contribute: knowledge about which pattern to choose, and whether CCN can point out which parts of an application will take more or less time to test. During the study we also examined the differences between using the so-called Single Responsibility Principle (SRP) or not, to see whether separated views make any difference to the complexity of the applications. In the end, the study shows that the complexity of small applications is very similar, but that even in small applications differences in code complexity can be observed, and that code complexity at the method level can provide guidelines for test cases.
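For reference, the Cyclomatic Complexity Number used as the metric here is McCabe's standard definition on the control-flow graph:

    M = E - N + 2P,

with E edges, N nodes and P connected components; for a single method this equals the number of decision points plus one, which is one reason method-level CCN is used to estimate test-case effort.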