954 results for Test Case Generator
Abstract:
Domestic and international developments involving tensions affecting religious or belief communities have increasingly occupied the international law agenda. Those who generate, and thus shape, international law jurisprudence are seeking answers to the questions these developments raise. The need to reconceptualize the right to freedom of religion or belief therefore continues, as claims to the right challenge the boundaries of religious freedom in national and international law. This thesis aims to contribute to that process of “re-conceptualization” by exploring the notion of the collective dimension of freedom of religion or belief, with a view to advancing the protection of the right. Turkey provides a useful test case: domestic legislation can be assessed against international standards, while at the same time lessons can be drawn for improving the standard of international review of the protection of the collective dimension of freedom of religion or belief. The right to freedom of religion or belief, as enshrined in international human rights documents, is unique in its formulation in that it provides protection for the enjoyment of the rights “in community with others”.1 It cannot be realized in isolation; it crosses categories of human rights, with aspects that are individual, aspects that can be effectively realized only in an organized community of individuals, and aspects that belong to the field of economic, social and cultural rights, such as those related to religious or moral education. This study centers on two primary questions: first, what is the scope and nature of protection afforded to the collective dimension of freedom of religion or belief in international law, and, second, how does the protection of the collective dimension of freedom of religion or belief in Turkey compare and contrast with international standards? Section I explores the notion of the collective dimension of freedom of religion or belief and the scope of its protection in international law, with particular reference to the right of religious/belief communities to acquire legal personality and to their autonomy. In Section II, the case study on Turkey constitutes the applied part of the thesis; here, the protection of the collective dimension is assessed with a view to evaluating the compliance of Turkish legislation and practice with international norms, as well as identifying how the standard of international review of the collective dimension of freedom of religion or belief can be improved.
Abstract:
Accurate knowledge of species’ habitat associations is important for conservation planning and policy. Assessing habitat associations is a vital precursor to selecting appropriate indicator species for prioritising sites for conservation or assessing trends in habitat quality. However, much existing knowledge is based on qualitative expert opinion or local-scale studies, and may not remain accurate across different spatial scales or geographic locations. Data from biological recording schemes have the potential to provide objective measures of habitat association, with the ability to account for spatial variation. We used data on 50 British butterfly species from two different butterfly recording schemes as a test case to investigate the correspondence of data-derived measures of habitat association with expert opinion. One scheme collected large quantities of occurrence data (c. 3 million records) and the other, lower quantities of standardised monitoring data (c. 1400 sites). We used general linear mixed effects models to derive scores of association with broad-leaf woodland for both datasets and compared them with scores canvassed from experts. Scores derived from occurrence and abundance data both showed strongly positive correlations with expert opinion; however, only the scores derived from occurrence data fell within the range of correlations between experts. Data-derived scores showed regional spatial variation in the strength of butterfly associations with broad-leaf woodland, with a significant latitudinal trend in 26% of species. Sub-sampling of the data suggested that a mean sample size of around 5000 occurrence records per species is needed for an accurate estimate of habitat association, although habitat specialists are likely to be readily detected using several hundred records. Occurrence data from recording schemes can thus provide easily obtained, objective, quantitative measures of habitat association.
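A minimal sketch of the kind of analysis described above, with invented data: a per-species association score is derived from occurrence records and woodland cover and then correlated with expert scores. The study itself used general linear mixed effects models; a plain per-species logistic regression and a Spearman correlation are substituted here purely for illustration, and all variable names and values are hypothetical.

```python
# Hypothetical sketch only: deriving a woodland-association score per species
# from occurrence records and comparing it to expert opinion.
import numpy as np
import statsmodels.api as sm
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def association_score(woodland_cover, occurrence):
    """Slope of occurrence on broad-leaf woodland cover (larger = stronger association)."""
    X = sm.add_constant(woodland_cover)
    fit = sm.Logit(occurrence, X).fit(disp=0)
    return fit.params[1]

# Fake data for three species over 1000 grid squares (illustration only).
cover = rng.uniform(0, 1, size=1000)
data_scores, expert_scores = [], [2.5, 0.3, -1.0]   # hypothetical expert scores
for true_slope in (3.0, 0.5, -1.5):                 # hypothetical "true" associations
    p = 1 / (1 + np.exp(-(-1.0 + true_slope * cover)))
    occ = rng.binomial(1, p)
    data_scores.append(association_score(cover, occ))

rho, _ = spearmanr(data_scores, expert_scores)
print(f"Spearman correlation with expert opinion: {rho:.2f}")
```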
Abstract:
Mutation testing has been used to assess the quality of test case suites by analyzing their ability to distinguish the artifact under testing from a set of alternative artifacts, the so-called mutants. The mutants are generated from the artifact under testing by applying a set of mutant operators, which produce artifacts with simple syntactic differences. The mutant operators are usually based on typical errors that occur during software development and can be related to a fault model. In this paper, we propose a language, named MuDeL (MUtant DEfinition Language), for the definition of mutant operators, aiming not only at automating mutant generation, but also at providing precision and formality to the operator definitions. The proposed language is based on concepts from the transformational and logical programming paradigms, as well as on context-free grammar theory. The formal framework of denotational semantics is employed to define the semantics of the MuDeL language. We also describe a system, named mudelgen, developed to support the use of this language. An executable representation of the denotational semantics of the language is used to check the correctness of the mudelgen implementation. At the very end, a mutant generator module is produced, which can be incorporated into a specific mutation tool/environment. (C) 2008 Elsevier Ltd. All rights reserved.
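MuDeL itself is a declarative language for defining mutant operators, and its syntax is not reproduced here. As a rough illustration of what a mutant operator does, the following toy sketch applies an "arithmetic operator replacement" operator directly in Python, generating one mutant per occurrence of '+'.

```python
# Illustrative sketch only: a toy arithmetic-operator-replacement mutant
# operator written directly in Python (not in MuDeL).
import ast

class SwapAddSub(ast.NodeTransformer):
    """Replace the target_index-th '+' with '-' to produce one mutant."""
    def __init__(self, target_index):
        self.target_index = target_index
        self.seen = -1
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            self.seen += 1
            if self.seen == self.target_index:
                node.op = ast.Sub()
        return node

def generate_mutants(source):
    # One mutant per '+' occurrence in the original artifact.
    count = sum(isinstance(n.op, ast.Add)
                for n in ast.walk(ast.parse(source)) if isinstance(n, ast.BinOp))
    for i in range(count):
        tree = SwapAddSub(i).visit(ast.parse(source))
        yield ast.unparse(ast.fix_missing_locations(tree))

original = "def price(total, tax):\n    return total + total * tax"
for mutant in generate_mutants(original):
    print(mutant, end="\n---\n")
```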
Abstract:
Testing is an area of system development. Tests can be performed manually or automated. Test activities can be supported by Word documents and Excel sheets for documenting and executing test cases, as well as for follow-up, but there are also test tools designed to support and facilitate the testing process and its activities. This study describes manual testing and identifies the strengths and weaknesses of manual testing with a testing tool called Microsoft Test Manager (MTM) and of manual testing using the test case and test log templates developed by the testers at Sogeti. The results that emerged from the problem and strength analysis, and from the analysis of literature studies and first-hand experience (in creating, documenting and executing test cases), point to the following strengths and weaknesses. A strength of the test tool is that it contains the needed functionality in one place, available when needed without having to open other programs, which saves many activity steps. The main strengths of testing without tool support are that it is easy to learn, gives a good overview, makes it easy to format text as desired, and is flexible to changes during execution of a test case. Weaknesses of testing with tool support include that it is difficult to get a good overview of an entire test case, that it is not possible to format the text in the test steps, and that the test steps cannot be modified during execution. It is also difficult to use some of the test design techniques of TMap, for example a checklist, when using the test tool MTM. The weakness of testing without the support of MTM is that the tester has many more activity steps to perform compared with doing the same activities with MTM's support, and there is more to remember because the documents the tester uses are not directly linked. Altogether, the strengths of the test tool stand out when it comes to supporting the testing process.
Abstract:
This thesis describes the development and testing of a novel interferometer with two spatially separated, phase-correlated X-ray sources for measuring the real part of the complex refractive index of thin, free-standing foils. The X-ray sources are two foils in which relativistic electrons with an energy of 855 MeV generate transition radiation. The interferometer, realized at the Mainz Microtron MAMI, consists of a beryllium foil 10 micrometres thick and a nickel sample foil 2.1 micrometres thick. The spatial interference structures are measured as a function of the foil separation in a position-resolving pn-CCD after Fourier analysis of the radiation pulse by means of a silicon single-crystal spectrometer. The phase of the intensity oscillations contains information about the dispersion that the wave generated in the upstream foil experiences in the downstream sample foil. As a case study, the dispersion of nickel was measured in the region around the K absorption edge at 8333 eV, as well as at photon energies around 9930 eV. Clear interference structures were observed at both energies, although the coherence decreases with increasing foil separation and observation angle owing to angular mixing. Simulation calculations that take these coherence-reducing effects into account were fitted to the measured data. From these fits, the dispersion of the nickel sample was determined at both energies with a relative accuracy of 1.5% or better, in good agreement with the literature.
Abstract:
The research literature on metaheuristics and evolutionary computation has proposed a large number of algorithms for the solution of challenging real-world optimization problems. It is often not possible to study the performance of these algorithms theoretically unless significant assumptions are made about either the algorithm itself or the problems to which it is applied, or both. As a consequence, metaheuristics are typically evaluated empirically using a set of test problems. Unfortunately, relatively little attention has been given to the development of methodologies and tools for the large-scale empirical evaluation and/or comparison of metaheuristics. In this paper, we propose a landscape (test-problem) generator that can be used to generate problem instances for continuous, bound-constrained optimization. The landscape generator is parameterized by a small number of parameters, and the values of these parameters have a direct and intuitive interpretation in terms of the geometric features of the landscapes they produce. An experimental space is defined over algorithms and problems via a tuple of parameters for any specified algorithm and problem class (here determined by the landscape generator). An experiment is then clearly specified as a point in this space, in a way that is analogous to other areas of experimental algorithmics and, more generally, experimental design. Experimental results are presented, demonstrating the use of the landscape generator. In particular, we analyze some simple, continuous estimation of distribution algorithms and gain new insights into their behavior using the landscape generator.
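The paper's actual generator is not reproduced here, but the following sketch illustrates the general idea of a parameterised landscape generator: random, bound-constrained test functions built from Gaussian components, where a handful of parameters (dimension, number of peaks, peak width) control the geometry of the landscape. All names and defaults are invented.

```python
# Minimal, illustrative landscape generator: maximisation landscapes formed by
# the upper envelope of randomly placed Gaussian peaks inside a bounding box.
import numpy as np

def make_landscape(dim=2, n_peaks=5, width=0.1, bounds=(0.0, 1.0), seed=None):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    centres = rng.uniform(lo, hi, size=(n_peaks, dim))   # peak locations
    heights = rng.uniform(0.5, 1.0, size=n_peaks)        # peak heights

    def f(x):
        x = np.asarray(x, dtype=float)
        d2 = np.sum((centres - x) ** 2, axis=1)          # squared distances to peaks
        return float(np.max(heights * np.exp(-d2 / (2 * width ** 2))))

    return f

landscape = make_landscape(dim=2, n_peaks=10, width=0.05, seed=42)
print(landscape([0.3, 0.7]))   # evaluate one candidate point
```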
Abstract:
Data mining projects that use decision trees for classifying test cases usually rank the classified cases by the probabilities provided by the decision trees. A better method is needed for ranking test cases that have already been classified by a binary decision tree, because these probabilities are not always accurate and reliable enough. One reason is that the probability estimates computed by existing decision tree algorithms are the same for all the different cases in a particular leaf of the tree. This is only one reason why the probability estimates given by decision tree algorithms cannot be used as an accurate means of deciding whether a test case has been correctly classified. Isabelle Alvarez has proposed a new method for ranking the test cases classified by a binary decision tree [Alvarez, 2004]. In this paper we give the results of a comparison of ranking methods based on the probability estimate, on the sensitivity of a particular case, or on both.
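The following toy sketch (using scikit-learn and synthetic data, not the datasets or method of the paper) illustrates the limitation described above: a decision tree assigns the same probability estimate to every case falling in the same leaf, so ranking by those probabilities produces many ties.

```python
# Toy illustration of why leaf probabilities give a coarse ranking: every case
# in a leaf shares one probability value.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

proba = tree.predict_proba(X)[:, 1]        # probability estimates used for ranking
leaves = tree.apply(X)                     # leaf index of each case

for leaf in np.unique(leaves):
    scores = np.unique(proba[leaves == leaf])
    print(f"leaf {leaf}: {np.sum(leaves == leaf)} cases, "
          f"{len(scores)} distinct probability value(s)")
```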
Abstract:
Because it carries information about the microstructure and stress behaviour of ferromagnetic steels, magnetic Barkhausen noise (MBN) has been used as a basis for effective non-destructive testing methods, opening new areas in industrial applications. One of the factors that determines the quality and reliability of MBN analysis is the way information is extracted from the signal. Commonly, simple scalar parameters are used to characterize the information content, such as the amplitude maximum and the signal root mean square. This paper presents a new approach based on time-frequency analysis. The experimental test case concerns the use of MBN signals to characterize hardness gradients in an AISI 4140 steel. For that purpose, different time-frequency (TFR) and time-scale (TSR) representations are assessed: the spectrogram, the Wigner-Ville distribution, the Capongram, the ARgram obtained from an autoregressive model, the scalogram, and the Mellingram obtained from a Mellin transform. It is shown that, due to the nonstationary characteristics of MBN, TFRs can provide a rich and new panorama of these signals. Extraction techniques for some time-frequency parameters are used to enable a diagnostic process. Comparison with results obtained by the classical method highlights the improvement in diagnosis provided by the proposed method.
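As a rough illustration of one of the representations listed above, the sketch below computes a spectrogram of a synthetic burst-like signal standing in for an MBN measurement and extracts two simple time-frequency parameters; the sampling rate, signal, and parameters are invented and are not those of the paper.

```python
# Illustrative only: spectrogram of a synthetic nonstationary burst signal and
# two simple time-frequency parameters that could feed a diagnosis step.
import numpy as np
from scipy.signal import spectrogram

fs = 100_000                                   # assumed sampling rate, Hz
t = np.arange(0, 0.1, 1 / fs)
rng = np.random.default_rng(1)
envelope = np.exp(-((t - 0.05) / 0.01) ** 2)   # burst-like activity envelope
signal = envelope * rng.normal(size=t.size)

f, tau, Sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=192)

total = Sxx.sum()
mean_freq = (f[:, None] * Sxx).sum() / total   # spectral centroid
mean_time = (tau[None, :] * Sxx).sum() / total # temporal centroid
print(f"spectral centroid: {mean_freq:.0f} Hz, temporal centroid: {mean_time*1e3:.1f} ms")
```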
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug, but due to state explosion their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as the functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although they act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. To this end, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we developed a second tool to generate functional coverage models that fit the PD-based input space exactly. Together, the input stimulus and coverage model enhancements resulted in a notable increase in testbench efficiency compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
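The PD-based tools described above are not reproduced here; the following generic sketch only illustrates the underlying idea of removing invalid scenarios from the input space before constrained-random stimulus generation, with a coverage model restricted to the remaining legal space. Parameter names and the constraint are invented.

```python
# Generic illustration: constrained-random stimulus generation over a pruned
# input space, with coverage tracked only over legal scenarios.
import itertools
import random

domains = {
    "burst_len": [1, 4, 8, 16],
    "addr_mode": ["aligned", "unaligned"],
    "cacheable": [True, False],
}

def is_valid(stim):
    # Example constraint: unaligned bursts longer than 4 are not a legal scenario.
    return not (stim["addr_mode"] == "unaligned" and stim["burst_len"] > 4)

legal_space = [dict(zip(domains, combo))
               for combo in itertools.product(*domains.values())
               if is_valid(dict(zip(domains, combo)))]

covered = set()
random.seed(0)
while len(covered) < len(legal_space):          # drive until functional coverage closes
    stim = random.choice(legal_space)           # constrained-random pick
    covered.add(tuple(sorted(stim.items())))
print(f"{len(covered)} / {len(legal_space)} legal scenarios covered")
```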
Abstract:
Smoothing the potential energy surface for structure optimization is a general and commonly applied strategy. We propose a combination of soft-core potential energy functions and a variation of the diffusion equation method to smooth potential energy surfaces, which is applicable to complex systems such as protein structures. The performance of the method was demonstrated by comparison with simulated annealing, using the refinement of the undecapeptide Cyclosporin A as a test case. Simulations were repeated many times using different initial conditions and structures, since the methods are heuristic and results are only meaningful in a statistical sense.
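As a hedged illustration of the soft-core idea (one generic functional form, not necessarily the one used in the paper), the sketch below compares a Lennard-Jones-like pair potential with a soft-core variant in which a softness parameter removes the singularity at small distances and thereby smooths the energy surface.

```python
# Illustrative soft-core pair potential: delta > 0 caps the repulsion at short
# range, smoothing the surface; delta = 0 recovers the ordinary potential.
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    s6 = (sigma / r) ** 6
    return 4 * eps * (s6 ** 2 - s6)

def soft_core_lj(r, eps=1.0, sigma=1.0, delta=0.0):
    s6 = (sigma ** 2 / (r ** 2 + delta ** 2)) ** 3
    return 4 * eps * (s6 ** 2 - s6)

r = np.linspace(0.8, 3.0, 5)
print("LJ        ", np.round(lj(r), 3))
for delta in (0.3, 0.6, 1.0):                  # larger delta = smoother surface
    print(f"soft {delta}", np.round(soft_core_lj(r, delta=delta), 3))
```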
Abstract:
Quantitation of progesterone (P4) in biological fluids is often performed by radioimmunoassay (RIA), whereas liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) has been used much less often. Due to its autoconfirmatory nature, LC-MS/MS greatly minimizes false positives and interference. Herein we report, and compare with RIA, an optimized LC-MS/MS method for rapid, efficient, and cost-effective quantitation of P4 in cattle plasma with no sample derivatization. As a test case, the quantitation of plasma P4 released over 192 h from three nonbiodegradable, commercial, intravaginal P4-releasing devices (IPRD) in six ovariectomized cows was compared in a pairwise study. Both techniques showed similar P4 kinetics (P > 0.05), whereas P4 concentrations determined by RIA were consistently higher than those determined by LC-MS/MS (P < 0.05) due to interference and matrix effects. The LC-MS/MS method was validated according to the recommended analytical standards and displayed P4 limits of detection (LOD) and quantitation (LOQ) of 0.08 and 0.25 ng/mL, respectively. The highly selective LC-MS/MS method proposed herein for P4 quantitation eliminates the risks associated with handling radioactive material; it also requires no sample derivatization, which is a common requirement for LC-MS/MS quantitation of steroid hormones. Its application to multisteroid assays is also viable, and it is envisaged that it may provide a gold standard technique for hormone quantitation in animal reproductive science studies. (C) 2011 Elsevier Inc. All rights reserved.
Abstract:
The step size determines the accuracy of a discrete element simulation. Because the position and velocity updating calculation uses a pre-calculated table, step size control cannot rely on the integration formulas. A step size control scheme for use with the table-driven velocity and position calculation uses the difference between the result of one big step and that of two small steps. This variable time step method automatically chooses a suitable time step size for each particle at each step according to the conditions. Simulation using the fixed time step method is compared with simulation using the variable time step method. The difference in computation time for the same accuracy using a variable step size (compared to a fixed step) depends on the particular problem. For a simple test case the times are roughly similar; however, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
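A minimal sketch of the step-doubling idea described above: take one step of size h and two steps of size h/2, use their difference as an error estimate, and shrink or grow the step accordingly. A simple explicit Euler integrator on a toy equation stands in for the table-driven position and velocity update of the actual simulation.

```python
# Illustrative step-doubling error control with explicit Euler on a toy ODE.
def euler_step(f, t, y, h):
    return y + h * f(t, y)

def adaptive_step(f, t, y, h, tol=1e-6):
    big = euler_step(f, t, y, h)                 # one big step
    half = euler_step(f, t, y, h / 2)            # two small steps
    small = euler_step(f, t + h / 2, half, h / 2)
    err = abs(small - big)                       # difference = error estimate
    if err > tol:                                # too inaccurate: halve and retry
        return adaptive_step(f, t, y, h / 2, tol)
    new_h = h * 2 if err < tol / 4 else h        # simple growth rule for next step
    return t + h, small, new_h

f = lambda t, y: -5.0 * y                        # toy decay problem
t, y, h = 0.0, 1.0, 0.1
while t < 1.0:
    t, y, h = adaptive_step(f, t, y, h)
print(t, y, h)
```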
Abstract:
A new method is presented to determine an accurate eigendecomposition of difficult low-temperature unimolecular master equation problems. Based on a generalisation of the Nesbet method, the new method is capable of achieving complete spectral resolution of the master equation matrix with relative accuracy in the eigenvectors. The method is applied to a test case: the decomposition of ethane at 300 K from a microcanonical initial population, with energy transfer modelled by both Ergodic Collision Theory and the exponential-down model. It is demonstrated that quadruple precision (16-byte) arithmetic is required irrespective of the eigensolution method used. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
With the advent of object-oriented languages and the portability of Java, the development and use of class libraries has become widespread. Effective class reuse depends on class reliability, which in turn depends on thorough testing. This paper describes a class testing approach based on modeling each test case with a tuple and then generating large numbers of tuples to thoroughly cover an input space with many interesting combinations of values. The testing approach is supported by the Roast framework for the testing of Java classes. Roast provides automated tuple generation based on boundary values, unit operations that support driver standardization, and test case templates used for code generation. Roast produces thorough, compact test drivers with low development and maintenance cost. The framework and tool support are illustrated on a number of non-trivial classes, including a graphical user interface policy manager. Quantitative results are presented to substantiate the practicality and effectiveness of the approach. Copyright (C) 2002 John Wiley & Sons, Ltd.
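Roast is a Java framework and its real API is not reproduced here; the following Python sketch only illustrates the tuple-based idea, generating test case tuples as combinations of per-parameter boundary values for a hypothetical bounded-stack class.

```python
# Illustrative tuple generation from per-parameter boundary values; the class
# under test and its parameter ranges are hypothetical.
import itertools

def boundary_values(lo, hi):
    """Classic boundary set for an integer range [lo, hi]."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

# Hypothetical class under test: a bounded stack with capacity 1..16.
params = {
    "capacity": boundary_values(1, 16),
    "pushes":   boundary_values(0, 16),
}

test_tuples = list(itertools.product(*params.values()))
print(f"{len(test_tuples)} generated test cases")

for capacity, pushes in test_tuples[:3]:        # a driver would iterate all tuples
    print(f"capacity={capacity}, pushes={pushes}")
```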
Abstract:
Marking its fiftieth anniversary in late 2001, the ANZUS alliance remains Australia's primary security relationship and one of the United States' most important defence arrangements in the Asia-Pacific region. It is argued here that ANZUS has defied many common suppositions advanced by international relations theorists about how alliances work. It thus represents an important refutation of arguments that alliances are short-term instruments of mere policy expediency and are largely interest-dependent. Cultural and normative factors are powerful, if often underrated, determinants of ANZUS's perpetuation. ANZUS may thus constitute an important test case for expanding our understanding of alliance politics beyond the usual preconditions and prerogatives normally associated with such a relationship.