875 results for requirement-based testing
Abstract:
Real-time power system security assessment is one of the fundamental modules of electricity markets. Typically, when a contingency occurs, the security assessment and enhancement module is required to act within about 20 minutes to meet the real-time requirement. The recent California blackout again highlighted the importance of system security. This paper proposes an approach to power system security assessment and enhancement based on information provided from a pre-defined system parameter space. The proposed scheme opens up an efficient way to perform real-time security assessment and enhancement in a competitive electricity market for the single-contingency case.
Abstract:
Previous work on generating state machines for the purpose of class testing has not been formally based. There has also been work on deriving state machines from formal specifications for testing non-object-oriented software. We build on this work by presenting a method for deriving a state machine for testing purposes from a formal specification of the class under test. We also show how the resulting state machine can be used as the basis for a test suite developed and executed using an existing framework for class testing. To derive the state machine, we identify the states and possible interactions of the operations of the class under test. The Test Template Framework is used to formally derive the states from the Object-Z specification of the class under test. The transitions of the finite state machine are calculated from the derived states and the class's operations. The formally derived finite state machine is transformed to a ClassBench testgraph, which is used as input to the ClassBench framework to test a C++ implementation of the class. The method is illustrated using a simple bounded queue example.
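The derive-then-execute idea can be sketched in miniature. The following is a hypothetical Python analogue (the paper itself works from an Object-Z specification, tests a C++ class, and uses the ClassBench testgraph tooling): the bounded queue's concrete states are abstracted into EMPTY/PARTIAL/FULL, and a testgraph path of transitions is executed against the implementation.

```python
# Hypothetical miniature of FSM-based class testing for a bounded queue.
# State names and the testgraph path are illustrative, not from the paper.

class BoundedQueue:
    """Class under test: a FIFO queue holding at most `capacity` items."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def enqueue(self, x):
        if len(self.items) >= self.capacity:
            raise OverflowError("queue full")
        self.items.append(x)

    def dequeue(self):
        if not self.items:
            raise IndexError("queue empty")
        return self.items.pop(0)

def state_of(q):
    """Abstract the concrete object into one of the derived states."""
    if not q.items:
        return "EMPTY"
    if len(q.items) == q.capacity:
        return "FULL"
    return "PARTIAL"

# Testgraph: a path of (operation, argument, expected state) transitions.
testgraph = [
    ("enqueue", 1, "PARTIAL"),
    ("enqueue", 2, "FULL"),
    ("dequeue", None, "PARTIAL"),
    ("dequeue", None, "EMPTY"),
]

def run_testgraph(q, path):
    """Drive the object along the path, checking the state after each step."""
    for op, arg, expected in path:
        args = () if arg is None else (arg,)
        getattr(q, op)(*args)
        assert state_of(q) == expected, (op, expected, state_of(q))

run_testgraph(BoundedQueue(capacity=2), testgraph)
print("all transitions verified")
```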
Abstract:
This field study was a combined chemical and biological investigation of the relative effects of using dispersants to treat oil spills impacting mangrove habitats. The aim of the chemistry was to determine whether dispersant affected the short- or long-term composition of a medium-range crude oil (Gippsland) stranded in a tropical mangrove environment in Queensland, Australia. Sediment cores from three replicate plots of each treatment (oil only and oil plus dispersant) were analyzed for total hydrocarbons and for individual molecular markers (alkanes, aromatics, triterpanes, and steranes). Sediments were collected at 2 days, then 1, 7, 13 and 22 months post-spill. Over this time, oil in the six treated plots decreased exponentially from 36.6 ± 16.5 to 1.2 ± 0.8 mg/g dry wt. There was no statistical difference in initial oil concentrations, penetration of oil to depth, or rates of oil dissipation between oiled and dispersed-oil plots. At 13 months, alkanes were >50% degraded and aromatics ~30% degraded, based upon ratios of labile to resistant markers. However, there was no change in the triterpane or sterane biomarker signatures of the retained oil, which is of general forensic interest for pollution events. The predominant removal processes were evaporation (≤27%) and dissolution (≥56%), with a lag phase of 1 month before the start of significant microbial degradation (≤7%). The most resistant fraction of the oil that remained after 7 months (the higher-molecular-weight hydrocarbons) correlated with the initial total organic carbon content of the soil. The removal rate in the Queensland mangroves was significantly faster than that observed in the Caribbean and was related to tidal flushing. (C) 1999 Elsevier Science Ltd. All rights reserved.
Abstract:
Computational models complement laboratory experimentation for efficient identification of MHC-binding peptides and T-cell epitopes. Methods for prediction of MHC-binding peptides include binding motifs, quantitative matrices, artificial neural networks, hidden Markov models, and molecular modelling. Models derived by these methods have been successfully used for prediction of T-cell epitopes in cancer, autoimmunity, infectious disease, and allergy. For maximum benefit, the use of computer models must be treated as experiments analogous to standard laboratory procedures and performed according to strict standards. This requires careful selection of data for model building, and adequate testing and validation. A range of web-based databases and MHC-binding prediction programs are available. Although some available prediction programs for particular MHC alleles have reasonable accuracy, there is no guarantee that all models produce good quality predictions. In this article, we present and discuss a framework for modelling, testing, and applications of computational methods used in predictions of T-cell epitopes. (C) 2004 Elsevier Inc. All rights reserved.
Abstract:
The identification, modeling, and analysis of interactions between nodes of neural systems in the human brain have become the aim of interest of many studies in neuroscience. The complex neural network structure and its correlations with brain functions have played a role in all areas of neuroscience, including the comprehension of cognitive and emotional processing. Indeed, understanding how information is stored, retrieved, processed, and transmitted is one of the ultimate challenges in brain research. In this context, in functional neuroimaging, connectivity analysis is a major tool for the exploration and characterization of the information flow between specialized brain regions. In most functional magnetic resonance imaging (fMRI) studies, connectivity analysis is carried out by first selecting regions of interest (ROI) and then calculating an average BOLD time series (across the voxels in each cluster). Some studies have shown that the average may not be a good choice and have suggested, as an alternative, the use of principal component analysis (PCA) to extract the principal eigen-time series from the ROI(s). In this paper, we introduce a novel approach called cluster Granger analysis (CGA) to study connectivity between ROIs. The main aim of this method was to employ multiple eigen-time series in each ROI to avoid temporal information loss during identification of Granger causality. Such information loss is inherent in averaging (e.g., to yield a single "representative" time series per ROI). This, in turn, may lead to a lack of power in detecting connections. The proposed approach is based on multivariate statistical analysis and integrates PCA and partial canonical correlation in a framework of Granger causality for clusters (sets) of time series. We also describe an algorithm for statistical significance testing based on bootstrapping.
By using Monte Carlo simulations, we show that the proposed approach outperforms conventional Granger causality analysis (i.e., using representative time series extracted by signal averaging or first principal components estimation from ROIs). The usefulness of the CGA approach in real fMRI data is illustrated in an experiment using human faces expressing emotions. With this data set, the proposed approach suggested the presence of significantly more connections between the ROIs than were detected using a single representative time series in each ROI. (c) 2010 Elsevier Inc. All rights reserved.
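The PCA step that CGA builds on can be sketched with plain NumPy. The example below is illustrative only (synthetic data, assumed shapes) and omits the partial canonical correlation and bootstrap stages of the full method:

```python
# Illustrative sketch (assumed details): extracting multiple eigen-time
# series from an ROI with PCA, the step CGA uses in place of a single
# averaged BOLD signal.
import numpy as np

rng = np.random.default_rng(0)
T, V = 200, 50                      # time points, voxels in the ROI
roi = rng.standard_normal((T, V))   # stand-in for a BOLD voxel matrix

def eigen_time_series(X, k):
    """Return the first k principal-component time series of X (T x V)."""
    Xc = X - X.mean(axis=0)                       # center each voxel series
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :k] * s[:k]                       # shape (T, k)

avg = roi.mean(axis=1)              # conventional single representative
pcs = eigen_time_series(roi, k=3)   # CGA keeps several components instead

print(pcs.shape)                    # (200, 3)
```

Keeping several orthogonal components per ROI preserves within-cluster temporal variation that a single average (or a single first component) would discard.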
Abstract:
This article examines the efficiency of the National Football League (NFL) betting market. The standard ordinary least squares (OLS) regression methodology is replaced by a probit model. This circumvents potential econometric problems, and allows us to implement more sophisticated betting strategies where bets are placed only when there is a relatively high probability of success. In-sample tests indicate that probit-based betting strategies generate statistically significant profits. Whereas the profitability of a number of these betting strategies is confirmed by out-of-sample testing, there is some inconsistency among the remaining out-of-sample predictions. Our results also suggest that widely documented inefficiencies in this market tend to dissipate over time.
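A probit-based betting strategy of the kind described can be sketched as follows. This is a hedged illustration on synthetic data: the feature, cutoff, and maximum-likelihood fit via SciPy are assumptions, not the paper's estimator or data.

```python
# Sketch of the idea: fit a probit model to binary game outcomes, then
# bet only when the predicted probability of success clears a cutoff.
# Synthetic data; the single feature and the 0.6 cutoff are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.standard_normal(n)])  # intercept + feature
true_beta = np.array([0.1, 0.8])
y = (X @ true_beta + rng.standard_normal(n) > 0).astype(float)  # 1 = win

def neg_loglik(beta):
    """Negative probit log-likelihood."""
    p = np.clip(norm.cdf(X @ beta), 1e-9, 1 - 1e-9)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

beta_hat = minimize(neg_loglik, x0=np.zeros(2)).x
prob = norm.cdf(X @ beta_hat)

cutoff = 0.6                 # bet only on relatively confident predictions
bets = prob > cutoff
hit_rate = y[bets].mean()
print(f"bets placed: {bets.sum()}, hit rate: {hit_rate:.2f}")
```

The selective strategy trades volume for accuracy: restricting bets to high-probability games raises the hit rate relative to betting every game.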
Abstract:
Fogo selvagem (FS) is mediated by pathogenic, predominantly IgG4, anti-desmoglein 1 (Dsg1) autoantibodies and is endemic in Limao Verde, Brazil. IgG and IgG subclass autoantibodies were tested in a sample of 214 FS patients and 261 healthy controls by Dsg1 ELISA. For model selection, the sample was randomly divided into training (50%), validation (25%), and test (25%) sets. Using the training and validation sets, IgG4 was chosen as the best predictor of FS, with index values above 6.43 classified as FS. Using the test set, IgG4 has sensitivity of 92% (95% confidence interval (95% CI): 82-95%), specificity of 97% (95% CI: 89-100%), and area under the curve of 0.97 (95% CI: 0.94-1.00). The IgG4 positive predictive value (PPV) in Limao Verde (3% FS prevalence) was 49%. The sensitivity, specificity, and PPV of IgG anti-Dsg1 were 87, 91, and 23%, respectively. The IgG4-based classifier was validated by testing 11 FS patients before and after clinical disease and 60 Japanese pemphigus foliaceus patients. It classified 21 of 96 normal individuals from a Limao Verde cohort as having FS serology. On the basis of its PPV, half of the 21 individuals may currently have preclinical FS and could develop clinical disease in the future. Identifying individuals during preclinical FS will enhance our ability to identify the etiological agent(s) triggering FS.
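The reported PPV figures follow directly from Bayes' rule applied to the sensitivity, specificity, and 3% prevalence; a short worked check:

```python
# Worked check of the abstract's numbers: positive predictive value at a
# given prevalence from sensitivity and specificity, via Bayes' rule.
def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

print(round(ppv(0.92, 0.97, 0.03), 2))   # IgG4: 0.49, matching the reported 49%
print(round(ppv(0.87, 0.91, 0.03), 2))   # IgG:  0.23, matching the reported 23%
```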
Abstract:
Fuzzy Bayesian tests were performed to evaluate whether the mother's seroprevalence and children's seroconversion to measles vaccine could be considered as "high" or "low". The results of the tests were aggregated into a fuzzy rule-based model structure, which would allow an expert to influence the model results. The linguistic model was developed considering four input variables. As the model output, we obtain the recommended age-specific vaccine coverage. The inputs of the fuzzy rules are fuzzy sets and the outputs are constant functions, performing the simplest Takagi-Sugeno-Kang model. This fuzzy approach is compared to a classical one, where the classical Bayes test was performed. Although the fuzzy and classical performances were similar, the fuzzy approach was more detailed and revealed important differences. In addition to taking into account subjective information in the form of fuzzy hypotheses, it can be intuitively grasped by the decision maker. Finally, we show that the Bayesian test of fuzzy hypotheses is an interesting approach from the theoretical point of view, in the sense that it combines two complementary areas of investigation, normally seen as competitive. (C) 2007 IMACS. Published by Elsevier B.V. All rights reserved.
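A zero-order Takagi-Sugeno-Kang model of the kind described (fuzzy-set antecedents, constant consequents) can be sketched in a few lines. The membership shapes, rule constants, and single input below are assumptions for illustration; the paper's actual rule base uses four inputs and expert-tuned consequents.

```python
# Minimal zero-order Takagi-Sugeno-Kang sketch. One illustrative input
# (seroprevalence in [0, 1]) and two rules with constant outputs.
def tri(x, a, b, c):
    """Triangular membership function: 0 at a and c, peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def tsk(x):
    # Rule 1: IF seroprevalence is LOW  THEN coverage = 0.95
    # Rule 2: IF seroprevalence is HIGH THEN coverage = 0.70
    w_low  = tri(x, -0.5, 0.0, 0.6)   # "low" fuzzy set (assumed shape)
    w_high = tri(x,  0.4, 1.0, 1.5)   # "high" fuzzy set (assumed shape)
    # Weighted average of the constant consequents (assumes x lies in at
    # least one set's support, so the denominator is nonzero).
    num = w_low * 0.95 + w_high * 0.70
    return num / (w_low + w_high)

print(round(tsk(0.2), 3))   # low seroprevalence -> high recommended coverage
```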
Abstract:
This paper describes a practical application of MDA and reverse engineering based on a domain-specific modelling language. A well defined metamodel of a domain-specific language is useful for verification and validation of associated tools. We apply this approach to SIFA, a security analysis tool. SIFA has evolved as requirements have changed, and it has no metamodel. Hence, testing SIFA’s correctness is difficult. We introduce a formal metamodelling approach to develop a well-defined metamodel of the domain. Initially, we develop a domain model in EMF by reverse engineering the SIFA implementation. Then we transform EMF to Object-Z using model transformation. Finally, we complete the Object-Z model by specifying system behavior. The outcome is a well-defined metamodel that precisely describes the domain and the security properties that it analyses. It also provides a reliable basis for testing the current SIFA implementation and forward engineering its successor.
Abstract:
Despite the increasing utilization of all-ceramic crown systems, their mechanical performance relative to that of metal ceramic restorations (MCR) has yet to be determined. This investigation tested the hypothesis that MCR present higher reliability than two Y-TZP all-ceramic crown systems under mouth-motion fatigue conditions. A CAD-based tooth preparation with the average dimensions of a mandibular first molar was used as a master die to fabricate all restorations. One 0.5-mm Pd-Ag and two Y-TZP system cores were veneered with 1.5 mm porcelain. Crowns were cemented onto aged (60 days in water) composite (Z100, 3M/ESPE) reproductions of the die. Mouth-motion fatigue was performed, and use-level probability Weibull curves were determined. Failure modes of all systems included chipping or fracture of the porcelain veneer initiating at the indentation site. Fatigue was an acceleration factor for all-ceramic systems, but not for the MCR system. The latter presented significantly higher reliability under mouth-motion cyclic mechanical testing.
Abstract:
General practitioners wanting to practise evidence-based medicine (EBM) are constrained by time factors and the great diversity of clinical problems they deal with. They need experience in knowing what questions to ask, in locating and evaluating the evidence, and in applying it. Conventional searching for the best evidence can be achieved in daily general practice. Sometimes the search can be performed during the consultation, but more often it can be done later and the patient can return for the result. Case-based journal clubs provide a supportive environment for GPs to work together to find the best evidence at regular meetings. An evidence-based literature search service is being piloted to enhance decision-making for individual patients. A central facility provides the search and interprets the evidence in relation to individual cases. A request form and a results format make the service akin to pathology testing or imaging. Using EBM in general practice appears feasible. Major difficulties still exist before it can be practised by all GPs, but it has the potential to change the way doctors update their knowledge.
Abstract:
A combination of modelling and analysis techniques was used to design a six component force balance. The balance was designed specifically for the measurement of impulsive aerodynamic forces and moments characteristic of hypervelocity shock tunnel testing using the stress wave force measurement technique. Aerodynamic modelling was used to estimate the magnitude and distribution of forces and finite element modelling to determine the mechanical response of proposed balance designs. Simulation of balance performance was based on aerodynamic loads and mechanical responses using convolution techniques. Deconvolution was then used to assess balance performance and to guide further design modifications leading to the final balance design. (C) 2001 Elsevier Science Ltd. All rights reserved.
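The convolution/deconvolution step at the heart of the stress wave force measurement technique can be illustrated numerically. The signals below are synthetic stand-ins (an assumed damped-sinusoid impulse response and a step load), and the regularized FFT inversion is one simple way to deconvolve, not the balance design tools the paper used:

```python
# Rough numerical sketch: balance output = impulse response (*) applied
# load, and deconvolution of the output to recover the load.
import numpy as np

n = 256
t = np.arange(n)
g = np.exp(-t / 30.0) * np.sin(t / 5.0)   # stand-in impulse response
u = np.zeros(n)
u[20:60] = 1.0                            # stand-in aerodynamic load step

y = np.convolve(g, u)                     # simulated balance output

# Naive FFT deconvolution with a small regularizer to avoid division
# by near-zero spectral values (real deconvolution needs more care).
G = np.fft.rfft(g, 2 * n)
Y = np.fft.rfft(y, 2 * n)
u_rec = np.fft.irfft(Y * np.conj(G) / (np.abs(G) ** 2 + 1e-9), 2 * n)[:n]

err = float(np.max(np.abs(u - u_rec)))
print(f"load recovered: {err < 1e-3}")
```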
Abstract:
Bond's method for ball mill scale-up only gives the mill power draw for a given duty. This method is incompatible with computer modelling and simulation techniques. It might not be applicable for the design of fine grinding ball mills and ball mills preceded by autogenous and semi-autogenous grinding mills. Model-based ball mill scale-up methods have not been validated using a wide range of full-scale circuit data. Their accuracy is therefore questionable. Some of these methods also need expensive pilot testing. A new ball mill scale-up procedure is developed which does not have these limitations. This procedure uses data from two laboratory tests to determine the parameters of a ball mill model. A set of scale-up criteria then scales up these parameters. The procedure uses the scaled-up parameters to simulate the steady state performance of full-scale mill circuits. At the end of the simulation, the scale-up procedure gives the size distribution, the volumetric flowrate and the mass flowrate of all the streams in the circuit, and the mill power draw.
Abstract:
The selection, synthesis and chromatographic evaluation of a synthetic affinity adsorbent for human recombinant factor VIIa is described. The requirement for a metal ion-dependent immunoadsorbent step in the purification of the recombinant human clotting factor, FVIIa, has been obviated by using the X-ray crystallographic structure of the complex of tissue factor (TF) and Factor VIIa and has directed our combinatorial approach to select, synthesise and evaluate a rationally-selected affinity adsorbent from a limited library of putative ligands. The selected and optimised ligand comprises a triazine scaffold bis-substituted with 3-aminobenzoic acid and has been shown to bind selectively to FVIIa in a Ca2+-dependent manner. The adsorbent purifies FVIIa to almost identical purity (>99%), yield (99%), activation/degradation profile and impurity content (∼1000 ppm) as the current immunoadsorption process, while displaying a 10-fold higher static capacity and substantially higher reusability and durability. © 2002 Elsevier Science B.V. All rights reserved.
Abstract:
With the advent of object-oriented languages and the portability of Java, the development and use of class libraries has become widespread. Effective class reuse depends on class reliability which in turn depends on thorough testing. This paper describes a class testing approach based on modeling each test case with a tuple and then generating large numbers of tuples to thoroughly cover an input space with many interesting combinations of values. The testing approach is supported by the Roast framework for the testing of Java classes. Roast provides automated tuple generation based on boundary values, unit operations that support driver standardization, and test case templates used for code generation. Roast produces thorough, compact test drivers with low development and maintenance cost. The framework and tool support are illustrated on a number of non-trivial classes, including a graphical user interface policy manager. Quantitative results are presented to substantiate the practicality and effectiveness of the approach. Copyright (C) 2002 John Wiley & Sons, Ltd.
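The core tuple-generation idea is a cross product over boundary values for each parameter. The sketch below is a language-neutral illustration (Roast itself targets Java classes; the method under test and its boundary values here are hypothetical):

```python
# Sketch of boundary-value tuple generation: enumerate interesting
# combinations of parameter values with a cross product. The method
# substring(start, end) on a string of length 5 is a hypothetical target.
from itertools import product

# Boundary values for each parameter: below, at, and just past the limits.
start_vals = [-1, 0, 1, 4, 5]
end_vals = [-1, 0, 5, 6]

test_tuples = list(product(start_vals, end_vals))
print(len(test_tuples))   # 20 test-case tuples covering the combinations
```

Each tuple becomes one test case; a driver then runs the operation on every tuple and checks the outcome (or expected exception) against the specification.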