972 results for Specification searching


Relevance: 20.00%

Publisher:

Abstract:

In this paper we propose a new identification method, based on the residual white noise autoregressive criterion (Pukkila et al., 1990), to select the order of VARMA structures. Results from extensive simulation experiments, based on different model structures with varying numbers of observations and component series, are used to demonstrate the performance of this new procedure. We also use economic and business data to compare the model structures selected by this order selection method with those identified in other published studies.
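The residual-whiteness idea behind the criterion can be sketched as follows (this is an illustration of the general principle, not the paper's exact procedure): after fitting a candidate order, the residuals are tested for remaining autocorrelation with a portmanteau statistic such as Ljung-Box; a large value signals an under-specified order. The lag count and data below are illustrative assumptions.

```python
import numpy as np

def ljung_box_q(residuals, max_lag=10):
    """Ljung-Box portmanteau statistic Q = n(n+2) * sum_k rho_k^2 / (n-k).
    A large Q suggests the residuals are not white noise, i.e. the
    candidate order leaves autocorrelation unexplained."""
    r = np.asarray(residuals, dtype=float)
    n = len(r)
    r = r - r.mean()
    denom = np.sum(r * r)
    q = 0.0
    for k in range(1, max_lag + 1):
        rho_k = np.sum(r[k:] * r[:-k]) / denom  # lag-k autocorrelation
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q

# White residuals give a small Q relative to a chi-square(max_lag) quantile
rng = np.random.default_rng(0)
white = rng.standard_normal(500)
print(ljung_box_q(white, max_lag=10))
```

Under the null of white residuals, Q is approximately chi-square distributed with `max_lag` degrees of freedom, which gives the acceptance threshold for a candidate order.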

Relevance: 20.00%

Publisher:

Abstract:

DNA microarrays are a powerful tool for measuring the levels of a mixed population of nucleic acids simultaneously, which has had a great impact on many aspects of life-sciences research. In order to distinguish nucleic acids with very similar composition by hybridization, it is necessary to design microarray probes with high specificity and sensitivity. Highly specific probes are those with unique DNA sequences, whereas highly sensitive probes are those with a melting temperature within a desired range and no secondary structure. The selection of these probes from a set of functional DNA sequences (exons) constitutes a computationally expensive discrete non-linear search problem. We delegate the search task to a simple yet effective Evolution Strategy algorithm. Computational efficiency is also greatly improved by making use of an available bioinformatics tool.
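The abstract does not give the exact ES variant or fitness function, so the sketch below only illustrates the general idea: a (1+1) Evolution Strategy slides a probe window along an exon to bring its melting temperature (estimated here with the simple Wallace rule) close to a target. All parameters, names and data are illustrative assumptions.

```python
import random

def wallace_tm(seq):
    """Wallace-rule melting temperature: 2*(A+T) + 4*(G+C) degrees C."""
    return 2 * (seq.count('A') + seq.count('T')) + 4 * (seq.count('G') + seq.count('C'))

def fitness(probe, tm_target=60):
    """Lower is better: distance of the probe's Tm from the target."""
    return abs(wallace_tm(probe) - tm_target)

def evolve_probe(exon, probe_len=20, tm_target=60, iters=200, seed=1):
    """(1+1)-Evolution Strategy over the probe's start position."""
    rng = random.Random(seed)
    pos = rng.randrange(len(exon) - probe_len + 1)
    best = exon[pos:pos + probe_len]
    for _ in range(iters):
        # Gaussian mutation of the start position; keep it in bounds
        cand = int(round(pos + rng.gauss(0, 5)))
        cand = max(0, min(len(exon) - probe_len, cand))
        probe = exon[cand:cand + probe_len]
        if fitness(probe, tm_target) <= fitness(best, tm_target):
            pos, best = cand, probe
    return best

r = random.Random(7)
exon = ''.join(r.choice('ACGT') for _ in range(300))
probe = evolve_probe(exon)
print(probe, wallace_tm(probe))
```

A real probe-design fitness would also penalize non-unique sequences and secondary structure, as the abstract describes; those terms are omitted here for brevity.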

Relevance: 20.00%

Publisher:

Abstract:

Background and Aims: The morphogenesis and architecture of the rice plant, Oryza sativa, are critical factors in the yield equation, but they are not well studied because of the lack of appropriate tools for 3D measurement. The architecture of rice plants is characterized by a large number of tillers and leaves. The aims of this study were to specify rice plant architecture and to find appropriate functions to represent its 3D growth across all growth stages.

Methods: A japonica-type rice, 'Namaga', was grown in pots under outdoor conditions. A 3D digitizer was used to measure the rice plant structure at intervals from the young seedling stage to maturity. The L-system formalism was applied to create '3D virtual rice' plants, incorporating models of phenological development and leaf emergence period as a function of temperature and photoperiod, which were used to determine the timing of tiller emergence.

Key Results: The relationships between nodal positions and leaf lengths, leaf angles and tiller angles were analysed and used to determine growth functions for the models. The '3D virtual rice' reproduces the structural development of isolated plants and provides a good estimate of the tillering process and of the accumulation of leaves.

Conclusions: The results indicate that the '3D virtual rice' has the potential to demonstrate differences in structure and development between cultivars and under different environmental conditions. Future work, necessary to reflect both cultivar and environmental effects on model performance and to link with physiological models, is proposed in the discussion.
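The L-system formalism underlying the '3D virtual rice' model can be illustrated with a toy, non-parametric rewriting system (the actual model uses parametric rules driven by temperature and photoperiod; the rules below are invented for illustration):

```python
def lsystem(axiom, rules, n):
    """Apply L-system production rules to every symbol in parallel, n times."""
    s = axiom
    for _ in range(n):
        s = ''.join(rules.get(c, c) for c in s)
    return s

# Toy rules: a culm apex A produces a leaf L, a bracketed tiller bud [T]
# and a new apex; a bud T grows into a leaf and its own apex.
rules = {'A': 'L[T]A', 'T': 'LA'}
print(lsystem('A', rules, 3))  # → L[LL[T]A]L[LA]L[T]A
```

Each rewriting step adds a leaf and a tiller bud at the main apex while existing buds develop into their own axes, mimicking how tillers accumulate over successive phyllochrons.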

Relevance: 20.00%

Publisher:

Abstract:

Workflow systems have traditionally focused on so-called production processes, which are characterized by pre-definition, high volume, and repetitiveness. Recently, the deployment of workflow systems in non-traditional domains such as collaborative applications, e-learning and cross-organizational process integration has put forth new requirements for flexible and dynamic specification. However, this flexibility cannot be offered at the expense of control, a critical requirement of business processes. In this paper, we present a foundation set of constraints for flexible workflow specification. These constraints are intended to provide an appropriate balance between flexibility and control. The constraint specification framework is based on the concept of pockets of flexibility, which allows ad hoc changes and/or building of workflows for highly flexible processes. Basically, our approach is to provide the ability to execute on the basis of a partially specified model, where the full specification of the model is made at runtime and may be unique to each instance. The verification of dynamically built models is essential. Whereas ensuring that the model conforms to specified constraints does not pose great difficulty, ensuring that the constraint set itself does not carry conflicts and redundancy is an interesting and challenging problem. In this paper, we provide a discussion of both the static and dynamic verification aspects. We also briefly present Chameleon, a prototype workflow engine that implements these concepts. (c) 2004 Elsevier Ltd. All rights reserved.
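As a rough illustration of the static-verification problem (not the paper's algorithm), ordering constraints can be modelled as a precedence graph, where a conflicting constraint set shows up as a cycle; the activity names below are invented:

```python
def has_conflict(constraints):
    """Detect conflicting ordering constraints ('a before b') by
    looking for a cycle in the induced precedence graph (DFS)."""
    graph = {}
    for a, b in constraints:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set())
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {v: WHITE for v in graph}
    def dfs(v):
        colour[v] = GREY
        for w in graph[v]:
            # A grey successor means we closed a precedence loop
            if colour[w] == GREY or (colour[w] == WHITE and dfs(w)):
                return True
        colour[v] = BLACK
        return False
    return any(colour[v] == WHITE and dfs(v) for v in graph)

print(has_conflict([('review', 'approve'), ('approve', 'archive')]))  # False
print(has_conflict([('a', 'b'), ('b', 'c'), ('c', 'a')]))             # True
```

Redundancy (e.g. a constraint implied transitively by others) would be detected similarly, by checking each edge against reachability in the remaining graph.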

Relevance: 20.00%

Publisher:

Abstract:

We describe the creation process of the Minimum Information Specification for In Situ Hybridization and Immunohistochemistry Experiments (MISFISHIE). Modeled after the existing minimum information specification for microarray data, we created a new specification for gene expression localization experiments, initially to facilitate data sharing within a consortium. After successful use within the consortium, the specification was circulated to members of the wider biomedical research community for comment and refinement. After a period in which many new requirements were suggested, a final phase was needed to exclude those deemed inappropriate as minimum requirements for all experiments. The full specification will soon be published as a version 1.0 proposal to the community, upon which a fuller discussion must take place so that the final specification can be achieved with the involvement of the whole community. This paper is part of the special issue of OMICS on data standards.

Relevance: 20.00%

Publisher:

Abstract:

This paper deals with product performance and specification in new product development. The many different definitions of performance and specification in the literature are reviewed, and a new classification scheme for product performance is proposed. The link between performance and specification is discussed in detail using a new model of the new product development process. The new model involves two stages, each containing three main phases, and is useful for making decisions with regard to product performance and specification.

Relevance: 20.00%

Publisher:

Abstract:

Markov chain Monte Carlo (MCMC) is a methodology that is gaining widespread use in the phylogenetics community and is central to phylogenetic software packages such as MrBayes. An important issue for users of MCMC methods is how to select appropriate values for adjustable parameters such as the length of the Markov chain or chains, the sampling density, the proposal mechanism, and, if Metropolis-coupled MCMC is being used, the number of heated chains and their temperatures. Although some parameter settings have been examined in detail in the literature, others are frequently chosen with more regard to computational time or personal experience with other data sets. Such choices may lead to inadequate sampling of tree space or an inefficient use of computational resources. We performed a detailed study of convergence and mixing for 70 randomly selected, putatively orthologous protein sets with different sizes and taxonomic compositions. Replicated runs from multiple random starting points permit a more rigorous assessment of convergence, and we developed two novel statistics, delta and epsilon, for this purpose. Although likelihood values invariably stabilized quickly, adequate sampling of the posterior distribution of tree topologies took considerably longer. Our results suggest that multimodality is common for data sets with 30 or more taxa and that this results in slow convergence and mixing. However, we also found that the pragmatic approach of combining data from several short, replicated runs into a metachain to estimate bipartition posterior probabilities provided good approximations, and that such estimates were no worse in approximating a reference posterior distribution than those obtained using a single long run of the same length as the metachain. Precision appears to be best when heated Markov chains have low temperatures, whereas chains with high temperatures appear to sample trees with high posterior probabilities only rarely. 
[Bayesian phylogenetic inference; heating parameter; Markov chain Monte Carlo; replicated chains.]
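The delta and epsilon statistics themselves are not defined in the abstract, but the general idea of assessing convergence from replicated runs can be sketched by contrasting bipartition (split) frequencies between two chains: a small maximum disagreement suggests the runs are sampling the same posterior. The trees and splits below are illustrative toy data.

```python
from collections import Counter

def split_frequencies(tree_samples):
    """Relative frequency of each bipartition across sampled trees;
    each tree is represented as a set of frozenset bipartitions."""
    counts = Counter()
    for tree in tree_samples:
        counts.update(tree)
    n = len(tree_samples)
    return {split: c / n for split, c in counts.items()}

def max_split_difference(run_a, run_b):
    """Largest disagreement in bipartition frequency between two runs;
    a value near zero suggests the replicated chains have converged."""
    fa, fb = split_frequencies(run_a), split_frequencies(run_b)
    return max(abs(fa.get(s, 0.0) - fb.get(s, 0.0)) for s in set(fa) | set(fb))

ab = frozenset({'A', 'B'})
cd = frozenset({'C', 'D'})
run1 = [{ab, cd}, {ab, cd}, {ab}, {ab}]
run2 = [{ab, cd}, {ab}, {ab, cd}, {ab, cd}]
print(max_split_difference(run1, run2))  # cd: 0.50 vs 0.75 → 0.25
```

Pooling the samples from several such replicated runs into a single "metachain" before computing split frequencies corresponds to the pragmatic estimation strategy the abstract describes.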

Relevance: 20.00%

Publisher:

Abstract:

To investigate the stability of trace reactivation in healthy older adults, 22 older volunteers with no significant neurological history participated in a cross-modal priming task. While both object-relative center-embedded (ORC) and object-relative right-branching (ORR) sentences were employed, working memory load was reduced by limiting the number of words separating the antecedent from the gap for both sentence types. Analysis of the results did not reveal any significant trace reactivation for the ORC or ORR sentences. The results did reveal, however, a positive correlation between age and semantic priming at the pre-gap position and a negative correlation between age and semantic priming at the gap position for ORC sentences. In contrast, there was no correlation between age and priming effects for the ORR sentences. These results indicate that trace reactivation may be sensitive to a variety of age-related factors, including lexical activation and working memory. The implications of these results for sentence processing in the older population are discussed.

Relevance: 20.00%

Publisher:

Abstract:

OBJECTIVE: To compare the accuracy, costs and utility of using the National Death Index (NDI) and state-based cancer registries in determining the mortality status of a cohort of women diagnosed with ovarian cancer in the early 1990s. METHODS: As part of a large prognostic study, identifying information on 822 women diagnosed with ovarian cancer between 1990 and 1993 was simultaneously submitted to the NDI and three state-based cancer registries to identify women deceased as of June 30, 1999. This was compared with the gold standard of "definite deaths". A comparative evaluation was also made of the time and costs associated with the two methods. RESULTS: Of the 450 definite deaths in our cohort, the NDI correctly identified 417, as well as all of the 372 women known to be alive (sensitivity 93%, specificity 100%). Inconsistencies in the identifiers recorded in our cohort files, particularly names, were responsible for the majority of known deaths not matching with the NDI; eliminating these would increase the sensitivity to 98%. The cancer registries correctly identified 431 of the 450 definite deaths (sensitivity 96%). The costs associated with the NDI search were the same as those of the cancer registry searches, but the cancer registries took two months longer to conduct the searches. CONCLUSIONS AND IMPLICATIONS: This study indicates that the cancer registries are valuable, cost-effective agencies for follow-up of mortality outcomes in cancer cohorts, particularly where cohort members were residents of those states. For following large national cohorts, the NDI provides additional information and flexibility when searching for deaths in Australia. This study also shows that women can be followed up for mortality with a high degree of accuracy using either service. Because each service makes a valuable contribution to the identification of deceased cancer subjects, both should be considered for optimal mortality follow-up in studies of cancer patients.
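The reported accuracy figures follow directly from the counts given in the abstract:

```python
def sensitivity(true_pos, false_neg):
    """Proportion of actual deaths the search correctly identified."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of living subjects correctly left unmatched."""
    return true_neg / (true_neg + false_pos)

# Counts from the abstract: the NDI matched 417 of 450 definite deaths
# and none of the 372 women known to be alive; the registries matched 431.
ndi_sens = sensitivity(417, 450 - 417)
ndi_spec = specificity(372, 0)
registry_sens = sensitivity(431, 450 - 431)
print(round(ndi_sens, 2), round(ndi_spec, 2), round(registry_sens, 2))  # → 0.93 1.0 0.96
```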

Relevance: 20.00%

Publisher:

Abstract:

It is not surprising that students are unconvinced about the benefits of formal methods if we do not show them how these methods can be integrated with other activities in the software lifecycle. In this paper, we describe an approach to integrating formal specification with more traditional verification and validation techniques in a course that teaches formal specification and specification-based testing. This is accomplished through a series of assignments on a single software component that involves specifying the component in Object-Z, validating that specification using inspection and a specification animation tool, and then testing an implementation of the specification using test cases derived from the formal specification.
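The assignment sequence described can be sketched in miniature, using Python predicate functions in place of Object-Z and a hypothetical bounded-stack component (both the component and its contract are invented for illustration):

```python
class BoundedStack:
    """Hypothetical implementation under test."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
    def push(self, x):
        if len(self.items) < self.capacity:
            self.items.append(x)
    def pop(self):
        return self.items.pop()

# Pre/postconditions transcribed from a (hypothetical) formal specification
def push_pre(stack):
    return len(stack.items) < stack.capacity

def push_post(before, after, x):
    return after.items == before + [x]

# A specification-derived test case: when the precondition holds,
# the implementation must establish the postcondition.
s = BoundedStack(capacity=2)
before = list(s.items)
assert push_pre(s)
s.push(7)
assert push_post(before, s, 7)
print("push conforms on this case")
```

This mirrors the course's progression: the contract plays the role of the Object-Z specification, and the asserted test case is derived from it rather than from the implementation.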

Relevance: 20.00%

Publisher:

Abstract:

Achieving consistency between a specification and its implementation is an important part of software development. In previous work, we have presented a method and tool support for testing a formal specification using animation and then verifying an implementation of that specification. The method is based on a testgraph, which provides a partial model of the application under test. The testgraph is used in combination with an animator to generate test sequences for testing the formal specification. The same testgraph is used during testing to execute those same sequences on the implementation and to ensure that the implementation conforms to the specification. So far, the method and its tool support have been applied to software components that can be accessed through an application programmer interface (API). In this paper, we use an industrially based case study to discuss the problems associated with applying the method to a software system with a graphical user interface (GUI). In particular, the lack of a standardised interface, as well as controllability and observability problems, makes it difficult to automate the testing of the implementation. The method can still be applied, but the amount of testing that can be carried out on the implementation is limited by the manual effort involved.