833 results for Robustness


Relevance: 10.00%

Abstract:

Estimation of population size with missing zero-class is an important problem that is encountered in epidemiological assessment studies. Fitting a Poisson model to the observed data by the method of maximum likelihood and estimating the population size based on this fit is an approach that has been widely used for this purpose. In practice, however, the Poisson assumption is seldom satisfied. Zelterman (1988) proposed a robust estimator for unclustered data that works well in a wide class of distributions applicable for count data. In the work presented here, we extend this estimator to clustered data. The estimator requires fitting a zero-truncated homogeneous Poisson model by maximum likelihood and then using a Horvitz-Thompson estimator of population size. This was found to work well when the data follow the hypothesized homogeneous Poisson model. However, when the true distribution deviates from the hypothesized model, the population size was found to be underestimated. In search of a more robust estimator, we focused on three models that use all clusters with exactly one case, exactly two cases, or exactly three cases, respectively, to estimate the probability of the zero-class, and thereby use data collected on all the clusters in the Horvitz-Thompson estimator of population size. The loss in efficiency associated with the gain in robustness was examined in a simulation study. As a trade-off between gain in robustness and loss in efficiency, the model that uses data collected on clusters with at most three cases to estimate the probability of the zero-class was found to be preferred in general. In applications, we recommend obtaining estimates from all three models and making a choice in light of the three estimates, robustness, and the loss in efficiency. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
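The zero-truncated Poisson fit and the Horvitz-Thompson step described above can be sketched for the unclustered case (a minimal illustration, not the authors' clustered extension; the function names and the bisection solver are illustrative choices):

```python
import math

def zt_poisson_lambda(counts, tol=1e-10):
    """MLE of lambda under a zero-truncated Poisson: solves
    lambda / (1 - exp(-lambda)) = mean(counts) by bisection."""
    xbar = sum(counts) / len(counts)  # must exceed 1 for a root to exist
    lo, hi = 1e-9, xbar               # the root lies in (0, xbar)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid / (1 - math.exp(-mid)) < xbar:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def horvitz_thompson_size(counts):
    """Population size estimate n / (1 - P(zero)), where n is the
    number of observed units and P(zero) = exp(-lambda_hat)."""
    lam = zt_poisson_lambda(counts)
    return len(counts) / (1 - math.exp(-lam))
```

The estimator inflates the observed count n by the estimated zero-class probability; when the true distribution is not Poisson, this is the step through which the underestimation discussed above enters.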

Relevance: 10.00%

Abstract:

A study was undertaken to determine whether cocoa swollen shoot virus is transmitted by seeds, to improve the robustness of quarantine procedures for international exchange and long-term conservation of cocoa germplasm. PCR/capillary electrophoresis, using cocoa swollen shoot virus primers designed from the most conserved regions of the six published cocoa swollen shoot virus genome sequences, allowed the detection of cocoa swollen shoot virus in all the component parts of cocoa seeds from cocoa swollen shoot virus-infected trees. PCR/capillary electrophoresis also revealed the presence of cocoa swollen shoot virus in seedlings raised from seeds obtained from cocoa swollen shoot virus-infected trees. The high frequency with which the virus was transmitted to the seedlings suggested that cocoa swollen shoot virus is transmitted by seeds. This has serious implications for cocoa germplasm conservation and distribution. (C) 2008 Elsevier B.V. All rights reserved.

Relevance: 10.00%

Abstract:

Background: We report an analysis of a protein network of functionally linked proteins, identified from a phylogenetic statistical analysis of complete eukaryotic genomes. Phylogenetic methods identify pairs of proteins that co-evolve on a phylogenetic tree, and have been shown to have a high probability of correctly identifying known functional links. Results: The eukaryotic correlated evolution network we derive displays the familiar power law scaling of connectivity. We introduce the use of explicit phylogenetic methods to reconstruct the ancestral presence or absence of proteins at the interior nodes of a phylogeny of eukaryote species. We find that the connectivity distribution of proteins at the point they arise on the tree and join the network follows a power law, as does the connectivity distribution of proteins at the time they are lost from the network. Proteins resident in the network acquire connections over time, but we find no evidence that 'preferential attachment' - the phenomenon of newly acquired connections in the network being more likely to be made to proteins with large numbers of connections - influences the network structure. We derive a 'variable rate of attachment' model in which proteins vary in their propensity to form network interactions independently of how many connections they have or of the total number of connections in the network, and show how this model can produce apparent power-law scaling without preferential attachment. Conclusion: A few simple rules can explain the topological structure and evolutionary changes to protein-interaction networks: most change is concentrated in satellite proteins of low connectivity and small phenotypic effect, and proteins differ in their propensity to form attachments. 
Given these rules of assembly, power-law scaled networks naturally emerge from simple principles of selection, yielding protein interaction networks that retain a high degree of robustness on short time scales and evolvability on longer evolutionary time scales.

Relevance: 10.00%

Abstract:

To explore the projection efficiency of a design, Tsai et al. [2000. Projective three-level main effects designs robust to model uncertainty. Biometrika 87, 467-475] introduced the Q criterion to compare three-level main-effects designs for quantitative factors, allowing the consideration of interactions in addition to main effects. In this paper, we extend their method and focus on the case in which experimenters have some prior knowledge, in advance of running the experiment, about the probabilities of effects being non-negligible. A criterion which incorporates experimenters' prior beliefs about the importance of each effect is introduced to compare orthogonal, or nearly orthogonal, main-effects designs with robustness to interactions as a secondary consideration. We show that this criterion, by exploiting prior information about model uncertainty, can lead to more appropriate designs reflecting experimenters' prior beliefs. (c) 2006 Elsevier B.V. All rights reserved.

Relevance: 10.00%

Abstract:

MS is an important analytical tool in clinical proteomics, primarily in the disease-specific discovery, identification and characterisation of proteomic biomarkers and patterns. MS-based proteomics is increasingly used in clinical validation and diagnostic method development. The latter departs from the typical application of MS-based proteomics by exchanging some of the high performance of analysis for the throughput, robustness and simplicity required for clinical diagnostics. Although conventional MS-based proteomics has become an important field in clinical applications, some of the most recent MS technologies have not yet been extensively applied in clinical proteomics. In this review, we describe the current state of MS in clinical proteomics and look to the future of this field.

Relevance: 10.00%

Abstract:

Theoretical understanding of the implementation and use of innovations within construction contexts is discussed and developed. It is argued that both the rhetoric of the 'improvement agenda' within construction and theories of innovation fail to account for the complex contexts and disparate perspectives which characterize construction work. To address this, the concept of relative boundedness is offered. Relatively unbounded innovation is characterized by the lack of a coherent central driving force or mediator with the ability to reconcile potential conflicts and overcome resistance to implementation. This is a situation not exclusive to, but certainly indicative of, much construction project work. Drawing on empirical material from the implementation of new design and coordination technologies on a large construction project, the concept is developed, concentrating on the negotiations and translations that implementation mobilized. An actor-network theory (ANT) approach is adopted, which emphasizes the roles that both human actors and non-human agents play in the performance and outcomes of these interactions. Three aspects of how relative boundedness is constituted and affected are described: through the robustness of existing practices and expectations, through the delegation of interests onto technological artefacts, and through the mobilization of actors and artefacts to constrain and limit the scope of negotiations over new technology implementation.

Relevance: 10.00%

Abstract:

Aims: To test the possibility that wines available in the marketplace may contain culturable yeasts and to evaluate the 5.8S-ITS rDNA sequence analysis as adequate means for the identification of isolates. Methods and Results: As a case study, typical Greek wines were surveyed. Sequence analysis of the 5.8S-ITS rDNA was tested for its robustness in species or strain identification. Sixteen isolates could be assigned into the species Brettanomyces bruxellensis, Saccharomyces cerevisiae and Rhodotorula pinicola, whereas four isolates could not be safely identified. B. bruxellensis was the dominant species present in house wines, while non-Saccharomyces sp. were viable in aged wines of high alcohol content. Conclusions: Yeast population depends on postfermentation procedures or storage conditions. Although 5.8S-ITS rDNA sequence analysis is generally a rapid method to identify wine yeast isolates at the species level, or even below that, it may not be sufficient for some genera. Significance and Impact of the Study: This is the first report to show that commercial wines may possess diverse and potentially harmful yeast populations. The knowledge of yeasts able to reside in this niche environment is essential towards integrated quality assurance programmes. For selected species, the 5.8S-ITS rDNA sequence analysis is a rapid and accurate means.

Relevance: 10.00%

Abstract:

Inverse problems for dynamical system models of cognitive processes comprise the determination of synaptic weight matrices or kernel functions for neural networks or neural/dynamic field models, respectively. We introduce dynamic cognitive modeling as a three-tier top-down approach in which cognitive processes are first described as algorithms that operate on complex symbolic data structures. Second, symbolic expressions and operations are represented by states and transformations in abstract vector spaces. Third, prescribed trajectories through representation space are implemented in neurodynamical systems. We discuss the Amari equation for a neural/dynamic field theory as a special case and show that the kernel construction problem is particularly ill-posed. We suggest a Tikhonov-Hebbian learning method as a regularization technique and demonstrate its validity and robustness for basic examples of cognitive computations.
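Stripped of the neural-field detail, the regularisation idea can be shown on a discretised linear kernel-construction problem X w = y: Tikhonov regularisation damps the ill-posed inverse with a penalty alpha (a generic sketch under that linear-system assumption, not the paper's Tikhonov-Hebbian scheme; names are illustrative):

```python
import numpy as np

def tikhonov_solve(X, y, alpha=1e-2):
    """Tikhonov-regularised solution of the ill-posed linear
    problem X w = y:  w = (X^T X + alpha I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)
```

Larger alpha trades fidelity to the data for stability of the solution, which is the robustness property the abstract refers to.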

Relevance: 10.00%

Abstract:

Traditionally, applications and tools supporting collaborative computing have been designed only with personal computers in mind and support a limited range of computing and network platforms. These applications are therefore not well equipped to deal with network heterogeneity and, in particular, do not cope well with dynamic network topologies. Progress in this area must be made if we are to fulfil the needs of users and support the diversity, mobility, and portability that are likely to characterise group work in the future. This paper describes a groupware platform called Coco that is designed to support collaboration in a heterogeneous network environment. The work demonstrates that progress in the development of generic supporting groupware is achievable, even in the context of heterogeneous and dynamic networks. It also demonstrates the progress made in the development of an underlying communications infrastructure, building on peer-to-peer concepts and topologies to improve scalability and robustness.

Relevance: 10.00%

Abstract:

In this paper, new robust nonlinear model construction algorithms for a large class of linear-in-the-parameters models are introduced to enhance model robustness, including three algorithms that combine A-optimality, D-optimality, or the PRESS statistic (Predicted REsidual Sum of Squares), respectively, with the regularised orthogonal least squares algorithm. A common characteristic of these algorithms is that the inherent computational efficiency associated with the orthogonalisation scheme in orthogonal least squares or regularised orthogonal least squares carries over, so that the new algorithms are computationally efficient. A numerical example is included to demonstrate the effectiveness of the algorithms. Copyright (C) 2003 IFAC.
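For a linear-in-the-parameters model fitted by least squares, the PRESS statistic mentioned above can be computed without n refits via the standard hat-matrix identity e_(i) = e_i / (1 - h_ii) (a generic sketch of the statistic itself, not of the combined construction algorithms):

```python
import numpy as np

def press_statistic(X, y):
    """Predicted REsidual Sum of Squares for y ~ X theta, using
    the leave-one-out identity e_(i) = e_i / (1 - h_ii)."""
    H = X @ np.linalg.solve(X.T @ X, X.T)  # hat (projection) matrix
    e = y - H @ y                          # ordinary residuals
    h = np.diag(H)
    return float(np.sum((e / (1 - h)) ** 2))
```

Because every leave-one-out residual comes from one fit of the full model, PRESS is cheap enough to serve as a term-selection criterion inside an orthogonal least squares loop.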

Relevance: 10.00%

Abstract:

This paper develops fuzzy methods for control of the rotary inverted pendulum, an underactuated mechanical system. Two control laws are presented, one for swing up and another for the stabilization. The pendulum is swung up from the vertical down stable position to the upward unstable position in a controlled trajectory. The rules for the swing up are heuristically written such that each swing results in greater energy build up. The stabilization is achieved by mapping a stabilizing LQR control law to two fuzzy inference engines, which reduces the computational load compared with using a single fuzzy inference engine. The robustness of the balancing control is tested by attaching a bottle of water at the tip of the pendulum.
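The stabilising LQR gain referred to above can be computed, for any discrete-time linearisation (A, B), by iterating the Riccati recursion to a fixed point (an illustrative sketch; the pendulum's actual model and the fuzzy mapping are not reproduced here):

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR: iterate the Riccati recursion
    P <- Q + A^T P (A - B K), with K = (R + B^T P B)^(-1) B^T P A,
    and return the converged state-feedback gain K."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K
```

In the scheme described above, the resulting state-feedback law u = -K x is what gets mapped onto the two fuzzy inference engines.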

Relevance: 10.00%

Abstract:

The 3D reconstruction of a Golgi-stained dendritic tree from a serial stack of images captured with a transmitted light bright-field microscope is investigated. Modifications to the bootstrap filter are discussed such that the tree structure may be estimated recursively as a series of connected segments. The tracking performance of the bootstrap particle filter is compared against Differential Evolution, an evolutionary global optimisation method, both in terms of robustness and accuracy. It is found that the particle filtering approach is significantly more robust and accurate for the data considered.
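A bootstrap particle filter of the kind evaluated here can be sketched in one dimension, with a random-walk process model and a Gaussian likelihood (a generic illustration, not the paper's tree-segment tracker; all parameters are assumptions):

```python
import math
import random

def bootstrap_filter(observations, n_particles=500,
                     proc_std=1.0, obs_std=1.0, seed=0):
    """Minimal 1-D bootstrap particle filter: propagate particles
    through a random-walk model, weight them by a Gaussian
    observation likelihood, estimate the mean, then resample."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for z in observations:
        # propagate through the random-walk process model
        particles = [p + rng.gauss(0.0, proc_std) for p in particles]
        # weight by the Gaussian observation likelihood
        w = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        s = sum(w)
        w = [x / s for x in w]
        # posterior mean estimate, then multinomial resampling
        means.append(sum(p * wi for p, wi in zip(particles, w)))
        particles = rng.choices(particles, weights=w, k=n_particles)
    return means
```

The resampling step is what distinguishes the bootstrap variant; in the paper's setting, the state is a connected tree segment rather than a scalar.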

Relevance: 10.00%

Abstract:

We consider a fully complex-valued radial basis function (RBF) network for regression application. The locally regularised orthogonal least squares (LROLS) algorithm with the D-optimality experimental design, originally derived for constructing parsimonious real-valued RBF network models, is extended to the fully complex-valued RBF network. Like its real-valued counterpart, the proposed algorithm aims to achieve maximised model robustness and sparsity by combining two effective and complementary approaches. The LROLS algorithm alone is capable of producing a very parsimonious model with excellent generalisation performance while the D-optimality design criterion further enhances the model efficiency and robustness. By specifying an appropriate weighting for the D-optimality cost in the combined model selecting criterion, the entire model construction procedure becomes automatic. An example of identifying a complex-valued nonlinear channel is used to illustrate the regression application of the proposed fully complex-valued RBF network.
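The model class itself is straightforward to write down: a fully complex-valued RBF network combines real Gaussian hidden responses of complex inputs with complex output weights (a sketch of the model being constructed, not of the LROLS/D-optimality selection; names are illustrative):

```python
import numpy as np

def rbf_predict(X, centers, width, weights):
    """Evaluate a fully complex-valued RBF network: real Gaussian
    hidden responses of complex inputs, combined linearly by
    complex output weights."""
    # squared moduli of the input-to-centre differences
    d2 = (np.abs(X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * width ** 2))  # real-valued hidden layer
    return Phi @ weights                  # complex-valued output
```

Model construction then amounts to choosing which columns of Phi (i.e. which centres) to keep and fitting the complex weights, which is where the selection algorithm described above operates.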

Relevance: 10.00%

Abstract:

This paper presents the results of the crowd image analysis challenge, as part of the PETS 2009 workshop. The evaluation is carried out using a selection of the metrics available in the Video Analysis and Content Extraction (VACE) program and the CLassification of Events, Activities, and Relationships (CLEAR) consortium. The evaluation highlights the strengths of the authors’ systems in areas such as precision, accuracy and robustness.

Relevance: 10.00%

Abstract:

A construction algorithm for multioutput radial basis function (RBF) network modelling is introduced by combining a locally regularised orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximised model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious RBF network model with excellent generalisation performance. The D-optimality design criterion enhances the model efficiency and robustness. A further advantage of the combined approach is that the user only needs to specify a weighting for the D-optimality cost in the combined RBF model selecting criterion and the entire model construction procedure becomes automatic. The value of this weighting does not influence the model selection procedure critically and it can be chosen with ease from a wide range of values.
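The forward-selection core that LROLS builds on can be sketched as a greedy search that, at each step, adds the candidate regressor with the largest error reduction against the current residual (plain OLS-style selection; the local regularisation and the D-optimality cost described above are layered on top of this and are omitted here):

```python
import numpy as np

def ols_forward_select(Phi, y, n_terms):
    """Greedy forward selection of regressor columns from the
    candidate matrix Phi: pick the column with the largest
    error-reduction ratio against the current residual, refit,
    and repeat until n_terms columns are chosen."""
    selected = []
    residual = y.astype(float).copy()
    for _ in range(n_terms):
        best, best_err = None, -1.0
        for j in range(Phi.shape[1]):
            if j in selected:
                continue
            col = Phi[:, j]
            err = (col @ residual) ** 2 / (col @ col)  # error reduction
            if err > best_err:
                best, best_err = j, err
        selected.append(best)
        # refit on the selected columns and update the residual
        S = Phi[:, selected]
        theta, *_ = np.linalg.lstsq(S, y, rcond=None)
        residual = y - S @ theta
    return selected
```

In the combined algorithm described above, the selection score also carries a regularisation term per candidate and a weighted D-optimality cost, which is the single weighting the user has to specify.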