964 results for Robust Performance


Relevance:

30.00%

Publisher:

Abstract:

Current methods for forming detected chess-board vertices into a grid structure tend to be weak in situations with a warped grid and false or missing vertex features. In this paper we present a highly robust, yet efficient, scheme suitable for inferring a regular 2D square mesh structure from vertices recorded both during projection of a chess-board pattern onto 3D objects and in the simpler case of camera calibration. Examples of the method's performance in a lung-function measuring application, observing chess-boards projected onto patients' chests, are given. The method presented is resilient to significant surface deformation and tolerates inexact vertex-feature detection. This robustness results from the scheme's novel exploitation of feature orientation information. © 2013 IEEE.
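
The abstract's orientation-based grid-inference scheme is not reproduced here; as a point of reference, the following minimal sketch shows the standard OpenCV chess-board detector that such methods improve upon. The pattern size and the image path "board.png" are illustrative assumptions.

```python
import cv2

# Minimal baseline sketch (not the paper's orientation-based scheme):
# detect chess-board inner corners with OpenCV's standard detector and refine
# them to sub-pixel accuracy. `pattern_size` is (inner corners per row, per
# column); "board.png" is a placeholder path.
img = cv2.imread("board.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
pattern_size = (9, 6)

found, corners = cv2.findChessboardCorners(gray, pattern_size)
if found:
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    print(f"Detected {len(corners)} corners")
else:
    print("Grid not found; warped or occluded boards typically defeat this detector")
```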

Relevance:

30.00%

Publisher:

Abstract:

Developing noninvasive and accurate diagnostics that are easily manufactured, robust, and reusable would enable monitoring of high-risk individuals in any clinical or point-of-care environment. We have developed a clinically relevant optical glucose nanosensor that can be reused at least 400 times without a compromise in accuracy. A single 6 ns laser pulse (λ = 532 nm, 200 mJ) rapidly produced off-axis Bragg diffraction gratings consisting of ordered silver nanoparticles embedded within a phenylboronic acid-functionalized hydrogel. This sensor exhibited reversible large wavelength shifts and diffracted narrow-band light over the wavelength range λpeak ≈ 510-1100 nm. The experimental sensitivity of the sensor permits diagnosis of glucosuria in the urine samples of diabetic patients with improved performance compared to commercial high-throughput urinalysis devices. The sensor response was achieved within 5 min and reset to baseline in ∼10 s. It is anticipated that this sensing platform will have implications for the development of reusable, equipment-free colorimetric point-of-care diagnostic devices for diabetes screening.

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents methods for implementing robust hexapod locomotion on an autonomous robot with many sensors and actuators. The controller is based on the Subsumption Architecture and is fully distributed over approximately 1500 simple, concurrent processes. The robot, Hannibal, weighs approximately 6 pounds and is equipped with over 100 physical sensors, 19 degrees of freedom, and 8 on-board computers. We investigate the following topics in depth: distributed control of a complex robot, insect-inspired locomotion control for gait generation and rough-terrain mobility, and fault tolerance. The controller was implemented, debugged, and tested on Hannibal. Through a series of experiments, we examined Hannibal's gait generation, rough-terrain locomotion, and fault-tolerance performance. These results demonstrate that Hannibal exhibits robust, flexible, real-time locomotion over a variety of terrain and tolerates a multitude of hardware failures.
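
Hannibal's actual 1500-process controller is far richer than anything shown here; the toy sketch below only illustrates the core subsumption idea the abstract names, namely that each behaviour layer proposes a command and a higher-priority layer suppresses the ones beneath it. The behaviour names and sensor keys are hypothetical.

```python
# Toy sketch of subsumption-style layering (not Hannibal's actual controller):
# each behaviour proposes a command, and a higher-priority behaviour that
# fires suppresses the output of the layers beneath it.

def stand_up(sensors):
    return "extend_legs"                                        # layer 0: always applicable

def walk(sensors):
    return "step_gait" if sensors.get("upright") else None      # layer 1

def avoid_obstacle(sensors):
    return "back_off" if sensors.get("obstacle") else None      # layer 2

LAYERS = [avoid_obstacle, walk, stand_up]                        # highest priority first

def arbitrate(sensors):
    for behaviour in LAYERS:
        command = behaviour(sensors)
        if command is not None:                                  # higher layer suppresses the rest
            return command

print(arbitrate({"upright": True, "obstacle": False}))           # -> step_gait
print(arbitrate({"upright": True, "obstacle": True}))            # -> back_off
```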

Relevance:

30.00%

Publisher:

Abstract:

The performance of a randomized version of the subgraph-exclusion algorithm (called Ramsey) for CLIQUE by Boppana and Halldorsson is studied on very large graphs. We compare the performance of this algorithm with the performance of two common heuristic algorithms, the greedy heuristic and a version of simulated annealing. These algorithms are tested on graphs with up to 10,000 vertices on a workstation and graphs as large as 70,000 vertices on a Connection Machine. Our implementations establish the ability to run clique approximation algorithms on very large graphs. We test our implementations on a variety of different graphs. Our conclusions indicate that on randomly generated graphs minor changes to the distribution can cause dramatic changes in the performance of the heuristic algorithms. The Ramsey algorithm, while not as good as the others for the most common distributions, seems more robust and provides a more even overall performance. In general, and especially on deterministically generated graphs, a combination of simulated annealing with either the Ramsey algorithm or the greedy heuristic seems to perform best. This combined algorithm works particularly well on large Keller and Hamming graphs and has a competitive overall performance on the DIMACS benchmark graphs.
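
For readers unfamiliar with the subgraph-exclusion idea, the sketch below shows the classic Ramsey recursion of Boppana and Halldorsson with a randomly chosen pivot, which is the randomised variant the abstract studies. It is a minimal sketch on an adjacency-set graph representation, not the authors' tuned implementation for very large graphs.

```python
import random

def ramsey(adj, vertices):
    """Return (clique, independent_set) found in the subgraph induced by
    `vertices`. `adj` maps each vertex to its set of neighbours. Recursing on
    a pivot's neighbourhood and non-neighbourhood is the subgraph-exclusion
    idea; picking the pivot at random gives the randomised variant."""
    if not vertices:
        return set(), set()
    v = random.choice(tuple(vertices))
    c1, i1 = ramsey(adj, vertices & adj[v])           # v can extend this clique
    c2, i2 = ramsey(adj, vertices - adj[v] - {v})     # v can extend this independent set
    return (max(c1 | {v}, c2, key=len),
            max(i1, i2 | {v}, key=len))

# Tiny example: a triangle (0,1,2) plus a pendant vertex 3.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
clique, indep = ramsey(adj, set(adj))
print(clique, indep)
```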

Relevance:

30.00%

Publisher:

Abstract:

We revisit the problem of connection management for reliable transport. At one extreme, a pure soft-state (SS) approach (as in Delta-t [9]) safely removes the state of a connection at the sender and receiver once the state timers expire, without the need for explicit removal messages, and new connections are established without an explicit handshaking phase. At the other extreme, a hybrid hard-state/soft-state (HS+SS) approach (as in TCP) uses both explicit handshaking and timer-based management of the connection's state. In this paper, we consider the worst-case scenario of reliable single-message communication and develop a common analytical model that can be instantiated to capture either the SS approach or the HS+SS approach. We compare the two approaches in terms of goodput, message overhead, and state overhead. We also use simulations to compare against other approaches, evaluating them in terms of correctness (with respect to data loss and duplication) and robustness to bad network conditions (high message loss rates and variable channel delays). Our results show that the SS approach is more robust and has lower message overhead. On the other hand, SS requires more memory to keep connection state, which reduces goodput. Given that memory is becoming larger and cheaper, SS presents the best choice over bandwidth-constrained, error-prone networks.
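
To make the soft-state idea concrete, here is a minimal sketch of a timer-based connection table in which records are refreshed by traffic and silently expire without any teardown message. The 30-second lifetime and function names are illustrative assumptions, not Delta-t's actual parameters.

```python
import time

# Minimal soft-state connection table: records are (re)created by arriving
# segments and silently expire when their timer lapses -- no explicit removal
# messages. The 30 s lifetime is an illustrative value, not Delta-t's timer.
STATE_LIFETIME = 30.0
connections = {}   # conn_id -> expiry timestamp

def on_segment(conn_id, now=None):
    """Receiving any segment installs or refreshes the connection state."""
    now = time.time() if now is None else now
    connections[conn_id] = now + STATE_LIFETIME

def expire(now=None):
    """Purge connections whose timers have lapsed -- the soft-state removal."""
    now = time.time() if now is None else now
    for conn_id in [c for c, t in connections.items() if t <= now]:
        del connections[conn_id]

on_segment("A->B:1", now=0.0)
expire(now=31.0)
print(connections)   # {} -- state removed without a FIN/close handshake
```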

Relevance:

30.00%

Publisher:

Abstract:

Current Internet transport protocols make end-to-end measurements and maintain per-connection state to regulate the use of shared network resources. When two or more such connections share a common endpoint, there is an opportunity to correlate the end-to-end measurements made by these protocols to better diagnose and control the use of shared resources. We develop packet probing techniques to determine whether a pair of connections experience shared congestion. Correct, efficient diagnoses could enable new techniques for aggregate congestion control, QoS admission control, connection scheduling and mirror site selection. Our extensive simulation results demonstrate that the conditional (Bayesian) probing approach we employ provides superior accuracy, converges faster, and tolerates a wider range of network conditions than recently proposed memoryless (Markovian) probing approaches for addressing this opportunity.
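
The paper's probing protocol is not reproduced here; purely as an illustration of the conditional (Bayesian) style of inference it advocates, the sketch below updates a belief that two connections share a bottleneck from paired probe-loss observations. The loss rate `p_loss` and the loss-correlation parameter `rho` under the "shared" hypothesis are assumed illustrative values, not measured quantities.

```python
# Generic Bayesian-update sketch (not the paper's probing technique):
# maintain P(shared bottleneck) and update it from paired probe outcomes.

def joint_loss_prob(lost_a, lost_b, p_loss, rho):
    """P(observation | hypothesis) for one probe pair; rho = 0 means the two
    connections lose packets independently."""
    both = rho * p_loss + (1 - rho) * p_loss * p_loss
    only_one = p_loss - both
    neither = 1 - 2 * only_one - both
    return {(True, True): both, (True, False): only_one,
            (False, True): only_one, (False, False): neither}[(lost_a, lost_b)]

def update(prior_shared, lost_a, lost_b, p_loss=0.1, rho=0.8):
    like_shared = joint_loss_prob(lost_a, lost_b, p_loss, rho)
    like_indep = joint_loss_prob(lost_a, lost_b, p_loss, rho=0.0)
    evidence = prior_shared * like_shared + (1 - prior_shared) * like_indep
    return prior_shared * like_shared / evidence

belief = 0.5
for obs in [(True, True), (False, False), (True, True)]:   # paired probe outcomes
    belief = update(belief, *obs)
print(f"P(shared) ~= {belief:.2f}")
```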

Relevance:

30.00%

Publisher:

Abstract:

The data streaming model provides an attractive framework for one-pass summarization of massive data sets at a single observation point. However, in an environment where multiple data streams arrive at a set of distributed observation points, sketches must be computed remotely and then must be aggregated through a hierarchy before queries may be conducted. As a result, many sketch-based methods for the single stream case do not apply directly, as either the error introduced becomes large, or because the methods assume that the streams are non-overlapping. These limitations hinder the application of these techniques to practical problems in network traffic monitoring and aggregation in sensor networks. To address this, we develop a general framework for evaluating and enabling robust computation of duplicate-sensitive aggregate functions (e.g., SUM and QUANTILE), over data produced by distributed sources. We instantiate our approach by augmenting the Count-Min and Quantile-Digest sketches to apply in this distributed setting, and analyze their performance. We conclude with experimental evaluation to validate our analysis.
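
As background for the sketches named in the abstract, here is a minimal Count-Min sketch supporting point queries and cell-wise merging. It is a plain single-stream baseline; the duplicate-sensitive augmentation and hierarchical aggregation described in the paper are not reproduced.

```python
import hashlib

class CountMinSketch:
    """Minimal Count-Min sketch for approximate point queries (a baseline;
    the paper's duplicate-sensitive, distributed augmentation is omitted)."""

    def __init__(self, width=272, depth=5):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _col(self, item, row):
        digest = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def update(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._col(item, row)] += count

    def query(self, item):
        # Overestimates only; taking the minimum over rows bounds the error.
        return min(self.table[row][self._col(item, row)]
                   for row in range(self.depth))

    def merge(self, other):
        # Sketches built over disjoint streams can be summed cell-wise upstream.
        for r in range(self.depth):
            for c in range(self.width):
                self.table[r][c] += other.table[r][c]

cms = CountMinSketch()
for flow in ["10.0.0.1", "10.0.0.2", "10.0.0.1"]:
    cms.update(flow)
print(cms.query("10.0.0.1"))   # >= 2
```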

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: In a time-course microarray experiment, the expression level for each gene is observed across a number of time points in order to characterize the temporal trajectories of the gene-expression profiles. For many of these experiments, the scientific aim is the identification of genes for which the trajectories depend on an experimental or phenotypic factor. There is an extensive recent body of literature on statistical methodology for addressing this analytical problem. Most of the existing methods are based on estimating the time-course trajectories using parametric or non-parametric mean regression methods. The sensitivity of these regression methods to outliers, an issue that is well documented in the statistical literature, should be of concern when analyzing microarray data. RESULTS: In this paper, we propose a robust testing method for identifying genes whose expression time profiles depend on a factor. Furthermore, we propose a multiple testing procedure to adjust for multiplicity. CONCLUSIONS: Through an extensive simulation study, we illustrate the performance of our method. Finally, we report the results from applying our method to a case study and discuss potential extensions.
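
The authors' robust test is not reproduced here; the sketch below only illustrates the general shape of such an analysis, using a rank-based (and hence outlier-resistant) per-gene comparison of a trajectory summary between two factor levels followed by Benjamini-Hochberg adjustment. The data are simulated and the summary statistic is an illustrative choice.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

# Illustrative workflow only (not the authors' test): summarise each subject's
# time course per gene, compare the two factor levels with a rank-based test,
# then adjust for multiplicity with Benjamini-Hochberg. Data are simulated.
rng = np.random.default_rng(0)
n_genes, n_per_group, n_times = 200, 8, 6
group_a = rng.normal(size=(n_genes, n_per_group, n_times))
group_b = rng.normal(size=(n_genes, n_per_group, n_times))
group_b[:20] += 1.0                       # the first 20 genes truly differ

summary_a = group_a.mean(axis=2)          # per-subject trajectory summary
summary_b = group_b.mean(axis=2)

pvals = np.array([mannwhitneyu(summary_a[g], summary_b[g]).pvalue
                  for g in range(n_genes)])
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} genes flagged after FDR control")
```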

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a framework for robust optimization that relaxes the standard notion of robustness by allowing the decision maker to vary the protection level smoothly across the uncertainty set. We apply our approach to the problem of maximizing the expected value of a payoff function when the underlying distribution is ambiguous and therefore robustness is relevant. Our primary objective is to develop this framework and relate it to the standard notion of robustness, which deals with only a single guarantee across one uncertainty set. First, we show that our approach connects closely to the theory of convex risk measures. We show that the complexity of this approach is equivalent to that of solving a small number of standard robust problems. We then investigate the conservatism benefits and downside probability guarantees implied by this approach and compare them to those of the standard robust approach. Finally, we illustrate the methodology on an asset allocation example consisting of historical market data over a 25-year investment horizon and find, in every case we explore, that relaxing standard robustness with soft robustness yields a seemingly favorable risk-return trade-off: each case results in a higher out-of-sample expected return for a relatively minor degradation of out-of-sample downside performance. © 2010 INFORMS.
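
The soft-robust criterion itself relies on convex risk measures and is not reconstructed here; the sketch below only illustrates the standard robust criterion that the paper relaxes, i.e. the worst-case expected payoff over a finite ambiguity set of distributions. The payoff and distribution numbers are illustrative assumptions.

```python
import numpy as np

# Illustration of the *standard* robust criterion the paper relaxes: choose the
# decision maximising the worst-case expected payoff over a finite ambiguity
# set of distributions. All numbers below are illustrative.
payoffs = np.array([
    [0.08, 0.02, -0.05],    # decision 0: payoff in each of 3 scenarios
    [0.04, 0.03,  0.01],    # decision 1
])
ambiguity_set = np.array([
    [0.5, 0.3, 0.2],        # candidate scenario distributions
    [0.2, 0.3, 0.5],
    [0.1, 0.6, 0.3],
])

worst_case = (payoffs @ ambiguity_set.T).min(axis=1)   # min over distributions
best = int(worst_case.argmax())
print(f"standard robust choice: decision {best}, guarantee {worst_case[best]:.3f}")
# A "soft" robust version would instead vary the protection level smoothly
# across a family of nested ambiguity sets rather than enforce one hard guarantee.
```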

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Dropouts and missing data are nearly ubiquitous in obesity randomized controlled trials, threatening the validity and generalizability of conclusions. Herein, we meta-analytically evaluate the extent of missing data, the frequency with which various analytic methods are employed to accommodate dropouts, and the performance of multiple statistical methods. METHODOLOGY/PRINCIPAL FINDINGS: We searched PubMed and Cochrane databases (2000-2006) for articles published in English and manually searched bibliographic references. Articles of pharmaceutical randomized controlled trials with weight loss or weight gain prevention as major endpoints were included. Two authors independently reviewed each publication for inclusion. 121 articles met the inclusion criteria. Two authors independently extracted treatment, sample size, drop-out rates, study duration, and the statistical method used to handle missing data from all articles, and resolved disagreements by consensus. In the meta-analysis, drop-out rates were substantial, with the survival (non-dropout) rates being approximated by an exponential decay curve, e^(-λt), where λ was estimated to be 0.0088 (95% bootstrap confidence interval: 0.0076 to 0.0100) and t represents time in weeks. The estimated drop-out rate at 1 year was 37%. Most studies used last observation carried forward as the primary analytic method to handle missing data. We also obtained 12 raw obesity randomized controlled trial datasets for empirical analyses. Analyses of raw randomized controlled trial data suggested that both mixed models and multiple imputation performed well, but that multiple imputation may be more robust when missing data are extensive. CONCLUSION/SIGNIFICANCE: Our analysis offers an equation for predicting dropout rates that is useful for future study planning. Our raw data analyses suggest that multiple imputation is better than other methods for handling missing data in obesity randomized controlled trials, followed closely by mixed models. We suggest these methods supplant last observation carried forward as the primary method of analysis.
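
The one-year figure follows directly from the fitted retention curve; a quick numeric check of S(t) = e^(-λt) with λ = 0.0088 per week at t = 52 weeks reproduces the roughly 37% drop-out rate quoted above.

```python
import math

# Reproduce the abstract's one-year figure from its retention curve
# S(t) = exp(-lambda * t), with lambda = 0.0088 per week.
lam = 0.0088
t_weeks = 52
survival = math.exp(-lam * t_weeks)
print(f"retained at 1 year: {survival:.1%}, dropped out: {1 - survival:.1%}")
# -> roughly 63% retained, i.e. about a 37% drop-out rate, matching the abstract
```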

Relevance:

30.00%

Publisher:

Abstract:

Natural distributed systems are adaptive, scalable and fault-tolerant. Emergence science describes how higher-level self-regulatory behaviour arises in natural systems from many participants following simple rulesets. Emergence advocates simple communication models, autonomy and independence, enhancing robustness and self-stabilization. High-quality distributed applications such as autonomic systems must satisfy the appropriate non-functional requirements, which include scalability, efficiency, robustness, low latency and stability. However, the traditional design of distributed applications, especially in terms of the communication strategies employed, can introduce compromises between these characteristics. This paper discusses ways in which emergence science can be applied to distributed computing, avoiding some of the compromises associated with traditionally designed applications. To demonstrate the effectiveness of this paradigm, an emergent election algorithm is described and its performance evaluated. The design incorporates nondeterministic behaviour. The resulting algorithm has very low communication complexity, and is simultaneously very stable, scalable and robust.
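
The paper's emergent election algorithm is not reconstructed here; as a generic toy illustration of nondeterministic, decentralised agreement on a leader, the sketch below has nodes draw random priorities and spread the largest value through random pairwise exchanges until every node agrees, without any central coordinator.

```python
import random

# Toy gossip-style election (a generic sketch, not the paper's algorithm):
# each node draws a random priority; random pairwise exchanges spread the
# largest priority until all nodes agree on the leader.
random.seed(1)
n_nodes = 20
priority = [random.random() for _ in range(n_nodes)]
known_best = priority[:]                       # each node's current belief

rounds = 0
while len(set(known_best)) > 1:                # stop once every node agrees
    node = random.randrange(n_nodes)
    peer = random.randrange(n_nodes)           # fully connected, for simplicity
    best = max(known_best[node], known_best[peer])
    known_best[node] = known_best[peer] = best
    rounds += 1

leader = priority.index(max(priority))
print(f"leader = node {leader}, agreed after {rounds} pairwise exchanges")
```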

Relevance:

30.00%

Publisher:

Abstract:

A computational modelling approach integrated with optimisation and statistical methods that can aid the development of reliable and robust electronic packages and systems is presented. The design-for-reliability methodology is demonstrated for the design of a SiP structure. In this study the focus is on the procedure for representing the uncertainties in the package design parameters, their impact on the reliability and robustness of the package design, and how these can be included in the design optimisation modelling framework. The analysis of the thermo-mechanical behaviour of the package is conducted using non-linear transient finite element simulations. Key system responses of interest, the fatigue lifetime of the lead-free solder interconnects and the warpage of the package, are predicted and used subsequently for design purposes. The design tasks are to identify optimal SiP designs by varying several package input parameters so that the reliability and robustness of the package are improved while, at the same time, specified performance criteria are also satisfied.
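
The finite element model itself is outside the scope of a short sketch; the fragment below only illustrates the uncertainty-representation step, propagating assumed scatter in two package parameters through a placeholder response function by Monte Carlo sampling. The parameter distributions and the response relationship are purely illustrative stand-ins for the paper's FE-based predictions.

```python
import numpy as np

# Sketch of the uncertainty-propagation step only: sample the uncertain package
# parameters and summarise the scatter of a response of interest. The response
# function is a placeholder standing in for the transient FE model; the
# parameter distributions are illustrative, not the paper's values.
rng = np.random.default_rng(42)
n_samples = 10_000

solder_thickness = rng.normal(50e-6, 2e-6, n_samples)    # m, assumed scatter
cte_mismatch = rng.normal(10e-6, 0.5e-6, n_samples)      # 1/K, assumed scatter

def placeholder_fatigue_life(thickness, mismatch):
    """Stand-in for the FE-based fatigue model: life improves with thicker
    joints and degrades with CTE mismatch (illustrative relationship only)."""
    return 1e3 * (thickness / 50e-6) ** 1.5 / (mismatch / 10e-6) ** 2

life = placeholder_fatigue_life(solder_thickness, cte_mismatch)
print(f"mean life ~ {life.mean():.0f} cycles, spread ~ {life.std():.0f} cycles")
# A robust design optimisation would then trade the mean response against this
# spread while also meeting the warpage and performance constraints.
```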

Relevance:

30.00%

Publisher:

Abstract:

Purpose: The purpose of this paper is to describe the problems encountered and the solutions developed when using benchmarking and key performance indicators (KPIs) to monitor a major UK social house building innovation (change) programme. The innovation programme sought improvements to both the quality of the house product and the procurement process. Design/methodology/approach: Benchmarking and KPIs were used to quantify performance, and in-depth case studies were used to identify underlying cause-and-effect relationships within the innovation programme. Findings: The inherent competition between consortium members, the complexity of the relationship between the consortium and its strategic partner, the lack of an authoritative management control structure, and the rapidly changing nature of the UK social housing market all proved problematic to the development of a reliable and robust monitoring system. These problems were overcome by the development of a multi-dimensional benchmarking model that balanced the needs and aspirations of the individual organisations with the broader objectives of the consortium. Research limitations/implications: Whilst the research methodology provides insight into the factors that affected the performance of a major innovation programme, its findings may not be representative of all projects. Practical implications: The lessons learnt should assist those developing benchmarking models for multi-client consortia. Originality/value: The work reported in this paper describes an inclusive approach to benchmarking in which a multiple-client group and their strategic partner sought to work together for shared gain. Very few papers have addressed this issue.

Relevance:

30.00%

Publisher:

Abstract:

Face recognition with unknown, partial distortion and occlusion is a practical problem, and has a wide range of applications, including security and multimedia information retrieval. The authors present a new approach to face recognition subject to unknown, partial distortion and occlusion. The new approach is based on a probabilistic decision-based neural network, enhanced by a statistical method called the posterior union model (PUM). PUM is an approach for ignoring severely mismatched local features and focusing the recognition mainly on the reliable local features. It thereby improves the robustness while assuming no prior information about the corruption. We call the new approach the posterior union decision-based neural network (PUDBNN). The new PUDBNN model has been evaluated on three face image databases (XM2VTS, AT&T and AR) using testing images subjected to various types of simulated and realistic partial distortion and occlusion. The new system has been compared to other approaches and has demonstrated improved performance.
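
The PUM/PUDBNN formulation itself is not reconstructed here; the sketch below only conveys the underlying intuition of focusing the match on reliable local features by combining only the best-matching local scores, so that a few badly corrupted (occluded) patches do not dominate the decision. The numbers and the keep-fraction are illustrative.

```python
import numpy as np

# Sketch of the underlying idea only (not the actual PUM/PUDBNN formulation):
# when part of the face is occluded, some local features match very poorly,
# so combine only the best-matching local scores instead of all of them.
def robust_match_score(local_log_likelihoods, keep_fraction=0.6):
    scores = np.sort(np.asarray(local_log_likelihoods))[::-1]   # best first
    k = max(1, int(len(scores) * keep_fraction))
    return scores[:k].mean()

clean = [-1.0, -1.2, -0.9, -1.1, -1.0]        # all local patches match well
occluded = [-1.0, -1.2, -0.9, -9.5, -8.7]     # two patches badly corrupted
print(robust_match_score(clean), robust_match_score(occluded))
# The occluded score stays close to the clean one because the corrupted
# patches are excluded from the combination.
```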

Relevance:

30.00%

Publisher:

Abstract:

Background: Co-localisation is a widely used measurement in immunohistochemical analysis to determine whether fluorescently labelled biological entities, such as cells, proteins or molecules, share the same location. However, the measurement of co-localisation is challenging due to the complex nature of such fluorescent images, especially when multiple focal planes are captured. The current state-of-the-art co-localisation measurements of 3-dimensional (3D) image stacks are biased by noise and cross-overs from non-consecutive planes.

Method: In this study, we have developed Co-localisation Intensity Coefficients (CICs) and Co-localisation Binary Coefficients (CBCs), which use rich z-stack data from neighbouring focal planes to identify similarities between the image intensities of two, and potentially more, fluorescently labelled biological entities. The method was developed using z-stack images from murine organotypic slice cultures of central nervous system tissue and two sets of pseudo-data. A large number of non-specific cross-over situations are excluded using this method. The proposed method is also shown to be robust in recognising co-localisations even when images are corrupted by a range of noise types.

Results: The proposed CBCs and CICs produce robust co-localisation measurements which are easy to interpret, resilient to noise, and capable of removing a large number of false positives, such as non-specific cross-overs. This measurement method is significantly more accurate than existing measurements, as determined statistically using pseudo-datasets of known values. The method provides an important and reliable tool for fluorescent 3D neurobiological studies and will benefit other biological studies which measure fluorescence co-localisation in 3D.
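
The CIC/CBC formulations, which also draw on neighbouring focal planes, are not reproduced here; as a hedged baseline for comparison, the sketch below computes a plain plane-by-plane Pearson co-localisation coefficient between two channels of a synthetic z-stack.

```python
import numpy as np

# Baseline sketch only (not the paper's CIC/CBC, which also use neighbouring
# focal planes): a plane-by-plane Pearson coefficient between two channels of
# a z-stack. Synthetic data: channel B partly overlaps channel A plus noise.
rng = np.random.default_rng(7)
z, h, w = 10, 64, 64
channel_a = rng.random((z, h, w))
channel_b = 0.7 * channel_a + 0.3 * rng.random((z, h, w))

def pearson_colocalisation(a, b):
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

per_plane = [pearson_colocalisation(channel_a[k], channel_b[k]) for k in range(z)]
print([f"{r:.2f}" for r in per_plane])
```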