220 results for Robust estimates
Abstract:
The mining environment presents a challenging prospect for stereo vision. Our objective is to produce a stereo vision sensor suited to close-range scenes consisting mostly of rocks. This sensor should produce a dense depth map within real-time constraints. Speed and robustness are of foremost importance for this application. This paper compares a number of stereo matching algorithms in terms of robustness and suitability for fast implementation. These include traditional area-based algorithms, and algorithms based on non-parametric transforms, notably the rank and census transforms. Our experimental results show that the rank and census transforms are robust with respect to radiometric distortion and introduce less computational complexity than conventional area-based matching techniques.
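A minimal sketch (in Python, with illustrative window and disparity settings; not the sensor implementation described in the abstract) of census-transform stereo matching: each pixel is encoded by brightness comparisons with its neighbours, and correspondences are scored by the Hamming distance between codes, which is what makes the matching insensitive to radiometric distortion.

import numpy as np

def census_transform(img, win=3):
    """Census transform: bit k is 1 where the k-th neighbour is darker than the centre pixel."""
    h, w = img.shape
    r = win // 2
    out = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)   # borders wrap; fine for a sketch
            out = (out << np.uint64(1)) | (neighbour < img).astype(np.uint64)
    return out

def census_disparity(left, right, max_disp=32, win=3):
    """Winner-takes-all disparity map from Hamming distances of census codes."""
    cl, cr = census_transform(left, win), census_transform(right, win)
    h, w = left.shape
    best = np.full((h, w), np.inf)
    disp = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp):
        xor = np.bitwise_xor(cl, np.roll(cr, d, axis=1))       # right image shifted by d pixels
        cost = np.unpackbits(xor.view(np.uint8), axis=1).reshape(h, w, 64).sum(-1)  # Hamming distance
        better = cost < best
        best[better], disp[better] = cost[better], d
    return disp

The rank transform follows the same pattern, except that each pixel is replaced by the count of neighbours darker than the centre rather than a bit string.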
Abstract:
The mining environment, being complex, irregular and time varying, presents a challenging prospect for stereo vision. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This paper evaluates a number of matching techniques for possible use in a stereo vision sensor for mining automation applications. Area-based techniques have been investigated because they have the potential to yield dense maps, are amenable to fast hardware implementation, and are suited to textured scenes. In addition, two non-parametric transforms, namely, the rank and census, have been investigated. Matching algorithms using these transforms were found to have a number of clear advantages, including reliability in the presence of radiometric distortion, low computational complexity, and amenability to hardware implementation.
Abstract:
This study compared the performance of a local optimality criterion and three robust optimality criteria, in terms of the standard error, for a one-parameter and a two-parameter nonlinear model with uncertainty in the parameter values. The designs were also compared under conditions where the prior parameter distribution was misspecified. The impact of different correlations between the parameters on the optimal design was examined for the two-parameter model. The designs and standard errors were solved analytically whenever possible and numerically otherwise.
Abstract:
This paper examines the case of a procurement auction for a single project, in which the breakdown of the winning bid into its component items determines the value of payments subsequently made to the bidder as the work progresses. Unbalanced bidding, or bid skewing, involves the uneven distribution of mark-up among the component items in such a way as to derive increased benefit for the unbalancer without any change in the total bid. One form of unbalanced bidding, for example, termed Front Loading (FL), is thought to be widespread in practice. This involves overpricing the work items that occur early in the project and underpricing the work items that occur later in the project in order to enhance the bidder's cash flow. Naturally, auctioneers attempt to protect themselves from the effects of unbalancing, typically by reserving the right to reject a bid that has been detected as unbalanced. As a result, models have been developed both to unbalance bids and to detect unbalanced bids, but virtually nothing is known of their use or success. This is of particular concern for the detection methods as, without testing, there is no way of knowing the extent to which unbalanced bids remain undetected or balanced bids are falsely detected as unbalanced. This paper reports on a simulation study aimed at demonstrating the likely effects of unbalanced bid detection models in a deterministic environment involving FL unbalancing in a Texas DOT detection setting, in which bids are deemed to be unbalanced if an item exceeds a maximum (or fails to reach a minimum) ‘cut-off’ value determined by the Texas method. A proportion of bids are automatically and maximally unbalanced over a long series of simulated contract projects, and the profits and detection rates of both the balancers and the unbalancers are compared. The results show that, as expected, balanced bids are often incorrectly detected as unbalanced, with the rate of (mis)detection increasing with the proportion of FL bidders in the auction. It is also shown that, while the profit for balanced bidders remains the same irrespective of the number of FL bidders involved, the FL bidders' profit increases with the proportion of FL bidders present in the auction. Sensitivity tests show the results to be generally robust, with (mis)detection rates increasing further when there are fewer bidders in the auction and when more data are averaged to determine the baseline value, but being smaller or larger with increased cut-off values and increased cost and estimate variability, depending on the number of FL bidders involved. The FL bidder's expected benefit from unbalancing, on the other hand, increases when there are fewer bidders in the auction. It also increases when the cut-off rate and discount rate are increased, when there is less variability in the costs and their estimates, and when less data are used in setting the baseline values.
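As a hedged illustration of the cut-off style screening described above (in Python; the per-item baseline, taken here as the mean of all bidders' prices for that item, and the symmetric cut-off band are simplifying assumptions rather than the exact Texas DOT rule):

from typing import Dict, List

def flag_unbalanced(bids: Dict[str, Dict[str, float]], cutoff: float = 0.25) -> List[str]:
    """Flag any bid with an item priced outside the +/- cutoff band around that item's baseline."""
    items = next(iter(bids.values())).keys()
    baseline = {i: sum(b[i] for b in bids.values()) / len(bids) for i in items}
    flagged = []
    for name, bid in bids.items():
        if any(not (baseline[i] * (1 - cutoff) <= bid[i] <= baseline[i] * (1 + cutoff)) for i in items):
            flagged.append(name)
    return flagged

# A front-loaded bid overprices early items and underprices later ones.
bids = {
    "balanced":     {"earthworks": 100.0, "drainage": 100.0, "paving": 100.0},
    "front_loaded": {"earthworks": 150.0, "drainage": 100.0, "paving":  50.0},
}
print(flag_unbalanced(bids))   # ['balanced', 'front_loaded']: with only two bidders the balanced
                               # bid is also flagged, echoing the misdetection effect reported above.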
Abstract:
We find a robust relationship between motor vehicle ownership, its interaction with legal heritage, and obesity in OECD countries. Our estimates indicate that an increase of 100 motor vehicles per thousand residents is associated with about a 6 percentage point increase in obesity in common law countries, whereas it has a much smaller or insignificant impact in civil law countries. These relations hold whether we examine trend data and simple correlations, or conduct cross-section or panel data regression analysis. Our results suggest that obesity rises with motor vehicle ownership in countries following a common law tradition, where individual liberty is encouraged, whereas the link is small or statistically non-existent in countries with a civil law background, where the rights of the individual tend to be circumscribed by the power of the state.
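For illustration only, a Python sketch of the kind of specification implied above, using synthetic data and hypothetical variable names rather than the OECD panel: obesity is regressed on vehicle ownership, a common-law indicator and their interaction, so the interaction term carries the legal-heritage-specific slope.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "vehicles_per_1000": rng.uniform(200, 700, n),
    "common_law": rng.integers(0, 2, n),
})
# Synthetic outcome built so that ownership raises obesity mainly where common_law == 1.
df["obesity_rate"] = (10 + 0.06 * df.vehicles_per_1000 * df.common_law
                      + 0.005 * df.vehicles_per_1000 + rng.normal(0, 2, n))

model = smf.ols("obesity_rate ~ vehicles_per_1000 * common_law", data=df).fit()
print(model.summary().tables[1])   # the interaction coefficient is the common-law-specific slope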
Abstract:
In the field of face recognition, Sparse Representation (SR) has received considerable attention during the past few years. Most of the relevant literature focuses on holistic descriptors in closed-set identification applications. The underlying assumption in SR-based methods is that each class in the gallery has sufficient samples and that the query lies on the subspace spanned by the gallery of the same class. Unfortunately, such an assumption is easily violated in the more challenging face verification scenario, where an algorithm is required to determine whether two faces (where one or both have not been seen before) belong to the same person. In this paper, we first discuss why previous attempts with SR might not be applicable to verification problems. We then propose an alternative approach to face verification via SR. Specifically, we propose to use explicit SR encoding on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which are then concatenated to form an overall face descriptor. Due to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, we evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder Neural Network (SANN), and an implicit probabilistic technique based on Gaussian Mixture Models. Thorough experiments on the AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the proposed local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, in both verification and closed-set identification problems. The experiments also show that l1-minimisation based encoding has a considerably higher computational cost than the other techniques, but leads to higher recognition rates.
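A rough Python sketch of the local SR pipeline described above, with assumed patch and region sizes and scikit-learn's sparse_encode standing in for the l1-minimisation encoder: small patches are sparse-coded against a dictionary, the codes are average-pooled within each region, and the region descriptors are concatenated.

import numpy as np
from sklearn.decomposition import sparse_encode
from sklearn.feature_extraction.image import extract_patches_2d

def local_sr_descriptor(face, dictionary, grid=(3, 3), patch=(8, 8), alpha=0.1):
    """face: 2-D grayscale array; dictionary: (n_atoms, patch_h*patch_w), learned offline."""
    h, w = face.shape
    rh, rw = h // grid[0], w // grid[1]
    regions = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = face[i * rh:(i + 1) * rh, j * rw:(j + 1) * rw]
            patches = extract_patches_2d(block, patch).reshape(-1, patch[0] * patch[1])
            patches = patches - patches.mean(axis=1, keepdims=True)    # remove per-patch offset
            codes = sparse_encode(patches, dictionary, algorithm="lasso_lars", alpha=alpha)
            regions.append(np.abs(codes).mean(axis=0))                 # average pooling per region
    return np.concatenate(regions)                                     # overall face descriptor

# Shape check with a random (untrained) dictionary of 100 atoms for 8x8 patches:
rng = np.random.default_rng(0)
D = rng.normal(size=(100, 64))
print(local_sr_descriptor(rng.uniform(size=(64, 64)), D).shape)        # (900,) = 3x3 regions x 100 atoms

Verification would then threshold a distance (e.g. cosine) between two such descriptors.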
Abstract:
Background: A random QTL effects model uses a function of the probabilities that two alleles in the same or in different animals at a particular genomic position are identical by descent (IBD). Estimates of such IBD probabilities, and therefore the modeling and estimation of QTL variances, depend on marker polymorphism, the strength of linkage and linkage disequilibrium of markers and QTL, and the relatedness of animals in the pedigree. The effect of relatedness of animals in a pedigree on IBD probabilities and their characteristics was examined in a simulation study. Results: The study was based on nine multi-generational family structures, similar to the pedigree structure of a real dairy population, distinguished by a level of inbreeding increasing from zero to 28% across the studied population. The highest inbreeding level in the pedigree, associated with the highest relatedness, was accompanied by the highest IBD probabilities of two alleles at the same locus and by lower relative coefficients of variation. Profiles of the correlation coefficients of IBD probabilities along the marked chromosomal segment with those at the true QTL position were steepest when the inbreeding coefficient in the pedigree was highest. The precision of the estimated QTL location increased with increasing inbreeding and pedigree relatedness. A method to assess the optimum level of inbreeding for QTL detection is proposed, depending on population parameters. Conclusions: An increased overall relationship in a QTL mapping design has positive effects on the precision of QTL position estimates. However, the relationship between inbreeding level and the capacity for QTL detection, which depends on the recombination rate between the QTL and the adjacent informative marker, is not linear. © 2010 Freyer et al., licensee BioMed Central Ltd.
Abstract:
Several approaches have been introduced in the literature for active noise control (ANC) systems. Since the filtered-x least-mean-square (FxLMS) algorithm appears to be the best choice as a controller filter, researchers tend to improve the performance of ANC systems by enhancing and modifying this algorithm. As a first novelty, this paper proposes a new version of the FxLMS algorithm. In many ANC applications, an on-line secondary path modeling method that uses white noise as a training signal is required to ensure convergence of the system. As a second novelty, this paper proposes a new approach for on-line secondary path modeling based on a new variable-step-size (VSS) LMS algorithm in feedforward ANC systems. The proposed algorithm is designed so that the injection of white noise is stopped at the optimum point, when the modeling accuracy is sufficient. In this approach, a sudden change in the secondary path during operation makes the algorithm reactivate injection of the white noise to re-adjust the secondary path estimate. Comparative simulation results shown in this paper indicate the effectiveness of the proposed approach in reducing both narrow-band and broad-band noise. In addition, the proposed ANC system is robust against sudden changes in the secondary path model.
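For reference, a compact Python sketch of the standard fixed-step FxLMS loop that the proposed variable-step-size scheme builds on; the tap length, step size and paths are illustrative, and the secondary-path estimate is simply assumed to be available (here taken equal to the true path), whereas the paper obtains it on-line.

import numpy as np

def fxlms(x, d, s, s_hat, L=32, mu=0.01):
    """x: reference noise, d: disturbance at the error microphone, s: true secondary path,
    s_hat: its estimate. Returns the error-microphone signal over time."""
    w = np.zeros(L)                               # adaptive controller taps
    x_buf = np.zeros(max(L, len(s_hat)))          # reference-signal history
    y_buf = np.zeros(len(s))                      # controller-output history
    fx_buf = np.zeros(L)                          # filtered-reference history
    e = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        y = w @ x_buf[:L]                         # anti-noise sample
        y_buf = np.roll(y_buf, 1); y_buf[0] = y
        e[n] = d[n] - s @ y_buf                   # residual picked up at the error microphone
        fx = s_hat @ x_buf[:len(s_hat)]           # reference filtered through the path estimate
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx
        w += mu * e[n] * fx_buf                   # FxLMS weight update
    return e

# Toy run: a tone plus noise, hypothetical primary/secondary paths, perfect path estimate.
rng = np.random.default_rng(0)
t = np.arange(20000)
x = np.sin(2 * np.pi * 0.01 * t) + 0.1 * rng.normal(size=t.size)
d = np.convolve(x, 0.5 * rng.normal(size=16))[: t.size]    # primary-path disturbance
S = np.array([0.0, 0.8, 0.3, 0.1, 0.05])                    # secondary path
e = fxlms(x, d, S, S)
print(np.abs(e[:500]).mean(), np.abs(e[-500:]).mean())      # the residual should decay

The paper's VSS contribution concerns the step size of the separate secondary-path modeling filter and the gating of the white-noise injection; this fixed-mu loop is only the baseline controller.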
Abstract:
Emerging sciences, such as conceptual cost estimating, seem to have to go through two phases. The first phase involves reducing the field of study down to its basic ingredients: from systems development to technological development (techniques) to theoretical development. The second phase operates in the opposite direction, building up techniques from theories and systems from techniques. Cost estimating is clearly and distinctly still in the first phase. A great deal of effort has been put into the development of both manual and computer-based cost estimating systems during this first phase and, to a lesser extent, into the development of a range of techniques that can be used (see, for instance, Ashworth & Skitmore, 1986). Theoretical developments have not, as yet, been forthcoming. All theories need the support of some observational data, and cost estimating is not likely to be an exception. These data do not need to be complete in order to build theories. Just as it is possible to construct an image of a prehistoric animal such as the brontosaurus from only a few key bones and relics, so a theory of cost estimating may possibly be founded on a few factual details. The eternal argument of empiricists and deductionists is that, as theories need factual support, so we need theories in order to know what facts to collect. In cost estimating, the basic facts of interest concern accuracy, the cost of achieving this accuracy, and the trade-off between the two. When cost estimating theories do begin to emerge, it is highly likely that these relationships will be central features. This paper presents some of the facts we have been able to acquire regarding one part of this relationship: accuracy and its influencing factors. Although some of these factors, such as the amount of information used in preparing the estimate, will have cost consequences, we have not yet reached the stage of quantifying these costs. Indeed, as will be seen, many of the factors do not involve any substantial cost considerations. The absence of any theory is reflected in the arbitrary manner in which the factors are presented. Rather, the emphasis here is on the consideration of purely empirical data concerning estimating accuracy. The essence of good empirical research is to minimize the role of the researcher in interpreting the results of the study. Whilst space does not allow a full treatment of the material in this manner, the principle has been adopted as closely as possible to present the results in an uncleaned and unbiased way. In most cases the evidence speaks for itself. The first part of the paper reviews most of the empirical evidence that we have located to date. Knowledge of any work done but omitted here would be most welcome. The second part of the paper presents an analysis of some recently acquired data pertaining to this growing subject.
Abstract:
X-ray microtomography (micro-CT) with micron resolution enables new ways of characterizing microstructures and opens pathways for forward calculations of multiscale rock properties. A quantitative characterization of the microstructure is the first step in this challenge. We developed a new approach to extract scale-dependent characteristics of porosity, percolation, and anisotropic permeability from 3-D microstructural models of rocks. The Hoshen-Kopelman algorithm of percolation theory is employed for a standard percolation analysis. The anisotropy of permeability is calculated by means of the star volume distribution approach. The local porosity distribution and local percolation probability are obtained by using local porosity theory. Additionally, the local anisotropy distribution is defined and analyzed through two empirical probability density functions, the isotropy index and the elongation index. For such high-resolution data sets, the typical sizes of the CT images are on the order of gigabytes to tens of gigabytes, and an extremely large number of calculations is therefore required. To address this memory demand, parallelization with OpenMP was used to optimally harness the shared-memory infrastructure on cache-coherent Non-Uniform Memory Access architecture machines such as the iVEC SGI Altix 3700 Bx2 supercomputer. We see adequate visualization of the results as an important element in this first pioneering study.
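A small Python sketch of the porosity and spanning-cluster (percolation) check on a segmented 3-D pore image; scipy.ndimage.label is used here as a stand-in for the Hoshen-Kopelman labelling named above (both yield connected-component labels), and the volume is a toy example rather than micro-CT data.

import numpy as np
from scipy import ndimage

def porosity_and_percolation(pore, axis=0):
    """pore: 3-D boolean array, True = pore voxel. Returns (porosity, percolates along axis?)."""
    porosity = pore.mean()
    labels, _ = ndimage.label(pore)                  # 6-connected pore clusters by default
    inlet = np.unique(labels.take(0, axis=axis))     # cluster labels touching the inlet face
    outlet = np.unique(labels.take(-1, axis=axis))   # ...and the opposite face
    spanning = (set(inlet) & set(outlet)) - {0}      # label 0 is the solid phase
    return porosity, bool(spanning)

# Toy example: a straight pore channel through a solid block percolates along axis 0.
vol = np.zeros((32, 32, 32), dtype=bool)
vol[:, 15:17, 15:17] = True
print(porosity_and_percolation(vol))                 # (~0.004, True)

Local porosity and local percolation probability then follow by applying the same measurements to sub-volumes of increasing size.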
Abstract:
In this paper, shape design optimisation using morphing aerofoil/wing techniques, namely leading and/or trailing edge deformation of a natural laminar flow RAE 5243 aerofoil, is investigated to reduce transonic drag, without taking into account the piezo actuator mechanism. Two applications using a Multi-Objective Genetic Algorithm (MOGA) coupled with an Euler and boundary-layer analyser (MSES) are considered: the first example minimises the total drag with a lift constraint by optimising both the trailing edge actuator position and the trailing edge deformation angle at a constant transonic Mach number (M∞ = 0.75) and boundary layer transition position (xtr = 45%c). The second example consists of finding reliable designs that produce lower mean total drag (μCd) and drag sensitivity (σCd) under different uncertain flight conditions based on statistical information. Numerical results illustrate how the solution quality in terms of mean drag and its sensitivity can be improved using the MOGA software coupled with a robust design approach taking account of uncertainties (lift and boundary-layer transition position), and also how transonic flow over an aerofoil/wing can be controlled to best advantage using morphing techniques.
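As a hedged sketch of how the two robust-design objectives could be evaluated for a candidate shape (in Python; the sampling ranges are assumptions and drag_fn is a placeholder for the MSES analysis, which is not reproduced here):

import numpy as np

def robust_objectives(design, drag_fn, n_samples=64, seed=0):
    """Return (mean Cd, spread of Cd) for a design over sampled uncertain flight conditions."""
    rng = np.random.default_rng(seed)
    cl  = rng.uniform(0.6, 0.8, n_samples)        # uncertain lift requirement (assumed range)
    xtr = rng.uniform(0.40, 0.50, n_samples)      # uncertain transition position x/c (assumed range)
    cd = np.array([drag_fn(design, c, x) for c, x in zip(cl, xtr)])
    return cd.mean(), cd.std()                    # the two objectives handed to the MOGA

The MOGA then searches the morphing parameters (actuator position and deflection angle) for designs that are non-dominated with respect to both objectives.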
Abstract:
Previous research employing indirect measures of arch structure, such as those derived from footprints, has indicated that obesity results in a “flatter” foot type. In the absence of radiographic measures, however, definitive conclusions regarding the osseous alignment of the foot cannot be made. We determined the effect of body mass index (BMI) on radiographic and footprint-based measures of arch structure. The research was a cross-sectional study in which radiographic and footprint-based measures of foot structure were made in 30 subjects (10 male, 20 female), in addition to standard anthropometric measures of height, weight, and BMI. Multiple (univariate) regression analysis demonstrated that both BMI (β = 0.39, t26 = 2.12, p = 0.04) and radiographic arch alignment (β = 0.51, t26 = 3.32, p < 0.01) were significant predictors of footprint-based measures of arch height after controlling for all variables in the model (R² = 0.59, F3,26 = 12.3, p < 0.01). In contrast, radiographic arch alignment was not significantly associated with BMI (β = −0.03, t26 = −0.13, p = 0.89) when the Arch Index and age were held constant (R² = 0.52, F3,26 = 9.3, p < 0.01). Adult obesity does not influence osseous alignment of the medial longitudinal arch, but selectively distorts footprint-based measures of arch structure. Footprint-based measures of arch structure should be interpreted with caution when comparing groups of varying body composition.
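Purely as an illustration of the regression structure reported above (Python, synthetic data and hypothetical variable names; not the study's measurements): the footprint arch index is modelled from BMI, a radiographic alignment measure and age, and the radiographic measure is in turn modelled from BMI with the arch index and age held constant.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 30
df = pd.DataFrame({
    "bmi": rng.normal(27, 5, n),
    "radiographic_angle": rng.normal(20, 4, n),    # hypothetical osseous alignment measure
    "age": rng.normal(40, 12, n),
})
# Synthetic footprint measure influenced by both soft tissue (BMI) and osseous alignment.
df["arch_index"] = (0.20 + 0.004 * df.bmi - 0.003 * df.radiographic_angle
                    + rng.normal(0, 0.01, n))

footprint_model = smf.ols("arch_index ~ bmi + radiographic_angle + age", data=df).fit()
osseous_model   = smf.ols("radiographic_angle ~ bmi + arch_index + age", data=df).fit()
print(footprint_model.params)
print(osseous_model.params)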
Abstract:
Purpose: To present the results of a mixed-method study comparing the level of agreement of a two-phased, nurse-administered Comprehensive Geriatric Assessment (CGA) with current methods of assessing the fitness for chemotherapy of older cancer patients. A nurse-led model of multidisciplinary cancer care based on the results is also described. Methods: The two phases comprised initial screening by a nurse with the Vulnerable Elders Survey-13 (VES-13), followed by nurse administration of a detailed CGA. Both phases were linked to a computerised algorithm categorising the patient as ‘fit’, ‘vulnerable’ or ‘frail’. The study determined the level of agreement between the VES-13- and CGA-determined categories, and between the CGA and the physicians’ assessments. It also compared the CGA’s predictive ability in terms of subsequent treatment toxicity, while interviews determined the acceptability of the nurse-led procedure from key stakeholders' perspectives. Results: Data collection was completed in December 2011 and the results will be presented at the conference. A consecutive series of n = 170 patients will be enrolled, 33% of whom are ‘fit’, 33% ‘vulnerable’ and 33% ‘too frail’ for treatment. This sample can detect, with 90% power, kappa coefficients of agreement of 0.70 or higher (“substantial agreement”). Fitness sub-group comparisons of agreement between the medical oncologist and the nurse assessments can detect kappa estimates of κ ≥ 0.80 with the same power. Conclusion: The results have informed a nurse-led model of cancer care. It meets a clear need to develop, implement and test a nurse-led, robust, evidence-based, clinically justifiable and economically feasible CGA process that has relevance in national and international contexts.
Abstract:
This research makes a major contribution by enabling efficient searching and indexing of large archives of spoken audio based on speaker identity. It introduces a novel technique, dubbed “speaker attribution”, which is the task of automatically determining ‘who spoke when?’ in a recording and then linking the unique speaker identities within each recording across multiple recordings. The outcome of the research will also have a significant impact on improving the performance of automatic speech recognition systems through the extracted speaker identities.