324 results for robust extended Kalman filter
Abstract:
Novel, highly chlorinated surface coatings were produced via a one-step plasma polymerization (pp) of 1,1,1-trichloroethane (TCE), exhibiting excellent antimicrobial properties against the vigorously biofilm-forming bacterium Staphylococcus epidermidis.
Abstract:
In the field of face recognition, sparse representation (SR) has received considerable attention during the past few years, with a focus on holistic descriptors in closed-set identification applications. The underlying assumption in such SR-based methods is that each class in the gallery has sufficient samples and the query lies on the subspace spanned by the gallery of the same class. Unfortunately, such an assumption is easily violated in the face verification scenario, where the task is to determine if two faces (where one or both have not been seen before) belong to the same person. In this study, the authors propose an alternative approach to SR-based face verification, where SR encoding is performed on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which then form an overall face descriptor. Owing to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, they evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder Neural Network (SANN) and an implicit probabilistic technique based on Gaussian mixture models. Thorough experiments on AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, on both the traditional closed-set identification task and the more applicable face verification task. The experiments also show that l1-minimisation-based encoding has a considerably higher computational cost when compared with SANN-based and probabilistic encoding, but leads to higher recognition rates.
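As a rough sketch of the local SR pipeline described above (our illustration, not the authors' code, with patch size, region grid and the dictionary D all assumed): patches are extracted per region, sparsely encoded against the dictionary, and average-pooled so that spatial order within a region is deliberately discarded. A greedy orthogonal matching pursuit encoder stands in for the l1/SANN/probabilistic encoders evaluated in the paper.

    import numpy as np

    def extract_patches(region, patch=8, step=4):
        # slide a patch x patch window over the region; contrast-normalise
        H, W = region.shape
        out = []
        for i in range(0, H - patch + 1, step):
            for j in range(0, W - patch + 1, step):
                p = region[i:i + patch, j:j + patch].ravel().astype(float)
                out.append((p - p.mean()) / (p.std() + 1e-8))
        return np.array(out)

    def sparse_encode(X, D, n_nonzero=5):
        # greedy orthogonal matching pursuit; D holds dictionary atoms in rows
        codes = np.zeros((len(X), len(D)))
        for n, x in enumerate(X):
            residual, support = x.copy(), []
            for _ in range(n_nonzero):
                support.append(int(np.argmax(np.abs(D @ residual))))
                A = D[support].T
                coef, *_ = np.linalg.lstsq(A, x, rcond=None)
                residual = x - A @ coef
            codes[n, support] = coef
        return np.abs(codes)

    def face_descriptor(face, D, grid=4):
        # average-pool the sparse codes within each region (losing spatial
        # order, hence the robustness to misalignment), then concatenate
        H, W = face.shape
        regions = [face[i * H // grid:(i + 1) * H // grid,
                        j * W // grid:(j + 1) * W // grid]
                   for i in range(grid) for j in range(grid)]
        return np.concatenate([sparse_encode(extract_patches(r), D).mean(axis=0)
                               for r in regions])

Two descriptors can then be compared with, e.g., a cosine or l2 distance and thresholded for the verification decision.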
Abstract:
Population size is crucial when estimating population-normalized drug consumption (PNDC) from wastewater-based drug epidemiology (WBDE). Three conceptually different population estimates can be used: de jure (common census, residence), de facto (all persons within a sewer catchment), and chemical loads (contributors to the sampled wastewater). The de facto and chemical loads populations will coincide where all households contribute to a central sewer system without wastewater loss. This study explored the feasibility of determining a de facto population and its effect on estimating PNDC in an urban community over an extended period. Drugs and other chemicals were analyzed in 311 daily composite wastewater samples. The daily estimated de facto population (using chemical loads) was on average 32% higher than the de jure population. Consequently, using the latter would systematically overestimate PNDC by 22%. However, the relative day-to-day pattern of drug consumption was similar regardless of the type of normalization, as daily illicit drug loads appeared to vary substantially more than the population. Using the chemical loads population, we objectively quantified the total methodological uncertainty of PNDC and reduced it by a factor of 2. Our study illustrates the potential benefits of using a chemical loads population for obtaining more robust PNDC data in WBDE.
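The dependence of PNDC on the population choice reduces to a simple ratio. In the notation below (ours, not the paper's), L_d is the measured daily drug load and P_d a population estimate for day d:

    \mathrm{PNDC}_d = \frac{L_d}{P_d},
    \qquad
    \frac{\mathrm{PNDC}_d^{\text{de jure}}}{\mathrm{PNDC}_d^{\text{chem. loads}}}
    = \frac{P_d^{\text{chem. loads}}}{P_d^{\text{de jure}}}

Dividing the same load by the smaller de jure population therefore inflates the per-capita estimate by exactly that day's population ratio; averaging the daily figures over the 311 samples yields the overestimate reported above.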
Abstract:
Analysing wastewater samples is an innovative approach that overcomes many limitations of traditional surveys, making it possible to identify and measure a range of chemicals consumed by, or exposing, people living in a sewer catchment area. Since it was first conceptualised in 2001, much progress has been made in making wastewater analysis (WWA) a reliable and robust tool for measuring chemical consumption and/or exposure. At present, the most popular application of WWA, sometimes referred to as sewage epidemiology, is monitoring the consumption of illicit drugs in communities around the globe, including China. The approach has been widely adopted by law enforcement agencies as a device for monitoring the temporal and geographical patterns of drug consumption. In the future, the methodology can be extended to other chemicals, including biomarkers of population health (e.g. environmental or oxidative stress biomarkers, lifestyle indicators, or medications taken by different demographic groups) and pollutants that people are exposed to (e.g. polycyclic aromatic hydrocarbons, perfluorinated chemicals, and toxic pesticides). The extension of WWA to such a wide range of chemicals may give rise to a field called sewage chemical-information mining (SCIM), with as-yet unexplored potential. China has many densely populated cities and thousands of sewage treatment plants, which are favourable conditions for applying WWA/SCIM to help relevant authorities gather information about illicit drug consumption and population health status. However, some prerequisites and uncertainties of the methodology should be addressed for SCIM to reach its full potential in China.
Abstract:
This thesis investigates the use of fusion techniques and mathematical modelling to increase the robustness of iris recognition systems against iris image quality degradation, pupil size changes and partial occlusion. The proposed techniques improve recognition accuracy and enhance security, and can be further developed for better iris recognition in less constrained environments that do not require user cooperation. A framework for analysing the consistency of different regions of the iris is also developed; it can be applied to improve recognition systems that use partial iris images, as well as cancelable biometric signatures and biometric-based cryptography for privacy protection.
Abstract:
This thesis has investigated how to cluster a large number of faces within a multi-media corpus in the presence of large session variation. Quality metrics are used to select the best faces to represent a sequence of faces; and session variation modelling improves clustering performance in the presence of wide variations across videos. Findings from this thesis contribute to improving the performance of both face verification systems and the fully automated clustering of faces from a large video corpus.
Abstract:
There are numerous load estimation methods available, some of which are captured in various online tools. However, most estimators are subject to large statistical biases, and their associated uncertainties are often not reported. This makes interpretation difficult and renders the assessment of trends or of optimal sampling regimes impossible. In this paper, we first propose two indices for measuring the extent of sampling bias, and then provide steps for obtaining reliable load estimates by minimizing the biases and making use of possible predictive variables. The load estimation procedure can be summarized in four steps:
(i) output the flow rates at regular time intervals (e.g. 10 minutes) using a time series model that captures all the peak flows;
(ii) output the predicted flow rates, as in (i), at the concentration sampling times, if the corresponding flow rates were not collected;
(iii) establish a predictive model for the concentration data that incorporates all available predictor variables, and output the predicted concentrations at the regular time intervals as in (i); and
(iv) sum the products of predicted flow and predicted concentration over the regular time intervals to obtain an estimate of the load.
The key step in this approach is the development of an appropriate predictive model for concentration. This is achieved using a generalized regression (rating-curve) approach with additional predictors that capture unique features of the flow data, namely the first flush, the location of the event on the hydrograph (e.g. rising or falling limb), and cumulative discounted flow; the latter may be thought of as a measure of constituent exhaustion occurring during flood events (a sketch of such a model follows this abstract). The model also has the capacity to accommodate autocorrelation in the model errors that results from intensive sampling during floods. Incorporating this additional information can significantly improve the predictability of concentration, and ultimately the precision with which the pollutant load is estimated. We also provide a measure of the standard error of the load estimate that incorporates model, spatial and/or temporal errors, and the method can further incorporate measurement error incurred through the sampling of flow. We illustrate the approach using concentrations of total suspended sediment (TSS) and nitrogen oxides (NOx), together with gauged flow data, from the Burdekin River, a catchment delivering to the Great Barrier Reef. The sampling biases for NOx concentrations range from 2- to 10-fold, indicating severe bias. As expected, the traditional average and extrapolation methods produce much higher estimates than those obtained when the sampling bias is taken into account.
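A minimal sketch of steps (iii)-(iv), assuming flow q and concentration c are already aligned on a regular grid and that a log-log rating curve with rising-limb and exhaustion predictors is adequate; the predictor set, the discount factor rho and the time step are assumptions, not the paper's specification (which also models autocorrelated errors and reports standard errors):

    import numpy as np

    def cumulative_discounted_flow(q, rho=0.95):
        # exhaustion proxy: geometrically discounted running sum of past flow
        cdf = np.zeros_like(q, dtype=float)
        for t in range(1, len(q)):
            cdf[t] = rho * cdf[t - 1] + q[t - 1]
        return cdf

    def design_matrix(q):
        rising = np.r_[0.0, np.sign(np.diff(q))].clip(0)  # 1 on rising limb
        return np.column_stack([np.ones_like(q), np.log(q), rising,
                                np.log1p(cumulative_discounted_flow(q))])

    def fit_rating_curve(q_sampled, c_sampled):
        # step (iii): regress log-concentration on the flow features
        beta, *_ = np.linalg.lstsq(design_matrix(q_sampled),
                                   np.log(c_sampled), rcond=None)
        return beta

    def estimate_load(q_regular, beta, dt=600.0):
        # step (iv): sum predicted flow x predicted concentration x interval
        c_hat = np.exp(design_matrix(q_regular) @ beta)
        return float(np.sum(q_regular * c_hat * dt))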
Abstract:
Robust estimation often relies on a dispersion function that varies more slowly at large values than the square function. However, the choice of tuning constant in the dispersion function can substantially affect estimation efficiency. For a given family of dispersion functions, such as the Huber family, we suggest obtaining the "best" tuning constant from the data so that the asymptotic efficiency is maximized. This data-driven approach automatically adjusts the value of the tuning constant to provide the necessary resistance against outliers. Simulation studies show that substantial efficiency gains can be achieved by this data-dependent approach compared with the traditional approach in which the tuning constant is fixed. We briefly illustrate the proposed method using two datasets.
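A minimal sketch of such a data-driven choice for the location case (our construction, not necessarily the authors'): the asymptotic variance of a Huber M-estimator is E[psi^2] / (E[psi'])^2, so maximizing efficiency amounts to minimizing that sandwich ratio, estimated here from median/MAD-standardised residuals over a grid of candidate constants.

    import numpy as np

    def huber_psi(r, c):
        return np.clip(r, -c, c)

    def best_tuning_constant(resid, grid=np.linspace(0.5, 3.0, 26)):
        # standardise by median/MAD, then pick the constant minimising the
        # empirical sandwich variance E[psi^2] / (E[psi'])^2
        m = np.median(resid)
        s = 1.4826 * np.median(np.abs(resid - m))
        r = (resid - m) / s
        def asy_var(c):
            dpsi = max((np.abs(r) <= c).mean(), 1e-8)  # mean of psi'(r)
            return np.mean(huber_psi(r, c) ** 2) / dpsi ** 2
        return float(min(grid, key=asy_var))

On a heavy-tailed sample the selected constant typically comes out smaller (more resistance) than on a near-Gaussian one, which is the automatic adjustment described above.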
Abstract:
Robust methods are useful for making reliable statistical inferences when there are small deviations from the model assumptions. The widely used method of generalized estimating equations can be "robustified" by replacing the standardized residuals with M-residuals. However, whereas the Pearson residuals have zero expectation, the M-residuals generally do not when the error distributions are not symmetric, and so the parameter estimators from the robust approach are asymptotically biased. We propose a distribution-free method for correcting this bias. Our extensive numerical studies show that the proposed method can reduce the bias substantially. Examples are given for illustration.
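One simple distribution-free correction of this kind, sketched below for a linear model (our illustration; the paper's estimator for full generalized estimating equations may differ), centres the M-residuals by their empirical mean so that the estimating function has expectation zero even under asymmetric errors. X is assumed to hold covariates only, since the centring leaves an intercept unidentified:

    import numpy as np

    def huber_psi(r, c=1.345):
        return np.clip(r, -c, c)

    def robust_slopes_bias_corrected(X, y, c=1.345, iters=100):
        # solve sum_i x_i [psi(r_i) - bhat] = 0, bhat = mean_i psi(r_i);
        # the centring removes the asymptotic bias caused by E[psi] != 0
        n, p = X.shape
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares start
        for _ in range(iters):
            r = y - X @ beta
            s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
            psi = huber_psi(r / s, c)
            score = X.T @ (psi - psi.mean())           # centred score
            w = (np.abs(r / s) <= c).astype(float)     # psi'(r) as weights
            H = (X * w[:, None]).T @ X                 # approximate Jacobian
            delta = s * np.linalg.solve(H + 1e-8 * np.eye(p), score)
            beta = beta + delta
            if np.linalg.norm(delta) < 1e-10:
                break
        return beta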
Abstract:
Over the past few decades, practitioners and researchers have focused considerable attention on developing efficient methods for solving dynamic facility layout problems. Meta-heuristic algorithms in particular, especially the genetic algorithm, have proven increasingly helpful for generating sub-optimal solutions to large-scale dynamic facility layout problems. Nevertheless, the uncertainty of the manufacturing factors, in addition to the scale of the layout problem, calls for a mixed genetic algorithm-robust approach that can provide a single layout design valid across all periods. The present research devises a customized permutation-based robust genetic algorithm for dynamic manufacturing environments that generates a single robust layout for all the manufacturing periods. The numerical results of the proposed robust genetic algorithm indicate significant cost improvements compared with conventional genetic algorithm methods and a selection of other heuristic and meta-heuristic techniques.
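A toy sketch of such a permutation-based robust GA (our reconstruction, not the paper's implementation): a candidate solution is one permutation assigning facilities to locations, its fitness is the material-handling cost summed over every period's flow matrix, and order crossover plus swap mutation keep offspring valid permutations. Here flows is a list of per-period facility-to-facility flow matrices and dist the location-to-location distance matrix.

    import numpy as np

    rng = np.random.default_rng(0)

    def robust_cost(perm, flows, dist):
        # cost of ONE fixed layout, summed over all planning periods
        pos = np.empty_like(perm)
        pos[perm] = np.arange(len(perm))       # facility -> assigned location
        D = dist[np.ix_(pos, pos)]
        return sum(float(np.sum(F * D)) for F in flows)

    def order_crossover(p1, p2):
        n = len(p1)
        a, b = sorted(rng.choice(n, 2, replace=False))
        child = -np.ones(n, dtype=int)
        child[a:b] = p1[a:b]                   # keep a slice of parent 1
        child[child < 0] = [g for g in p2 if g not in p1[a:b]]
        return child                           # rest filled in parent-2 order

    def robust_ga(flows, dist, pop=60, gens=300, pmut=0.2):
        n = dist.shape[0]
        P = [rng.permutation(n) for _ in range(pop)]
        for _ in range(gens):
            P.sort(key=lambda s: robust_cost(s, flows, dist))
            elite = P[:pop // 2]               # truncation selection
            kids = []
            while len(elite) + len(kids) < pop:
                i, j = rng.choice(len(elite), 2, replace=False)
                ch = order_crossover(elite[i], elite[j])
                if rng.random() < pmut:        # swap mutation
                    u, v = rng.choice(n, 2, replace=False)
                    ch[u], ch[v] = ch[v], ch[u]
                kids.append(ch)
            P = elite + kids
        return min(P, key=lambda s: robust_cost(s, flows, dist))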
Abstract:
This study examined an aspect of adolescent writing development, specifically whether teaching secondary school students to use strategies to enhance succinctness in their essays changed the grammatical sophistication of their sentences. A quasi-experimental intervention was used to compare changes in syntactic complexity and lexical density between one-draft and polished essays. No link was demonstrated between the intervention and the changes. A thematic analysis of teacher interviews explored links between changes to student texts and teaching approaches. The study has implications for making syntactic complexity an explicit goal of student drafting.
Abstract:
Sustainable implementation of new workforce redesign initiatives requires strategies that minimize barriers and optimize supports. Such strategies could be provided by a set of guiding principles. A broad understanding of the concerns of all the key stakeholder groups is required before effective strategies and initiatives are developed. Many new workforce redesign initiatives are not underpinned by prior planning, and this threatens their uptake and sustainability. This study reports on a cross-sectional qualitative study that sought the perspectives of representatives of key stakeholders in a new workforce redesign initiative (extended-scope-of-practice physiotherapy) in one Australian tertiary hospital. The key stakeholder groups were those that had been involved in some way in the development, management, training, funding, and/or delivery of the initiative. Data were collected using semistructured questions, answered individually by interview or in writing. Responses were themed collaboratively, using descriptive analysis. Key identified themes comprised the importance of service marketing; proactively addressing barriers; using readily understood nomenclature; demonstrating service quality and safety, monitoring adverse events, and measuring health and cost outcomes; legislative issues; registration; promoting viable career pathways; developing, accrediting, and delivering a curriculum supporting physiotherapists to work outside the usual scope; and progression from "a good idea" to an established service. Health care facilities planning to implement new workforce initiatives that extend the scope of usual practice should consider these issues before instigating workforce/model-of-care changes. © 2014 Morris et al.
Abstract:
Inspired by the high porosity, absorbency, wettability and micro/nanoscale hierarchical ordering of cotton fabrics, a facile strategy is developed to coat visible-light-active metal nanostructures of copper and silver onto cotton fabric substrates. The fabrication of nanostructured Ag and Cu onto the interwoven threads of a cotton fabric by electroless deposition creates metal nanostructures that show a localized surface plasmon resonance (LSPR) effect. The micro/nanoscale hierarchical ordering of the cotton fabrics allows access to catalytically active sites that participate in heterogeneous catalysis with high efficiency. The ability of the metals to absorb visible light through LSPR further enhances the catalytic reaction rates under photoexcitation. Electrochemical measurements of the mode of electron transfer during visible-light illumination in Ag@Cotton and Cu@Cotton provide, for the first time, mechanistic evidence of how light promotes electron transfer during heterogeneous catalysis. The outcomes presented in this work will be helpful in designing new multifunctional fabrics that absorb visible light and thereby enhance light-activated catalytic processes.
Abstract:
The family of location and scale mixtures of Gaussians can generate a number of flexible distributional forms. It nests as particular cases several important asymmetric distributions, such as the Generalized Hyperbolic distribution, which in turn nests many other well-known distributions such as the Normal Inverse Gaussian. In a multivariate setting, an extension of the standard location and scale mixture concept to a so-called multiple scaled framework is proposed; this has the advantage of allowing different tail and skewness behaviour in each dimension, with arbitrary correlation between dimensions. Estimation of the parameters is provided via an EM algorithm, and is extended to cover mixtures of such multiple scaled distributions for application to clustering. Assessments on simulated and real data confirm the gain in degrees of freedom and flexibility in modelling data of varying tail behaviour and directional shape.
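One way to write the multiple scaled construction (notation ours, assuming a Gaussian scale mixture and the eigendecomposition Sigma = D A D^T): each eigen-direction receives its own positive scale variable W_m instead of a single shared one,

    p(\mathbf{x}) = \int_{(0,\infty)^d}
        \mathcal{N}\!\left(\mathbf{x};\, \boldsymbol{\mu},\,
        \mathbf{D}\,\mathrm{diag}\!\left(\frac{A_1}{w_1},\dots,\frac{A_d}{w_d}\right)
        \mathbf{D}^{\top}\right)
        \prod_{m=1}^{d} f_{W_m}(w_m)\, \mathrm{d}\mathbf{w},

so the mixing densities f_{W_m} may differ across dimensions, giving dimension-specific tail and skewness behaviour while D still carries the correlation between dimensions.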
Abstract:
We propose a family of multivariate heavy-tailed distributions that allows variable marginal amounts of tailweight. The originality comes from introducing multidimensional rather than univariate scale variables for the mixture of scaled Gaussian family of distributions. In contrast to most existing approaches, the derived distributions can account for a variety of shapes and have a simple tractable form, with a closed-form probability density function regardless of dimension. We examine a number of properties of these distributions and illustrate them in the particular cases of Pearson type VII and t tails. For the latter cases, we provide maximum likelihood estimation of the parameters and illustrate their modelling flexibility on simulated and real data clustering examples.
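In the t case, for instance, the multiple scaled mixture integrates in closed form: with rotated coordinates y = D^T(x - mu) and dimension-specific degrees of freedom nu_m (notation ours, a sketch of the construction rather than the paper's exact parameterization),

    p(\mathbf{x}) = \prod_{m=1}^{d}
        t\!\left( \left[\mathbf{D}^{\top}(\mathbf{x}-\boldsymbol{\mu})\right]_m ;\,
        0,\, A_m,\, \nu_m \right),

a product of univariate t densities in the eigenbasis, which is what keeps the density tractable in any dimension while letting each direction carry its own tailweight nu_m.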