906 results for State estimation
Abstract:
This study examined the technical efficiency of artisanal fisheries in Lagos State, Nigeria. The study employed a two-stage random sampling procedure for the selection of 120 respondents. The analytical techniques involved descriptive statistics and estimation of technical efficiency following the maximum likelihood estimation (MLE) procedure available in FRONTIER 4.1. The MLE result of the stochastic frontier production function showed that hired labour, cost of repair and capital items are critical factors that influence the productivity of artisanal fishermen, with the coefficient of hired labour being highly elastic. This implies that employing more labour will significantly increase the catch in the study area. The predicted farm efficiency, with an average value of 0.92, showed that there is a marginal potential of about 8 percent to increase the catch, and hence the income of the fishermen. The study further examined the factors that influence the technical efficiency of fishermen in the study area: years of education, mode of operation and frequency of fishing all have important implications for technical efficiency.
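For context, FRONTIER 4.1 estimates stochastic frontier models of the standard Battese-Coelli form; a sketch of the Cobb-Douglas specification with generic regressors (not the paper's exact variable list):

```latex
% Stochastic frontier production function:
%   v_i ~ N(0, \sigma_v^2) is random noise; u_i >= 0 captures inefficiency.
\ln y_i = \beta_0 + \sum_k \beta_k \ln x_{ik} + v_i - u_i,
\qquad \mathrm{TE}_i = e^{-u_i} \in (0, 1]
% An average TE of 0.92 leaves roughly 8 percent of potential output unrealised.
```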
Abstract:
In this paper, a strategy for min-max Moving Horizon Estimation (MHE) of a class of uncertain hybrid systems is proposed. The hybrid systems considered are Piecewise Affine (PWA) systems with both continuous-valued and logic components. Furthermore, we consider the case where there is a (possibly structured) norm-bounded uncertainty in each subsystem. Sufficient conditions on the time horizon and on the penalties on the state at the beginning of the estimation horizon are provided to guarantee convergence of the MHE scheme. The MHE scheme is implemented as a mixed-integer semidefinite optimisation, for which an efficient algorithm was recently introduced.
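As background, a hedged sketch of the generic moving horizon estimation objective over a window of length N; the paper's min-max variant additionally maximises this cost over the norm-bounded uncertainty before minimising:

```latex
% Generic MHE: trade off an arrival cost \Gamma on the initial state
% of the window against the measurement residuals along the window.
\min_{\hat{x}_{t-N}, \dots, \hat{x}_t}
  \; \Gamma(\hat{x}_{t-N})
  + \sum_{k=t-N}^{t} \lVert y_k - h(\hat{x}_k) \rVert^2
\quad \text{s.t. } \hat{x}_{k+1} = f(\hat{x}_k) + w_k
```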
Abstract:
Reinforcement learning techniques have been successfully used to maximise the expected cumulative reward of statistical dialogue systems. Typically, reinforcement learning is used to estimate the parameters of a dialogue policy which selects the system's responses based on the inferred dialogue state. However, the inference of the dialogue state itself depends on a dialogue model which describes the expected behaviour of a user when interacting with the system. Ideally, the parameters of this dialogue model should also be optimised to maximise the expected cumulative reward. This article presents two novel reinforcement learning algorithms for learning the parameters of a dialogue model. First, the Natural Belief Critic algorithm is designed to optimise the model parameters while the policy is kept fixed. This algorithm is suitable, for example, for systems using a handcrafted policy, perhaps prescribed by other design considerations. Second, the Natural Actor and Belief Critic algorithm jointly optimises both the model and the policy parameters. The algorithms are evaluated on a statistical dialogue system modelled as a Partially Observable Markov Decision Process in a tourist information domain. The evaluation is performed with a user simulator and with real users. The experiments indicate that model parameters estimated to maximise the expected reward function provide improved performance compared to the baseline handcrafted parameters. © 2011 Elsevier Ltd. All rights reserved.
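The "natural" in both algorithm names refers to the natural gradient, which preconditions the ordinary policy gradient by the inverse Fisher information matrix (a standard form; in the paper the same idea is also applied to the dialogue-model parameters):

```latex
% Natural gradient ascent on the expected cumulative reward J(\theta):
\theta_{k+1} = \theta_k + \alpha \, F(\theta_k)^{-1} \nabla_\theta J(\theta_k),
\qquad
F(\theta) = \mathbb{E}\!\left[ \nabla_\theta \log \pi_\theta \, (\nabla_\theta \log \pi_\theta)^{\top} \right]
```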
Abstract:
We present a new haplotype-based approach for inferring local genetic ancestry of individuals in an admixed population. Most existing approaches for local ancestry estimation ignore the latent genetic relatedness between ancestral populations and treat them as independent. In this article, we exploit such information by building an inheritance model that describes both the ancestral populations and the admixed population jointly in a unified framework. Based on an assumption that the common hypothetical founder haplotypes give rise to both the ancestral and the admixed population haplotypes, we employ an infinite hidden Markov model to characterize each ancestral population and further extend it to generate the admixed population. Through an effective utilization of the population structural information under a principled nonparametric Bayesian framework, the resulting model is significantly less sensitive to the choice and the amount of training data for ancestral populations than state-of-the-art algorithms. We also improve the robustness under deviation from common modeling assumptions by incorporating population-specific scale parameters that allow variable recombination rates in different populations. Our method is applicable to an admixed population from an arbitrary number of ancestral populations and also performs competitively in terms of spurious ancestry proportions under a general multiway admixture assumption. We validate the proposed method by simulation under various admixing scenarios and present empirical analysis results from a worldwide-distributed dataset from the Human Genome Diversity Project.
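For readers unfamiliar with the machinery, local ancestry inference with an HMM ultimately reduces to computing posterior ancestry probabilities along the genome. A minimal forward-backward sketch with toy, hypothetical matrices (the paper's nonparametric model is considerably richer):

```python
import numpy as np

def forward_backward(log_emit, trans, init):
    """Posterior ancestry probability per site for a toy finite HMM.

    log_emit: (L, K) log-likelihood of each site's alleles per ancestry
    trans:    (K, K) per-site ancestry transition matrix, rows sum to 1
    init:     (K,)   initial ancestry proportions
    """
    L, K = log_emit.shape
    emit = np.exp(log_emit - log_emit.max(axis=1, keepdims=True))
    fwd = np.zeros((L, K))
    bwd = np.ones((L, K))
    fwd[0] = init * emit[0]
    fwd[0] /= fwd[0].sum()
    for t in range(1, L):                      # forward recursion
        fwd[t] = emit[t] * (fwd[t - 1] @ trans)
        fwd[t] /= fwd[t].sum()
    for t in range(L - 2, -1, -1):             # backward recursion
        bwd[t] = trans @ (emit[t + 1] * bwd[t + 1])
        bwd[t] /= bwd[t].sum()
    post = fwd * bwd
    return post / post.sum(axis=1, keepdims=True)

# Toy usage: 5 sites, 2 ancestral populations, sticky transitions.
post = forward_backward(
    log_emit=np.log([[0.9, 0.1], [0.8, 0.2], [0.5, 0.5],
                     [0.2, 0.8], [0.1, 0.9]]),
    trans=np.array([[0.98, 0.02], [0.02, 0.98]]),
    init=np.array([0.5, 0.5]),
)
```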
Abstract:
Manual inspection is required to determine the condition of damaged buildings after an earthquake. The lack of available inspectors, combined with the large volume of inspection work, makes such inspection subjective and time-consuming. The required inspections can take weeks to complete, which has adverse economic and societal impacts on the affected population. This paper proposes an automated framework for rapid post-earthquake building evaluation. Under the framework, the visible damage (cracks and buckling) inflicted on concrete columns is first detected. The damage properties are then measured in relation to the column's dimensions and orientation, so that the column's load-bearing capacity can be approximated as a damage index. The column damage index, supplemented with other building information (e.g. structural type and column arrangement), is then used to query fragility curves of similar buildings, constructed from analyses of existing and ongoing experimental data. The query estimates the probability of the building being in different damage states. The framework is expected to automate the collection of building damage data, to provide a quantitative assessment of the building damage state, and to estimate the vulnerability of the building to collapse in the event of an aftershock. Videos and manual assessments of structures after the 2010 earthquake in Haiti are used to test parts of the framework.
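Fragility curves of the kind queried here are conventionally parameterised as lognormal CDFs; a standard form (the paper's curves are constructed from experimental column data rather than this closed form):

```latex
% Probability of reaching or exceeding damage state ds given a
% damage index DI, with median capacity \theta_{ds} and lognormal
% dispersion \beta_{ds}; \Phi is the standard normal CDF.
P(\mathrm{DS} \ge ds \mid \mathrm{DI}) =
\Phi\!\left( \frac{\ln(\mathrm{DI}/\theta_{ds})}{\beta_{ds}} \right)
```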
Abstract:
Obtaining accurate confidence measures for automatic speech recognition (ASR) transcriptions is an important task which stands to benefit from the use of multiple information sources. This paper investigates the application of conditional random field (CRF) models as a principled technique for combining multiple features from such sources. A novel method for combining suitably defined features is presented, allowing for confidence annotation using lattice-based features of hypotheses other than the lattice 1-best. The resulting framework is applied to different stages of a state-of-the-art large vocabulary speech recognition pipeline, and consistent improvements are shown over a sophisticated baseline system. Copyright © 2011 ISCA.
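The linear-chain CRF underlying such feature combination models the confidence label sequence c given observations x in the standard form, with feature functions f_k drawing on the multiple information sources:

```latex
% Linear-chain CRF; Z(x) is the partition function and the weights
% \lambda_k are learned to combine the heterogeneous features.
p(c \mid x) = \frac{1}{Z(x)}
\exp\!\left( \sum_{t} \sum_{k} \lambda_k \, f_k(c_{t-1}, c_t, x, t) \right)
```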
Abstract:
Quantile regression refers to the process of estimating the quantiles of a conditional distribution and has many important applications within econometrics and data mining, among other domains. In this paper, we show how to estimate these conditional quantile functions within a Bayes risk minimization framework using a Gaussian process prior. The resulting non-parametric probabilistic model is easy to implement and allows non-crossing quantile functions to be enforced. Moreover, it can directly be used in combination with tools and extensions of standard Gaussian Processes such as principled hyperparameter estimation, sparsification, and quantile regression with input-dependent noise rates. No existing approach enjoys all of these desirable properties. Experiments on benchmark datasets show that our method is competitive with state-of-the-art approaches. © 2009 IEEE.
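The Bayes-risk view rests on the standard quantile ("pinball") loss, whose expected-loss minimiser is the conditional τ-quantile (a standard identity rather than anything specific to this paper):

```latex
% Pinball loss at quantile level \tau \in (0,1); minimising its
% conditional expectation over q recovers the \tau-quantile of y given x.
\ell_\tau(y, q) = \max\bigl( \tau\,(y - q), \; (\tau - 1)(y - q) \bigr),
\qquad
q_\tau^*(x) = \arg\min_q \, \mathbb{E}\bigl[ \ell_\tau(y, q) \mid x \bigr]
```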
Abstract:
The task of word-level confidence estimation (CE) for automatic speech recognition (ASR) systems stands to benefit from the combination of suitably defined input features from multiple information sources. However, the information sources of interest may not necessarily operate at the same level of granularity as the underlying ASR system. The research described here builds on previous work on confidence estimation for ASR systems using features extracted from word-level recognition lattices, by incorporating information at the sub-word level. Furthermore, the use of Conditional Random Fields (CRFs) with hidden states is investigated as a technique to combine information for word-level CE. Performance improvements are shown using the sub-word-level information in linear-chain CRFs with appropriately engineered feature functions, as well as when applying the hidden-state CRF model at the word level.
Abstract:
Hip fracture is the leading cause of acute orthopaedic hospital admission amongst the elderly, with around a third of patients not surviving one year post-fracture. Although various preventative therapies are available, patient selection is difficult. The current state-of-the-art risk assessment tool (FRAX) ignores focal structural defects, such as cortical bone thinning, a critical component in characterizing hip fragility. Cortical thickness can be measured using CT, but this is expensive and involves a significant radiation dose. Instead, Dual-Energy X-ray Absorptiometry (DXA) is currently the preferred imaging modality for assessing hip fracture risk and is used routinely in clinical practice. Our ambition is to develop a tool to measure cortical thickness using multi-view DXA instead of CT. In this initial study, we work with digitally reconstructed radiographs (DRRs) derived from CT data as a surrogate for DXA scans: this enables us to compare the thickness estimates directly with the gold-standard CT results. Our approach involves a model-based femoral shape reconstruction followed by a data-driven algorithm to extract numerous cortical thickness point estimates. In a series of experiments on the shaft and trochanteric regions of 48 proximal femurs, we validated our algorithm and established its performance limits using 20 views in the range 0°-171°: estimation errors were 0.19 ± 0.53 mm (mean ± one standard deviation). In a more clinically viable protocol using four views in the range 0°-51°, where no other bony structures obstruct the projection of the femur, measurement errors were -0.07 ± 0.79 mm. © 2013 SPIE.
Abstract:
This work addresses the challenging problem of unconstrained 3D human pose estimation (HPE) from a novel perspective. Existing approaches struggle to operate in realistic applications, mainly due to their scene-dependent priors, such as background segmentation and multi-camera networks, which restrict their use in unconstrained environments. We therefore present a framework which applies action detection and 2D pose estimation techniques to infer 3D poses in an unconstrained video. Action detection offers spatiotemporal priors to 3D human pose estimation by both recognising and localising actions in space-time. Instead of holistic features, e.g. silhouettes, we leverage the flexibility of the deformable part model to detect 2D body parts as a feature for estimating 3D poses. A new unconstrained pose dataset has been collected to demonstrate the feasibility of our method, which shows promising results, significantly outperforming the relevant state-of-the-art methods. © 2013 IEEE.
Abstract:
Traditional approaches to upper body pose estimation using monocular vision rely on complex body models and a large variety of geometric constraints. We argue that this is not ideal and somewhat inelegant as it results in large processing burdens, and instead attempt to incorporate these constraints through priors obtained directly from training data. A prior distribution covering the probability of a human pose occurring is used to incorporate likely human poses. This distribution is obtained offline, by fitting a Gaussian mixture model to a large dataset of recorded human body poses, tracked using a Kinect sensor. We combine this prior information with a random walk transition model to obtain an upper body model, suitable for use within a recursive Bayesian filtering framework. Our model can be viewed as a mixture of discrete Ornstein-Uhlenbeck processes, in that states behave as random walks, but drift towards a set of typically observed poses. This model is combined with measurements of the human head and hand positions, using recursive Bayesian estimation to incorporate temporal information. Measurements are obtained using face detection and a simple skin colour hand detector, trained using the detected face. The suggested model is designed with analytical tractability in mind and we show that the pose tracking can be Rao-Blackwellised using the mixture Kalman filter, allowing for computational efficiency while still incorporating bio-mechanical properties of the upper body. In addition, the use of the proposed upper body model allows reliable three-dimensional pose estimates to be obtained indirectly for a number of joints that are often difficult to detect using traditional object recognition strategies. Comparisons with Kinect sensor results and the state of the art in 2D pose estimation highlight the efficacy of the proposed approach.
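A minimal sketch of the kind of transition model described, with a discrete Ornstein-Uhlenbeck step drifting toward a typical pose drawn from the GMM prior (hypothetical parameters; the full model adds head/hand measurement updates and Rao-Blackwellisation):

```python
import numpy as np

rng = np.random.default_rng(0)

def ou_mixture_step(x, means, weights, lam=0.1, sigma=0.05):
    """One transition of a mixture of discrete OU processes:
    a random walk that drifts toward a typically observed pose.

    x:       (D,) current pose vector
    means:   (K, D) GMM component means (typical poses)
    weights: (K,) mixture weights
    lam:     mean-reversion rate; sigma: random-walk noise scale
    """
    k = rng.choice(len(weights), p=weights)   # pick a typical pose
    drift = lam * (means[k] - x)              # mean reversion toward it
    return x + drift + sigma * rng.standard_normal(x.shape)

# Toy usage: a 4-dof upper-body pose drifting between two typical poses.
means = np.array([[0.0, 0.0, 0.0, 0.0], [1.0, 0.5, -0.5, 1.0]])
pose = np.zeros(4)
for _ in range(100):
    pose = ou_mixture_step(pose, means, weights=np.array([0.5, 0.5]))
```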
Abstract:
For steady-state heat conduction, a new variational functional for a unit cell of composites with periodic microstructures is constructed by considering the quasi-periodicity of the temperature field and the periodicity of the heat flux field. Then, by combining this with an eigenfunction expansion of the complex potential which satisfies the fiber-matrix interface conditions, an eigenfunction expansion-variational method (EEVM) based on a unit cell is developed. The effective transverse thermal conductivities of doubly-periodic fiber-reinforced composites are calculated, and a first-order approximation formula for the square and hexagonal arrays is presented, which is convenient for engineering applications. The numerical results show good convergence of the presented method, even when the fiber volume fraction is relatively high. Comparisons with existing analytical and experimental results demonstrate the accuracy and validity of the first-order approximation formula for the hexagonal array.
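The quasi-periodicity referred to here is the usual homogenisation decomposition of the unit-cell temperature field; a generic sketch (the paper builds its variational functional and complex-potential expansion on top of this):

```latex
% Temperature in the unit cell: a linear part driven by the macroscopic
% gradient G plus a periodic fluctuation \tilde{T}; the effective
% transverse conductivity relates the cell-averaged flux and gradient.
T(\mathbf{x}) = \mathbf{G} \cdot \mathbf{x} + \tilde{T}(\mathbf{x}),
\qquad
\langle \mathbf{q} \rangle = -\, k^{\mathrm{eff}} \mathbf{G}
```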
Abstract:
We show that diffusion can play an important role in protein-folding kinetics. We explicitly calculate the diffusion coefficient of protein folding in a lattice model, and find that it is typically configuration- or reaction-coordinate-dependent. The diffusion coefficient decreases as folding progresses toward the native state, because the collapse to a compact state constrains the configurational space available for exploration. This configuration- or position-dependent diffusion coefficient contributes significantly to the kinetics, in addition to the thermodynamic free-energy barrier. It effectively changes (increases, in this case) the kinetic barrier height as well as the position of the corresponding transition state, and therefore modifies the folding kinetic rates as well as the kinetic routes. The resulting folding time, obtained by considering both kinetic diffusion and the thermodynamic folding free-energy profile, is thus longer than the estimate from the thermodynamic free-energy barrier with constant diffusion, but is consistent with the results from kinetic simulations. Configuration- or coordinate-dependent diffusion is especially important for fast folding, when there is a small or no free-energy barrier and kinetics is controlled by diffusion. Including this configurational dependence will challenge the transition state theory of protein folding.
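The effect of a coordinate-dependent diffusion coefficient enters through the standard mean first-passage-time expression along the folding coordinate Q (a textbook result; the paper evaluates D(Q) explicitly from the lattice model):

```latex
% Mean first passage time from the unfolded state Q_u to the folded
% state Q_f over free-energy profile F(Q) with diffusion D(Q):
\tau = \int_{Q_u}^{Q_f} \frac{e^{\beta F(Q)}}{D(Q)} \, dQ
       \int_{Q_u}^{Q} e^{-\beta F(Q')} \, dQ',
\qquad \beta = 1/k_B T
% A smaller D(Q) near the native state lengthens \tau even if F(Q) is unchanged.
```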
Abstract:
The three scaling parameters of Sanchez-Lacombe lattice fluid theory (SLLFT), T*, P* and ρ*, for pure polystyrene (PS), pure poly(2,6-dimethyl-1,4-phenylene oxide) (PPO) and their mixtures are obtained by fitting the corresponding experimental pressure-volume-temperature data with the SLLFT equation of state. A modified combining rule for the volume per mer, v*, of the PS/PPO mixtures is advanced, and the enthalpy of mixing and the Flory-Huggins (FH) interaction parameter are calculated using the new rule. It is found that the difference between the new rule and the old one presented by Sanchez and Lacombe is quite small in the calculation of the enthalpy of mixing and the FH interaction parameter, and that the effect of the volume-combining rule on the calculated thermodynamic properties is much smaller than that of the energy-combining rule. However, the relative value of the interaction parameter changes considerably under the new volume-based combining rule. This can shift the position of the phase diagram substantially, as reported elsewhere [Macromolecules 34 (2001) 6291].
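For reference, the SLLFT equation of state being fitted, written in the reduced variables (T̃, P̃, ρ̃) = (T/T*, P/P*, ρ/ρ*) (standard Sanchez-Lacombe form):

```latex
% Sanchez-Lacombe equation of state; r is the number of lattice
% sites occupied by a molecule.
\tilde{\rho}^2 + \tilde{P} + \tilde{T}\left[ \ln(1 - \tilde{\rho})
  + \left(1 - \frac{1}{r}\right)\tilde{\rho} \right] = 0
```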
Abstract:
The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. We ask a fundamental question: what is the basic power of TCP to predict network state, including wireless error conditions? The goal is to improve or readily exploit this predictive power to enable TCP (or variants) to perform well in generalized network settings. To that end, we use maximum likelihood ratio tests to evaluate TCP as a detector/estimator. We quantify how well network state can be estimated, given network responses such as distributions of packet delays or TCP throughput that are conditioned on the type of packet loss. Using our model-based approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient detector can be built; that distributions of network loads can provide effective means for estimating packet loss type; and that packet delay is a better signal of network state than short-term throughput. We demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses, and by penalties on incorrect estimation.
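A minimal sketch of the detector idea, classifying a loss as congestion- or wireless-induced from the packet delay observed around the loss via a likelihood ratio test. The delay distributions below are hypothetical placeholders; the paper conditions on empirically derived statistics:

```python
from scipy import stats

# Hypothetical delay models: congestion losses coincide with queue
# build-up (high delay); wireless losses are load-independent, so
# delays stay nearer the propagation baseline.
delay_given_congestion = stats.norm(loc=120.0, scale=25.0)  # ms
delay_given_wireless = stats.norm(loc=60.0, scale=15.0)     # ms

def classify_loss(delay_ms, prior_congestion=0.5):
    """Likelihood ratio test on the delay observed around a loss."""
    lr = (delay_given_wireless.pdf(delay_ms)
          / delay_given_congestion.pdf(delay_ms))
    threshold = prior_congestion / (1.0 - prior_congestion)
    return "wireless" if lr > threshold else "congestion"

print(classify_loss(55.0))   # low delay  -> "wireless"
print(classify_loss(130.0))  # high delay -> "congestion"
```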