916 results for approximated inference
Abstract:
The spatial and temporal distributions of penaeid and non-penaeid prawn larvae in the estuarine waters of Mangalore were studied. Larvae appear to be brought passively into the estuary by incoming flood tides and enjoy a wide distribution throughout the estuarine complex, with greatest abundance towards the mouth. Larval prawns were more abundant in the Nethravati stretch than in the Gurupur stretch. The influence of temperature, hydrogen-ion concentration, salinity and dissolved oxygen on the distribution of larvae in the estuaries is discussed. Inferences on the spawning seasons of commercially important prawns in the neighbouring waters are drawn from larval abundance.
Abstract:
We conducted phylogenetic analyses to identify the closest living relatives of the Xizang and Sichuan hot-spring snakes (T. baileyi and T. zhaoermii), endemic to the Tibetan Plateau, using mitochondrial DNA sequences (cyt b, ND4) from eight specimens.
Abstract:
The height-length relationship in Crassostrea madrasensis (Preston) showed an exponential trend of the form H = A·L^B. Deviations of actual values from the mean values were noticed as size increased. Height and length were approximately equal in oysters of less than 3.5 cm in height, resulting in an orbicular shape. In oysters of shell height 3.5 cm to 8 cm, height increases faster, leading to an oval shape. Above 8 cm in height, the oysters become further elongated. The height-length relation is non-linear with an index (B value) of 1.1156. A linear relationship also holds, as the B value is not very different from unity (H = -2.5424 + 2.0036L).
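As a worked illustration of the fitting described above, the sketch below estimates H = A·L^B by least squares in log-log space, alongside the linear alternative H = a + bL. The measurements are made-up placeholders, not the study's data.

```python
import numpy as np

# Hypothetical height/length measurements (cm); the paper's raw data are not given.
L = np.array([2.0, 3.0, 4.5, 6.0, 7.5, 9.0, 11.0])
H = np.array([2.1, 3.2, 5.0, 7.1, 9.2, 11.5, 14.8])

# Power-law fit H = A * L**B via least squares on log H = log A + B log L.
B, logA = np.polyfit(np.log(L), np.log(H), 1)
A = np.exp(logA)
print(f"H = {A:.4f} * L^{B:.4f}")  # B near 1 indicates near-isometric growth

# Linear alternative H = a + b*L, analogous to the paper's H = -2.5424 + 2.0036L.
b, a = np.polyfit(L, H, 1)
print(f"H = {a:.4f} + {b:.4f}*L")
```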
Restoration of images and 3D data to higher resolution by deconvolution with sparsity regularization
Abstract:
Image convolution is conventionally approximated by the LTI discrete model. It is well recognized that the higher the sampling rate, the better the approximation. However, images or 3D data are sometimes only available at a lower sampling rate due to physical constraints of the imaging system. In this paper, we model the under-sampled observation as the result of combining convolution and subsampling. Because the wavelet coefficients of piecewise smooth images tend to be sparse and well modelled by tree-like structures, we propose the L0 reweighted-L2 minimization (L0RL2) algorithm to solve this problem. The algorithm promotes model-based sparsity by minimizing the reweighted L2 norm, which approximates the L0 norm, and by enforcing a tree model over the weights. We test the algorithm on three examples: a simple ring, the cameraman image and a 3D microscope dataset, and show that good results can be obtained. © 2010 IEEE.
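The reweighted-L2 idea can be illustrated with a minimal sketch: the L0 penalty is approximated by iteratively reweighted L2 terms with weights 1/(x_i² + ε). The toy below applies this to a 1-D convolution-plus-subsampling model; the paper's tree-structured weighting over wavelet coefficients is omitted, and the signal, kernel and parameter choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, factor = 128, 2                      # signal length, subsampling factor
x_true = np.zeros(n)
x_true[[20, 50, 90]] = [1.0, -0.7, 0.5]  # sparse ground-truth signal

# Forward operator: circular convolution with a blur kernel, then subsampling.
kernel = np.exp(-0.5 * (np.arange(-4, 5) / 1.5) ** 2)
kernel /= kernel.sum()
C = np.array([np.roll(np.pad(kernel, (0, n - kernel.size)), i - 4) for i in range(n)])
A = C[::factor]                         # combined convolution + subsampling matrix
y = A @ x_true + 0.01 * rng.standard_normal(A.shape[0])

# L0-approximating IRLS: minimise ||y - A x||^2 + lam * sum_i w_i x_i^2,
# with weights w_i = 1 / (x_i^2 + eps) so the penalty mimics the L0 norm.
lam, eps = 1e-2, 1e-4
x = np.zeros(n)
for _ in range(30):
    W = np.diag(1.0 / (x ** 2 + eps))
    x = np.linalg.solve(A.T @ A + lam * W, A.T @ y)

print("recovered support:", np.where(np.abs(x) > 0.1)[0])
```

Here lam trades data fidelity against sparsity; larger values drive more coefficients to zero.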
Abstract:
In this study, various scalar dissipation rates and their modelling in the context of partially premixed flames are investigated. A DNS dataset of the near field of a turbulent hydrogen lifted jet flame is processed to analyse the mixture fraction and progress variable dissipation rates and their cross dissipation rate at several axial positions. It is found that the classical model for the passive scalar dissipation rate ε̃_ZZ gives good agreement with the DNS, while models developed based on premixed flames for the reactive scalar dissipation rate ε̃_cc only qualitatively capture the correct trend. The cross dissipation rate ε̃_cZ is mostly negative and can be reasonably approximated at downstream positions once ε̃_ZZ and ε̃_cc are known, although the sign cannot be determined. This approach gives better results than one employing a constant ratio of the turbulent timescale and the scalar covariance c′Z′. The statistics of scalar gradients are further examined, and lognormal distributions are shown to be very good approximations for the passive scalar and acceptable for the reactive scalar. The correlation between the two gradients increases downstream as the partially premixed flame in the near field evolves ultimately to a diffusion flame in the far field. A bivariate lognormal distribution is tested and found to be a reasonable approximation for the joint PDF of the two scalar gradients. © 2011 The Combustion Institute.
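A minimal sketch of the lognormality check mentioned above, using synthetic stand-in samples in place of the DNS gradient data: fit a lognormal by moments in log space, then assess the fit with a Kolmogorov-Smirnov test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic stand-in for |grad Z| samples from a DNS plane (actual data unavailable).
grad_z = np.abs(rng.lognormal(mean=-1.0, sigma=0.6, size=5000))

# Fit a lognormal by the method of moments in log space ...
mu, sigma = np.log(grad_z).mean(), np.log(grad_z).std()

# ... and check the fit with a Kolmogorov-Smirnov test against that lognormal.
d, p = stats.kstest(grad_z, "lognorm", args=(sigma, 0, np.exp(mu)))
print(f"mu={mu:.3f}, sigma={sigma:.3f}, KS statistic={d:.3f}, p={p:.3f}")
```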
Abstract:
Reinforcement techniques have been successfully used to maximise the expected cumulative reward of statistical dialogue systems. Typically, reinforcement learning is used to estimate the parameters of a dialogue policy which selects the system's responses based on the inferred dialogue state. However, the inference of the dialogue state itself depends on a dialogue model which describes the expected behaviour of a user when interacting with the system. Ideally, the parameters of this dialogue model should also be optimised to maximise the expected cumulative reward. This article presents two novel reinforcement algorithms for learning the parameters of a dialogue model. First, the Natural Belief Critic algorithm is designed to optimise the model parameters while the policy is kept fixed. This algorithm is suitable, for example, in systems using a handcrafted policy, perhaps prescribed by other design considerations. Second, the Natural Actor and Belief Critic algorithm jointly optimises both the model and the policy parameters. The algorithms are evaluated on a statistical dialogue system modelled as a Partially Observable Markov Decision Process in a tourist information domain. The evaluation is performed with a user simulator and with real users. The experiments indicate that model parameters estimated to maximise the expected reward function provide improved performance compared to the baseline handcrafted parameters. © 2011 Elsevier Ltd. All rights reserved.
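The sketch below is not the Natural Belief Critic itself, but a toy score-function (REINFORCE-style) estimator of the gradient of expected reward with respect to model parameters while the policy stays fixed, which is the core quantity such algorithms estimate; NBC additionally preconditions this gradient with the inverse Fisher information. All names and numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in: theta parameterises a categorical "dialogue model" over 3 user-act
# hypotheses; a fixed handcrafted policy earns reward 1 when the sampled hypothesis
# matches the true user act, else 0.  Everything here is illustrative.
theta = np.zeros(3)
true_act = 1
alpha = 0.5

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for step in range(500):
    probs = softmax(theta)
    act = rng.choice(3, p=probs)               # sample from the model
    reward = 1.0 if act == true_act else 0.0   # fixed policy's episode return
    # Score-function (REINFORCE) gradient: grad log p(act; theta) * reward.
    grad_logp = -probs
    grad_logp[act] += 1.0
    theta += alpha * reward * grad_logp        # plain-gradient step; NBC would
                                               # precondition with the inverse Fisher

print("learned model probabilities:", np.round(softmax(theta), 3))
```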
Abstract:
We present a video-based system which interactively captures the geometry of a 3D object in the form of a point cloud, then recognizes and registers known objects in this point cloud in a matter of seconds (fig. 1). In order to achieve interactive speed, we exploit both efficient inference algorithms and parallel computation, often on a GPU. The system can be broken down into two distinct phases: geometry capture, and object inference. We now discuss these in further detail. © 2011 IEEE.
Abstract:
External, prestressed carbon fiber reinforced polymer (CFRP) straps can be used to enhance the shear strength of existing reinforced concrete beams. In order to effectively design a strengthening system, a rational predictive theory is required. The current work investigates the ability of the modified compression field theory (MCFT) to predict the behavior of rectangular strap strengthened beams where the discrete CFRP strap forces are approximated as a uniform vertical stress. An unstrengthened control beam and two strengthened beams were tested to verify the predictions. The experimental results suggest that the MCFT could predict the general response of a strengthened beam with a uniform strap spacing < 0.9d. However, whereas the strengthened beams failed in shear, the MCFT predicted flexural failures. It is proposed that a different compression softening model or the inclusion of a crack width limit is required to reflect the onset of shear failures in the strengthened beams.
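One plausible reading of the smearing approximation above, with entirely hypothetical numbers, is to divide each strap force by its tributary area (strap spacing times web width); the paper's exact equations may differ.

```python
# Hypothetical smearing of discrete CFRP strap forces into a uniform vertical
# stress, one plausible reading of the approximation described above.
F_strap = 60e3        # force carried by one strap (N), illustrative
s = 0.25              # strap spacing along the beam (m), illustrative
b_w = 0.20            # beam web width (m), illustrative

# Equivalent uniform vertical stress over the strap's tributary area s * b_w.
sigma_v = F_strap / (s * b_w)
print(f"equivalent uniform vertical stress: {sigma_v / 1e6:.2f} MPa")
```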
Abstract:
A mixture of Gaussians fit to a single curved or heavy-tailed cluster will report that the data contains many clusters. To produce more appropriate clusterings, we introduce a model which warps a latent mixture of Gaussians to produce nonparametric cluster shapes. The possibly low-dimensional latent mixture model allows us to summarize the properties of the high-dimensional clusters (or density manifolds) describing the data. The number of manifolds, as well as the shape and dimension of each manifold, is automatically inferred. We derive a simple inference scheme for this model which analytically integrates out both the mixture parameters and the warping function. We show that our model is effective for density estimation, performs better than infinite Gaussian mixture models at recovering the true number of clusters, and produces interpretable summaries of high-dimensional datasets.
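A hedged sketch of the generative idea: draw points from a low-dimensional latent Gaussian mixture and push them through a smooth nonlinear warp, so the observed clusters become curved. The paper integrates out a Gaussian-process warp; here a fixed quadratic warp stands in for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Latent 2-D mixture of two Gaussians.
z = np.concatenate([
    rng.normal([-2.0, 0.0], 0.4, size=(200, 2)),
    rng.normal([+2.0, 0.0], 0.4, size=(200, 2)),
])

# A fixed smooth warp standing in for the (integrated-out) GP warp of the model:
# it bends each latent Gaussian cluster into a curved, non-Gaussian shape.
def warp(z):
    x = z[:, 0]
    y = z[:, 1] + 0.5 * z[:, 0] ** 2    # quadratic bend
    return np.column_stack([x, y])

x = warp(z)                              # observed, warped data
print("observed-space means:", x.mean(axis=0))
```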
Abstract:
A number of recent scientific and engineering problems require signals to be decomposed into a product of a slowly varying positive envelope and a quickly varying carrier whose instantaneous frequency also varies slowly over time. Although signal processing provides algorithms for so-called amplitude- and frequency-demodulation (AFD), there are well-known problems with all of the existing methods. Motivated by the fact that AFD is ill-posed, we approach the problem using probabilistic inference. The new approach, called probabilistic amplitude and frequency demodulation (PAFD), models instantaneous frequency using an auto-regressive generalization of the von Mises distribution, and the envelopes using Gaussian auto-regressive dynamics with a positivity constraint. A novel form of expectation propagation is used for inference. We demonstrate that although PAFD is computationally demanding, it outperforms previous approaches on synthetic and real signals in clean, noisy and missing data settings.
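A loose generative sketch of the model class described, under stated assumptions: phase increments with von Mises perturbations around a slowly drifting instantaneous frequency, and a positive envelope obtained by passing Gaussian AR(1) dynamics through an exponential. The expectation-propagation inference itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 2000

# Instantaneous frequency: slow auto-regressive drift around a carrier frequency,
# with small von Mises-distributed perturbations on the phase increments.
omega = np.empty(T)
omega[0] = 0.2
for t in range(1, T):
    omega[t] = 0.999 * omega[t - 1] + 0.001 * 0.2 + 0.002 * rng.standard_normal()
phase = np.cumsum(omega + 0.01 * rng.vonmises(0.0, 200.0, size=T))

# Positive envelope: Gaussian AR(1) dynamics mapped through exp() for positivity.
u = np.empty(T)
u[0] = 0.0
for t in range(1, T):
    u[t] = 0.98 * u[t - 1] + 0.05 * rng.standard_normal()
envelope = np.exp(u)

signal = envelope * np.cos(phase)   # observed signal = envelope x carrier
print("signal RMS:", np.sqrt(np.mean(signal ** 2)))
```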
Abstract:
This paper tackles the novel challenging problem of 3D object phenotype recognition from a single 2D silhouette. To bridge the large pose (articulation or deformation) and camera viewpoint changes between the gallery images and query image, we propose a novel probabilistic inference algorithm based on 3D shape priors. Our approach combines both generative and discriminative learning. We use latent probabilistic generative models to capture 3D shape and pose variations from a set of 3D mesh models. Based on these 3D shape priors, we generate a large number of projections for different phenotype classes, poses, and camera viewpoints, and implement Random Forests to efficiently solve the shape and pose inference problems. By model selection in terms of the silhouette coherency between the query and the projections of 3D shapes synthesized using the galleries, we achieve the phenotype recognition result as well as a fast approximate 3D reconstruction of the query. To verify the efficacy of the proposed approach, we present new datasets which contain over 500 images of various human and shark phenotypes and motions. The experimental results clearly show the benefits of using the 3D priors in the proposed method over previous 2D-based methods. © 2011 IEEE.
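The abstract does not specify the "silhouette coherency" measure; a common and plausible stand-in is intersection-over-union between binary silhouette masks, as sketched below.

```python
import numpy as np

def silhouette_coherency(query: np.ndarray, projection: np.ndarray) -> float:
    """Intersection-over-union between two binary silhouette masks.

    One plausible realisation of the 'silhouette coherency' used for model
    selection; the paper's exact measure may differ.
    """
    q, p = query.astype(bool), projection.astype(bool)
    union = np.logical_or(q, p).sum()
    return np.logical_and(q, p).sum() / union if union else 1.0

# Toy masks: two overlapping rectangles.
a = np.zeros((64, 64)); a[10:40, 10:40] = 1
b = np.zeros((64, 64)); b[20:50, 20:50] = 1
print(f"coherency = {silhouette_coherency(a, b):.3f}")
```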
Abstract:
In this paper, we consider Bayesian interpolation and parameter estimation in a dynamic sinusoidal model. This model is more flexible than the static sinusoidal model since it enables the amplitudes and phases of the sinusoids to be time-varying. For the dynamic sinusoidal model, we derive a Bayesian inference scheme for the missing observations, hidden states and model parameters of the dynamic model. The inference scheme is based on a Markov chain Monte Carlo method known as the Gibbs sampler. We illustrate the performance of the inference scheme in the application of packet-loss concealment of lost audio and speech packets. © EURASIP, 2010.
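A minimal generative sketch of a dynamic sinusoidal model, assuming in-phase/quadrature amplitudes that follow AR(1) state dynamics so that amplitude and phase vary over time; the Gibbs sampler used for inference is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
T, freqs = 1000, [0.05, 0.12]            # samples and normalised frequencies

# Each sinusoid has slowly varying in-phase/quadrature amplitudes (a, b)
# following AR(1) state dynamics, making amplitude and phase time-varying.
x = np.zeros(T)
for f in freqs:
    a, b = np.zeros(T), np.zeros(T)
    a[0], b[0] = rng.standard_normal(2)
    for t in range(1, T):
        a[t] = 0.999 * a[t - 1] + 0.02 * rng.standard_normal()
        b[t] = 0.999 * b[t - 1] + 0.02 * rng.standard_normal()
    n = np.arange(T)
    x += a * np.cos(2 * np.pi * f * n) + b * np.sin(2 * np.pi * f * n)

x += 0.05 * rng.standard_normal(T)       # observation noise
print("dynamic sinusoid mixture, std =", x.std().round(3))
```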
Abstract:
Pavement condition assessment is essential when developing road network maintenance programs. In practice, the data collection process is to a large extent automated. However, pavement distress detection (cracks, potholes, etc.) is mostly performed manually, which is labor-intensive and time-consuming. Existing methods either rely on complete 3D surface reconstruction, which comes with high equipment and computation costs, or make use of acceleration data, which can only provide preliminary and rough condition surveys. In this paper, we present a method for automated pothole detection in asphalt pavement images. In the proposed method, an image is first segmented into defect and non-defect regions using histogram shape-based thresholding. Based on the geometric properties of a defect region, the potential pothole shape is approximated using morphological thinning and elliptic regression. Subsequently, the texture inside a potential defect shape is extracted and compared with the texture of the surrounding non-defect pavement in order to determine whether the region of interest represents an actual pothole. This methodology has been implemented in a MATLAB prototype, trained and tested on 120 pavement images. The results show that this method can detect potholes in asphalt pavement images with reasonable accuracy.
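Two steps of this pipeline can be sketched in a few lines, assuming synthetic image data: histogram shape-based (Otsu) thresholding to isolate candidate defect pixels, then a simple texture comparison (grey-level standard deviation) between the candidate region and the surrounding pavement. The thinning and elliptic-regression steps are omitted.

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> int:
    """Histogram shape-based threshold maximising between-class variance."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class probabilities
    mu = np.cumsum(p * np.arange(256))         # cumulative means
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

rng = np.random.default_rng(6)
img = rng.normal(170, 12, (100, 100))              # synthetic bright pavement
img[40:70, 30:60] = rng.normal(80, 25, (30, 30))   # darker, rougher "pothole"
img = np.clip(img, 0, 255).astype(np.uint8)

t = otsu_threshold(img)
defect = img < t                                   # dark pixels as defect candidates
# Texture check: a real pothole region is rougher than the surrounding pavement.
print("threshold:", t,
      "defect std:", img[defect].std().round(1),
      "pavement std:", img[~defect].std().round(1))
```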
Abstract:
Choosing a project manager for a construction project, particularly a large one, is a critical project decision. The selection process involves different criteria and should be in accordance with company policies and project specifications. Traditionally, potential candidates are interviewed and the most qualified are selected in compliance with company priorities and project conditions. Precise computing models that could take various candidates' information into consideration and then pinpoint the most qualified person with a high degree of accuracy would be beneficial. On the basis of the opinions of experienced construction company managers, this paper, through presenting a fuzzy system, identifies the important criteria in selecting a project manager. The proposed fuzzy system is based on IF-THEN rules; a genetic algorithm improves the overall accuracy as well as the functions used by the fuzzy system, and fuzzy c-means clustering provides the initial estimates of the cluster centers. Moreover, a back-propagation neural network method was used to train the system. The optimal values of the inference parameters were identified by calculating the system's output error and propagating this error within the system. After the system parameters were specified, the membership function parameters, which were approximated by means of clustering and projection, were tuned with the genetic algorithm. Results from this system in selecting project managers show its high capability in making high-quality personnel predictions.
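The fuzzy c-means step mentioned above, which supplies the initial cluster-center estimates, can be sketched as follows; the candidate-feature data are illustrative placeholders, and the GA tuning and back-propagation training are omitted.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Standard fuzzy c-means; returns cluster centers and memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))     # u_ik proportional to d^(-2/(m-1))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy candidate-feature data (e.g. scored interview criteria), illustrative only.
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(2, 0.5, (30, 2)), rng.normal(6, 0.5, (30, 2))])
centers, U = fuzzy_c_means(X, c=2)
print("initial cluster centers:\n", centers.round(2))
```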
Abstract:
Manual inspection is required to determine the condition of damaged buildings after an earthquake. The lack of available inspectors, combined with the large volume of inspection work, makes such inspection subjective and time-consuming. The required inspections take weeks to complete, which has adverse economic and societal impacts on the affected population. This paper proposes an automated framework for rapid post-earthquake building evaluation. Under the framework, the visible damage (cracks and buckling) inflicted on concrete columns is first detected. The damage properties are then measured in relation to the column's dimensions and orientation, so that the column's load-bearing capacity can be approximated as a damage index. The column damage index, supplemented with other building information (e.g. structural type and column arrangement), is then used to query fragility curves of similar buildings, constructed from analyses of existing and ongoing experimental data. The query estimates the probability of the building being in different damage states. The framework is expected to automate the collection of building damage data, to provide a quantitative assessment of the building damage state, and to estimate the vulnerability of the building to collapse in the event of an aftershock. Videos and manual assessments of structures after the 2010 earthquake in Haiti are used to test parts of the framework.
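A hedged sketch of the final framework step, mapping a column damage index to damage-state probabilities via lognormal fragility curves; the medians and dispersions below are hypothetical placeholders, not values from the study.

```python
from math import erf, log, sqrt

def lognormal_fragility(x, median, beta):
    """Probability of reaching a damage state, lognormal CDF in the damage index."""
    return 0.5 * (1.0 + erf(log(x / median) / (beta * sqrt(2.0))))

# Hypothetical fragility parameters per damage state: (median index, dispersion).
fragility = {"slight": (0.2, 0.5), "moderate": (0.5, 0.5), "severe": (0.8, 0.5)}

damage_index = 0.55   # illustrative output of the crack/buckling measurements
for state, (median, beta) in fragility.items():
    p = lognormal_fragility(damage_index, median, beta)
    print(f"P(reaching {state} state) = {p:.2f}")
```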