919 results for Gaussian quadrature formulas.
Abstract:
Transport processes within heterogeneous media may exhibit non-classical diffusion or dispersion; that is, behaviour not adequately described by the classical theory of Brownian motion and Fick's law. We consider a space-fractional advection-dispersion equation based on a fractional Fick's law. The equation involves the Riemann-Liouville fractional derivative, which arises from assuming that particles may make large jumps. Finite difference methods for solving this equation have been proposed by Meerschaert and Tadjeran. In the variable-coefficient case, the product rule is first applied, and the Riemann-Liouville fractional derivatives are then discretised using standard and shifted Grünwald formulas, depending on the fractional order. In this work, we consider a finite volume method that deals directly with the equation in conservative form. Fractionally-shifted Grünwald formulas are used to discretise the fractional derivatives at control volume faces. We compare the two methods on several case studies from the literature, highlighting the convenience of the finite volume approach.
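As an illustration of the discretisation underlying both methods, the following sketch approximates a left Riemann-Liouville derivative of order alpha with a shifted Grünwald formula; the grid, test function and shift are our own illustrative choices, not taken from the paper.

```python
# A minimal sketch (not the authors' code) of a shifted Grunwald-Letnikov
# approximation to the left Riemann-Liouville fractional derivative of
# order alpha (1 < alpha < 2); grid, test function and shift are illustrative.
import numpy as np
from scipy.special import gamma

def grunwald_weights(alpha, n):
    """Weights g_k = (-1)^k * binom(alpha, k), computed by recurrence."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    return g

def shifted_grunwald(f_vals, alpha, h, shift=1):
    """Approximate D^alpha f at each grid point using a shift of `shift`."""
    n = len(f_vals)
    g = grunwald_weights(alpha, n)
    d = np.zeros(n)
    for i in range(n):
        for k in range(i + shift + 1):
            j = i - k + shift          # shifted index into the grid
            if 0 <= j < n:
                d[i] += g[k] * f_vals[j]
    return d / h**alpha

# Check against the exact result D^alpha x^2 = 2 x^(2-alpha) / gamma(3-alpha).
alpha, h = 1.8, 1e-3
x = np.arange(0.0, 1.0 + 2 * h, h)    # one extra node for the shifted stencil
approx = shifted_grunwald(x**2, alpha, h)
exact = 2 * x**(2 - alpha) / gamma(3 - alpha)
i = len(x) - 2                         # evaluate at x = 1.0
print(abs(approx[i] - exact[i]))       # O(h) truncation error
```

The recurrence for the weights avoids evaluating overflow-prone binomial coefficients directly.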
Abstract:
A quasi-maximum likelihood procedure for estimating the parameters of multi-dimensional diffusions is developed, in which the transitional density is a multivariate Gaussian density whose first and second moments approximate the true moments of the unknown density. For affine drift and diffusion functions, the moments are exactly those of the true transitional density, and for nonlinear drift and diffusion functions the approximation is extremely good and as effective as alternative methods based on likelihood approximations. The estimation procedure generalises to models with latent factors. A conditioning procedure is developed that allows parameter estimation in the absence of proxies.
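To make the procedure concrete, here is a minimal sketch of Gaussian quasi-maximum likelihood for a scalar Ornstein-Uhlenbeck diffusion, the simplest affine case, where the conditional moments are exact; the parameter values and optimiser choice are illustrative assumptions, not the paper's.

```python
# A minimal sketch (assumed example, not the paper's code): Gaussian
# quasi-maximum likelihood for the scalar Ornstein-Uhlenbeck diffusion
# dX = kappa*(theta - X) dt + sigma dW, whose conditional mean and
# variance are known exactly because drift and diffusion are affine.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, x, dt):
    kappa, theta, sigma = params
    if kappa <= 0 or sigma <= 0:
        return np.inf
    e = np.exp(-kappa * dt)
    mean = theta + (x[:-1] - theta) * e           # exact conditional mean
    var = sigma**2 * (1 - e**2) / (2 * kappa)     # exact conditional variance
    resid = x[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid**2 / var)

# Simulate a path with known parameters, then re-estimate them.
rng = np.random.default_rng(0)
kappa, theta, sigma, dt, n = 2.0, 1.0, 0.5, 0.1, 5000
x = np.empty(n); x[0] = theta
e = np.exp(-kappa * dt)
sd = sigma * np.sqrt((1 - e**2) / (2 * kappa))
for t in range(1, n):
    x[t] = theta + (x[t - 1] - theta) * e + sd * rng.standard_normal()

fit = minimize(neg_log_lik, x0=[1.0, 0.5, 1.0], args=(x, dt),
               method="Nelder-Mead")
print(fit.x)  # should be close to (2.0, 1.0, 0.5)
```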
Abstract:
Metabonomics, a new "omics" technique, shows promise for validating Chinese medicines and the compatibility of Chinese formulas. The present study explored the excretion pattern of low-molecular-mass metabolites in a male Wistar-derived rat model of kidney yin deficiency induced with thyroxine and reserpine, as well as the therapeutic effect of Liu Wei Di Huang Wan (LW), a classic traditional Chinese medicine formula for treating kidney yin deficiency in China, and its separated prescriptions. The study utilized ultra-performance liquid chromatography/electrospray ionization synapt high-definition mass spectrometry (UPLC/ESI-SYNAPT-HDMS) in both negative and positive electrospray ionization (ESI) modes. At the same time, blood biochemistry was examined to identify specific changes associated with kidney yin deficiency. Distinct changes in the pattern of metabolites, resulting from daily administration of thyroxine and reserpine, were observed by UPLC-HDMS combined with principal component analysis (PCA). According to the PCA score plots, the changes in metabolic profiling were restored towards baseline values after treatment with LW. Altogether, the metabonomic approach based on UPLC-HDMS and orthogonal projection to latent structures discriminant analysis (OPLS-DA) identified 20 ions (14 in the negative mode, 8 in the positive mode, and 2 in both) as "differentiating metabolites".
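For readers unfamiliar with the multivariate step, the sketch below shows the PCA score computation on a simulated stand-in for a metabolite intensity table; the group structure, dimensions and data are hypothetical, not the study's measurements.

```python
# A minimal, hypothetical sketch of the PCA step: rows are rat urine
# samples (control, model, LW-treated), columns are UPLC-HDMS ion
# intensities; the data here are simulated stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_per_group, n_ions = 8, 200
control = rng.normal(0.0, 1.0, (n_per_group, n_ions))
model   = rng.normal(0.8, 1.0, (n_per_group, n_ions))   # shifted metabolome
treated = rng.normal(0.2, 1.0, (n_per_group, n_ions))   # partially restored

X = np.vstack([control, model, treated])
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores.shape)  # (24, 2): coordinates for a PC1-vs-PC2 score plot
```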
Abstract:
This paper develops analytical distributions of the temperature indices on which temperature derivatives are written. If the deviations of daily temperatures from their expected values are modelled as an Ornstein-Uhlenbeck process with time-varying variance, then the distribution of the temperature index on which the derivative is written is the sum of truncated, correlated Gaussian deviates. The key result of this paper is an analytical approximation to the distribution of this sum, allowing accurate computation of payoffs without the need for any simulation. A data set comprising more than a hundred years of average daily temperatures for four Australian cities is used to demonstrate the efficacy of this approach for estimating the payoffs to temperature derivatives. It is demonstrated that expected payoffs computed directly from historical records are a particularly poor approach to the problem when there are trends in the underlying average daily temperature, and that the proposed analytical approach is superior to historical pricing.
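A simplified sketch of the pricing idea follows: each day's temperature is truncated at a threshold, the per-day moments are accumulated, and a Gaussian approximation to the index gives a closed-form expected payoff. The seasonal mean, variance and strike are invented here, and inter-day correlation (which the paper's approximation handles analytically) is ignored for brevity.

```python
# A simplified, hypothetical sketch of pricing off a degree-day index:
# daily temperature T_t ~ N(mu_t, s_t^2), the index is sum_t max(T_t - K, 0),
# and the index is approximated as Gaussian via its per-day moments.
# Correlation across days, handled by the paper, is ignored here.
import numpy as np
from scipy.stats import norm

K = 18.0                                        # degree-day threshold (deg C)
days = np.arange(90)
mu = 22 + 3 * np.sin(2 * np.pi * days / 365)    # illustrative seasonal mean
s = 2.5 + 0.5 * np.cos(2 * np.pi * days / 365)  # time-varying std dev

z = (mu - K) / s
m1 = s * norm.pdf(z) + (mu - K) * norm.cdf(z)              # E[max(T-K, 0)]
m2 = (s**2 + (mu - K)**2) * norm.cdf(z) + (mu - K) * s * norm.pdf(z)
day_var = m2 - m1**2                                       # per-day variance

idx_mean, idx_sd = m1.sum(), np.sqrt(day_var.sum())
strike = 300.0
# Expected payoff of a call on the index under the Gaussian approximation:
d = (idx_mean - strike) / idx_sd
print(idx_sd * norm.pdf(d) + (idx_mean - strike) * norm.cdf(d))
```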
Abstract:
A significant amount of speech data is required to develop a robust speaker verification system, but it is difficult to find enough development speech to match all expected conditions. In this paper we introduce a new approach to Gaussian probabilistic linear discriminant analysis (GPLDA) to estimate reliable model parameters as a linearly weighted model taking more input from the large volume of available telephone data and smaller proportional input from limited microphone data. In comparison to a traditional pooled training approach, where the GPLDA model is trained over both telephone and microphone speech, this linear-weighted GPLDA approach is shown to provide better EER and DCF performance in microphone and mixed conditions in both the NIST 2008 and NIST 2010 evaluation corpora. Based upon these results, we believe that linear-weighted GPLDA will provide a better approach than pooled GPLDA, allowing for the further improvement of GPLDA speaker verification in conditions with limited development data.
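One plausible reading of the linear weighting, sketched below with simulated i-vectors (this is our illustration, not the authors' implementation): covariance statistics estimated separately on telephone and microphone data are combined with a scalar weight reflecting the relative amount of telephone data. The dimensions and weight value are assumptions.

```python
# A minimal sketch of the linear weighting idea: PLDA covariances estimated
# separately on abundant telephone and scarce microphone i-vectors are
# combined with a scalar weight alpha (all values illustrative).
import numpy as np

def covariance(ivectors):
    return np.cov(ivectors, rowvar=False)

rng = np.random.default_rng(2)
tel = rng.normal(size=(10000, 50))   # abundant telephone i-vectors
mic = rng.normal(size=(500, 50))     # scarce microphone i-vectors

alpha = 0.7                          # tuned on a development set
sigma = alpha * covariance(tel) + (1 - alpha) * covariance(mic)
print(sigma.shape)  # (50, 50): combined covariance estimate
```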
Abstract:
Automated crowd counting has become an active field of computer vision research in recent years. Existing approaches are scene-specific, as they are designed to operate in the single camera viewpoint that was used to train the system. Real-world camera networks, however, often span multiple viewpoints within a facility, including many regions of overlap. This paper proposes a novel scene-invariant crowd counting algorithm designed to operate across multiple cameras. The approach uses camera calibration to normalise features between viewpoints and to compensate for regions of overlap. This compensation is performed by constructing an 'overlap map', which provides a measure of how much an object at one location is visible within other viewpoints. An investigation into the suitability of various feature types and regression models for scene-invariant crowd counting is also conducted. The features investigated include object size, shape, edges and keypoints; the regression models evaluated include neural networks, K-nearest neighbours, linear regression and Gaussian process regression. Our experiments demonstrate accurate crowd counting across seven benchmark datasets, with optimal performance observed when all features were used together with Gaussian process regression. The combination of scene invariance and multi-camera crowd counting is evaluated by training the system on footage obtained from the QUT camera network and testing it on three cameras from the PETS 2009 database. Highly accurate crowd counting was observed, with a mean relative error of less than 10%. Our approach enables a pre-trained system to be deployed in a new environment without any additional training, bringing the field one step closer to a 'plug and play' system.
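A minimal sketch of the regression stage, with simulated stand-ins for the normalised features; the feature values, kernel choice and synthetic counts are our assumptions, not the paper's configuration.

```python
# A minimal sketch (illustrative features and data) of the regression stage:
# holistic features extracted per frame are mapped to a crowd count with
# Gaussian process regression, the best-performing model reported above.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
# Columns stand in for normalised size, shape, edge and keypoint features.
X_train = rng.uniform(0, 1, (200, 4))
y_train = 30 * X_train[:, 0] + 10 * X_train[:, 2] + rng.normal(0, 1, 200)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X_train, y_train)
count, sd = gp.predict(rng.uniform(0, 1, (1, 4)), return_std=True)
print(round(float(count[0])), float(sd[0]))  # predicted count and uncertainty
```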
Abstract:
Our results demonstrate that photorefractive residual amplitude modulation (RAM) noise in electro-optic modulators (EOMs) can be reduced by modifying the incident beam intensity distribution. Here we report an order-of-magnitude reduction in RAM when beams with uniform-intensity (flat-top) profiles, generated with an LCOS-SLM, are used instead of the usual fundamental Gaussian mode (TEM00). RAM arises from photorefractively amplified scatter noise off the defects and impurities within the crystal. A reduction in RAM is observed with increasing intensity uniformity (flatness), which is attributed to a reduction in the space charge field on the beam axis. The level of RAM reduction that can be achieved is physically limited by clipping at EOM apertures, with the observed results agreeing well with a simple model. These results are particularly important in applications where the reduction of residual amplitude modulation to the 10^-6 level is essential.
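To illustrate what "flat-top" means quantitatively, the following sketch compares a TEM00 Gaussian intensity profile with a super-Gaussian stand-in for a flat-top beam; the profile order and beam radius are illustrative, not the experiment's parameters.

```python
# A small illustrative sketch (not the experiment's model): a fundamental
# Gaussian beam profile versus a flat-top approximated by a super-Gaussian,
# as produced by beam-shaping optics such as an LCOS-SLM.
import numpy as np

w = 1.0                                      # beam radius (arbitrary units)
r = np.linspace(-2, 2, 401)
gaussian = np.exp(-2 * r**2 / w**2)          # TEM00 intensity profile
flat_top = np.exp(-2 * (r**2 / w**2)**8)     # super-Gaussian, order 16

# Flatness near the axis: the flat-top stays within 1% of its peak over
# a much wider region than the Gaussian does.
for name, prof in [("gaussian", gaussian), ("flat-top", flat_top)]:
    width = np.ptp(r[prof > 0.99])
    print(name, round(float(width), 3))
```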
Abstract:
Over the past decade, most Australian universities have moved increasingly towards online course delivery for both undergraduate and graduate programs. In almost all cases, elements of online teaching are part of routine teaching loads, yet detailed and accurate workload data are not readily available. As a result, institutional policies on academic staff workload are often guided more by untested assumptions about reducing costs per student unit than by evidence, and the implementation of new technologies for online teaching has produced poorly defined workload expectations. While the academics in this study often revealed a limited understanding of their institutional workload formulas (which in Australia are negotiated between management and the national union through their local branches), the costs of various types of teaching delivery have become a critical issue at a time of increasing student numbers, declining funding, pressure to increase quality and introduce minimum standards of teaching and curriculum, and substantial expenditure on technologies to support e-learning. There have been relatively few studies on the costs associated with workload for online teaching, and even fewer on the more ubiquitous ‘blended’, ‘hybrid’ or ‘flexible’ modes, in which face-to-face teaching is supplemented by online resources and activities. With this in mind, the research reported here has attempted to answer the following question: what insights currently inform Australian universities about staff workload when teaching online?
Abstract:
Discretization of a geographical region is quite common in spatial analysis, yet there have been few studies into the impact of different geographical scales on the outcome of spatial models for different spatial patterns. This study investigates the impact of spatial scale and spatial smoothing on the outcomes of modelling spatial point-based data. Given a spatial point-based dataset (such as occurrences of a disease), we study the geographical variation of residual disease risk using regular grid cells. The individual disease risk is modelled using a logistic model with the inclusion of spatially unstructured and/or spatially structured random effects. Three spatial smoothness priors for the spatially structured component are employed in modelling, namely an intrinsic Gaussian Markov random field, a second-order random walk on a lattice, and a Gaussian field with a Matérn correlation function. We investigate how changes in grid cell size affect model outcomes under different spatial structures and different smoothness priors for the spatial component. A realistic example (the Humberside data) is analysed and a simulation study is described. Bayesian computation is carried out using the integrated nested Laplace approximation. The results suggest that the performance and predictive capacity of the spatial models improve as the grid cell size decreases for certain spatial structures. It also appears that different spatial smoothness priors should be applied for different patterns of point data.
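As a concrete reference for the third prior, the sketch below evaluates the Matérn correlation function; the range and smoothness values are illustrative choices, not those fitted in the study.

```python
# A minimal sketch of one of the three smoothness priors: the Matern
# correlation function used for the Gaussian field component.
import numpy as np
from scipy.special import gamma, kv

def matern_correlation(d, rho=1.0, nu=1.5):
    """Matern correlation at distance d, with range rho and smoothness nu."""
    d = np.asarray(d, dtype=float)
    scaled = np.sqrt(2 * nu) * d / rho
    corr = np.ones_like(scaled)          # correlation is 1 at distance 0
    pos = scaled > 0
    corr[pos] = (2**(1 - nu) / gamma(nu)) * scaled[pos]**nu * kv(nu, scaled[pos])
    return corr

print(matern_correlation([0.0, 0.5, 1.0, 2.0]))  # decays with distance
```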
Abstract:
The huge amount of available CCTV footage makes manual processing of these videos by human operators very burdensome, and automated processing through computer vision technologies has become necessary. During the past several years, there has been a large effort to detect abnormal activities with computer vision techniques. Typically, the problem is formulated as a novelty detection task, where the system is trained on normal data and is required to detect events which do not fit the learned 'normal' model. There is no precise and exact definition of an abnormal activity; it depends on the context of the scene. Hence different feature sets are required to detect different kinds of abnormal activities. In this work we evaluate the performance of different state-of-the-art features for detecting the presence of abnormal objects in the scene: optical flow vectors to detect motion-related anomalies, and textures of optical flow and image textures to detect the presence of abnormal objects. These extracted features, in different combinations, are modelled using state-of-the-art models such as the Gaussian mixture model (GMM) and the semi-2D hidden Markov model (HMM), and their performance is analysed. We further apply perspective normalisation to the extracted features to compensate for perspective distortion due to the distance between the camera and the objects under consideration. The proposed approach is evaluated using the publicly available UCSD datasets, and we demonstrate improved performance compared to other state-of-the-art methods.
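A minimal sketch of the novelty-detection formulation with a GMM, using simulated stand-ins for the flow/texture features; the component count and threshold are assumptions.

```python
# A minimal sketch (simulated features) of the novelty-detection setup:
# a Gaussian mixture model is trained on features from normal footage,
# and frames whose likelihood falls below a threshold are flagged.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Stand-ins for optical-flow / texture feature vectors from normal frames.
normal_feats = rng.normal(0, 1, (2000, 8))
gmm = GaussianMixture(n_components=5, covariance_type="diag").fit(normal_feats)

threshold = np.percentile(gmm.score_samples(normal_feats), 1)  # 1st percentile
test = np.vstack([rng.normal(0, 1, (5, 8)),      # normal-like frames
                  rng.normal(4, 1, (5, 8))])     # anomalous frames
print(gmm.score_samples(test) < threshold)       # True marks an anomaly
```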
Abstract:
An important aspect of robotic path planning is ensuring that the vehicle is in the best location to collect the data necessary for the problem at hand. Given that features of interest are dynamic and move with oceanic currents, vehicle speed is an important factor in any planning exercise, to ensure vehicles are at the right place at the right time. Here, we examine different Gaussian process models to find a suitable predictive kinematic model that enables the speed of an underactuated, autonomous surface vehicle to be accurately predicted given a set of input environmental parameters.
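The sketch below illustrates the kind of model comparison described, fitting Gaussian process regressors with different kernels to simulated environmental inputs and scoring them by cross-validation; the inputs, kernels and response are illustrative assumptions.

```python
# A minimal sketch of the model comparison (simulated data): Gaussian
# process regressors with different kernels predict vehicle speed from
# environmental parameters such as wind and current magnitude.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, Matern, WhiteKernel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, (300, 3))        # e.g. wind, current, heading terms
y = 1.5 - 0.8 * X[:, 1] + 0.3 * np.sin(6 * X[:, 0]) + rng.normal(0, 0.05, 300)

for name, kernel in [("RBF", RBF() + WhiteKernel()),
                     ("Matern", Matern(nu=1.5) + WhiteKernel())]:
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    score = cross_val_score(gp, X, y, cv=5, scoring="r2").mean()
    print(name, round(float(score), 3))   # cross-validated R^2 per kernel
```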
Abstract:
A novel method of matching the stiffness and continuously variable damping of an ECAS (electronically controlled air suspension) based on LQG (linear quadratic Gaussian) control was proposed to simultaneously improve the road-friendliness and ride comfort of a two-axle school bus. Taking account of the suspension nonlinearities and the target-height-dependent variation in suspension characteristics, a stiffness model of the ECAS mounted on the drive axle of the bus was developed based on thermodynamics, and the key parameters were obtained through field tests. After determining the proper range of the target height for the ECAS of the fully-loaded bus from the design requirements on vehicle body bounce frequency, the control algorithm for the target suspension height (i.e., stiffness) was derived according to driving speed and road roughness. Taking account of the nonlinearities of a continuously variable semi-active damper, the damping force was obtained by subtracting the air spring force from the optimum integrated suspension force, which was calculated based on LQG control. Finally, a GA (genetic algorithm)-based method for matching stepped variable damping and stiffness was employed as a benchmark to evaluate the effectiveness of the LQG-based matching method. Simulation results indicate that, compared with the GA-based matching method, both the dynamic tire force and the vehicle body vertical acceleration responses are markedly reduced around the vehicle body bounce frequency with the LQG-based matching method: peak values of the dynamic tire force PSD (power spectral density) decreased by 73.6%, 60.8% and 71.9% in the three cases, and the corresponding reductions were 71.3%, 59.4% and 68.2% for the vehicle body vertical acceleration. Strong robustness to variation in driving speed and road roughness is also observed for the LQG-based matching method.
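For readers unfamiliar with the LQG step, the following sketch computes an optimal state-feedback gain for a generic quarter-car model by solving the continuous-time Riccati equation; the masses, stiffnesses and weights are illustrative, not the bus parameters from the study.

```python
# A minimal sketch (generic quarter-car, illustrative parameters, not the
# bus model from the study) of the LQG/LQR step: solve the Riccati equation
# for the state-feedback gain giving the optimal suspension force.
import numpy as np
from scipy.linalg import solve_continuous_are

ms, mu, ks, kt = 400.0, 40.0, 2.0e4, 1.8e5  # sprung/unsprung mass, stiffnesses
# State: [sprung disp, sprung vel, unsprung disp, unsprung vel]; input: force u.
A = np.array([[0, 1, 0, 0],
              [-ks / ms, 0, ks / ms, 0],
              [0, 0, 0, 1],
              [ks / mu, 0, -(ks + kt) / mu, 0]])
B = np.array([[0], [1 / ms], [0], [-1 / mu]])

# Weight body motion and tyre deflection against control effort.
Q = np.diag([1e4, 1e2, 1e5, 1e2])
R = np.array([[1e-4]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # optimal state-feedback gain: u = -K x
print(K)
```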
Abstract:
The study of the relationship between macroscopic traffic parameters, such as flow, speed and travel time, is essential to the understanding of the behaviour of freeway and arterial roads. However, the temporal dynamics of these parameters are difficult to model, especially for arterial roads, where the process of traffic change is driven by a variety of variables. The introduction of Bluetooth technology into the transportation area has proven exceptionally useful for monitoring vehicular traffic, as it allows reliable estimation of travel times and traffic demands. In this work, we propose an approach based on Bayesian networks for analysing and predicting the complex dynamics of flow or volume from travel time observations provided by Bluetooth sensors. The spatio-temporal relationship between volume and travel time is captured through a first-order transition model and a univariate Gaussian sensor model. The two models are trained and tested on travel time and volume data from an arterial link, collected over a period of six days. To reduce the computational cost of the inference tasks, volume is converted into a discrete variable; the discretization is carried out through a Self-Organizing Map. Preliminary results show that a simple Bayesian network can effectively estimate and predict the complex temporal dynamics of arterial volumes from travel time data. Not only is the model well suited to producing posterior distributions over single past, current and future states, but it also allows the computation of joint distributions over sequences of states. Furthermore, the Bayesian network achieves excellent prediction even when the stream of travel time observations is partially incomplete.
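The filtering pattern implied by the transition and sensor models can be sketched as follows, with toy numbers standing in for the trained parameters.

```python
# A minimal sketch (toy numbers) of the inference pattern described:
# discrete volume states with a first-order transition model, a univariate
# Gaussian sensor model for travel time, and a forward (filtering) pass
# that tracks the posterior over volume states.
import numpy as np
from scipy.stats import norm

# Three discretised volume states: low, medium, high.
T = np.array([[0.8, 0.2, 0.0],        # transition model P(V_t | V_{t-1})
              [0.1, 0.8, 0.1],
              [0.0, 0.3, 0.7]])
mu = np.array([60.0, 90.0, 150.0])    # mean travel time (s) per state
sd = np.array([10.0, 15.0, 30.0])     # sensor noise per state

belief = np.array([1 / 3, 1 / 3, 1 / 3])
for obs in [65.0, 95.0, 160.0]:       # Bluetooth travel-time observations
    belief = T.T @ belief                     # predict step
    belief *= norm.pdf(obs, mu, sd)           # Gaussian sensor update
    belief /= belief.sum()
    print(np.round(belief, 3))
```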
Abstract:
A new type of social network built around community and communication - online dating - is gaining momentum. With many people joining the dating network, users become overwhelmed by the choice of potential partners. A solution to this problem is to provide users with partner recommendations based on their interests and activities. Traditional recommendation methods ignore individual users' needs and provide recommendations equally to all users. In this paper, we propose a recommendation approach that employs different recommendation strategies for different groups of members. A segmentation method using the Gaussian Mixture Model (GMM) is proposed to capture users' differing needs, and a targeted recommendation strategy is then applied to each identified segment. Empirical results show that the proposed approach outperforms several existing recommendation methods.
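A minimal sketch of the segmentation step with simulated user profiles; the feature semantics and component count are assumptions.

```python
# A minimal sketch (simulated profiles) of the segmentation step: a
# Gaussian mixture model clusters users by interest/activity features,
# and each segment can then be routed to its own recommendation strategy.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
# Stand-ins for user features, e.g. activity level and stated preferences.
users = np.vstack([rng.normal(0, 1, (300, 4)),
                   rng.normal(3, 1, (300, 4))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(users)
segment = gmm.predict(users)                  # segment label per user
for s in np.unique(segment):
    print(f"segment {s}: {np.sum(segment == s)} users")
    # ...dispatch to the recommendation strategy chosen for this segment
```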
Abstract:
This paper analyses the probabilistic linear discriminant analysis (PLDA) speaker verification approach with limited development data, investigating the use of the median as the central tendency of a speaker's i-vector representation and the effectiveness of weighted discriminative techniques on the performance of state-of-the-art length-normalised Gaussian PLDA (GPLDA) speaker verification systems. The analysis shows that the median (using a median Fisher discriminator (MFD)) provides a better representation of a speaker when the number of representative i-vectors available during development is reduced, and that the pair-wise weighting approach in weighted LDA and weighted MFD provides further improvement in limited development conditions. The best performance is obtained using a weighted MFD approach, which shows over 10% improvement in EER over the baseline GPLDA system in mismatched and interview-interview conditions.
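To illustrate why the median helps when few i-vectors are available, the following sketch compares mean- and median-based speaker representations on simulated data with one outlying session; the dimensions and outlier magnitude are assumptions.

```python
# A minimal sketch of the central-tendency idea: with few i-vectors per
# speaker, a median-based speaker representation is less sensitive to an
# outlying session than the conventional mean (data here are simulated).
import numpy as np

rng = np.random.default_rng(7)
ivecs = rng.normal(0, 1, (5, 400))    # 5 sessions of 400-dim i-vectors
ivecs[0] += 6.0                       # one outlying session

mean_rep = ivecs.mean(axis=0)
median_rep = np.median(ivecs, axis=0)
# Distance of each representation from the true speaker location (origin):
print(np.linalg.norm(mean_rep), np.linalg.norm(median_rep))
```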