975 results for Numerical experiments


Relevance:

60.00%

Publisher:

Abstract:

Bayesian nonparametric models, such as the Gaussian process and the Dirichlet process, have been extensively applied for target kinematics modeling in various applications including environmental monitoring, traffic planning, endangered species tracking, dynamic scene analysis, autonomous robot navigation, and human motion modeling. As shown by these successful applications, Bayesian nonparametric models are able to adjust their complexity adaptively from data as necessary, and are resistant to overfitting or underfitting. However, most existing works assume that the sensor measurements used to learn the Bayesian nonparametric target kinematics models are obtained a priori, or that the target kinematics can be measured by the sensor at any given time throughout the task. Little work has been done on controlling a sensor with a bounded field of view to obtain measurements of mobile targets that are most informative for reducing the uncertainty of the Bayesian nonparametric models. To present a systematic sensor planning approach to learning Bayesian nonparametric models, the Gaussian process target kinematics model is introduced first; it is capable of describing time-invariant spatial phenomena such as ocean currents, temperature distributions, and wind velocity fields. The Dirichlet process-Gaussian process target kinematics model is subsequently discussed for modeling mixtures of mobile targets, such as pedestrian motion patterns.

Novel information-theoretic functions are developed for these Bayesian nonparametric target kinematics models to represent the expected utility of measurements as a function of sensor control inputs and random environmental variables. A Gaussian process expected Kullback-Leibler (KL) divergence is developed as the expectation, with respect to the future measurements, of the KL divergence between the current (prior) and posterior Gaussian process target kinematics models. This approach is then extended to develop a new information value function that can be used to estimate target kinematics described by a Dirichlet process-Gaussian process mixture model. A theorem is proposed showing that the novel information-theoretic functions are bounded. Based on this theorem, efficient estimators of the new information-theoretic functions are designed and proved to be unbiased, with the variance of the resulting approximation error decreasing linearly as the number of samples increases. The computational complexity of optimizing the novel information-theoretic functions under sensor dynamics constraints is studied and proved to be NP-hard. A cumulative lower bound is then proposed to reduce the computational complexity to polynomial time.
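
The Gaussian case admits a direct Monte Carlo treatment: the KL divergence between two Gaussians has a closed form, and the expectation over future measurements can be estimated by sampling from the prior predictive distribution. A minimal numpy sketch under assumptions not taken from the dissertation (zero-mean prior, RBF kernel, homoscedastic Gaussian noise; all function names are illustrative):

    import numpy as np

    def rbf(A, B, ell=1.0, sf=1.0):
        """Squared-exponential kernel between row-stacked points A (n,d) and B (m,d)."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)

    def kl_gaussians(mu1, S1, mu0, S0):
        """Closed-form KL( N(mu1, S1) || N(mu0, S0) )."""
        k = mu0.size
        d = mu0 - mu1
        logdet = np.linalg.slogdet(S0)[1] - np.linalg.slogdet(S1)[1]
        return 0.5 * (np.trace(np.linalg.solve(S0, S1))
                      + d @ np.linalg.solve(S0, d) - k + logdet)

    def expected_kl(X_grid, X_meas, noise=0.1, n_mc=1000, seed=0):
        """Monte Carlo estimate of E_y[ KL(posterior || prior) ] for measuring at X_meas."""
        rng = np.random.default_rng(seed)
        jit = 1e-9 * np.eye(len(X_grid))
        K_gg = rbf(X_grid, X_grid) + jit                   # prior covariance on the grid
        K_gm = rbf(X_grid, X_meas)
        K_mm = rbf(X_meas, X_meas) + noise ** 2 * np.eye(len(X_meas))
        gain = np.linalg.solve(K_mm, K_gm.T)
        S_post = K_gg - K_gm @ gain                        # independent of the data y
        mu0 = np.zeros(len(X_grid))
        draws = rng.multivariate_normal(np.zeros(len(X_meas)), K_mm, size=n_mc)
        kls = [kl_gaussians(gain.T @ y, S_post, mu0, K_gg) for y in draws]
        return float(np.mean(kls))

Averaging i.i.d. KL samples in this way is exactly the kind of estimator the abstract describes: unbiased, with error variance shrinking as the number of samples grows.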

Three sensor planning algorithms are developed according to the assumptions on the target kinematics and the sensor dynamics. For problems where the control space of the sensor is discrete, a greedy algorithm is proposed. The efficiency of the greedy algorithm is demonstrated by a numerical experiment with ocean current data obtained from moored buoys. A sweep line algorithm is developed for applications where the sensor control space is continuous and unconstrained. Synthetic simulations as well as physical experiments with ground robots and a surveillance camera are conducted to evaluate the performance of the sweep line algorithm. Moreover, a lexicographic algorithm, based on the cumulative lower bound of the novel information-theoretic functions, is designed for the scenario where the sensor dynamics are constrained. Numerical experiments with real data collected from indoor pedestrians by a commercial pan-tilt camera are performed to examine the lexicographic algorithm. Results from both the numerical simulations and the physical experiments show that the three sensor planning algorithms proposed in this dissertation, based on the novel information-theoretic functions, are superior at learning the target kinematics with little or no prior knowledge.
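
The discrete-control greedy step is simple enough to sketch: evaluate the marginal gain of every admissible control and commit to the best one. A schematic outline (the expected_info callable and the budget interface are placeholders, not the dissertation's API):

    def greedy_sensing(candidates, expected_info, budget):
        """Myopically grow a measurement set: at each round, add the candidate
        control whose marginal expected information gain is largest."""
        chosen = []
        for _ in range(min(budget, len(candidates))):
            remaining = [c for c in candidates if c not in chosen]
            best = max(remaining,
                       key=lambda c: expected_info(chosen + [c]) - expected_info(chosen))
            chosen.append(best)
        return chosen

With a function such as expected_kl above supplying expected_info, each candidate sensor control is scored by the information it is expected to add.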

Relevance:

60.00%

Publisher:

Abstract:

Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge arises in defining an algorithm with low communication cost, theoretical guarantees, and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator ({\em message}) algorithm to address these issues. The algorithm applies feature selection in parallel to each subset using regularized regression or a Bayesian variable selection method, calculates the `median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments showing excellent performance in feature selection, estimation, prediction, and computation time relative to usual competitors.
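
The row-partitioned stage can be sketched in a few lines, here with lasso as the subset-level selector (the selector, the half-of-subsets threshold standing in for the `median' rule, and the plain least-squares refit are illustrative choices, not necessarily those of the thesis):

    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    def message_sketch(X, y, n_subsets=8, alpha=0.1, seed=0):
        """Row-partition the data, select features on each subset, keep features
        chosen by at least half the subsets (median inclusion), refit, average."""
        rng = np.random.default_rng(seed)
        rows = np.array_split(rng.permutation(len(y)), n_subsets)
        votes = np.zeros(X.shape[1])
        for r in rows:
            votes += Lasso(alpha=alpha).fit(X[r], y[r]).coef_ != 0
        selected = np.flatnonzero(votes >= n_subsets / 2)  # median inclusion index
        beta = np.zeros(X.shape[1])
        if selected.size:                                  # refit per subset, then average
            fits = [LinearRegression().fit(X[r][:, selected], y[r]).coef_ for r in rows]
            beta[selected] = np.mean(fits, axis=0)
        return beta

Only the inclusion vectors and the selected coefficients cross machine boundaries, which is where the low communication cost comes from.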

While sample space partitioning is useful in handling datasets with a large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In this thesis, I propose a new embarrassingly parallel framework named {\em DECO} for distributed variable selection and parameter estimation. In {\em DECO}, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
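
The decorrelation step admits a compact sketch. The inverse-square-root construction below follows my reading of the published DECO recipe and should be taken as an assumption here (the scaling constants, the ridge refinement stage, and the choice of lasso all vary):

    import numpy as np
    from sklearn.linear_model import Lasso

    def deco_sketch(X, y, n_workers=4, alpha=0.1, ridge=1e-4):
        """Premultiply X and y by F = (X X^T / p + r I)^{-1/2} so that column
        blocks become nearly uncorrelated, then fit each block independently."""
        n, p = X.shape
        w, V = np.linalg.eigh(X @ X.T / p + ridge * np.eye(n))
        F = (V * w ** -0.5) @ V.T                 # symmetric inverse square root
        Xd, yd = F @ X, F @ y
        beta = np.zeros(p)
        for block in np.array_split(np.arange(p), n_workers):  # one block per worker
            beta[block] = Lasso(alpha=alpha).fit(Xd[:, block], yd).coef_
        return beta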

For datasets with both large sample sizes and high dimensionality, I propose a new divide-and-conquer framework {\em DEME} (DECO-message) by leveraging both the {\em DECO} and the {\em message} algorithms. The new framework first partitions the dataset in the sample space into row cubes using {\em message} and then partitions the feature space of the cubes using {\em DECO}. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each of a feasible size that can be stored and fitted on a single computer in parallel. The results are then synthesized via the {\em DECO} and {\em message} algorithms in reverse order to produce the final output. The whole framework is extremely scalable.

Relevance:

60.00%

Publisher:

Abstract:

In 2006, a large and prolonged bloom of the dinoflagellate Karenia mikimotoi occurred in Scottish coastal waters, causing extensive mortalities of benthic organisms including annelids and molluscs and some species of fish (Davidson et al., 2009). A coupled hydrodynamic-algal transport model was developed to track the progression of the bloom around the Scottish coast during June–September 2006 and hence investigate the processes controlling the bloom dynamics. Within this individual-based model, cells were capable of growth, mortality and phototaxis and were transported by physical processes of advection and turbulent diffusion, using current velocities extracted from operational simulations of the MRCS ocean circulation model of the North-west European continental shelf. Vertical and horizontal turbulent diffusion of cells were treated using a random walk approach. Comparison of model output with remotely sensed chlorophyll concentrations and cell counts from coastal monitoring stations indicated that it was necessary to include multiple spatially distinct seed populations of K. mikimotoi at separate locations on the shelf edge to capture the qualitative pattern of bloom transport and development. We interpret this as indicating that the source population was being transported northwards by the Hebridean slope current, from where colonies of K. mikimotoi were injected onto the continental shelf by eddies or other transient exchange processes. The model was used to investigate the effects on simulated K. mikimotoi transport and dispersal of: (1) the distribution of the initial seed population; (2) algal growth and mortality; (3) water temperature; (4) the vertical movement of particles by diurnal migration and eddy diffusion; (5) the relative role of the shelf edge and coastal currents; (6) the role of wind forcing. The numerical experiments emphasized the requirement for a physiologically based biological model and indicated that improved modelling of future blooms will potentially benefit from better parameterisation of the temperature dependence of both growth and mortality and from finer spatial and temporal hydrodynamic resolution.
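
The random-walk treatment of turbulent diffusion amounts to an Euler step of advection plus Gaussian displacements scaled by the eddy diffusivity. A minimal two-dimensional sketch, assuming a uniform diffusivity (in the actual model the MRCS velocities are interpolated to particle positions, and the vertical step also carries behaviour such as phototaxis):

    import numpy as np

    def step_cells(pos, vel, K=10.0, dt=600.0, rng=None):
        """One advection + random-walk step for cell positions.
        pos: (n, 2) positions [m]; vel: (n, 2) current velocities at the cells [m/s];
        K: horizontal eddy diffusivity [m^2/s]; dt: time step [s].
        For spatially varying K, a deterministic drift term grad(K)*dt must be
        added to keep the walk consistent with the diffusion equation."""
        rng = rng or np.random.default_rng()
        return pos + vel * dt + np.sqrt(2.0 * K * dt) * rng.standard_normal(pos.shape)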

Relevance:

60.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-07

Relevance:

60.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

60.00%

Publisher:

Abstract:

The ultimate problem considered in this thesis is modeling a high-dimensional joint distribution over a set of discrete variables. For this purpose, we consider classes of context-specific graphical models and the main emphasis is on learning the structure of such models from data. Traditional graphical models compactly represent a joint distribution through a factorization justified by statements of conditional independence which are encoded by a graph structure. Context-specific independence is a natural generalization of conditional independence that only holds in a certain context, specified by the conditioning variables. We introduce context-specific generalizations of both Bayesian networks and Markov networks by including statements of context-specific independence which can be encoded as a part of the model structures. For the purpose of learning context-specific model structures from data, we derive score functions, based on results from Bayesian statistics, by which the plausibility of a structure is assessed. To identify high-scoring structures, we construct stochastic and deterministic search algorithms designed to exploit the structural decomposition of our score functions. Numerical experiments on synthetic and real-world data show that the increased flexibility of context-specific structures can more accurately emulate the dependence structure among the variables and thereby improve the predictive accuracy of the models.
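
Score functions of this kind are typically Dirichlet-multinomial marginal likelihoods evaluated per context, and their decomposability is what the search algorithms exploit. A generic sketch of one local term (a plain BDeu-style score assumed for illustration, not quoted from the thesis):

    import numpy as np
    from scipy.special import gammaln

    def local_score(counts, alpha=1.0):
        """Log marginal likelihood of the outcome counts observed in one context
        under a symmetric Dirichlet(alpha) prior (the closed-form Polya term)."""
        counts = np.asarray(counts, dtype=float)
        a = np.full_like(counts, alpha)
        return (gammaln(a.sum()) - gammaln(a.sum() + counts.sum())
                + np.sum(gammaln(a + counts) - gammaln(a)))

The structure score is then a sum of such local terms over contexts, so a search move that alters a single context only requires recomputing one term.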

Relevance:

60.00%

Publisher:

Abstract:

In many mathematical models for pattern formation, a regular hexagonal pattern is stable in an infinite region. However, laboratory and numerical experiments are carried out in finite domains, and this imposes certain constraints on the possible patterns. In finite rectangular domains, it is shown that a regular hexagonal pattern cannot occur if the aspect ratio is rational. In practice, patterns of irregular hexagons are often observed experimentally in rectangular regions. This work analyses the geometry and dynamics of such irregular hexagonal patterns. These patterns occur in two different symmetry types: either with a reflection symmetry, involving two wavenumbers, or without symmetry, involving three different wavenumbers. The relevant amplitude equations are studied to investigate the detailed bifurcation structure in each case. It is shown that hexagonal patterns can bifurcate subcritically either from the trivial solution or from a pattern of rolls. Numerical simulations of a model partial differential equation are also presented to illustrate the behaviour.
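
For reference, hexagon analyses of this kind start from three coupled mode amplitudes; a standard normal form (assumed here for orientation rather than quoted from the paper) is $\dot A_1 = \mu A_1 + \alpha \bar A_2 \bar A_3 - A_1\big(|A_1|^2 + \beta(|A_2|^2 + |A_3|^2)\big)$, with the equations for $A_2$ and $A_3$ obtained by cyclic permutation. The quadratic coupling $\alpha \bar A_2 \bar A_3$ is what produces the subcritical hexagon branches, and the irregular cases replace the three equal wavenumbers with two or three distinct ones.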

Relevance:

60.00%

Publisher:

Abstract:

In this paper we establish, from extensive numerical experiments, that the two-dimensional stochastic fire-diffuse-fire model belongs to the directed percolation universality class. This model is an idealized model of intracellular calcium release that retains both the discrete nature of calcium stores and the stochastic nature of release. It is formed from an array of noisy threshold elements that are coupled only by a diffusing signal. The model supports spontaneous release events that can merge to form spreading circular and spiral waves of activity. The critical level of noise required for the system to exhibit a non-equilibrium phase transition between propagating and non-propagating waves is obtained by an examination of the \textit{local slope} $\delta(t)$ of the survival probability, $\Pi(t) \propto t^{-\delta(t)}$, for a wave to propagate for a time $t$.
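
For orientation, the local slope in such survival-probability analyses is conventionally the logarithmic derivative $\delta(t) = -\,\mathrm{d}\ln\Pi(t)/\mathrm{d}\ln t$ (the convention assumed here): it settles to a constant only at criticality, where for directed percolation in two spatial dimensions the survival exponent is $\delta \approx 0.45$, the value the measured plateau must match to support the universality claim.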

Relevance:

60.00%

Publisher:

Abstract:

We develop a posteriori error estimation for interior penalty discontinuous Galerkin discretizations of H(curl)-elliptic problems that arise in eddy current models. Computable upper and lower bounds on the error, measured in terms of a natural (mesh-dependent) energy norm, are derived. The proposed a posteriori error estimator is validated by numerical experiments, illustrating its reliability and efficiency for a range of test problems.
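
Reliability and efficiency here mean computable two-sided bounds of the generic form (stated schematically, with constants independent of the mesh size) $c\,\eta \leq \|u - u_h\|_{\mathrm{DG}} \leq C\,\eta$, up to data-oscillation terms, where $\eta^2 = \sum_K \eta_K^2$ aggregates local element indicators that can also drive adaptive refinement.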

Relevance:

60.00%

Publisher:

Abstract:

In this article we consider the development of discontinuous Galerkin finite element methods for the numerical approximation of the compressible Navier-Stokes equations. For the discretization of the leading order terms, we propose employing the generalization of the symmetric version of the interior penalty method, originally developed for the numerical approximation of linear self-adjoint second-order elliptic partial differential equations. In order to solve the resulting system of nonlinear equations, we exploit a (damped) Newton-GMRES algorithm. Numerical experiments demonstrating the practical performance of the proposed discontinuous Galerkin method with higher-order polynomials are presented.
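
The nonlinear solver loop is standard enough to sketch. A compact damped Newton-GMRES using scipy (the Jacobian-as-matvec interface and the simple backtracking rule are illustrative choices, not the paper's implementation):

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def newton_gmres(F, Jv, u0, tol=1e-8, max_it=50):
        """Damped Newton: solve J(u) du = -F(u) inexactly with GMRES, then backtrack.
        F(u): nonlinear residual; Jv(u): function returning the matvec v -> J(u) v."""
        u = u0.copy()
        for _ in range(max_it):
            r = F(u)
            rnorm = np.linalg.norm(r)
            if rnorm < tol:
                break
            A = LinearOperator((u.size, u.size), matvec=Jv(u))
            du, _ = gmres(A, -r)                 # inexact linear solve
            lam = 1.0                            # damping by simple backtracking
            while lam > 1e-4 and np.linalg.norm(F(u + lam * du)) >= rnorm:
                lam *= 0.5
            u = u + lam * du
        return u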

Relevance:

60.00%

Publisher:

Abstract:

In this article we consider the application of the generalization of the symmetric version of the interior penalty discontinuous Galerkin finite element method to the numerical approximation of the compressible Navier-Stokes equations. In particular, we consider the a posteriori error analysis and adaptive mesh design for the underlying discretization method. Indeed, by employing a duality argument, (weighted) Type I a posteriori bounds are derived for the estimation of the error measured in terms of general target functionals of the solution; these error estimates involve the product of the finite element residuals with local weighting terms involving the solution of a certain dual problem that must be numerically approximated. This general approach leads to the design of economical finite element meshes specifically tailored to the computation of the target functional of interest, as well as providing efficient error estimation. Numerical experiments demonstrating the performance of the proposed approach are presented.
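
The duality argument yields the usual dual-weighted-residual representation (written schematically here): $J(u) - J(u_h) \approx \sum_K \rho_K(u_h)\,\omega_K(z)$, where $\rho_K$ are the local finite element residuals and the weights $\omega_K$ involve the numerically approximated dual solution $z$; the magnitudes $|\rho_K\,\omega_K|$ then serve as the element indicators that steer refinement toward the target functional.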

Relevance:

60.00%

Publisher:

Abstract:

This paper is concerned with the discontinuous Galerkin approximation of the Maxwell eigenproblem. After reviewing the theory developed in [5], we present a set of numerical experiments which both validate the theory and provide further insight regarding the practical performance of discontinuous Galerkin methods, particularly in the case when non-conforming meshes, characterized by the presence of hanging nodes, are employed.

Relevance:

60.00%

Publisher:

Abstract:

The main aim of this study was to evaluate the impact of the urban pollution plume from the city of Manaus, produced by emissions from mobile and stationary sources, on atmospheric pollutant concentrations in the Amazon region, using the Weather Research and Forecasting with Chemistry (WRF-Chem) model. The air pollutants analyzed were CO, NOx, SO2, O3, PM2.5, PM10 and VOCs. The simulations were configured with a horizontal grid spacing of 3 km over a 190 x 136 grid centered on the city of Manaus, for the period of 17-18 March 2014. The anthropogenic emission inventories were compiled from mobile sources, estimating emissions for light- and heavy-duty vehicle classes. In addition, the stationary sources comprised the thermal power plants, classified by the type of energy source used in the region, as well as the emissions from the refinery located in Manaus. Several scenarios were defined for the numerical experiments, considering emissions from biogenic, mobile and stationary sources separately, fuel replacement at the thermal power plants, and a future scenario with twice the current anthropogenic emissions. A qualitative assessment of the base-scenario simulation, which represents the current conditions of the region, was also carried out: several statistical methods were used to compare the simulated air pollutant concentrations and meteorological fields with ground-based observations at various points within the study grid. This analysis showed that the model represents the analyzed variables satisfactorily in terms of the adopted statistical parameters. Regarding the simulations defined from the base scenarios, the numerical experiments yielded several relevant results. The stationary-sources scenario, in which the thermal power plants are predominant, produced the highest concentrations of all the air pollutants evaluated except carbon monoxide, when compared with the vehicle-emissions scenario. Replacing the energy matrix of the current thermal power plants with natural gas showed significant reductions in the pollutants analyzed, for instance a 63% reduction in the average NOx concentration contribution over the study grid. A significant increase in the concentrations of chemical species was observed in the future scenario, reaching an 81% increase in peak SO2 concentrations in the study area. The spatial distributions of the scenarios showed that the air pollution plume from Manaus travels predominantly to the west and southwest, where it can extend hundreds of kilometers into areas dominated by original land cover.

Relevance:

60.00%

Publisher:

Abstract:

We present a detailed analysis of the application of a multi-scale Hierarchical Reconstruction method for solving a family of ill-posed linear inverse problems. When the observations on the unknown quantity of interest and the observation operators are known, these inverse problems are concerned with the recovery of the unknown from its observations. Although the observation operators we consider are linear, the resulting problems are inevitably ill-posed in various ways. We recall in this context the classical Tikhonov regularization method with a stabilizing function which targets the specific ill-posedness from the observation operators and preserves desired features of the unknown. Having studied the mechanism of the Tikhonov regularization, we propose a multi-scale generalization of the Tikhonov regularization method, the so-called Hierarchical Reconstruction (HR) method, whose first introduction can be traced back to the Hierarchical Decomposition method in image processing. The HR method successively extracts information from the previous hierarchical residual into the current hierarchical term at a finer hierarchical scale. As the sum of all the hierarchical terms, the hierarchical sum from the HR method provides a reasonable approximate solution to the unknown when the observation matrix satisfies certain conditions with specific stabilizing functions. When compared to the Tikhonov regularization method on the same inverse problems, the HR method is shown to decrease the total number of iterations, reduce the approximation error, and offer control of the approximation distance between the hierarchical sum and the unknown, thanks to its ladder of finitely many hierarchical scales. We report numerical experiments supporting these claimed advantages of the HR method over the Tikhonov regularization method.
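
With a quadratic stabilizer, each hierarchical step is a Tikhonov solve against the current residual at a geometrically refined scale. A minimal numpy sketch of that multiscale loop (the L2 stabilizing function and the halving schedule are assumptions for illustration; the thesis treats more general stabilizing functions):

    import numpy as np

    def hierarchical_reconstruction(A, y, lam0=1.0, levels=6):
        """Multiscale Tikhonov: at level k solve
        u_k = argmin ||A u - r_{k-1}||^2 + lam_k ||u||^2 with lam_k = lam0 / 2^k,
        then peel the fitted part off the residual; return the hierarchical sum."""
        m, n = A.shape
        u_sum = np.zeros(n)
        r = np.asarray(y, dtype=float).copy()
        AtA = A.T @ A
        for k in range(levels):
            lam = lam0 / 2.0 ** k
            u_k = np.linalg.solve(AtA + lam * np.eye(n), A.T @ r)  # Tikhonov step
            u_sum += u_k                                           # hierarchical sum
            r -= A @ u_k                                           # hierarchical residual
        return u_sum

Finitely many levels give the ladder of scales the abstract refers to: coarse levels capture large features, finer levels recover detail from what remains.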