982 results for Pelczynski's decomposition method


Relevance: 80.00%

Abstract:

This dissertation contributes two new, interrelated approaches to video data compression: (1) a level-refined motion estimation and subband compensation method for effective motion estimation and motion compensation, and (2) a shift-invariant sub-decimation decomposition method that overcomes the deficiency of the decimation process in estimating motion, a deficiency that stems from the shift-variant nature of the decimated wavelet transform.

The enormous volume of data generated by digital video creates an intense need for efficient compression techniques that conserve storage space and minimize bandwidth. The main idea of video compression is to reduce the inter-pixel redundancies within and between frames by applying motion estimation and motion compensation (MEMC) in combination with spatial transform coding. To locate the global minimum of the matching criterion function reliably, hierarchical motion estimation with coarse-to-fine resolution refinement using the discrete wavelet transform is applied, owing to its intrinsic multiresolution and scalability properties.

Because most of the signal energy is concentrated in the low-resolution subbands and decreases in the high-resolution subbands, a new approach called the level-refined motion estimation and subband compensation (LRSC) method is proposed. It identifies possible intrablocks in the subbands for lower-entropy coding while keeping the low computational load of level-refined motion estimation, thus achieving both temporal compression quality and computational simplicity.

Since circular convolution is applied in the wavelet transform to obtain the decomposed subframes without coefficient expansion, a symmetric-extended wavelet transform is designed for the finite-length frame signals, giving more accurate motion estimation without discontinuous boundary distortions.

Although wavelet-transformed coefficients still contain spatial-domain information, motion estimation in the wavelet domain is not as straightforward as in the spatial domain because of the shift variance introduced by the decimation step of the wavelet transform. A new approach called the sub-decimation decomposition method is proposed, which maintains motion consistency between the original frame and the decomposed subframes and consequently improves wavelet-domain video compression through shift-invariant motion estimation and compensation.
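The level-refined idea builds on ordinary block-matching motion estimation carried out first in a coarse (low-resolution) subband and then refined at finer levels. The following is a minimal sketch of exhaustive block matching with a sum-of-absolute-differences (SAD) criterion, not the dissertation's algorithm; the function name, block size and search range are illustrative assumptions.

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Exhaustive block matching: for each block of the current frame, find the
    displacement within +/- `search` pixels in the reference frame that
    minimizes the sum of absolute differences (SAD)."""
    h, w = cur.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur_blk = cur[by:by + block, bx:bx + block]
            best, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(ref[y:y + block, x:x + block] - cur_blk).sum()
                        if sad < best:
                            best, best_dv = sad, (dy, dx)
            vectors[by // block, bx // block] = best_dv
    return vectors

# Coarse-to-fine refinement would estimate on a low-resolution subband first,
# scale the vectors by 2 and use them as the search centre one level up
# (the refinement step itself is omitted in this sketch).
ll_ref = np.random.rand(64, 64)                  # stand-in for a low-resolution subband
ll_cur = np.roll(ll_ref, (1, 2), axis=(0, 1))    # current frame = reference shifted by (1, 2)
print(block_match(ll_ref, ll_cur)[2, 2])          # expected displacement: [-1 -2]
```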

Relevance: 80.00%

Abstract:

People go through life making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by a large number of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications.

We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction because dynamic programming problems must be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices but also makes it easier to model large-scale static choices.

The thesis consists of seven articles containing a number of models and methods for estimating, applying and testing large-scale discrete choice models. The contributions fall under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms.

Five articles relate to route choice modeling. We propose different dynamic discrete choice models that allow paths to be correlated, based on the MEV and mixed logit models. The resulting route choice models are expensive to estimate, and we address this challenge with methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models.

The second theme concerns the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of their correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm.

Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can easily be integrated into standard optimization algorithms (line search and trust region) to accelerate the estimation process.

The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can handle large choice sets and sequential choices are important. Our research is therefore of interest for demand analysis applications (predictive analytics) and can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
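In the dynamic discrete choice framework, choice probabilities follow from a value function computed by dynamic programming; for logit-type models the expected downstream utility has the familiar log-sum-exp form. The sketch below computes link choice probabilities on a small acyclic network by backward induction, in the spirit of recursive route choice models; the toy network, utilities and function names are invented for illustration and are not taken from the thesis.

```python
import math

# Toy acyclic network: node -> list of (successor, deterministic utility of the link).
network = {
    "origin": [("A", -1.0), ("B", -1.5)],
    "A": [("dest", -2.0), ("B", -0.5)],
    "B": [("dest", -1.0)],
    "dest": [],
}

def value_functions(network, dest):
    """Backward induction: V(dest) = 0 and, for the other nodes,
    V(k) = log(sum_a exp(u(k, a) + V(a)))  (logit expected maximum utility)."""
    V = {dest: 0.0}
    for node in ["B", "A", "origin"]:   # reverse topological order, hard-coded for this toy graph
        V[node] = math.log(sum(math.exp(u + V[succ]) for succ, u in network[node]))
    return V

def choice_probabilities(network, V, node):
    """Logit probability of each outgoing link at `node`."""
    return {succ: math.exp(u + V[succ] - V[node]) for succ, u in network[node]}

V = value_functions(network, "dest")
print(choice_probabilities(network, V, "origin"))   # probabilities sum to 1
```

Estimation then amounts to maximizing the likelihood of observed link sequences, re-solving this dynamic program at each parameter value, which is why the reformulation and optimization techniques discussed in the thesis matter for large networks.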

Relevance: 80.00%

Abstract:

Self-assembly of nanoparticles is a promising route to complex, nanostructured materials with functional properties. Nanoparticle assemblies characterized by crystallographic alignment of the nanoparticles on the atomic scale, i.e. mesocrystals, are commonly found in nature and display outstanding functional and mechanical properties. This thesis aims to investigate and understand the formation mechanisms of mesocrystals formed by self-assembling iron oxide nanocubes. We used the thermal decomposition method to synthesize monodisperse, oleate-capped iron oxide nanocubes with average edge lengths between 7 nm and 12 nm and studied evaporation-induced self-assembly in dilute toluene-based nanocube dispersions.

The influence of packing constraints on the alignment of the nanocubes in nanofluidic containers was investigated with small- and wide-angle X-ray scattering (SAXS and WAXS, respectively). We found that the nanocubes preferentially orient one of their {100} faces toward the confining channel wall and display mesocrystalline alignment irrespective of the channel width.

We manipulated the solvent evaporation rate of drop-cast dispersions on fluorosilane-functionalized silica substrates in a custom-designed cell. The growth stages of the assembly process were investigated using light microscopy and quartz crystal microbalance with dissipation monitoring (QCM-D). We found that particle transport phenomena, e.g. the coffee-ring effect and Marangoni flow, result in complex-shaped arrays near the three-phase contact line of a drying colloidal drop when the nitrogen flow rate is high, whereas diffusion-driven nanoparticle assembly into large mesocrystals with a well-defined morphology dominates at much lower nitrogen flow rates. Analysis of time-resolved video microscopy data was used to quantify mesocrystal growth and to establish a particle-diffusion-based, three-dimensional growth model. The dissipation obtained from the QCM-D signal reached its maximum value when the microscopy-observed lateral growth of the mesocrystals ceased, which we attribute to the fluid-like behavior of the mesocrystals and their weak binding to the substrate. Analysis of electron microscopy images and diffraction patterns showed that the formed arrays display significant nanoparticle ordering regardless of the distinct formation processes.

We followed the two-stage formation mechanism of mesocrystals in levitating colloidal drops with real-time SAXS. Modelling of the SAXS data with a square-well potential, together with calculations of the van der Waals interactions, suggests that the nanocubes initially form disordered clusters, which quickly transform into an ordered phase.

Relevance: 80.00%

Abstract:

This paper describes a parallel semi-Lagrangian finite difference approach to the pricing of early exercise Asian options on assets with stochastic volatility. A multigrid procedure is described for the fast iterative solution of the discrete linear complementarity problems that result. The accuracy and performance of this approach are improved considerably by a strike-price-related analytic transformation of asset prices. Asian options are contingent claims with payoffs that depend on the average price of an asset over some time interval. The payoff may depend on this average and a fixed strike price (Fixed Strike Asians) or on the average and the asset price (Floating Strike Asians). The option may also permit early exercise (American contract) or confine the holder to a fixed exercise date (European contract). The Fixed Strike Asian with early exercise and continuous arithmetic averaging is considered here. Pricing such an option when the asset price has stochastic volatility requires solving a tri-variate partial differential inequality in the three state variables of asset price, average price and volatility (or, equivalently, variance). The similarity transformations [6] used with Floating Strike Asian options to reduce the dimensionality of the problem are not applicable to Fixed Strikes, so the numerical solution of a tri-variate problem is necessary. The computational challenge is to provide accurate solutions sufficiently quickly to support real-time trading activities at a reasonable cost in terms of hardware requirements.
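Discretizing an early-exercise pricing problem produces, at each time step, a linear complementarity problem (LCP) of the form A x >= b, x >= psi, (x - psi)^T (A x - b) = 0, where psi is the payoff. The paper solves these LCPs with a multigrid method; the sketch below uses projected SOR instead, as a much simpler stand-in that illustrates the same complementarity constraint. The matrix, payoff and parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def psor(A, b, psi, omega=1.2, tol=1e-10, max_iter=10_000):
    """Projected SOR for the LCP  A x >= b,  x >= psi,  (x - psi)^T (A x - b) = 0,
    the discrete form of one early-exercise pricing step."""
    x = np.maximum(b / np.diag(A), psi)   # crude initial guess
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel update, then projection onto the early-exercise constraint
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            x[i] = max(psi[i], x[i] + omega * (gs - x[i]))
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Tiny illustrative system: a diagonally dominant tridiagonal matrix of the kind
# produced by an implicit finite-difference step, with an arbitrary payoff vector.
n = 5
A = (np.diag(np.full(n, 2.2))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = np.full(n, 0.1)
psi = np.array([1.0, 0.8, 0.5, 0.2, 0.0])
print(psor(A, b, psi))
```

Multigrid replaces this single-level sweep with coarse-grid corrections, which is what makes the tri-variate problem tractable at realistic grid sizes.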

Relevance: 80.00%

Abstract:

Faculty of Physics (Wydział Fizyki)

Relevance: 80.00%

Abstract:

Dissertation (Master's), Universidade de Brasília, Faculdade de Agronomia e Medicina Veterinária, Programa de Pós-Graduação em Agronegócios, 2016.

Relevance: 80.00%

Abstract:

The main objective of seismic processing is to provide an adequate image of the geological structures in the subsurface of sedimentary basins. Among the key steps of this process are the enhancement of seismic reflections by filtering out unwanted signals, called seismic noise, the improvement of the signals of interest, and the application of imaging procedures. Seismic noise may be random or coherent. This dissertation presents a technique to attenuate coherent noise, such as ground roll and multiple reflections, based on the Empirical Mode Decomposition method. The method decomposes the seismic trace into Intrinsic Mode Functions, which are symmetric, have a local mean of zero, and have the same number of zero crossings and extrema. The developed technique was tested on synthetic and real data, and the results were considered encouraging.
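Empirical Mode Decomposition extracts Intrinsic Mode Functions by "sifting": the upper and lower envelopes through the local extrema are averaged, the mean is subtracted, and the step is repeated until the residual behaves like an IMF. The following is a minimal sifting sketch, not the dissertation's implementation; it uses cubic-spline envelopes and a fixed number of sifting iterations, which is a simplification of the stopping criteria used in practice.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift(x, t, n_sift=10):
    """Extract one IMF candidate from the trace x(t) by repeated envelope-mean removal."""
    h = x.copy()
    for _ in range(n_sift):
        maxima = argrelextrema(h, np.greater)[0]
        minima = argrelextrema(h, np.less)[0]
        if len(maxima) < 2 or len(minima) < 2:
            break  # not enough extrema to build envelopes
        upper = CubicSpline(t[maxima], h[maxima])(t)   # upper envelope
        lower = CubicSpline(t[minima], h[minima])(t)   # lower envelope
        h = h - 0.5 * (upper + lower)                  # remove the local mean
    return h

# Synthetic "trace": a low-frequency wave plus a higher-frequency oscillation.
t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
imf1 = sift(x, t)        # should resemble the 40 Hz component
residual = x - imf1      # sifting the residual again would yield further IMFs
print(np.corrcoef(imf1, 0.5 * np.sin(2 * np.pi * 40 * t))[0, 1])  # typically close to 1, boundary effects aside
```

Noise attenuation then amounts to reconstructing the trace from the IMFs that carry reflections while discarding (or filtering) those dominated by ground roll or multiples.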

Relevance: 80.00%

Abstract:

Climate change is a severe threat to human development. Environmental protection and economic growth are two significant dimensions of promoting sustainable global development. In this research, a two-step procedure is applied to investigate carbon productivity, which is deemed an appropriate indicator for measuring sustainable development because it combines carbon reduction with production advancement. A decomposition method based on the Log Mean Divisia Index is applied to explore the factors influencing carbon productivity change, including technological innovation and regional adjustment. The carbon productivity of the Australian construction industry from 1990 to 2012 is then investigated. The results indicate that carbon productivity in Australian construction increased significantly and could be improved further. Technological innovation played an important role in promoting carbon productivity, while the regional adjustment effect remained roughly steady. Correlation analyses show that the scale of the construction market and the stock of machinery and equipment had only weak correlations with carbon productivity changes, and that improvements in carbon productivity could benefit capital productivity and investment returns. The research systematically defines carbon productivity and, for the first time, measures it for the construction industry. The results are expected to assist construction industries worldwide in investigating productivity performance and identifying the influencing factors for improving the sustainability of their development.
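In additive LMDI, the change in an aggregate V that is written as a product of factors is split into factor contributions dV_i = L(V_T, V_0) * ln(x_i,T / x_i,0), where L(a, b) = (a - b) / (ln a - ln b) is the logarithmic mean; the contributions sum exactly to V_T - V_0. The sketch below applies this to a two-factor example with made-up numbers; the factor names ("technology", "structure") are illustrative stand-ins, not the decomposition used in the study.

```python
import math

def log_mean(a, b):
    """Logarithmic mean L(a, b) = (a - b) / (ln a - ln b), with L(a, a) = a."""
    return a if math.isclose(a, b) else (a - b) / (math.log(a) - math.log(b))

def lmdi_additive(factors_0, factors_T):
    """Additive LMDI-I: contribution of each factor to the change in the
    aggregate V = product of the factors."""
    V0 = math.prod(factors_0.values())
    VT = math.prod(factors_T.values())
    L = log_mean(VT, V0)
    return {k: L * math.log(factors_T[k] / factors_0[k]) for k in factors_0}

# Illustrative two-factor decomposition of carbon productivity change; numbers invented.
base   = {"technology": 1.00, "structure": 1.00}
target = {"technology": 1.30, "structure": 0.95}
effects = lmdi_additive(base, target)
print(effects, "sum =", sum(effects.values()))   # the sum equals V_T - V_0
```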

Relevance: 80.00%

Abstract:

Long-term, off-site human monitoring systems are emerging in response to the skyrocketing expenditure associated with rehabilitation therapies for neurological diseases. Inertial/magnetic sensor modules are well known as a worthy solution to this problem. Much attention and effort are devoted to minimizing the drift problem of angular rate measurements, yet the remaining kinematic measurements (the earth's magnetic field and the direction of gravity) are by themselves capable of tracking movements by applying the theory developed for solving the historical Wahba's Problem. Furthermore, these solutions are given in closed form, which makes them well suited to real-time Mo-Cap systems. This paper examines the feasibility of some typical solutions of Wahba's Problem, namely the TRIAD method, Davenport's q method, the Singular Value Decomposition method and the QUEST algorithm, applied to current inertial/magnetic sensor measurements for tracking human arm movements. The theoretical assertions are then compared through controlled experiments with both simulated and actual accelerometer and magnetometer measurements.
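Wahba's Problem seeks the rotation R minimizing sum_i w_i * ||b_i - R r_i||^2, where the r_i are reference-frame directions (e.g., gravity and the magnetic field) and the b_i their body-frame measurements. The SVD solution builds B = sum_i w_i b_i r_i^T and, from B = U S V^T, takes R = U diag(1, 1, det(U) det(V)) V^T. Below is a minimal sketch with two made-up observation vectors; it illustrates only the SVD method from the list above, and the example rotation is invented.

```python
import numpy as np

def wahba_svd(body_vecs, ref_vecs, weights=None):
    """SVD solution of Wahba's Problem: the proper rotation R minimizing
    sum_i w_i * ||b_i - R r_i||^2."""
    if weights is None:
        weights = np.ones(len(body_vecs))
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body_vecs, ref_vecs))
    U, _, Vt = np.linalg.svd(B)
    d = np.linalg.det(U) * np.linalg.det(Vt)   # enforce det(R) = +1
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Check: gravity and magnetic-field directions in the reference frame, rotated into
# the "body" frame by a known rotation; wahba_svd should recover that rotation.
ref = [np.array([0.0, 0.0, 1.0]),
       np.array([1.0, 0.0, 0.3]) / np.linalg.norm([1.0, 0.0, 0.3])]
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
body = [R_true @ r for r in ref]
print(np.allclose(wahba_svd(body, ref), R_true))   # True
```

TRIAD, Davenport's q method and QUEST solve the same problem with different algebra (orthonormal triads or quaternion eigenvalue formulations) and differ mainly in cost and numerical robustness.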

Relevance: 80.00%

Abstract:

Given the persistence of regional differences in labor income in Colombia, this article quantifies the share of this differential that is attributable to differences in labor market structure, understood here as differences in the returns to labor force characteristics. To this end, an Oaxaca-Blinder decomposition method is used to compare Bogotá, the city with the highest labor income, with the other main cities. The results of the decomposition exercise show that the structural differences favor Bogotá and explain more than half of the total difference, indicating that reducing labor income disparities between cities requires more than upgrading the skills of the labor force: it is also necessary to investigate the causes that make the returns to characteristics differ across cities.
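The Oaxaca-Blinder decomposition splits the mean gap in (log) labor income between two groups into an endowments part (the gap in average characteristics weighted by the reference group's coefficients) and a structure part (the gap in coefficients, i.e., returns, weighted by one group's average characteristics), with the coefficients estimated by separate OLS regressions. The sketch below uses simulated data for two hypothetical cities; the variable names, the data-generating process and the choice of reference coefficients are illustrative assumptions, not the article's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols(X, y):
    """OLS coefficients via least squares (X already contains a constant column)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def oaxaca_blinder(X_a, y_a, X_b, y_b):
    """Two-fold decomposition of mean(y_a) - mean(y_b) into an endowments part and a
    coefficients (returns / 'structure') part, with group B as the reference."""
    beta_a, beta_b = ols(X_a, y_a), ols(X_b, y_b)
    xbar_a, xbar_b = X_a.mean(axis=0), X_b.mean(axis=0)
    endowments = (xbar_a - xbar_b) @ beta_b
    structure = xbar_a @ (beta_a - beta_b)
    return endowments, structure

# Simulated log-wage data: city A has both better characteristics (more schooling)
# and higher returns to schooling. All numbers are invented.
def simulate(n, mean_school, return_school):
    school = rng.normal(mean_school, 2.0, n)
    logw = 0.5 + return_school * school + rng.normal(0.0, 0.3, n)
    return np.column_stack([np.ones(n), school]), logw

X_a, y_a = simulate(2000, 12.0, 0.11)   # stand-in for Bogotá
X_b, y_b = simulate(2000, 10.0, 0.08)   # stand-in for a comparison city
endow, struct = oaxaca_blinder(X_a, y_a, X_b, y_b)
print(f"total gap {y_a.mean() - y_b.mean():.3f} = endowments {endow:.3f} + structure {struct:.3f}")
```

Because each regression includes a constant, the two parts add up exactly to the observed mean gap, which is what makes the "more than half explained by structure" statement in the abstract well defined.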

Relevance: 40.00%

Abstract:

The discrete vortex method cannot precisely predict bluff-body flow separation or the fine structure of the flow field in the vicinity of the body surface. To improve the method theoretically and to reduce the difficulty of the finite-difference solution of the N-S equations at high Reynolds number, this paper proposes a new numerical simulation model and a theoretical method for a domain-decomposition hybrid of the finite-difference method and the vortex method. Specifically, the full flow field is decomposed into two domains. In the region of O(R) near the body surface (R is the characteristic dimension of the body), the finite-difference method is used to solve the N-S equations, and in the exterior domain the Lagrange-Euler vortex method is applied. The connection and coupling conditions for the flow in the two domains are established, and the specific numerical scheme of this theoretical model is given. As a preliminary application, numerical simulations of the flow about a circular cylinder at Re=100 and Re=1000 are performed and compared with the finite-difference solution of the N-S equations for the full flow field and with experimental results, and the stability of the solution against changes in the interface between the two domains is examined. The results show that the present method combines the advantage of the finite-difference solution of the N-S equations in precisely predicting the fine structure of the flow field with the advantage of the vortex method in efficiently computing the global characteristics of the separated flow. It saves computer time and reduces the amount of computation compared with a pure N-S equation solution. The present method can be used for the numerical simulation of bluff-body flow at high Reynolds number and would exhibit even greater merit in that case.
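In the outer Lagrangian domain, a vortex method advances point vortices with the velocity each induces on the others via the 2D Biot-Savart law, u = Gamma/(2*pi) * (-dy, dx)/r^2, usually desingularized with a small core radius. The sketch below shows only this induced-velocity ingredient with an invented set of vortices; it is not the paper's coupling scheme, and the core parameter and probe points are arbitrary.

```python
import numpy as np

def induced_velocity(targets, positions, gammas, core=1e-3):
    """Velocity induced at `targets` by 2D point vortices at `positions` with
    circulations `gammas`, using the desingularized Biot-Savart kernel
    u = Gamma / (2*pi) * (-dy, dx) / (r^2 + core^2)."""
    vel = np.zeros_like(targets, dtype=float)
    for (xv, yv), g in zip(positions, gammas):
        dx = targets[:, 0] - xv
        dy = targets[:, 1] - yv
        r2 = dx**2 + dy**2 + core**2
        vel[:, 0] += -g * dy / (2 * np.pi * r2)
        vel[:, 1] += g * dx / (2 * np.pi * r2)
    return vel

# Invented example: velocity at two probe points induced by a small vortex cloud.
positions = np.array([[1.0, 0.0], [1.2, 0.1], [0.9, -0.2]])
gammas = np.array([0.5, -0.3, 0.4])
probes = np.array([[2.0, 0.0], [0.0, 0.0]])
print(induced_velocity(probes, positions, gammas))
# In a hybrid scheme of this kind, such induced velocities (plus the free stream)
# would supply the outer boundary condition for the inner finite-difference N-S solve,
# while the inner solution feeds vorticity back to the particle representation.
```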