931 results for Variational Convergence
Abstract:
GDP has usually been used as a proxy for human well-being. Nevertheless, other social aspects should also be considered, such as life expectancy, infant mortality, educational enrolment, and crime. In this paper we investigate not only economic convergence but also social convergence between regions in a developing country, Colombia, over the period 1975-2005. We apply several techniques in our analysis: sigma convergence, stochastic kernel estimation, and several empirical models to estimate the beta convergence parameter (cross-section and panel estimates, with and without spatial dependence). The main results confirm convergence in Colombia in key social variables, although not in the classic economic variable, GDP per capita. We also find that spatial autocorrelation reinforces convergence processes by deepening market and social factors, while isolation condemns regions to non-convergence.
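As a rough illustration of the two headline techniques, here is a minimal sketch of sigma convergence (falling cross-sectional dispersion of log income) and beta convergence (poorer regions growing faster) on synthetic data; the regional panel, noise levels, and simple OLS fit are illustrative assumptions, not the paper's actual estimation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: log GDP per capita for 30 regions over 1975-2005.
n_regions, years = 30, np.arange(1975, 2006)
log_gdp = rng.normal(9.0, 0.5, n_regions)[:, None] + 0.02 * (years - 1975)
log_gdp += rng.normal(0, 0.05, (n_regions, years.size)).cumsum(axis=1)

# Sigma convergence: does cross-sectional dispersion fall over time?
sigma_t = log_gdp.std(axis=0)

# Beta convergence: regress average growth on the initial level;
# a negative slope means poorer regions grew faster.
growth = (log_gdp[:, -1] - log_gdp[:, 0]) / (years.size - 1)
beta, intercept = np.polyfit(log_gdp[:, 0], growth, 1)

print(f"sigma 1975: {sigma_t[0]:.3f}, sigma 2005: {sigma_t[-1]:.3f}")
print(f"beta estimate: {beta:.4f}")
```

A negative beta estimate together with a declining sigma path is the textbook signature of convergence; the paper's finding is that Colombian social indicators show this pattern while GDP per capita does not.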
Abstract:
The Extended Kalman Filter (EKF) and the four-dimensional variational assimilation method (4D-Var) are both advanced data assimilation methods. The EKF is impractical for large-scale problems, and 4D-Var requires considerable effort to build the adjoint model. In this work we formulate a data assimilation method that tackles both difficulties, called the Variational Ensemble Kalman Filter (VEnKF). The method has been tested with the Lorenz95 model. Data were simulated from the solution of the Lorenz95 equations with normally distributed noise. Two experiments were conducted, the first with full observations and the second with partial observations. In each experiment we assimilate data with three-hour and six-hour time windows, and different ensemble sizes were tested. Neither experiment shows a strong difference between the two time windows. Experiment I gave similar results for all ensemble sizes tested, and a small ensemble was enough to produce good results, while in Experiment II larger ensembles produced better results and the ensemble had to be larger. Computational speed is not as good as we would like; using the limited-memory BFGS method instead of the current BFGS method might improve this. The method has proven successful: even though it cannot match the quality of the EKF analyses, it attains significant skill in the forecasts ensuing from the analyses it produces. It has two advantages over the EKF: VEnKF does not require an adjoint model, and it can be easily parallelized.
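For context, a minimal sketch of the Lorenz95 (Lorenz-96) system together with the kind of noisy synthetic observations used in such twin experiments; the forcing F = 8, the 40-variable state, and the noise level are conventional choices assumed here, not values taken from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz96(t, x, forcing=8.0):
    """Lorenz-96 right-hand side: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

n = 40                                    # conventional state dimension
x0 = 8.0 + 0.01 * np.random.default_rng(1).standard_normal(n)
sol = solve_ivp(lorenz96, (0.0, 5.0), x0, t_eval=np.linspace(0, 5, 101))

# Synthetic observations: the true trajectory plus Gaussian noise,
# as in the twin experiments described above (noise level assumed).
obs = sol.y.T + 0.5 * np.random.default_rng(2).standard_normal(sol.y.T.shape)
```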
Abstract:
This paper analyses the differential impact of human capital, in terms of different levels of schooling, on regional productivity and convergence. The potential existence of geographical spillovers of human capital is also considered by applying spatial panel data techniques. The empirical analysis of Spanish provinces between 1980 and 2007 confirms the positive impact of human capital on regional productivity and convergence, but reveals no evidence of positive geographical spillovers of human capital. In fact, in some specifications the spatial lag of tertiary education has a negative effect on the variables under consideration.
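To make the spatial ingredient concrete, here is a minimal sketch of how a spatial lag regressor is typically built from a row-standardized contiguity matrix; the five-province weights and education shares below are invented for illustration, not the paper's data.

```python
import numpy as np

# Hypothetical example: 5 provinces, binary contiguity matrix W.
W = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)         # row-standardize

tertiary_share = np.array([0.18, 0.25, 0.12, 0.30, 0.21])

# Spatial lag: for each province, the weighted average of its
# neighbours' tertiary-education share, used as a spillover regressor.
spatial_lag = W @ tertiary_share
print(spatial_lag)
```

A negative coefficient on such a lag term in a productivity regression is what the abstract means by a negative effect of the spatial lag of tertiary education.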
Abstract:
Stochastic convergence amongst Mexican federal entities is analyzed in a panel data framework. The joint consideration of cross-section dependence and multiple structural breaks is required to ensure that statistical inference is based on statistics with good properties. Once these features are accounted for, evidence in favour of stochastic convergence is found. Since stochastic convergence is a necessary, yet insufficient, condition for convergence as predicted by economic growth models, the paper also investigates whether a beta-convergence process has taken place. We find that the Mexican states have followed either heterogeneous convergence patterns or a divergence process throughout the analyzed period.
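Stochastic convergence is commonly formalized as stationarity of relative income; a sketch of the usual specification follows (notation assumed here, not taken from the paper's exact model).

```latex
% Stochastic convergence, as commonly formalized: the log of state i's
% income relative to the national average contains no unit root,
\[
  \ln\!\left(\frac{y_{i,t}}{\bar{y}_t}\right) = \mu_i + \varepsilon_{i,t},
  \qquad \varepsilon_{i,t} \ \text{stationary},
\]
% so shocks to relative income die out over time; structural breaks
% enter as shifts in \mu_i.
```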
Abstract:
This thesis studies properties of transforms based on parabolic scaling, such as the curvelet, contourlet, shearlet, and Hart-Smith transforms. Essentially, two different questions are considered: how these transforms can characterize Hölder regularity, and how non-linear approximation of a piecewise smooth function converges. In the study of Hölder regularity, several theorems are presented that relate the regularity of a function f : R² → R to the decay properties of its transform. Of particular interest is the case where a function has lower regularity along some line segment than elsewhere. Theorems are presented that estimate the direction and location of this line and the regularity of the function. Numerical demonstrations also suggest that similar theorems would hold for more general shapes of the low-regularity segment. Theorems related to uniform and pointwise Hölder regularity are presented as well. Although none of the theorems presented gives a full characterization of regularity, the sufficient and necessary conditions are very similar. The other theme of the thesis is the convergence of non-linear M-term approximation of functions that are discontinuous along some curves and otherwise smooth. Under particular smoothness assumptions, it is well known that the squared L² approximation error is O(M⁻²(log M)³) for curvelet, shearlet, or contourlet bases. Here it is shown that, under higher smoothness assumptions, the log factor can be removed, even though the function is still discontinuous.
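For reference, the benchmark rate cited above in display form; this is the standard statement for functions that are C² away from a C² edge curve, a setting assumed here for concreteness.

```latex
% Best M-term approximation by curvelets (or shearlets/contourlets)
% of f that is C^2 away from a C^2 discontinuity curve:
\[
  \|f - f_M\|_{L^2}^2 = O\!\bigl(M^{-2} (\log M)^3\bigr),
\]
% the thesis shows the (\log M)^3 factor can be removed under
% stronger smoothness assumptions away from the discontinuity.
```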
Abstract:
Actors in the Finnish telecommunications industry have undergone intense change over the past 25 years, from the independent companies of the 1980s to companies dependent on one another, and also on actors in neighbouring industries. Today the telecommunications market is shaped not only by the operators but also by media companies (e.g. MTV Media) and IT companies (e.g. TietoEnator). The boundaries between different industries are thus being blurred, a phenomenon commonly referred to as technological convergence. Convergence means that something becomes integrated; it may concern, for example, technologies (telephony and the Internet), companies (AOL and Time Warner), industries (the telecom, media, and IT sectors), services (mobile TV), products (PDAs), and so on. This means that very few telecom actors can develop the market and technical solutions on their own. Cooperation between actors is required; mobile phone manufacturers, content producers, operators, and others need to intensify their collaboration in order to offer attractive services and products to customers and end users. The thesis focuses in particular on business networks and patterns of cooperation between network actors as a means of gaining access to the resources required in a business environment characterized by convergence. The thesis highlights what technological convergence has meant for telecom actors, namely that companies have been forced to change their strategies and business models. For many companies in the industry, adapting to convergence thinking has been challenging, and in some cases one can even speak of companies having experienced an identity crisis. The research shows that convergence is perceived in the market as an ongoing process of change, in which every telecom actor is forced to evaluate its role and position in relation to other actors in the industry. Convergence processes will continue in the future with increasing intensity. The actors consciously shape their environment by acting in different roles, which may stretch across industry boundaries. The thesis also demonstrates that external events and the industry context affect the dynamics of a business network.
Abstract:
The study of convergence and divergence in the global economy and social development utilises comparative indicators to investigate the contents of economic and social development policy and their effects on global samples representing the rich industrial, semi-industrial, and poor developing nations. The study searches for answers to questions such as: what are the objectives of economic growth policies in globalisation under the imperatives of convergence and divergence, and how do these affect human well-being in relation to the objectives of social policy in various nations? The empirical verification of the data utilises the concepts of the 'logic of industrialism' for a comparative analysis focused mainly on identifying levels of well-being in world nations after the Second World War. The perspectives of convergence and divergence critically examine the stages of early development processes in the global economy, distinguish the differences between economic and social development, and illustrate the contents of economic and social development policies, their effects on rich and poor countries, and the role of convergence and divergence in propelling economic growth and unequal social development across nations. The measurement of convergence and divergence utilised both economic and social data, combined into an index that measures the levels of the effects of economic and social development policies on human well-being in rich and poor nations. The task of finding policy solutions to resolve the controversies is reviewed through empirical investigation and analysis of trends in the economic and social indicators and data. These revealed how the adoption of social policy measures that translate the gains from economic growth into education, public health, and equity generates social progress, longer life expectancy, higher economic growth, and a more stable macroeconomy. Social policy is concerned with translating the benefits of global economic growth policies into the objectives of social development policy in nation states. Social policy therefore represents an open door whereby the benefits of economic growth policies are linked with the broader objectives of social development policy, extending the benefits of economic growth to all human beings in every nation.
Abstract:
The objective of the thesis is to enhance understanding of the evolution of convergence. Previous research has shown that technological interfaces between distinct industries are a major source of radical cross-industry innovations. Although convergence in industry evolution has attracted substantial managerial interest, conceptual confusion persists within the field. Firstly, this study clarifies the convergence phenomenon and its impact on industry evolution. Secondly, it develops novel patent analysis methods to analyze technological convergence and provides tools for anticipating its early stages. Overall, the study combines the industry evolution perspective with the convergence view of industrial evolution. The theoretical background consists of industry life cycle theories, technology evolution, and technological trajectories. The study links several concepts that are important in analyzing industry evolution: technological discontinuities, path dependency, technological interfaces as a source of industry transformation, and the evolutionary stages of convergence. A generic understanding of industry transformation and industrial dynamics was developed from the literature review. Within the convergence studies, the theoretical basis lies in the discussion of different convergence types and their impacts on industry evolution, and in anticipating and monitoring the stages of convergence. The study is divided into two parts: the first gives a general overview, and the second comprises eight research publications. The case study uses two historically very distinct industries, paper companies and electronics companies, as a test environment for evaluating the importance of emerging business sectors and technological convergence as sources of industry transformation. Both qualitative and quantitative research methodologies are utilized. The results reveal that technological convergence and complementary innovations from different fields have a significant effect on the formation of emerging business sectors. Patent-based indicators of technological convergence can be used to analyze technology competition, capability and competence development, knowledge accumulation, knowledge spillovers, and technology-based industry transformation, and they can provide insights into the future competitive environment. The results and conclusions of the empirical part do not appear to conflict with actual observations in the industry.
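One common patent-based convergence indicator is the share of patents co-classified in two formerly separate fields; here is a minimal sketch of that idea, with made-up patent records and IPC classes chosen only for illustration, not the thesis's actual method or data.

```python
from collections import Counter

# Hypothetical patent records: (year, set of IPC classes).
patents = [
    (2001, {"D21H"}), (2001, {"H01L"}), (2002, {"D21H", "H01L"}),
    (2003, {"D21H", "H01L"}), (2003, {"H01L"}), (2003, {"D21H", "H01L"}),
]

paper_classes, electronics_classes = {"D21H"}, {"H01L"}

co_class, totals = Counter(), Counter()
for year, classes in patents:
    totals[year] += 1
    if classes & paper_classes and classes & electronics_classes:
        co_class[year] += 1

for year in sorted(totals):
    # A rising co-classification share hints at technological convergence.
    print(year, co_class[year] / totals[year])
```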
Abstract:
Financial analysts are of great importance to financial markets, especially through conveying information in the form of earnings forecasts. Typically, analysts disagree to some extent in their earnings forecasts, and it is precisely this disagreement among analysts that this thesis studies. When a company reports losses, disagreement about the company's future tends to increase. On an intuitive level, it is easy to interpret this as increased uncertainty. This is also what one finds when studying analyst reports: analysts appear to become more uncertain when companies start making losses, and that is precisely when disagreement among analysts increases. The mathematical-theoretical models that describe analysts' decision processes, however, have the opposite implication: increased disagreement among analysts can only arise if the analysts become more certain at an individual level, with asymmetric information as the driving force. This thesis resolves the contradiction between increased certainty and increased uncertainty as the driving force behind dispersion in analyst forecasts. When the amount of public information made available through earnings reports is taken into account, the models of analysts' decision processes cannot generate the levels of forecast dispersion observed in the data. The conclusion is therefore that the underlying theoretical models of forecast dispersion are partly deficient, and that dispersion in forecasts more likely follows from increased uncertainty among analysts, in line with what analysts actually state in their reports. The results are important because an understanding of uncertainty around, for example, earnings announcements contributes to a general understanding of the financial reporting environment, which in turn is of great importance for price formation in financial markets. Furthermore, increased forecast dispersion is typically used in accounting research as an indication of increased information asymmetry, a practice that this thesis thereby calls into question.
Abstract:
This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied in areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In the Bayesian setting, the estimation of states or parameters amounts to computing the posterior probability density function. Except for a very restricted class of models, it is impossible to compute this density in closed form, so approximation methods are needed. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states from the available measurements. Among these, particle filters are numerical methods that approximate the filtering distributions of non-linear non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; an inappropriate choice can cause the particle filter algorithm to fail to converge. In this thesis, we analyze the theoretical Lᵖ convergence of the particle filter with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem is concerned with inferring the model parameters from measurements. For high-dimensional complex models, parameters can be estimated by Markov chain Monte Carlo (MCMC) methods, which require the unnormalized posterior distribution of the parameters and a proposal distribution. In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods in which the states are integrated out; this computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density and use hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends strongly on the chosen proposal distribution. A commonly used proposal is Gaussian, in which case the covariance matrix must be well tuned; adaptive MCMC methods can be used for this. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
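To illustrate the filtering approach described above, here is a minimal bootstrap particle filter, the simplest case in which the importance distribution is the transition prior; the scalar random-walk model, noise levels, and ensemble size are illustrative assumptions, not the thesis's models.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_particle_filter(ys, n_particles=500, q=0.1, r=0.5):
    """Bootstrap filter for an illustrative scalar random-walk model:
    x_t = x_{t-1} + N(0, q^2),  y_t = x_t + N(0, r^2).
    The importance distribution is the transition prior; the thesis
    analyzes L^p convergence for general importance distributions."""
    particles = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in ys:
        particles = particles + rng.normal(0.0, q, n_particles)  # propagate
        logw = -0.5 * ((y - particles) / r) ** 2                  # weight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * particles))                       # estimate
        idx = rng.choice(n_particles, n_particles, p=w)           # resample
        particles = particles[idx]
    return np.array(means)

# Twin experiment on simulated data.
x = np.cumsum(rng.normal(0, 0.1, 100))
y = x + rng.normal(0, 0.5, 100)
print(np.abs(bootstrap_particle_filter(y) - x).mean())
```

In practice, better importance distributions (e.g. ones that look at the current observation) reduce weight degeneracy, which is why the choice matters for convergence.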
Abstract:
This thesis studies the suitability of a recent data assimilation method, the Variational Ensemble Kalman Filter (VEnKF), for real-life fluid dynamics problems in hydrology. VEnKF combines a variational formulation of the data assimilation problem, based on minimizing an energy functional, with an Ensemble Kalman Filter approximation to the Hessian matrix that also serves as an approximation to the inverse of the error covariance matrix. One of the significant features of VEnKF is very frequent re-sampling of the ensemble: it is resampled at every observation step. This unusual feature is taken further by observation interpolation, which proves beneficial for numerical stability; in that case the ensemble is resampled at every time step of the numerical model. VEnKF is implemented in several configurations with data from a real laboratory-scale dam-break problem modelled with the shallow water equations. It is also applied to a two-layer quasi-geostrophic atmospheric flow problem. In both cases VEnKF proves to be an efficient and accurate data assimilation method that renders the analysis more realistic than the numerical model alone, and its adaptive nature makes it robust against filter instability.
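A sketch of the kind of energy functional minimized at each assimilation step, in the standard variational form; the notation is assumed here for illustration rather than taken from the thesis.

```latex
% Background term plus observation misfit, minimized over the state x:
\[
  J(x) = \tfrac{1}{2}\,(x - x_b)^{\mathsf T} B^{-1} (x - x_b)
       + \tfrac{1}{2}\,\bigl(y - \mathcal{H}(x)\bigr)^{\mathsf T}
         R^{-1} \bigl(y - \mathcal{H}(x)\bigr),
\]
% where x_b is the background state, y the observations, H the
% observation operator, and the ensemble supplies a low-rank
% approximation to B^{-1} (equivalently, to the Hessian of J).
```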
Abstract:
All-electron partitioning of wave functions into products Ψ_core Ψ_val of core and valence parts in orbital space results in the loss of core-valence antisymmetry, in uncorrelated motion of core and valence electrons, and in core-valence overlap. These effects are studied with the variational Monte Carlo method using appropriately designed wave functions for the first-row atoms and positive ions. It is shown that the loss of antisymmetry with respect to interchange of core and valence electrons is the dominant effect, increasing rapidly across the row, while the effect of core-valence uncorrelation is generally smaller. Orthogonality of the core and valence parts partially substitutes for the exclusion principle and is absolutely necessary for meaningful calculations with partitioned wave functions; core-valence overlap may lead to nonsensical values of the total energy. It is found that even relatively crude core-valence partitioned wave functions can generally estimate ionization potentials more accurately than traditional, non-partitioned ones, provided they achieve maximum separation (independence) of the core and valence shells together with high internal flexibility of Ψ_core and Ψ_val. Our best core-valence partitioned wave function of that kind estimates the IPs with an accuracy comparable to the most accurate theoretical determinations in the literature.
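The partitioning discussed above, in display form; the coordinate labelling is assumed here for illustration.

```latex
% The fully antisymmetric wave function is replaced by a simple
% product over core and valence coordinates (n_c core electrons):
\[
  \Psi(\mathbf{r}_1,\dots,\mathbf{r}_N)
  \;\approx\; \Psi_{\mathrm{core}}(\mathbf{r}_1,\dots,\mathbf{r}_{n_c})\,
              \Psi_{\mathrm{val}}(\mathbf{r}_{n_c+1},\dots,\mathbf{r}_N),
\]
% which drops antisymmetry with respect to core-valence exchange;
% imposing orthogonality of the two parts partially compensates.
```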
Abstract:
Optimization of wave functions in quantum Monte Carlo is a difficult task because the statistical uncertainty inherent in the technique makes the absolute determination of the global minimum difficult. To optimize these wave functions, we generate a large number of possible minima using many independently generated Monte Carlo ensembles and perform a conjugate gradient optimization on each. We then construct histograms of the resulting nominally optimal parameter sets and "filter" them to identify which parameter sets "go together" to generate a local minimum, following with correlated-sampling verification runs to find the global minimum. We illustrate this technique for variance and variational energy optimization for a variety of wave functions for small systems. For the optimized wave functions we calculate the variational energy and variance as well as various non-differential properties. The optimizations are either on par with or superior to determinations in the literature. Furthermore, we show that the technique is sufficiently robust that for molecules one may determine the optimal geometry at the same time as one optimizes the variational energy.
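A minimal sketch of the multi-start-plus-histogram idea described above: optimize from many independently seeded starts of a noisy objective, then histogram the resulting parameters to locate the cluster of minima. The quadratic stand-in objective and the derivative-free optimizer are illustrative assumptions, not the thesis's VMC machinery.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def noisy_objective(p, noise=0.05):
    """Stand-in for a Monte Carlo variance/energy estimate: a smooth
    function plus statistical noise from a finite ensemble."""
    return (p[0] - 1.0) ** 2 + 0.5 * (p[1] + 2.0) ** 2 + noise * rng.standard_normal()

# Many independent optimizations, each seeing different MC noise.
results = np.array([
    minimize(noisy_objective, rng.uniform(-5, 5, 2), method="Nelder-Mead").x
    for _ in range(200)
])

# "Filter" via histograms: the modal bin of each parameter marks the
# cluster of nominally optimal sets that go together.
for j in range(2):
    counts, edges = np.histogram(results[:, j], bins=30)
    k = counts.argmax()
    print(f"parameter {j}: mode near {(edges[k] + edges[k + 1]) / 2:.2f}")
```

Correlated-sampling verification, as in the abstract, would then re-evaluate the candidate parameter sets on a common ensemble so their energies can be compared without independent statistical noise.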