116 results for computation
Abstract:
A new algorithm called the parameterized expectations approach (PEA) for solving dynamic stochastic models under rational expectations is developed and its advantages and disadvantages are discussed. This algorithm can, in principle, approximate the true equilibrium arbitrarily well. Also, this algorithm works from the Euler equations, so that the equilibrium does not have to be cast in the form of a planner's problem. Monte Carlo integration and the absence of grids on the state variables cause the computation costs not to go up exponentially when the number of state variables or exogenous shocks in the economy increases. As an application we analyze an asset pricing model with endogenous production. We analyze its implications for the time dependence of volatility of stock returns and the term structure of interest rates. We argue that this model can generate hump-shaped term structures.
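A minimal sketch of the core PEA loop, written for a toy stochastic growth model with log utility and full depreciation rather than the asset pricing application of the paper (the model, parameter values, and the log-linear form of the approximated conditional expectation are our illustrative assumptions):

import numpy as np

# Toy model: Euler equation 1/c_t = beta * E_t[ alpha * theta_{t+1} * k_{t+1}**(alpha-1) / c_{t+1} ],
# with k_{t+1} = theta_t * k_t**alpha - c_t and log(theta) an AR(1).
# PEA idea: approximate E_t[.] by exp(b0 + b1*log k_t + b2*log theta_t),
# simulate, regress the realized integrand on the current states, damp, repeat.
alpha, beta, rho, sigma = 0.33, 0.95, 0.95, 0.01
T, damp = 10_000, 0.5
rng = np.random.default_rng(0)

log_theta = np.zeros(T)
for t in range(1, T):                               # one fixed draw of the shocks
    log_theta[t] = rho * log_theta[t - 1] + sigma * rng.standard_normal()
theta = np.exp(log_theta)

b = np.array([0.5, -0.3, -0.9])                     # initial coefficients of the expectation
for it in range(500):
    k = np.empty(T); c = np.empty(T)
    k[0] = (alpha * beta) ** (1.0 / (1.0 - alpha))  # deterministic steady state
    for t in range(T):
        psi = np.exp(b[0] + b[1] * np.log(k[t]) + b[2] * log_theta[t])
        c[t] = min(1.0 / (beta * psi), 0.99 * theta[t] * k[t] ** alpha)   # feasibility guard
        if t + 1 < T:
            k[t + 1] = theta[t] * k[t] ** alpha - c[t]
    y = alpha * theta[1:] * k[1:] ** (alpha - 1.0) / c[1:]   # realized integrand at t+1
    X = np.column_stack([np.ones(T - 1), np.log(k[:-1]), log_theta[:-1]])
    b_new, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)    # regression in logs (a simplification of the nonlinear regression step)
    if np.max(np.abs(b_new - b)) < 1e-6:
        break
    b = damp * b_new + (1.0 - damp) * b             # damped fixed-point update

print("fitted coefficients:", b)
print("closed-form benchmark for this toy model:", [-np.log(beta * (1 - alpha * beta)), -alpha, -1.0])

The simulation-plus-regression structure is what keeps the cost from growing exponentially in the number of states: no grid is ever built over the state space.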
Abstract:
In moment structure analysis with nonnormal data, asymptotically valid inferences require the computation of a consistent (under general distributional assumptions) estimate of the matrix $\Gamma$ of asymptotic variances of sample second-order moments. Such a consistent estimate involves the fourth-order sample moments of the data. In practice, the use of fourth-order moments leads to computational burden and lack of robustness in small samples. In this paper we show that, under certain assumptions, correct asymptotic inferences can be attained when $\Gamma$ is replaced by a matrix $\Omega$ that involves only the second-order moments of the data. The present paper extends to the context of multi-sample analysis of second-order moment structures results derived in the context of (single-sample) covariance structure analysis (Satorra and Bentler, 1990). The results apply to a variety of estimation methods and general types of statistics. An example involving a test of equality of means under covariance restrictions illustrates theoretical aspects of the paper.
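For readers unfamiliar with the two matrices, the standard single-sample expressions are as follows (our notation; the paper's multi-sample definitions differ in detail). With $d_i = \operatorname{vech}(x_i x_i^\top)$, the distribution-free estimate is $\hat\Gamma = n^{-1}\sum_{i=1}^{n}(d_i-\bar d)(d_i-\bar d)^\top$, which involves fourth-order sample moments, whereas the normal-theory counterpart $\Omega = 2\,D_p^{+}(\Sigma\otimes\Sigma)\,D_p^{+\top}$, with $D_p$ the duplication matrix, involves only second-order moments.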
Abstract:
A method is offered that makes it possible to apply generalized canonical correlation analysis (CANCOR) to two or more matrices of different row and column order. The new method optimizes the generalized canonical correlation analysis objective by considering only the observed values. This is achieved by employing selection matrices. We present and discuss fit measures to assess the quality of the solutions. In a simulation study we assess the performance of our new method and compare it to an existing procedure called GENCOM, proposed by Green and Carroll. We find that our new method outperforms the GENCOM algorithm both with respect to model fit and recovery of the true structure. Moreover, as our new method does not require any type of iteration, it is easier to implement and requires less computation. We illustrate the method by means of an example concerning the relative positions of the political parties in the Netherlands based on provincial data.
Abstract:
We construct and calibrate a general equilibrium business cycle model with unemployment and precautionary saving. We compute the cost of business cycles and locate the optimum in a set of simple cyclical fiscal policies. Our economy exhibits productivity shocks, giving firms an incentive to hire more when productivity is high. However, business cycles make workers' income riskier, both by increasing the unconditional probability of unusually long unemployment spells and by making wages more variable, and therefore they decrease social welfare by around one-fourth to one-third of 1% of consumption. Optimal fiscal policy offsets the cycle, holding unemployment benefits constant but varying the tax rate procyclically to smooth hiring. By running a deficit of 4% to 5% of output in recessions, the government eliminates half the variation in the unemployment rate, most of the variation in workers' aggregate consumption, and most of the welfare cost of business cycles.
Abstract:
The Treatise on Quadrature of Fermat (c. 1659), besides containing the first known proof of the computation of the area under a higher parabola, $\int x^{m/n}\,dx$, or under a higher hyperbola, $\int x^{-m/n}\,dx$, with the appropriate limits of integration in each case, has a second part which was not understood by Fermat's contemporaries. This second part of the Treatise is obscure and difficult to read, and even the great Huygens described it as 'published with many mistakes and it is so obscure (with proofs redolent of error) that I have been unable to make any sense of it'. Far from the confusion that Huygens attributes to it, in this paper we try to prove that Fermat, in writing the Treatise, had a very clear goal in mind and that he managed to attain it by means of a simple and original method. Fermat reduced the quadrature of a great number of algebraic curves to the quadrature of known curves: the higher parabolas and hyperbolas of the first part of the paper. Others he reduced to the quadrature of the circle. We shall see how the clever use of two procedures, quite novel at the time (the change of variables and a particular case of the formula of integration by parts), provides Fermat with the necessary tools to square very easily such well-known curves as the folium of Descartes, the cissoid of Diocles, or the witch of Agnesi.
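For reference, the two quadratures from the first part of the Treatise take the familiar modern form (the limits shown are the usual choices; notation is ours): $\int_0^a x^{m/n}\,dx = \frac{n}{m+n}\,a^{(m+n)/n}$ for the higher parabolas, and $\int_a^{\infty} x^{-m/n}\,dx = \frac{n}{m-n}\,a^{(n-m)/n}$, valid for $m>n$, for the higher hyperbolas.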
Abstract:
This paper surveys asset allocation methods that extend the traditional approach. An important feature of the traditional approach is that it measures the risk and return tradeoff in terms of the mean and variance of final wealth. However, there are other important features, concerning the investor's wealth, information, and horizon, that are not always made explicit: the investor makes a single portfolio choice based only on the mean and variance of her final financial wealth, and she knows the relevant parameters in that computation. First, the paper describes traditional portfolio choice based on four basic assumptions, while the rest of the sections relax those assumptions. Each section describes the corresponding equilibrium implications in terms of portfolio advice and asset pricing.
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
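The label-flipping equivalence in the last sentence can be illustrated with a small sketch (our own toy example over axis-aligned decision stumps, for which exact empirical risk minimization is cheap; the hypothesis class and the data are illustrative assumptions, not from the paper). For an even sample size, the maximum over the class of (error on one half minus error on the other half) equals 1 - 2 times the minimal 0-1 error on the sample with that half's labels flipped:

import numpy as np

def stump_erm_error(X, y):
    """Exact 0-1-loss empirical risk minimization over axis-aligned decision stumps
    (plus the two constant classifiers); returns the minimal empirical error."""
    best = min(np.mean(y != 1), np.mean(y != -1))
    for j in range(X.shape[1]):
        for t in X[:, j]:
            for s in (1, -1):
                pred = np.where(X[:, j] > t, s, -s)
                best = min(best, np.mean(pred != y))
    return best

def maximal_discrepancy(X, y, rng):
    """Max over stumps of (error on one half) - (error on the other half),
    obtained by ERM on a copy of the sample with one half's labels flipped."""
    n = len(y)
    idx = rng.permutation(n)
    y_mod = y.copy()
    y_mod[idx[: n // 2]] *= -1          # flip the labels of one half
    return 1.0 - 2.0 * stump_erm_error(X, y_mod)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)    # noiseless toy labels
print("maximal discrepancy penalty:", maximal_discrepancy(X, y, rng))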
Abstract:
The spatial, spectral, and temporal resolutions of remote sensing images, acquired over a reasonably sized image extent, result in imagery that can be processed to represent land cover over large areas with an amount of spatial detail that is very attractive for monitoring, management, and scientific activities. With Moore's Law alive and well, more and more parallelism is introduced into all computing platforms, at all levels of integration and programming, to achieve higher performance and energy efficiency. Since geometric calibration is one of the most time-consuming processes when using remote sensing images, the aim of this work is to accelerate it by taking advantage of new computing architectures and technologies, focusing especially on exploiting computation on shared-memory multi-threading hardware. A parallel implementation of the most time-consuming step of the remote sensing geometric correction has been developed using OpenMP directives. This work compares the performance of the original serial binary against the parallelized implementation on several modern multi-threaded CPU architectures, and discusses how to find the optimum hardware for a cost-effective execution.
Abstract:
We present formulas for computing the resultant of sparse polynomials as a quotient of two determinants, the denominator being a minor of the numerator. These formulas extend the original formulation given by Macaulay for homogeneous polynomials.
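Schematically (our notation, not the paper's), such formulas express the resultant as $\mathrm{Res}(f_0,\dots,f_n) = \det M / \det M'$, where $M$ is a structured matrix built from the coefficients of the input polynomials and $M'$ is a distinguished minor of $M$; Macaulay's classical construction for dense homogeneous polynomials already has exactly this quotient form, and the paper extends it to the sparse setting.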
Abstract:
Weather radar observations are currently the most reliable method for remote sensing of precipitation. However, a number of factors affect the quality of radar observations and may seriously limit automated quantitative applications of radar precipitation estimates, such as those required in Numerical Weather Prediction (NWP) data assimilation or in hydrological models. In this paper, a technique to correct two different problems typically present in radar data is presented and evaluated. The aspects dealt with are non-precipitating echoes, caused either by permanent ground clutter or by anomalous propagation of the radar beam (anaprop echoes), and topographical beam blockage. The correction technique is based on the computation of realistic beam propagation trajectories from recent radiosonde observations instead of assuming standard radio propagation conditions. The correction consists of three different steps: 1) calculation of a Dynamic Elevation Map, which provides the minimum clutter-free antenna elevation for each pixel within the radar coverage; 2) correction for residual anaprop, checking the vertical reflectivity gradients within the radar volume; and 3) topographical beam blockage estimation and correction using a geometric optics approach. The technique is evaluated with four case studies in the region of the Po Valley (N Italy) using a C-band Doppler radar and a network of raingauges providing hourly precipitation measurements. The case studies cover different seasons, different radio propagation conditions, and both stratiform and convective precipitation events. After applying the proposed correction, a comparison of the radar precipitation estimates with raingauges indicates a general reduction in both the root mean squared error and the fractional error variance, indicating the efficiency and robustness of the procedure. Moreover, the technique presented is not computationally expensive, so it seems well suited to be implemented in an operational environment.
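As a point of comparison for step 1), the following is a minimal sketch of the standard-refraction beam-height calculation that the paper replaces with radiosonde-based trajectories (the 4/3 effective-Earth-radius model, the toy terrain profile, and the clearance threshold are our assumptions, not the paper's):

import numpy as np

def beam_height_standard(r_km, elev_deg, ke=4.0 / 3.0, earth_radius_km=6371.0):
    """Beam-centre height above the radar (km) under standard refraction
    (effective 4/3-Earth-radius model)."""
    re = ke * earth_radius_km
    th = np.deg2rad(elev_deg)
    return np.sqrt(r_km ** 2 + re ** 2 + 2.0 * r_km * re * np.sin(th)) - re

def min_clear_elevation(r_km, terrain_km, elevations_deg, clearance_km=0.2):
    """Toy analogue of one Dynamic Elevation Map entry: the lowest scan elevation
    whose beam centre clears the terrain along the whole ray by a given margin."""
    for elev in sorted(elevations_deg):
        if np.all(beam_height_standard(r_km, elev) >= terrain_km + clearance_km):
            return elev
    return None                                     # blocked at every available elevation

r = np.linspace(1.0, 120.0, 120)                    # ranges along one ray (km)
terrain = 0.8 * np.exp(-((r - 60.0) ** 2) / 200.0)  # toy hill, heights relative to the radar (km)
print(min_clear_elevation(r, terrain, elevations_deg=[0.5, 1.2, 2.0, 3.5, 5.0]))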
Abstract:
A frequency-dependent compact model for inductors on high-ohmic substrates, based on an energy point of view, is developed. This approach enables the description of the most important coupling phenomena that take place inside the device. Magnetically induced losses are quite accurately calculated, and coupling between electric and magnetic fields is given by means of a delay constant. The latter coupling phenomenon provides a modified procedure for the computation of the fringing capacitance value, when the self-resonance frequency of the inductor is used as a fitting parameter. The model takes into account the width of every metal strip and the pitch between strips. This enables the description of inductors with optimized layouts. Data from experiments and electromagnetic simulators are presented to test the accuracy of the model.
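As background for the fitting step mentioned above (a simplification of ours, not the paper's modified procedure): for a plain parallel LC description of the coil, the self-resonance frequency satisfies $f_{sr} = 1/\bigl(2\pi\sqrt{L\,C}\bigr)$, so once $L$ is known the total shunt capacitance can be backed out as $C = 1/\bigl[(2\pi f_{sr})^{2} L\bigr]$, which is why the measured self-resonance is a natural fitting parameter for the capacitance value.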
Abstract:
The labor share of national income is constant under the assumptions of a Cobb-Douglas production function and perfect competition. In this article we relax these assumptions and investigate whether the non-constant behavior of the labor share of national income can be explained by (i) a non-unitary elasticity of substitution between capital and labor and (ii) imperfect competition in the product market. We focus on Spain and the U.S. and estimate a production function with constant elasticity of substitution and imperfect competition in the product market. The degree of imperfect competition is measured by computing the price markup based on the dual approach. We show that the elasticity of substitution is greater than one in Spain and smaller than one in the U.S. We also show that the price markup moves the elasticity of substitution away from one, raising it in Spain and reducing it in the U.S. These results are used to explain the declining path of the labor share of national income, common to both economies, and their contrasting capital paths.
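One standard way to see the mechanism (our notation and normalization, not necessarily the paper's specification): with a CES technology $Y = A\bigl[\delta K^{\frac{\sigma-1}{\sigma}} + (1-\delta)L^{\frac{\sigma-1}{\sigma}}\bigr]^{\frac{\sigma}{\sigma-1}}$ and a price markup $\mu$ over marginal cost, the labor share is $s_L = \frac{1-\delta}{\mu}\bigl(\frac{Y}{AL}\bigr)^{\frac{1-\sigma}{\sigma}}$, which is constant only in the Cobb-Douglas case $\sigma = 1$ (for a given $\mu$) and otherwise moves with labor productivity and with the markup.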
Abstract:
An extension of the self-consistent field approach formulated by Cohen in the preceding paper is proposed in order to include the most general kind of two-body interactions, i.e., interactions depending on position, momenta, spin, isotopic spin, etc. The dielectric function is replaced by a dielectric matrix. The evaluation of the energies involves the computation of a matrix inversion and a trace.
Abstract:
In this paper we examine in detail the implementation, with its associated difficulties, of the Killing conditions and gauge fixing into the variational principle formulation of Bianchi-type cosmologies. We address problems raised in the literature concerning the Lagrangian and the Hamiltonian formulations: We prove their equivalence, make clear the role of the homogeneity preserving diffeomorphisms in the phase space approach, and show that the number of physical degrees of freedom is the same in the Hamiltonian and Lagrangian formulations. Residual gauge transformations play an important role in our approach, and we suggest that Poincaré transformations for special relativistic systems can be understood as residual gauge transformations. In the Appendixes, we give the general computation of the equations of motion and the Lagrangian for any Bianchi-type vacuum metric and for spatially homogeneous Maxwell fields in a nondynamical background (with zero currents). We also illustrate our counting of degrees of freedom in an appendix.
Abstract:
We obtain the next-to-next-to-leading-logarithmic renormalization-group improvement of the spectrum of hydrogenlike atoms with massless fermions by using potential NRQED. These results can also be applied to the computation of the muonic hydrogen spectrum, where we are able to reproduce some known double logarithms at $O(m\alpha^6)$. We compare with other formalisms dealing with logarithmic resummation available in the literature.