119 results for Ephemeral Computation


Relevance: 10.00%

Abstract:

This project is situated within contemporary history and focused, first, on a comparative analysis of the development of gender discourses in Catalonia under the Franco dictatorship and in postcolonial Ireland. Through discourse analysis, it examines the models of femininity imposed by Francoism and their ideological foundations, namely Catholic values and anti-individualism. For the Irish case, it analyzes how certain institutions run by the Catholic Church were used to control women who deviated from the gender model promoted by the Irish State, a model very similar to that of Francoism and likewise grounded in Catholicism. Similarly, it studies how Catalan and Irish feminism of the 1970s and 1980s countered these imposed gender models, through the analysis of a set of cultural expressions produced by both feminist movements. The comparative perspective of the project has yielded: an understanding of the cultural mechanisms of women's repression and of their institutionalization, revealing the parallels in gender policies between the two cases despite significant contextual differences (Catalonia was under a dictatorship, Ireland was a democratic state); and the importance of women's agency and their diverse strategies of resistance, especially through cultural expressions that were more ephemeral or considered frivolous and that, despite the little recognition they have received, are highly effective in deconstructing gender discourses repressive of women. It has also highlighted the importance of personal and intimate experience and practices as practices of resistance, and has made visible the internal dynamics of feminist movements.

Relevance: 10.00%

Abstract:

A new algorithm, the parameterized expectations approach (PEA), for solving dynamic stochastic models under rational expectations is developed, and its advantages and disadvantages are discussed. The algorithm can, in principle, approximate the true equilibrium arbitrarily well. It also works directly from the Euler equations, so the equilibrium does not have to be cast in the form of a planner's problem. Monte Carlo integration and the absence of grids on the state variables keep computation costs from growing exponentially as the number of state variables or exogenous shocks in the economy increases. As an application, we analyze an asset pricing model with endogenous production and study its implications for the time dependence of the volatility of stock returns and for the term structure of interest rates. We argue that this model can generate hump-shaped term structures.
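The fixed-point iteration at the heart of PEA can be sketched on a toy stochastic growth model with log utility and full depreciation, where the conditional expectation in the Euler equation happens to be exactly log-linear in the states. This is a minimal illustration under assumed parameter values, not the paper's implementation:

```python
import numpy as np

# Toy model: max E sum beta^t ln(c_t), k_{t+1} = theta_t k_t^a - c_t.
# Euler equation: 1/c_t = beta * E_t[a*theta'*k'^(a-1)/c'].
# PEA: parameterize E_t ~ exp(b0 + b1*ln(theta_t) + b2*ln(k_t)), simulate,
# regress the realized integrand on the states, and iterate with damping.
rng = np.random.default_rng(0)
alpha, beta, rho, sigma = 0.33, 0.95, 0.90, 0.01
T = 5_000

lntheta = np.zeros(T)                     # log productivity: AR(1)
eps = rng.normal(0.0, sigma, T)
for t in range(1, T):
    lntheta[t] = rho * lntheta[t - 1] + eps[t]
theta = np.exp(lntheta)

b = np.zeros(3)                           # expectation-function coefficients
damp = 0.3
for it in range(200):
    k = np.empty(T + 1)
    c = np.empty(T)
    k[0] = (alpha * beta) ** (1.0 / (1.0 - alpha))   # steady-state capital
    for t in range(T):
        E = np.exp(b[0] + b[1] * lntheta[t] + b[2] * np.log(k[t]))
        out = theta[t] * k[t] ** alpha
        c[t] = min(1.0 / (beta * E), 0.99 * out)     # keep investment positive
        k[t + 1] = out - c[t]
    # Realized value whose conditional expectation the Euler equation needs:
    y = alpha * theta[1:] * k[1:T] ** (alpha - 1.0) / c[1:]
    X = np.column_stack([np.ones(T - 1), lntheta[:-1], np.log(k[:T - 1])])
    b_new, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
    if np.max(np.abs(b_new - b)) < 1e-8:
        b = b_new
        break
    b = (1.0 - damp) * b + damp * b_new

print(b)  # should approach b1 ~ -1, b2 ~ -alpha (the exact solution)
```

The damping step and the consumption clamp are practical stabilizers; in this special case the parameterization nests the exact solution, so the iteration should settle on it. The general models in the paper require richer polynomial families.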

Relevance: 10.00%

Abstract:

In moment structure analysis with nonnormal data, asymptotically valid inferences require the computation of a consistent (under general distributional assumptions) estimate of the matrix $\Gamma$ of asymptotic variances of sample second-order moments. Such a consistent estimate involves the fourth-order sample moments of the data. In practice, the use of fourth-order moments leads to computational burden and to a lack of robustness in small samples. In this paper we show that, under certain assumptions, correct asymptotic inferences can be attained when $\Gamma$ is replaced by a matrix $\Omega$ that involves only the second-order moments of the data. The present paper extends, to the context of multi-sample analysis of second-order moment structures, results derived in the context of (single-sample) covariance structure analysis (Satorra and Bentler, 1990). The results apply to a variety of estimation methods and general types of statistics. An example involving a test of equality of means under covariance restrictions illustrates theoretical aspects of the paper.
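The contrast between the two estimates can be illustrated numerically. Under normality, the distribution-free estimate of $\Gamma$ (built from fourth-order sample moments) should agree with a matrix built from second-order moments only; the sketch below uses the normal-theory form as a stand-in for the paper's $\Omega$, with $p=2$ and illustrative names, not the paper's multi-sample framework:

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 2, 200_000
A = np.array([[1.0, 0.0], [0.5, 1.0]])
X = rng.standard_normal((n, p)) @ A.T      # normal data, covariance A A'

Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / n                          # sample covariance (2nd moments)

# Distribution-free estimate of Gamma: covariance of the vech of x x',
# i.e. it uses fourth-order sample moments.
pairs = [(i, j) for i in range(p) for j in range(i, p)]
d = np.column_stack([Xc[:, i] * Xc[:, j] for i, j in pairs])
Gamma = np.cov(d.T)

# Second-order-moment counterpart (normal theory):
# Omega[(ij),(kl)] = S_ik S_jl + S_il S_jk
q = len(pairs)
Omega = np.empty((q, q))
for r, (i, j) in enumerate(pairs):
    for s, (k, l) in enumerate(pairs):
        Omega[r, s] = S[i, k] * S[j, l] + S[i, l] * S[j, k]

rel_err = np.abs(Gamma - Omega).max() / np.abs(Omega).max()
print(rel_err)  # small for normal data
```

The fourth-order estimate `Gamma` is far noisier in small samples, which is the practical motivation the abstract cites for working with `Omega` when the assumptions permit.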

Relevance: 10.00%

Abstract:

A method is offered that makes it possible to apply generalized canonical correlation analysis (CANCOR) to two or more matrices of different row and column order. The new method optimizes the generalized canonical correlation analysis objective by considering only the observed values, which is achieved by employing selection matrices. We present and discuss fit measures to assess the quality of the solutions. In a simulation study we assess the performance of our new method and compare it to an existing procedure called GENCOM, proposed by Green and Carroll. We find that our new method outperforms the GENCOM algorithm with respect to both model fit and recovery of the true structure. Moreover, as our new method does not require any type of iteration, it is easier to implement and requires less computation. We illustrate the method by means of an example concerning the relative positions of the political parties in the Netherlands based on provincial data.

Relevance: 10.00%

Abstract:

We construct and calibrate a general equilibrium business cycle model with unemployment and precautionary saving. We compute the cost of business cycles and locate the optimum in a set of simple cyclical fiscal policies. Our economy exhibits productivity shocks, giving firms an incentive to hire more when productivity is high. However, business cycles make workers' income riskier, both by increasing the unconditional probability of unusually long unemployment spells and by making wages more variable, and therefore they decrease social welfare by around one-fourth to one-third of 1% of consumption. Optimal fiscal policy offsets the cycle, holding unemployment benefits constant but varying the tax rate procyclically to smooth hiring. By running a deficit of 4% to 5% of output in recessions, the government eliminates half the variation in the unemployment rate, most of the variation in workers' aggregate consumption, and most of the welfare cost of business cycles.

Relevance: 10.00%

Abstract:

The Treatise on Quadrature of Fermat (c. 1659), besides containing the first known proof of the computation of the area under a higher parabola, $\int x^{m/n}\,dx$, or under a higher hyperbola, $\int x^{-m/n}\,dx$, with the appropriate limits of integration in each case, has a second part which was not understood by Fermat's contemporaries. This second part of the Treatise is obscure and difficult to read, and even the great Huygens described it as 'published with many mistakes and it is so obscure (with proofs redolent of error) that I have been unable to make any sense of it'. Far from the confusion that Huygens attributes to it, in this paper we try to prove that Fermat, in writing the Treatise, had a very clear goal in mind and managed to attain it by means of a simple and original method. Fermat reduced the quadrature of a great number of algebraic curves to the quadrature of known curves: the higher parabolas and hyperbolas of the first part of the Treatise. Others he reduced to the quadrature of the circle. We shall see how the clever use of two procedures, quite novel at the time (the change of variables and a particular case of the formula of integration by parts), provides Fermat with the tools needed to square, very easily, curves as well known as the folium of Descartes, the cissoid of Diocles, or the witch of Agnesi.
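For reference, the two closed forms from the first part of the Treatise, written in modern notation (the limits shown are the usual modern choices; Fermat's own notation differed):

```latex
\int_{0}^{B} x^{m/n}\,dx \;=\; \frac{n}{m+n}\,B^{(m+n)/n},
\qquad
\int_{B}^{\infty} x^{-m/n}\,dx \;=\; \frac{n}{m-n}\,B^{(n-m)/n}
\quad (m>n).
```

The second formula requires $m>n$ so that the improper integral converges, which is why the case of the ordinary hyperbola ($m=n$) is excluded.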

Relevance: 10.00%

Abstract:

This paper surveys asset allocation methods that extend the traditional approach. An important feature of the traditional approach is that it measures the risk and return tradeoff in terms of the mean and variance of final wealth. However, there are also other important features, concerning the investor's wealth, information, and horizon, that are not always made explicit: the investor makes a single portfolio choice based only on the mean and variance of her final financial wealth, and she knows the relevant parameters in that computation. First, the paper describes traditional portfolio choice based on four basic assumptions, while the rest of the sections extend those assumptions. Each section describes the corresponding equilibrium implications in terms of portfolio advice and asset pricing.

Relevance: 10.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and that on the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
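The label-flipping equivalence mentioned at the end can be made concrete for the simplest case, one-dimensional threshold classifiers, as a stand-in for the general classes treated in the paper. A minimal sketch under that assumption:

```python
import numpy as np

# Maximal discrepancy of the class {1[x > t]} u {1[x <= t]} (closed under
# complement): max over the class of (error on half 1) - (error on half 2).
# Computing it reduces to ERM on the data with the second half's labels flipped.

def min_threshold_errors(x, y):
    """Fewest mistakes over all threshold classifiers on 0/1 labels y."""
    order = np.argsort(x)
    ys = y[order].astype(int)
    n = len(ys)
    ones_left = np.concatenate(([0], np.cumsum(ys)))   # 1s among first k points
    zeros_right = (n - ones_left[-1]) - (np.arange(n + 1) - ones_left)
    errs = ones_left + zeros_right     # predict 0 left of the split, 1 right
    return int(np.minimum(errs, n - errs).min())       # allow flipped classifier

def maximal_discrepancy(x, y):
    yf = y.copy()
    half = len(y) // 2
    yf[half:] = 1 - yf[half:]          # flip the labels of the second half
    return 1.0 - 2.0 * min_threshold_errors(x, yf) / len(y)

# A classifier that fits one half perfectly while erring on all of the other
# half attains the maximum value 1:
md = maximal_discrepancy(np.array([1.0, 2.0, 3.0, 4.0]),
                         np.array([0, 0, 1, 1]))
print(md)  # 1.0
```

On data where the two halves look alike (e.g. labels independent of `x`), the discrepancy is small, which is exactly why it works as a complexity penalty: it is large only when the class can separate the two halves artificially.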

Relevance: 10.00%

Abstract:

The spatial, spectral, and temporal resolutions of remote sensing images, acquired over a reasonably sized extent, result in imagery that can be processed to represent land cover over large areas with an amount of spatial detail that is very attractive for monitoring, management, and scientific activities. With Moore's Law alive and well, more and more parallelism is being introduced into all computing platforms, at all levels of integration and programming, to achieve higher performance and energy efficiency. Since geometric calibration is one of the most time-consuming processes when working with remote sensing images, the aim of this work is to accelerate it by taking advantage of new computing architectures and technologies, focusing especially on exploiting computation on shared-memory multi-threading hardware. A parallel implementation of the most time-consuming step of the remote sensing geometric correction has been developed using OpenMP directives. This work compares the performance of the original serial binary against the parallelized implementation on several modern multi-threaded CPU architectures, and discusses how to find the optimum hardware for a cost-effective execution.

Relevance: 10.00%

Abstract:

We summarize the progress in whole-genome sequencing and analyses of primate genomes. These emerging genome datasets have broadened our understanding of primate genome evolution revealing unexpected and complex patterns of evolutionary change. This includes the characterization of genome structural variation, episodic changes in the repeat landscape, differences in gene expression, new models regarding speciation, and the ephemeral nature of the recombination landscape. The functional characterization of genomic differences important in primate speciation and adaptation remains a significant challenge. Limited access to biological materials, the lack of detailed phenotypic data and the endangered status of many critical primate species have significantly attenuated research into the genetic basis of primate evolution. Next-generation sequencing technologies promise to greatly expand the number of available primate genome sequences; however, such draft genome sequences will likely miss critical genetic differences within complex genomic regions unless dedicated efforts are put forward to understand the full spectrum of genetic variation.

Relevance: 10.00%

Abstract:

We present formulas for computing the resultant of sparse polynomials as a quotient of two determinants, the denominator being a minor of the numerator. These formulas extend the original formulation given by Macaulay for homogeneous polynomials.

Relevance: 10.00%

Abstract:

Weather radar observations are currently the most reliable method for remote sensing of precipitation. However, a number of factors affect the quality of radar observations and may seriously limit automated quantitative applications of radar precipitation estimates, such as those required in Numerical Weather Prediction (NWP) data assimilation or in hydrological models. In this paper, a technique to correct two different problems typically present in radar data is presented and evaluated. The aspects dealt with are non-precipitating echoes, caused either by permanent ground clutter or by anomalous propagation of the radar beam (anaprop echoes), and topographical beam blockage. The correction technique computes realistic beam propagation trajectories from recent radiosonde observations instead of assuming standard radio propagation conditions. The correction consists of three different steps: 1) calculation of a Dynamic Elevation Map, which provides the minimum clutter-free antenna elevation for each pixel within the radar coverage; 2) correction for residual anaprop, checking the vertical reflectivity gradients within the radar volume; and 3) topographical beam blockage estimation and correction using a geometric optics approach. The technique is evaluated with four case studies in the region of the Po Valley (N Italy) using a C-band Doppler radar and a network of raingauges providing hourly precipitation measurements. The case studies cover different seasons, different radio propagation conditions, and both stratiform and convective precipitation events. After applying the proposed correction, a comparison of the radar precipitation estimates with raingauges indicates a general reduction in both the root mean squared error and the fractional error variance, indicating the efficiency and robustness of the procedure.
Moreover, the technique presented is not computationally expensive, so it seems well suited to implementation in an operational environment.
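As a point of comparison, the standard-atmosphere baseline that the paper's radiosonde-based trajectories replace is the classical 4/3 effective-Earth-radius model for beam height. A minimal sketch of that baseline (variable names are illustrative, not the paper's):

```python
import numpy as np

EARTH_RADIUS = 6_371_000.0        # m
KE = 4.0 / 3.0                    # standard-refraction effective-radius factor

def beam_height(r, elev_deg, antenna_height=0.0):
    """Height (m) of the beam centre at slant range r (m), standard refraction."""
    re = KE * EARTH_RADIUS        # effective Earth radius
    th = np.radians(elev_deg)
    return np.sqrt(r**2 + re**2 + 2.0 * r * re * np.sin(th)) - re + antenna_height

h50 = beam_height(50_000.0, 0.5, antenna_height=100.0)
h100 = beam_height(100_000.0, 0.5, antenna_height=100.0)
print(h50, h100)  # the beam climbs with range even at fixed elevation
```

Anomalous propagation corresponds to vertical refractivity profiles for which this constant-`KE` picture fails, bending the beam toward the ground; that is precisely the situation the radiosonde-derived trajectories are meant to capture.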

Relevance: 10.00%

Abstract:

A frequency-dependent compact model for inductors on high-ohmic substrates, based on an energy point of view, is developed. This approach enables the description of the most important coupling phenomena that take place inside the device. Magnetically induced losses are calculated quite accurately, and the coupling between electric and magnetic fields is described by means of a delay constant. The latter coupling phenomenon provides a modified procedure for computing the fringing capacitance value, when the self-resonance frequency of the inductor is used as a fitting parameter. The model takes into account the width of every metal strip and the pitch between strips, which enables the description of optimized-layout inductors. Data from experiments and electromagnetic simulators are presented to test the accuracy of the model.
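The fitting step can be sketched with the basic lumped-LC relation that underlies using the self-resonance frequency as a fitting parameter: once the inductance is known, the measured self-resonance pins down the total shunt (fringing-inclusive) capacitance. The values below are illustrative, not from the paper:

```python
import math

def capacitance_from_srf(L, f_sr):
    """Back out the lumped capacitance from f_sr = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / ((2.0 * math.pi * f_sr) ** 2 * L)

L = 2e-9            # 2 nH spiral inductance (assumed)
C_true = 500e-15    # 500 fF total capacitance (assumed)
f_sr = 1.0 / (2.0 * math.pi * math.sqrt(L * C_true))   # ~5 GHz here
print(capacitance_from_srf(L, f_sr))  # recovers ~500 fF
```

In the model described above, the delay-constant coupling modifies how this capacitance is apportioned among the strips, but the SRF constraint itself is this simple resonance condition.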

Relevance: 10.00%

Abstract:

[spa] The labor share of national income is constant under the assumptions of a Cobb-Douglas production function and perfect competition. This article relaxes these assumptions and investigates whether the non-constant behavior of the labor share is explained by (i) a non-unitary elasticity of substitution between capital and labor and (ii) imperfect competition in the product market. We focus on Spain and the U.S. and estimate a production function with a constant elasticity of substitution and imperfect competition in the product market. The degree of imperfect competition is measured by computing the price markup based on the dual approach. We show that the elasticity of substitution is greater than one in Spain and smaller than one in the U.S. We also show that the price markup moves the elasticity of substitution away from one: it raises it in Spain and lowers it in the U.S. These results are used to explain the declining path of the labor share, common to both economies, and their contrasting capital deepening paths.
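A standard way to see the mechanism described above (notation is ours, not the article's): with a CES technology and a price markup $\mu$, the labor share is

```latex
Y = A\left[\alpha K^{\frac{\sigma-1}{\sigma}}
         + (1-\alpha)\,L^{\frac{\sigma-1}{\sigma}}\right]^{\frac{\sigma}{\sigma-1}},
\qquad
s_L = \frac{wL}{Y}
    = \frac{1-\alpha}{\mu}\left(\frac{A\,L}{Y}\right)^{\frac{\sigma-1}{\sigma}} .
```

With $\sigma = 1$ the share collapses to the constant Cobb-Douglas value $(1-\alpha)/\mu$. With $\sigma \neq 1$, capital deepening (rising $Y/L$) moves the share over time, while the markup $\mu$ shifts its level, which is the joint channel the article estimates.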

Relevance: 10.00%

Abstract:

An extension of the self-consistent-field formulation of Cohen in the preceding paper is proposed in order to include the most general kind of two-body interactions, i.e., interactions depending on position, momenta, spin, isotopic spin, etc. The dielectric function is replaced by a dielectric matrix. The evaluation of the energies involves the computation of a matrix inversion and a trace.