962 results for electricity consumption per floor area
Abstract:
This paper empirically analyzes the volatility of consumption-based stochastic discount factors as a measure of implicit economic fears by studying their relationship with future economic and stock market cycles. Time-varying economic fears seem to be well captured by the volatility of stochastic discount factors. In particular, the volatility of a recursive utility-based stochastic discount factor with contemporaneous growth explains between 9 and 34 percent of future changes in industrial production at short and long horizons, respectively. This volatility also explains ex-ante uncertainty and risk aversion. However, future stock market cycles are better explained by a similar stochastic discount factor with long-run consumption growth. This specification of the stochastic discount factor presents higher volatility and lower pricing errors than the specification with contemporaneous consumption growth.
Abstract:
In this paper, we discuss pros and cons of different models for financial market regulation and supervision, and we present a proposal for the re-organisation of regulatory and supervisory agencies in the Euro Area. Our arguments are consistent with both new theories and the effective behaviour of financial intermediaries in industrialized countries. Our proposed architecture for financial market regulation is based on the assignment of different objectives or "finalities" to different authorities, both at the domestic and the European level. According to this perspective, the three objectives of supervision - microeconomic stability, investor protection and proper behaviour, efficiency and competition - should be assigned to three distinct European authorities, each one at the centre of a European system of financial regulators and supervisors specialized in overseeing the entire financial market with respect to a single regulatory objective and regardless of the subjective nature of the intermediaries. Each system should be structured and organized similarly to the European System of Central Banks and work in connection with the central bank, which would remain the institution responsible for price and macroeconomic stability. We suggest a plausible path to build our 4-peak regulatory architecture in the Euro area.
Abstract:
This paper generalizes the original random matching model of money by Kiyotaki and Wright (1989) (KW) in two aspects: first, the economy is characterized by an arbitrary distribution of agents who specialize in producing a particular consumption good; and second, these agents have preferences such that they want to consume any good with some probability. The results depend crucially on the size of the fraction of producers of each good and the probability with which different agents want to consume each good. KW and other related models are shown to be parameterizations of this more general one.
Abstract:
We use CEX repeated cross-section data on consumption and income to evaluate the nature of increased income inequality in the 1980s and 90s. We decompose unexpected changes in family income into transitory and permanent, and idiosyncratic and aggregate components, and estimate the contribution of each component to total inequality. The model we use is a linearized incomplete markets model, enriched to incorporate risk-sharing while maintaining tractability. Our estimates suggest that taking risk sharing into account is important for the model fit; that the increase in inequality in the 1980s was mainly permanent; and that inequality is driven almost entirely by idiosyncratic income risk. In addition, we find no evidence for cyclical behavior of consumption risk, casting doubt on Constantinides and Duffie's (1995) explanation for the equity premium puzzle.
Abstract:
Was the increase in income inequality in the US due to permanent shocks or merely to an increase in the variance of transitory shocks? The implications for consumption and welfare depend crucially on the answer to this question. We use CEX repeated cross-section data on consumption and income to decompose idiosyncratic changes in income into predictable life-cycle changes, transitory and permanent shocks and estimate the contribution of each to total inequality. Our model fits the joint evolution of consumption and income inequality well and delivers two main results. First, we find that permanent changes in income explain all of the increase in inequality in the 1980s and 90s. Second, we reconcile this finding with the fact that consumption inequality did not increase much over this period. Our results support the view that many permanent changes in income are predictable for consumers, even if they look unpredictable to the econometrician, consistent with models of heterogeneous income profiles.
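The permanent/transitory decomposition described in this abstract is commonly identified from the covariance structure of income growth. A minimal simulation sketch of that idea (all parameter values are hypothetical, not those estimated in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 5000, 20
var_u, var_e = 0.02, 0.05  # illustrative permanent-shock and transitory variances

# Income = random-walk permanent component + i.i.d. transitory component
perm = np.cumsum(rng.normal(0.0, np.sqrt(var_u), (N, T)), axis=1)
trans = rng.normal(0.0, np.sqrt(var_e), (N, T))
y = perm + trans

dy = np.diff(y, axis=1)  # income growth

# Moment conditions of this model:
#   var(dy_t)           = var_u + 2 * var_e
#   cov(dy_t, dy_{t+1}) = -var_e
cov1 = np.mean([np.cov(dy[:, t], dy[:, t + 1])[0, 1]
                for t in range(dy.shape[1] - 1)])
var_e_hat = -cov1
var_u_hat = dy.var() - 2.0 * var_e_hat
```

With enough individuals, the recovered `var_u_hat` and `var_e_hat` approach the true variances, which is the sense in which permanent and transitory components are separately identified.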
Abstract:
This paper presents and estimates a dynamic choice model in the attribute space considering rational consumers. In light of the evidence of several state-dependence patterns, the standard attribute-based model is extended by considering a general utility function where pure inertia and pure variety-seeking behaviors can be explained in the model as particular linear cases. The dynamics of the model are fully characterized by standard dynamic programming techniques. The model presents a stationary consumption pattern that can be inertial, where the consumer only buys one product, or a variety-seeking one, where the consumer shifts among varied products. We run some simulations to analyze the consumption paths out of the steady state. Under the hybrid utility assumption, the consumer behaves inertially among the unfamiliar brands for several periods, eventually switching to a variety-seeking behavior when the stationary levels are approached. An empirical analysis is run using scanner databases for three different product categories: fabric softener, saltine cracker, and catsup. Non-linear specifications provide the best fit of the data, as hybrid functional forms are found in all the product categories for most attributes and segments. These results reveal the statistical superiority of the non-linear structure and confirm the gradual trend to seek variety as the level of familiarity with the purchased items increases.
Abstract:
This paper aims to study the distribution of natural nests of Xylocopa ordinaria and to characterize its nesting habits in the restinga of Grussai/Iquipari (RJ), supporting future studies on pollinator management in the northern Rio de Janeiro state. The data, obtained from Aug/2003 to Dec/2004 in an area of 11.6 ha, concerned nest distribution, substrate identification and dimensions, emergence, sex ratio, nest structure (n = 23 nests), and pollen content analysis of provisioning masses and feces. X. ordinaria nests were abundant and presented a clustered distribution. These bees show no taxonomic affinity for nesting substrates, but preferences related to wood availability and characteristics, with Pera glabrata being the main substrate. X. ordinaria is a multivoltine species that tolerates conspecifics in its nests. These bees were generalists in their nectar and pollen consumption, but presented floral constancy while provisioning brood cells. These behaviors, together with year-round activity, flights throughout the day, and legitimate visits to flowers, indicate the importance of X. ordinaria for the pollination of plants in the restinga.
Abstract:
In this article we propose using small area estimators to improve the estimates of both the small and large area parameters. When the objective is to estimate parameters at both levels accurately, optimality is achieved by a mixed sample design of fixed and proportional allocations. In the mixed sample design, once a sample size has been determined, one fraction of it is distributed proportionally among the different small areas while the rest is evenly distributed among them. We use Monte Carlo simulations to assess the performance of the direct estimator and two composite covariate-free small area estimators, for different sample sizes and different sample distributions. Performance is measured in terms of Mean Squared Errors (MSE) of both small and large area parameters. It is found that the adoption of small area composite estimators opens the possibility of 1) reducing sample size when precision is given, or 2) improving precision for a given sample size.
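The mixed sample design described above, one part allocated evenly across areas and the rest proportionally to area size, can be sketched as follows (the split fraction and population sizes are illustrative assumptions, not values from the article):

```python
import numpy as np

def mixed_allocation(total_n, pop_sizes, fixed_fraction=0.5):
    """Split total_n between a fixed part (spread evenly over the areas)
    and a proportional part (spread by population share).
    Returns real-valued allocations; round as needed in practice."""
    pop_sizes = np.asarray(pop_sizes, dtype=float)
    k = len(pop_sizes)
    fixed_part = fixed_fraction * total_n / k
    prop_part = (1.0 - fixed_fraction) * total_n * pop_sizes / pop_sizes.sum()
    return fixed_part + prop_part

# Hypothetical example: 1000 units over three areas of very different sizes
alloc = mixed_allocation(1000, [100, 400, 500], fixed_fraction=0.4)
```

The even component guarantees every small area a workable sample, while the proportional component keeps the large-area (aggregate) estimate efficient.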
Abstract:
The following main lithostratigraphic units have been distinguished in the Domes Area. The Kibaran basement complex, composed of gneisses, migmatites with amphibolite bands and metagranites, is exposed in dome structures; metamorphic features of Kibaran age have been almost completely obliterated by extensive Lufilian reactivation. The post-Kibaran cover sequence is subdivided into the Lower Roan Group, consisting of well-preserved quartzites with high Mg content and talc-bearing, extremely foliated schists intercalated with pseudo-conglomerates of tectonic origin, and the Upper Roan Group, including dolomitic marbles with rare stromatolites, metapelites and a sequence of detrital metasediments, with local volcano-sedimentary components and interlayered banded ironstones. The sediments of the Lower Roan Group are interpreted as continental to lagoonal-evaporitic deposits partly converted into the talc-kyanite + garnet assemblage characteristic of "white schists". The dolomites and metapelites of the Upper Roan Group are attributed to a carbonate platform sequence progressively subsiding under terrigenous deposits, whilst the detrital metasediments and BIF may be interpreted as a basinal sequence, probably deposited on oceanic crust grading laterally into marbles. Metagabbros and metabasalts are considered as remnants of an ocean-floor-type crustal unit probably related to small basins. Alkaline stocks of Silurian age intruded the post-Kibaran cover. Significant ancestral tectonic discontinuities promoted the development of a nappe pile, involving all lithostratigraphic units, which underwent high-pressure metamorphism during the Lufilian orogeny. Rb-Sr, K-Ar and U-Pb data indicate an age of 700 Ma for the highest-grade metamorphism, 500 Ma for blocking of the K-Ar and Rb-Sr systems in micas (corresponding to the time when the temperature dropped below 350-400 °C), and about 400 Ma for the emplacement of hypabyssal syenitic bodies.
A first phase of crustal shortening by decoupling of basement and cover slices along shallow shear zones has been recognized. Fluid-rich tectonic slabs of cover sediments were thus able to transport fluids into the anhydrous metamorphic basement or mafic units. During the subsequent high-pressure metamorphic re-equilibration stage, pre-existing thrust horizons were converted into recrystallized mylonites. Due to uplift, rocks re-equilibrated into assemblages compatible with lower pressures and slightly lower temperatures. This stage occurred under a decompressional (nearly adiabatic) regime, with P(fluid) ≈ P(lithostatic). It was accompanied by metasomatic development of minerals, activated by the injection of hot fluids. New or reactivated shear zones and mylonitic belts were the preferred conduits of fluids. The most evident regional-scale effect of these processes is the intense metasomatic scapolitization of formerly plagioclase-rich lithologies. Uraninite mineralization can probably be assigned to the beginning of the decompressional stage. A third regional deformation phase, characterized by open folds and local foliation, is not accompanied by significant growth of new minerals. However, pitchblende mineralization can be ascribed to this phase as late-stage, short-range remobilization of previously existing deposits. Finally, shallow alkaline massifs were emplaced when the level of the Domes Area now exposed was already subjected to exchange with meteoric circuits, activated by residual geothermal gradients generally related to intrusions or rifting. Most of the superficial U-showings with U-oxidation products were probably generated during this relatively recent phase.
Abstract:
Most methods for small-area estimation are based on composite estimators derived from design- or model-based methods. A composite estimator is a linear combination of a direct and an indirect estimator with weights that usually depend on unknown parameters which need to be estimated. Although model-based small-area estimators are usually based on random-effects models, the assumption of fixed effects is at face value more appropriate. Model-based estimators are justified by the assumption of random (interchangeable) area effects; in practice, however, areas are not interchangeable. In the present paper we empirically assess the quality of several small-area estimators in the setting in which the area effects are treated as fixed. We consider two settings: one that draws samples from a theoretical population, and another that draws samples from an empirical population of a labor force register maintained by the National Institute of Social Security (NISS) of Catalonia. We distinguish two types of composite estimators: a) those that use weights that involve area-specific estimates of bias and variance; and b) those that use weights that involve a common variance and a common squared bias estimate for all the areas. We assess their precision and discuss alternatives to optimizing composite estimation in applications.
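The composition described here, a direct area estimate shrunk toward an indirect (synthetic) one, can be sketched in a few lines. The sketch below uses a common squared-bias estimate shared by all areas together with each area's own design variance, a simplified hybrid of the paper's variants (a) and (b); population values, sample sizes, and the choice of overall mean as the indirect estimator are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical areas: true means, sample sizes, common within-area variance
true_means = np.array([10.0, 12.0, 9.0, 11.0, 10.5])
n = np.array([5, 8, 4, 10, 6])
sigma2 = 4.0

samples = [rng.normal(m, np.sqrt(sigma2), size=k)
           for m, k in zip(true_means, n)]
direct = np.array([s.mean() for s in samples])   # direct estimator: area mean
indirect = np.concatenate(samples).mean()        # indirect (synthetic): overall mean

var_direct = sigma2 / n                          # design variance of each direct estimate
# Common squared-bias estimate of the indirect estimator, truncated at zero
bias2 = max(np.mean((direct - indirect) ** 2 - var_direct), 0.0)
w = bias2 / (bias2 + var_direct)                 # weight on the direct estimator
composite = w * direct + (1.0 - w) * indirect
```

Areas with small samples (large `var_direct`) get low `w` and are pulled toward the synthetic estimate; well-sampled areas keep most of their direct estimate.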
Abstract:
Agent-based computational economics is becoming widely used in practice. This paper explores the consistency of some of its standard techniques. We focus in particular on prevailing wholesale electricity trading simulation methods. We include different supply and demand representations and propose the Experience-Weighted Attraction method to include several behavioural algorithms. We compare the results across assumptions and to economic theory predictions. The match is good under best-response and reinforcement learning but not under fictitious play. The simulations perform well under flat and upward-sloping supply bidding, and also for plausible demand elasticity assumptions. Learning is influenced by the number of bids per plant and the initial conditions. The overall conclusion is that agent-based simulation assumptions are far from innocuous. We link their performance to underlying features, and identify those that are better suited to model wholesale electricity markets.
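The Experience-Weighted Attraction rule referenced above (due to Camerer and Ho) is useful here precisely because it nests reinforcement learning and fictitious play as special cases of one update. A minimal sketch of that update (parameter values and payoffs are illustrative, not the paper's calibration):

```python
import numpy as np

def ewa_update(A, N, payoffs, chosen, phi=0.9, delta=0.5, rho=0.9):
    """One Experience-Weighted Attraction update.
    A: attractions per strategy; N: experience weight;
    payoffs: payoff each strategy would have earned this round;
    chosen: index of the strategy actually played.
    delta=0 approximates reinforcement learning (forgone payoffs ignored);
    delta=1 approximates weighted fictitious play."""
    N_new = rho * N + 1.0
    reinforce = delta * np.ones_like(A)
    reinforce[chosen] = 1.0                    # played strategy gets full weight
    A_new = (phi * N * A + reinforce * payoffs) / N_new
    return A_new, N_new

def choice_probs(A, lam=2.0):
    """Logit response: higher attraction -> higher choice probability."""
    e = np.exp(lam * (A - A.max()))
    return e / e.sum()

# One round with three bidding strategies, strategy 0 played
A, N = np.zeros(3), 1.0
A, N = ewa_update(A, N, payoffs=np.array([1.0, 0.5, 0.2]), chosen=0)
p = choice_probs(A)
```

In a trading simulation, `payoffs` would be each bid's (realized or counterfactual) profit in the cleared market, and `p` would drive the next round's bid choice.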
Abstract:
We examine monetary policy in the Euro area from both theoretical and empirical perspectives. We discuss what theory tells us the strategy of central banks should be and contrast it with the one employed by the ECB. We review accomplishments (and failures) of monetary policy in the Euro area and suggest changes that would increase the correlation between words and actions; streamline the understanding that markets have of the policy process; and anchor expectation formation more strongly. We examine the transmission of monetary policy shocks in the Euro area and in some potential member countries and try to infer the likely effects if Turkey joins the EU first and the Euro area later. Much of the analysis here warns against having too high expectations of the economic gains that membership in the EU and the Euro club will produce.
Abstract:
This paper evaluates new evidence on price setting practices and inflation persistence in the euro area with respect to its implications for macro modelling. It argues that several of the most commonly used assumptions in micro-founded macro models are seriously challenged by the new findings.
Abstract:
We use a simulation model to study how the diversification of electricity generation portfolios influences wholesale prices. We find that technological diversification generally leads to lower market prices but that the relationship is mediated by the supply to demand ratio. In each demand case there is a threshold where pivotal dynamics change. Pivotal dynamics pre- and post-threshold are the cause of non-linearities in the influence of diversification on market prices. The findings are robust to our choice of behavioural parameters and match closed-form solutions where those are available.
Abstract:
A class of composite estimators of small area quantities that exploit spatial (distance-related) similarity is derived. It is based on a distribution-free model for the areas, but the estimators are aimed to have optimal design-based properties. Composition is applied also to estimate some of the global parameters on which the small area estimators depend. It is shown that the commonly adopted assumption of random effects is not necessary for exploiting the similarity of the districts (borrowing strength across the districts). The methods are applied in the estimation of the mean household sizes and the proportions of single-member households in the counties (comarcas) of Catalonia. The simplest version of the estimators is more efficient than the established alternatives, even though the extent of spatial similarity is quite modest.