896 results for Aggregate Claim Amount
Abstract:
For decades regulators in the energy sector have focused on facilitating the maximisation of energy supply in order to meet demand through liberalisation and removal of market barriers. The debate on climate change has emphasised a new type of risk in the balance between energy demand and supply: excessively high energy demand brings about significant negative environmental and economic impacts. This is because if a vast number of users are consuming electricity at the same time, energy suppliers have to activate dirty old power plants with higher greenhouse gas emissions and higher system costs. The creation of a Europe-wide electricity market requires a systematic investigation into the risk of aggregate peak demand. This paper draws on the e-Living Time-Use Survey database to assess the risk of aggregate peak residential electricity demand for European energy markets. Findings highlight in which countries and for what activities the risk of aggregate peak demand is greater. The discussion highlights which approaches energy regulators have started considering to persuade users of the risks of consuming too much energy during peak times. These include ‘nudging’ approaches such as the roll-out of smart meters, incentives for shifting the timing of energy consumption, differentiated time-of-use tariffs, regulatory financial incentives and consumption data sharing at the community level.
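As a minimal illustration of the time-of-use tariff logic mentioned in this abstract, the sketch below compares a household's daily bill under a flat tariff and under a two-rate time-of-use tariff; the prices, peak window and hourly load profile are assumed for illustration and are not taken from the study.

```python
# Illustrative only: compares a household's daily bill under a flat tariff
# and under an assumed two-rate time-of-use (ToU) tariff. Prices (EUR/kWh),
# the peak window and the hourly load profile are invented for this sketch.

FLAT_RATE = 0.20                      # EUR/kWh, assumed
PEAK_RATE, OFFPEAK_RATE = 0.35, 0.12  # EUR/kWh, assumed
PEAK_HOURS = range(17, 21)            # assumed evening peak window (17:00-21:00)

# Assumed hourly consumption in kWh (24 values, evening-heavy profile).
load = [0.2] * 7 + [0.5] * 10 + [1.5] * 4 + [0.4] * 3

flat_bill = sum(load) * FLAT_RATE
tou_bill = sum(
    kwh * (PEAK_RATE if hour in PEAK_HOURS else OFFPEAK_RATE)
    for hour, kwh in enumerate(load)
)

print(f"flat tariff:        {flat_bill:.2f} EUR/day")
print(f"time-of-use tariff: {tou_bill:.2f} EUR/day")
# Shifting the evening load into off-peak hours lowers the ToU bill, which is
# the financial incentive for changing the timing of consumption described above.
```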
Abstract:
We present a model that describes features common to many famines: (i) a famine may occur without a substantial decline in aggregate food availability; (ii) famines often have a very uneven impact on different groups of the population; and (iii) expectations about future food markets affect current market behaviour and result in starvation for certain groups of the population. We consider an exchange economy with two types of agents, food producers and non-food producers. An agent starves if his consumption of food falls below the minimum subsistence level. We show that non-food producers are more vulnerable to starvation than food producers, and may fail to survive even when the aggregate amount of food in the economy is enough to guarantee survival of all agents. In an economy with government procurement and public distribution, we show that the government policy may become unsustainable if the food producers condition their expectations about future public sales on the current public stock level.
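A schematic restatement of the starvation condition and the aggregate-sufficiency point made in this abstract, in notation chosen here for illustration rather than taken from the paper:

```latex
% Illustrative notation, not the paper's own: c_i is agent i's food
% consumption, \bar{s} the minimum subsistence level, e_i agent i's food
% endowment (zero for non-food producers), and N the number of agents.
\[
  \text{agent } i \text{ starves} \iff c_i < \bar{s},
\]
\[
  \sum_{i=1}^{N} e_i \;\ge\; N\bar{s}
  \quad\text{(aggregate food suffices)}
  \quad\text{and yet}\quad
  c_j < \bar{s} \ \text{for some non-food producer } j,
\]
% i.e. sufficiency in the aggregate does not guarantee that the market
% allocation keeps every agent at or above subsistence.
```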
Abstract:
A discrete element model is used to study shear rupture of sea ice under convergent wind stresses. The model includes compressive, tensile, and shear rupture of viscous elastic joints connecting floes that move under the action of the wind stresses. The adopted shear rupture is governed by Coulomb’s criterion. The ice pack is a square domain 400 km across consisting of floes 4 km in size. In the standard case, with a tensile strength 10 times smaller than the compressive strength, under uniaxial compression the failure regime is mainly shear rupture, with the most probable scenario corresponding to the one with the minimum failure work. The orientation of cracks delineating the formed aggregates is bimodal, with peaks around the angles given by the wing crack theory, determining diamond-shaped blocks. The ice block (floe aggregate) size decreases as the wind stress gradient increases, since the elastic strain energy grows faster, leading to a higher speed of crack propagation. As the tensile strength grows, shear rupture becomes harder to attain and compressive failure becomes equally important, leading to elongation of blocks perpendicular to the compression direction and to larger blocks. In the standard case, as the wind stress confinement ratio increases the failure mode changes at a confinement ratio between 0.2 and 0.4, which corresponds to the analytical critical confinement ratio of 0.32. Below this value the cracks are bimodal, delineating diamond-shaped aggregates, while above this value failure becomes isotropic and is determined by small-scale stress anomalies due to irregularities in floe shape.
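For reference, the Coulomb shear-rupture criterion referred to in this abstract has the standard form below; the symbols are generic and not necessarily those used in the paper.

```latex
% Standard Mohr-Coulomb form; \tau is the shear stress on a joint,
% \sigma_n the normal (compressive) stress, c the cohesion and \mu the
% internal friction coefficient. Symbols are generic, not the paper's.
\[
  |\tau| \;\ge\; c + \mu\,\sigma_n
  \qquad\text{(shear rupture of a joint)},
\]
\[
  \theta \;=\; \tfrac{1}{2}\arctan\!\left(\tfrac{1}{\mu}\right)
  \qquad\text{(classical conjugate failure-plane angle to the compression axis)}.
\]
% The two conjugate planes at \pm\theta are what give rise to bimodal crack
% orientations and diamond-shaped blocks in a Coulomb material.
```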
Abstract:
The effects of different water application rates (3, 10, 15 and 30 mm/h) and of topsoil removal on the rate of downward water movement through the cryoturbated chalk zone in southern England were investigated in situ. During and after each application of water, changes in water content and matric potential of the profile were monitored and percolate was collected in troughs. The measured water breakthrough time showed that water moved to 1.2 m depth quickly (in 8.2 h) even with an application rate as low as 3 mm/h, and that the time was only 3 h when water was applied at a rate of 15 mm/h. These breakthrough times were about 150- and 422-fold shorter, respectively, than those expected if the water had been conducted by the matrix alone. Percolate was collected in troughs within 3.5 h at 1.2 m depth when water was applied at 30 mm/h, and the quantity collected indicated that a significant amount of the surface-applied water moved downward through inter-aggregate pores. The small increase in volumetric water content (about 3%) in excess of matrix water content resulted in a large increase in pore water velocities, from 0.20 to 5.3 m/d. The presence of a soil layer had an effect on the time taken for water to travel through the cryoturbated chalk layer; in the soil layer, water took about 1-2 h to pass through, depending on the intensity.
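The reported jump in pore water velocity is consistent with flow being carried by a small mobile fraction of the pore space; a back-of-the-envelope version of that relation is sketched below, where the piston-flow formula and the use of the ~3% water-content increase as the mobile fraction are assumptions made here for illustration.

```latex
% Piston-flow estimate: v is the pore water velocity, q the applied flux,
% and \theta_m the mobile (conducting) water fraction. The 3% value is the
% increase in volumetric water content quoted in the abstract; the relation
% itself is the standard mobile-porosity approximation, used only as an
% illustration of how such velocities can be estimated.
\[
  v \;\approx\; \frac{q}{\theta_m}
  \;=\; \frac{3\ \mathrm{mm\,h^{-1}}}{0.03}
  \;=\; 100\ \mathrm{mm\,h^{-1}}
  \;\approx\; 2.4\ \mathrm{m\,d^{-1}},
\]
% i.e. a flux of only 3 mm/h carried by roughly 3% of the soil volume already
% gives a velocity of metres per day, within the 0.20-5.3 m/d range reported.
```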
Abstract:
Retributivists believe that punishment can be deserved, and that deserved punishment is intrinsically good. They also believe that certain crimes deserve certain quantities of punishment. On the plausible assumption that the overall amount of any given punishment is a function of its severity and duration, we might think that retributivists (qua retributivists) would be indifferent as to whether a punishment were long and light or short and sharp, provided the offender gets the overall amount of punishment he deserves. In this paper I argue against this, showing that retributivists should actually prefer shorter and more severe punishments to longer, gentler options. I show this by focusing on, and developing a series of interpretations of, the retributivist claim that not punishing the guilty is bad, and by examining the relationship between that badness and time. I then show that each interpretation leads to a preference for shorter over longer punishment.
Abstract:
Contemporary acquisition theorizing has placed a considerable amount of attention on interfaces, points at which different linguistic modules interact. The claim is that vulnerable interfaces cause particular difficulties in L1, bilingual and adult L2 acquisition (e.g. Platzack, 2001; Montrul, 2004; Müller and Hulk, 2001; Sorace, 2000, 2003, 2004, 2005). Accordingly, it is possible that deficits at the syntax–pragmatics interface cause what appears to be particular non-target-like syntactic behavior in L2 performance. This syntax-before-discourse hypothesis is examined in the present study by analyzing null vs. overt subject pronoun distribution in the L2 Spanish of English L1 learners. As ultimately determined by L2 knowledge of the Overt Pronoun Constraint (OPC) (Montalbetti, 1984), the data indicate that L2 learners at the intermediate and advanced levels reset the Null Subject Parameter (NSP), but only advanced learners have acquired a more or less target-like null/overt subject distribution. Against the predictions of Sorace (2004) and in line with Montrul and Rodríguez-Louro (2006), the data indicate an overuse of both overt and null subject pronouns. Since this behavior cannot result from L1 interference alone, it suggests that interface-conditioned properties are simply more complex and therefore harder to acquire. Furthermore, the data from the advanced learners demonstrate that the syntax–pragmatics interface is not a predetermined locus for fossilization (contra e.g. Valenzuela, 2006).
Abstract:
This paper examines the cyclical regularities of macroeconomic, financial and property market aggregates in relation to the property stock price cycle in the UK. The Hodrick–Prescott filter is employed to fit a long-term trend to the raw data and to derive the short-term cycles of each series. It is found that the cycles of consumer expenditure, total consumption per capita, the dividend yield and the long-term bond yield are moderately correlated, and mainly coincident, with the property price cycle. There is also evidence that the nominal and real Treasury Bill rates and the interest rate spread lead this cycle by one or two quarters, and therefore that these series can be considered leading indicators of property stock prices. This study suggests that macroeconomic and financial variables can provide useful information to explain, and potentially to forecast, movements of property-backed stock returns in the UK.
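A minimal sketch of the detrending step described in this abstract, using the Hodrick–Prescott filter as implemented in statsmodels; the file names, column names and the quarterly smoothing parameter of 1600 are assumptions for illustration, not details taken from the study.

```python
# Minimal sketch: extract the short-term cycle of a quarterly series with the
# Hodrick-Prescott filter, then check whether another detrended series leads it.
# File/column names and lambda = 1600 (the usual quarterly value) are assumed.
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

# Hypothetical quarterly index of UK property stock prices.
prices = pd.read_csv("uk_property_stocks.csv", index_col="quarter")["price_index"]
cycle, trend = hpfilter(prices, lamb=1600)   # cycle = series minus fitted HP trend

# Hypothetical Treasury Bill rate series, detrended the same way, then
# correlated with the property price cycle at leads of one and two quarters.
tbill = pd.read_csv("uk_tbill.csv", index_col="quarter")["rate"]
tbill_cycle, _ = hpfilter(tbill, lamb=1600)
for lead in (1, 2):
    print(f"lead {lead}q:", cycle.corr(tbill_cycle.shift(lead)))
```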
Abstract:
The Sustainable Value approach integrates the efficiency with regard to environmental, social and economic resources into a monetary indicator. It gained significant popularity as evidenced by diverse applications at the corporate level. However, its introduction as a measure adhering to the strong sustainability paradigm sparked an ardent debate. This study explores its validity as a macroeconomic strong sustainability measure by applying the Sustainable Value approach to the EU-15 countries. Concretely, we assessed environmental, social and economic resources in combination with the GDP for all EU-15 countries from 1995 to 2006 for three benchmark alternatives. The results show that several countries manage to adequately delink resource use from GDP growth. Furthermore, the remarkable difference in outcome between the national and EU-15 benchmark indicates a possible inefficiency of the current allocation of national resource ceilings imposed by the European institutions. Additionally, by using an effects model we argue that the service degree of the economy and governmental expenditures on social protection and research and development are important determinants of overall resource efficiency. Finally, we sketch out three necessary conditions to link the Sustainable Value approach to the strong sustainability paradigm.
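The opportunity-cost logic behind a Sustainable Value figure can be sketched as follows, using the standard Figge–Hahn formulation that the approach builds on; the function, the resource names and all numbers are placeholders rather than results from the study.

```python
# Sketch of the Sustainable Value calculation in its standard opportunity-cost
# form (Figge & Hahn): for each resource, compare the return the country
# generates per unit of resource with the return the benchmark (e.g. the EU-15
# aggregate) generates per unit, and value the country's resource use at that
# spread. All figures below are placeholders, not results from the study.

def sustainable_value(gdp, resources, benchmark_gdp, benchmark_resources):
    """gdp: the country's return (e.g. GDP); resources / benchmark_resources:
    dicts of resource use keyed by resource name."""
    contributions = []
    for name, used in resources.items():
        benchmark_efficiency = benchmark_gdp / benchmark_resources[name]
        # Value created minus the value the benchmark would have created
        # with the same amount of this resource.
        contributions.append(gdp - used * benchmark_efficiency)
    return sum(contributions) / len(contributions)   # average over resources

# Placeholder example: two resources, country vs. EU-15 benchmark.
sv = sustainable_value(
    gdp=400e9,
    resources={"CO2_Mt": 60, "energy_PJ": 1500},
    benchmark_gdp=9000e9,
    benchmark_resources={"CO2_Mt": 3200, "energy_PJ": 60000},
)
print(f"Sustainable Value: {sv / 1e9:.1f} bn (positive = more efficient than benchmark)")
```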
Abstract:
Radar reflectivity measurements from three different wavelengths are used to retrieve information about the shape of aggregate snowflakes in deep stratiform ice clouds. Dual-wavelength ratios are calculated for different shape models and compared to observations at 3, 35 and 94 GHz. It is demonstrated that many scattering models, including spherical and spheroidal models, do not adequately describe the aggregate snowflakes that are observed. The observations are consistent with fractal aggregate geometries generated by a physically-based aggregation model. It is demonstrated that the fractal dimension of large aggregates can be inferred directly from the radar data. Fractal dimensions close to 2 are retrieved, consistent with previous theoretical models and in-situ observations.
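The dual-wavelength ratio used in this abstract is conventionally defined as below; this is the standard definition rather than a quotation from the paper.

```latex
% Standard definition of the dual-wavelength ratio (DWR) in dB, for
% reflectivities Z measured at a longer and a shorter wavelength
% (e.g. the 3, 35 and 94 GHz channels mentioned in the abstract).
\[
  \mathrm{DWR}_{\lambda_1,\lambda_2}
  \;=\; 10\,\log_{10}\!\left(\frac{Z_{\lambda_1}}{Z_{\lambda_2}}\right),
  \qquad \lambda_1 > \lambda_2 .
\]
% For particles small compared with both wavelengths the ratio tends to 0 dB;
% as particles grow, non-Rayleigh scattering at the shorter wavelength raises
% the DWR, and its value depends on particle shape, which is what allows the
% shape and fractal-dimension retrieval described above.
```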
Abstract:
In this paper an equation is derived for the mean backscatter cross section of an ensemble of snowflakes at centimeter and millimeter wavelengths. It uses the Rayleigh–Gans approximation, which has previously been found to be applicable at these wavelengths due to the low density of snow aggregates. Although the internal structure of an individual snowflake is random and unpredictable, the authors find from simulations of the aggregation process that their structure is “self-similar” and can be described by a power law. This enables an analytic expression to be derived for the backscatter cross section of an ensemble of particles as a function of their maximum dimension in the direction of propagation of the radiation, the volume of ice they contain, a variable describing their mean shape, and two variables describing the shape of the power spectrum. The exponent of the power law is found to be −5/3. In the case of 1-cm snowflakes observed by a 3.2-mm-wavelength radar, the backscatter is 40–100 times larger than that of a homogeneous ice–air spheroid with the same mass, size, and aspect ratio.
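For context, the generic Rayleigh–Gans backscatter expression that this kind of derivation starts from can be written as below; this is the textbook form in a normalization chosen here, not necessarily the exact equation or notation of the paper.

```latex
% Generic Rayleigh-Gans backscatter cross section (textbook form, normalization
% chosen here; not necessarily the paper's exact equation). k is the wavenumber,
% K the dielectric factor of ice, and A(s) the distribution of particle volume
% along the direction of propagation, so that \int A(s)\,ds = V.
\[
  \sigma_b \;=\; \frac{9\,k^4\,|K|^2}{4\pi}
  \left|\int A(s)\, e^{\,2iks}\, \mathrm{d}s\right|^{2}.
\]
% For a particle much smaller than the wavelength the integral reduces to V and
% the Rayleigh result is recovered; for a snow aggregate the integral depends on
% the internal structure, which is where the self-similar power-law description
% of the abstract enters.
```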
Abstract:
We report a longitudinal comprehension study of (long) passive constructions in two native-Spanish child groups differing by age of initial exposure to L2 English (young group: 3;0-4;0 years; older group: 6;0-7;0 years), where amount of input, L2 exposure environment, and socio-economic status are controlled. Data from a forced-choice task show that both groups initially comprehend active sentences but not passives (after 3.6 years of exposure). One year later, both groups improve, but only the older group reaches ceiling on both actives and passives. Two years from initial testing, the younger group catches up. Input alone cannot explain why the younger group takes 5 years to accomplish what the older group does in 4. We claim that some properties take longer to acquire at certain ages because language development is partially constrained by general cognitive and linguistic development (e.g. de Villiers, 2007; Long & Rothman, 2014; Paradis, 2008, 2010, 2011; Tsimpli, 2014).