888 results for intra-household allocation
Abstract:
Asset allocation is concerned with the development of multi-asset portfolio strategies that are likely to meet an investor’s objectives, based on the interaction of expected returns, risk, correlation and implementation across a range of distinct asset classes or beta sources. The challenges associated with the discipline are often particularly significant in private markets. Specifically, composition differences between the ‘index’ or ‘benchmark’ universe and the investible universe mean that there can be substantial deviations between the investment characteristics implied in asset allocation decisions and those delivered by investment teams. For example, while allocation decisions are often based on relatively low-risk, diversified real estate ‘equity’ exposure, implementation decisions frequently include exposure to higher-risk forms of the asset class as well as investments in debt-based instruments. These differences can have a meaningful impact on the contribution of the asset class to the overall portfolio and can therefore lead to misalignment between asset allocation decisions and implementation. Despite this, the key conclusion of this paper is not that real estate investors should become slaves to a narrowly defined mandate based on IPD/NCREIF or other forms of benchmark replication; the discussion suggests that such an approach would likely lead to the underutilization of real estate in multi-asset portfolio strategies. Instead, it is that, to achieve asset allocation alignment, real estate exposure should be divided into multiple pools representing distinct forms of the asset class. In addition, the paper suggests that the associated investment guidelines and processes should be collaborative and reflect the portfolio-wide asset allocation objectives of each pool and, where appropriate, should specifically target the potential for ‘additional’ beta or, more marginally, ‘alpha’.
Abstract:
We study a two-way relay network (TWRN) in which distributed space-time codes are constructed across multiple relay terminals operating in an amplify-and-forward mode. Each relay transmits a scaled linear combination of its received symbols and their conjugates, with the scaling factor chosen based on automatic gain control. We consider equal power allocation (EPA) across the relays, as well as the optimal power allocation (OPA) strategy given access to instantaneous channel state information (CSI). For EPA, we derive an upper bound on the pairwise error probability (PEP), from which we prove that full diversity is achieved in TWRNs. This result is in contrast to one-way relay networks, where a maximum diversity order of only unity can be obtained. When instantaneous CSI is available at the relays, we show that the OPA which minimizes the conditional PEP of the worse link can be cast as a generalized linear fractional program, which can be solved efficiently using a Dinkelbach-type procedure. We also prove that, if the sum power of the relay terminals is constrained, then the OPA will activate at most two relays.
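A Dinkelbach-type procedure of the kind mentioned above turns a max-min ratio problem into a sequence of parametrized subtractive problems. Below is a minimal sketch on a discrete grid of power splits; the two link "SNR" ratios are invented for illustration and are not the paper's actual OPA formulation.

```python
# Generalized Dinkelbach iteration for maximising the worst of several
# ratios f_i(x)/g_i(x) over a finite candidate set (toy example only).

def dinkelbach_maxmin(candidates, ratios, tol=1e-9, max_iter=100):
    """Maximise min_i f_i(x)/g_i(x); `ratios` is a list of (f, g)
    callables with g(x) > 0 on the candidates."""
    lam = 0.0
    x_star = candidates[0]
    for _ in range(max_iter):
        # Inner problem: maximise the worst "numerator - lam * denominator".
        x_star = max(candidates,
                     key=lambda x: min(f(x) - lam * g(x) for f, g in ratios))
        value = min(f(x_star) - lam * g(x_star) for f, g in ratios)
        if abs(value) < tol:  # root of F(lam) found: lam is optimal
            break
        lam = min(f(x_star) / g(x_star) for f, g in ratios)
    return x_star, lam

# Toy two-link example: x is the power fraction given to relay 1.
grid = [i / 1000 for i in range(1, 1000)]
ratios = [(lambda x: 2.0 * x, lambda x: 1.0 + x),          # link 1 "SNR"
          (lambda x: 1.5 * (1.0 - x), lambda x: 2.0 - x)]  # link 2 "SNR"
x_opt, worst_snr = dinkelbach_maxmin(grid, ratios)
```

The parametric value F(λ) = max_x min_i (f_i − λg_i) is decreasing in λ, so the iteration climbs monotonically to the optimal ratio; in the actual OPA the inner problem is a linear program over the relay powers rather than a grid search.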
Abstract:
We present an efficient graph-based algorithm for quantifying the similarity of household-level energy use profiles, using a notion of similarity that allows for small time-shifts when comparing profiles. Experimental results on a real smart meter data set demonstrate that, in cases of practical interest, our technique is far faster than the existing method for computing the same similarity measure. Having a fast algorithm for measuring profile similarity improves the efficiency of tasks such as clustering of customers and cross-validation of forecasting methods using historical data. Furthermore, we apply a generalisation of our algorithm to produce substantially better household-level energy use forecasts from historical smart meter data.
Abstract:
As low carbon technologies become more pervasive, distribution network operators are looking to support the expected changes in the demands on the low voltage networks through the smarter control of storage devices. Accurate forecasts of demand at the single household level, or of small aggregations of households, can improve the peak demand reduction brought about through such devices by helping to plan the appropriate charging and discharging cycles. However, before such methods can be developed, validation measures are required which can assess the accuracy and usefulness of forecasts of volatile and noisy household-level demand. In this paper we introduce a new forecast verification error measure that reduces the so-called “double penalty” effect incurred by forecasts whose features are displaced in space or time, compared to traditional point-wise metrics such as Mean Absolute Error and p-norms in general. The measure we propose is based on finding a restricted permutation of the original forecast that minimises the point-wise error according to a given metric. We illustrate the advantages of our error measure using half-hourly domestic household electrical energy usage data recorded by smart meters, and discuss the effect of the permutation restriction.
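To make the restricted-permutation idea concrete: for the special case of a one-slot displacement window, the feasible permutations are exactly sets of disjoint adjacent swaps, which a simple dynamic programme can minimise over. This is a hedged sketch under that assumption; function and variable names are ours, not the paper's, and the paper's measure allows a general displacement window.

```python
# Minimal sketch of an "adjusted" absolute error: permute the forecast,
# allowing each point to move at most one slot, so that the point-wise
# error is minimised. With a one-slot window the restricted permutations
# are exactly sets of disjoint adjacent swaps.

def adjusted_abs_error(forecast, actual):
    """Minimum of sum_i |forecast[pi(i)] - actual[i]| over permutations
    pi with |pi(i) - i| <= 1."""
    n = len(actual)
    cost = lambda i, j: abs(forecast[j] - actual[i])
    best = [0.0] * (n + 1)  # best[i]: minimal error on the first i slots
    for i in range(1, n + 1):
        keep = best[i - 1] + cost(i - 1, i - 1)
        best[i] = keep
        if i >= 2:  # or swap the forecast values of slots i-2 and i-1
            swap = best[i - 2] + cost(i - 2, i - 1) + cost(i - 1, i - 2)
            best[i] = min(keep, swap)
    return best[n]

# A peak displaced by one half-hour: point-wise MAE penalises the
# forecast twice, the adjusted error recognises the shift.
actual   = [0, 0, 5, 0, 0]
forecast = [0, 5, 0, 0, 0]
pointwise = sum(abs(f - a) for f, a in zip(forecast, actual))
adjusted = adjusted_abs_error(forecast, actual)
```

Here `pointwise` is 10 (the peak is missed where it occurs and hallucinated where it is not), while `adjusted` is 0 because a single adjacent swap aligns the profiles, illustrating the double-penalty reduction.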
Abstract:
Serial sampling and stable isotope analysis performed along the growth axis of vertebrate tooth enamel record differences attributed to seasonal variation in diet, climate or animal movement. Because several months are required to obtain mature enamel in large mammals, modifications in the isotopic composition of environmental parameters are not instantaneously recorded, and stable isotope analysis of tooth enamel returns a time-averaged signal attenuated in amplitude relative to the input signal. For convenience, stable isotope profiles are usually determined on the side of the tooth where enamel is thickest. Here we investigate the possibility of improving the time resolution by targeting the side of the tooth where enamel is thinnest. Observation of developing third molars (M3) in sheep shows that the tooth growth rate is not constant but decreases exponentially, while the angle between the first layer of enamel deposited and the enamel–dentine junction increases as a tooth approaches its maximal length. We also noted differences in thickness and geometry of enamel growth between the mesial side (i.e., the side facing the M2) and the buccal side (i.e., the side facing the cheek) of the M3. Carbon and oxygen isotope variations were measured along the M3 teeth from eight sheep raised under controlled conditions. Intra-tooth variability was systematically larger along the mesial side, and the difference in amplitude between the two sides was proportional to the time of exposure to the input signal. Although attenuated, the mesial side records variations in the environmental signal more faithfully than the buccal side. This approach can be adapted to other mammals whose teeth show lateral variation in enamel thickness and could potentially be used as an internal check for diagenesis.
Abstract:
Plants constantly sense the changes in their environment; when mineral elements are scarce, they often allocate a greater proportion of their biomass to the root system. This acclimatory response is a consequence of metabolic changes in the shoot and an adjustment of carbohydrate transport to the root. It has long been known that deficiencies of essential macronutrients (nitrogen, phosphorus, potassium and magnesium) result in an accumulation of carbohydrates in leaves and roots, and modify the shoot-to-root biomass ratio. Here, we present an update on the effects of mineral deficiencies on the expression of genes involved in primary metabolism in the shoot, the evidence for increased carbohydrate concentrations and altered biomass allocation between shoot and root, and the consequences of these changes on the growth and morphology of the plant root system.
Abstract:
Attentional allocation to emotional stimuli is often proposed to be driven by valence, and in particular by negativity. However, many negative stimuli are also arousing, leaving open the question of whether valence or arousal accounts for this effect. The authors examined whether the valence or the arousal level of emotional stimuli influences the allocation of spatial attention, using a modified spatial cueing task. Participants responded to targets that were preceded by cues consisting of emotional pictures varying in arousal and valence. Response latencies showed that disengagement of spatial attention was slower for stimuli high in arousal than for stimuli low in arousal. The effect was independent of the valence of the pictures and was not gender-specific. The findings support the idea that arousal affects the allocation of attention.
Abstract:
We present projections of winter storm-induced insured losses in the German residential building sector for the 21st century. To this end, two structurally independent downscaling methods and one hybrid downscaling method are applied to a 3-member ensemble of ECHAM5/MPI-OM1 A1B scenario simulations. The first method uses dynamical downscaling of intense winter storm events in the global model, and a transfer function to relate regional wind speeds to losses. The second method is based on a reshuffling of present-day weather situations and sequences, taking into account the change in their frequencies according to the linear temperature trends of the global runs. The third method uses statistical-dynamical downscaling, considering frequency changes in the occurrence of storm-prone weather patterns, and translation into loss using empirical statistical distributions. The A1B scenario ensemble was downscaled by all three methods until 2070, and by the (statistical-)dynamical methods until 2100. All methods assume a constant statistical relationship between meteorology and insured losses and no developments other than climate change, such as changes in construction or claims management. The study utilizes data provided by the German Insurance Association, encompassing 24 years at district-scale resolution. Compared to 1971–2000, the downscaling methods indicate an increase of 10-year return values (i.e. loss ratios per return period) of 6–35 % for 2011–2040, 20–30 % for 2041–2070, and 40–55 % for 2071–2100. Convolving various sources of uncertainty in one confidence statement (data, loss model, storm realization and Pareto fit uncertainty), the return-level confidence interval for a return period of 15 years expands by more than a factor of two. Finally, we suggest how practitioners can deal with alternative scenarios or possible natural excursions of observed losses.
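As a hedged illustration of the return-value notion used above: the T-year return value is the loss ratio exceeded on average once every T years, i.e. the (1 − 1/T) quantile of the annual loss-ratio distribution. The sketch below uses a plain empirical quantile on an invented 24-year series; the study itself fits Pareto distributions to the loss extremes rather than reading off raw quantiles.

```python
# Empirical T-year return value from a series of annual loss ratios.
# The 24 "annual loss ratios" below are invented for illustration,
# not the German Insurance Association data.

def return_value(annual_loss_ratios, period_years):
    """Empirical (1 - 1/T) quantile, with linear interpolation
    between order statistics."""
    xs = sorted(annual_loss_ratios)
    q = 1.0 - 1.0 / period_years
    pos = q * (len(xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

losses = [i / 10 for i in range(1, 25)]  # invented 24-year series
rv10 = return_value(losses, 10)          # exceeded roughly once a decade
```

Under this reading, the projected 6–35 % rise in 10-year return values for 2011–2040 corresponds to multiplying `rv10` by 1.06–1.35, holding the meteorology-to-loss relationship fixed as the abstract assumes.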
Abstract:
An extensive off-line evaluation of the Noah/Single Layer Urban Canopy Model (Noah/SLUCM) urban land-surface model is presented using data from 15 sites to assess (1) the ability of the scheme to reproduce the surface energy balance observed in a range of urban environments, including seasonal changes, and (2) the impact of increasing complexity of input parameter information. Model performance is found to be most dependent on representation of vegetated surface area cover; refinement of other parameter values leads to smaller improvements. Model biases in net all-wave radiation and trade-offs between turbulent heat fluxes are highlighted using an optimization algorithm. Here we use the Urban Zones to characterize Energy partitioning (UZE) as the basis to assign default SLUCM parameter values. A methodology (FRAISE) to assign sites (or areas) to one of these categories based on surface characteristics is evaluated. Using three urban sites from the Basel Urban Boundary Layer Experiment (BUBBLE) dataset, an independent evaluation of the model performance with the parameter values representative of each class is performed. The scheme copes well with both seasonal changes in the surface characteristics and intra-urban heterogeneities in energy flux partitioning, with RMSE performance comparable to similar state-of-the-art models for all fluxes, sites and seasons. The potential of the methodology for high-resolution atmospheric modelling application using the Weather Research and Forecasting (WRF) model is highlighted. This analysis supports the recommendations that (1) three classes are appropriate to characterize the urban environment, and (2) that the parameter values identified should be adopted as default values in WRF.
Abstract:
In Kazakhstan, a transitional nation in Central Asia, the development of public–private partnerships (PPPs) is at an early stage and increasingly of strategic importance. This case study investigates risk allocation in an ongoing project: the construction and operation of 11 kindergartens in the city of Karaganda under a 14-year concession. Drawing on a conceptual framework of effective risk allocation, the study identifies the principal PPP risks, provides a critical assessment of how each partner bears a given risk, highlights the reasons underpinning risk allocation decisions and delineates the lessons learned. The findings show that the government has effectively transferred most risks to the private sector partner, whilst both partners share the demand risk of childcare services and the project default risk. The strong elements of risk allocation include clear assignment of parties’ responsibilities, streamlined financing schemes and incentives to complete the main project phases on time. However, risk allocation has missed an opportunity to create incentives for service quality improvements and take advantage of economies of scale. The most controversial element of risk allocation, as the study finds, is a revenue stream that an operator is supposed to receive from the provision of services unrelated to childcare, as neither partner is able to mitigate this revenue risk. The article concludes that in the kindergartens’ PPP, the government has achieved almost complete transfer of risks to the private sector partner. However, the costs of transfer are extensive government financial outlays that seriously compromise the PPP value for money.
Abstract:
Artificial diagenesis of the intra-crystalline proteins isolated from Patella vulgata was induced by isothermal heating at 140 °C, 110 °C and 80 °C. Protein breakdown was quantified for multiple amino acids, measuring the extent of peptide bond hydrolysis, amino acid racemisation and decomposition. The patterns of diagenesis are complex; therefore the kinetic parameters of the main reactions were estimated by two different methods: 1) a well-established approach based on fitting mathematical expressions to the experimental data, e.g. first-order rate equations for hydrolysis and power-transformed first-order rate equations for racemisation; and 2) an alternative model-free approach, developed by estimating a “scaling” factor for the independent variable (time) that produces the best alignment of the experimental data. This method allows the calculation of the relative reaction rates for the different temperatures of isothermal heating. High-temperature data were compared with the extent of degradation detected in sub-fossil Patella specimens of known age, and we evaluated the ability of kinetic experiments to mimic diagenesis at burial temperature. The results highlighted a difference between patterns of degradation at low and high temperature, and therefore we recommend caution in the extrapolation of protein breakdown rates to low burial temperatures for geochronological purposes when relying solely on kinetic data.
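The fitting step of method 1 can be illustrated with first-order kinetics followed by an Arrhenius extrapolation to burial temperature. This is a minimal sketch with invented rate data, not the paper's measurements, and it omits the power-transformed racemisation kinetics.

```python
import math

# Hedged sketch: fit first-order rate constants at the heating
# temperatures, place them on an Arrhenius line, and extrapolate to a
# burial temperature. All numerical values are invented for illustration.

R = 8.314  # gas constant, J mol^-1 K^-1

def first_order_rate(times, extents):
    """Slope of -ln(1 - extent) vs time through the origin,
    i.e. k in extent(t) = 1 - exp(-k t)."""
    ys = [-math.log(1.0 - x) for x in extents]
    return sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)

def arrhenius_extrapolate(temps_c, ks, target_c):
    """Least-squares fit of ln k = ln A - Ea/(R T);
    returns (Ea in J/mol, predicted k at target_c)."""
    xs = [1.0 / (t + 273.15) for t in temps_c]
    ys = [math.log(k) for k in ks]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return -slope * R, math.exp(intercept + slope / (target_c + 273.15))
```

Extrapolating from 80–140 °C down to a burial temperature of around 10 °C stretches the Arrhenius line far beyond the observed range, which is exactly why the abstract recommends caution when relying solely on kinetic data.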