130 results for lock and key model
Abstract:
The issue of diversification in direct real estate investment portfolios has been widely studied in academic and practitioner literature. Most work, however, has been done using either partially aggregated data or data for small samples of individual properties. This paper reports results from tests of both risk reduction and diversification that use the records of 10,000+ UK properties tracked by Investment Property Databank. It provides, for the first time, robust estimates of the diversification gains attainable given the returns, risks and cross‐correlations across the individual properties available to fund managers. The results quantify the number of assets and amount of money needed to construct both ‘balanced’ and ‘specialist’ property portfolios by direct investment. Target numbers will vary according to the objectives of investors and the degree to which tracking error is tolerated. The top‐level results are consistent with previous work, showing that a large measure of risk reduction can be achieved with portfolios of 30–50 properties, but full diversification of specific risk can only be achieved in very large portfolios. However, the paper extends previous work by demonstrating on a single, large dataset the implications of different methods of calculating risk reduction, and also by showing more disaggregated results relevant to the construction of specialist, sector‐focussed funds.
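As an illustration of how risk reduction scales with portfolio size, the sketch below computes the standard deviation of an equally weighted portfolio from an average asset variance and an average pairwise covariance. The individual-risk and correlation figures are illustrative assumptions, not values from the paper, and the formula is the standard textbook decomposition rather than the tracking-error measures used in the study.

```python
import numpy as np

def portfolio_risk(n_assets, avg_var, avg_cov):
    """Standard deviation of an equally weighted portfolio of n_assets,
    given the average asset variance and average pairwise covariance."""
    variance = avg_var / n_assets + (1.0 - 1.0 / n_assets) * avg_cov
    return np.sqrt(variance)

# Illustrative (made-up) figures: 15% individual risk, 0.2 average correlation.
avg_var = 0.15 ** 2
avg_cov = 0.2 * avg_var

for n in (1, 10, 30, 50, 200, 1000):
    print(f"{n:5d} properties -> portfolio risk {portfolio_risk(n, avg_var, avg_cov):.3%}")
```

With these assumed inputs, most of the achievable risk reduction is realised by a few dozen assets, while the residual (systematic plus undiversified specific) risk declines only slowly thereafter.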
Abstract:
In 'Tales from Ovid' and 'War Music' respectively, Ted Hughes and Christopher Logue turned to classical epic as source material and a model for contemporary poetry. In this essay I consider the different ways in which they work with the original epic poems and how they rework them both textually and generically. In the process, I suggest, Hughes gives his readers an Ovid modeled on his own, vatic conception of Homer, while Logue reworks Homer in a manner that is essentially Ovidian.
Abstract:
This paper discusses key contextual differences and similarities in a comparative study on brownfield regeneration in England and Japan. Over the last decade, the regeneration of large-scale ‘flagship’ projects has been a primary focus in England, and previous research has discussed policy issues and key barriers at these sites. However, further research is required to explore specific barriers associated with problematic ‘hardcore’ sites suffering from long-term dereliction due to site-specific obstacles such as contamination and fragmented ownership. In comparison with England, brownfield regeneration is a relatively new urban agenda in Japan. Japan has less experience in promoting the redevelopment of brownfield sites at the national level, and the specific issues of ‘hardcore’ sites have been under-researched. The paper reviews and highlights important issues in comparing the definitions, national policy frameworks and the current stock of brownfields.
Abstract:
Peat soils consist of poorly decomposed plant detritus, preserved by low decay rates, and deep peat deposits are globally significant stores in the carbon cycle. High water tables and low soil temperatures are commonly held to be the primary reasons for low peat decay rates. However, recent studies suggest a thermodynamic limit to peat decay, whereby the slow turnover of peat soil pore water may lead to high concentrations of phenols and dissolved inorganic carbon. In sufficient concentrations, these chemicals may slow or even halt microbial respiration, providing a negative feedback to peat decay. We document the analysis of a simple, one-dimensional theoretical model of peatland pore water residence time distributions (RTDs). The model suggests that broader, thicker peatlands may be more resilient to rapid decay caused by climate change because of slow pore water turnover in deep layers. Even shallow peat deposits may also be resilient to rapid decay if rainfall rates are low. However, the model suggests that even thick peatlands may be vulnerable to rapid decay under prolonged high rainfall rates, which may act to flush pore water with fresh rainwater. We also used the model to illustrate a particular limitation of the diplotelmic (i.e., acrotelm and catotelm) model of peatland structure. Model peatlands of contrasting hydraulic structure exhibited identical water tables but contrasting RTDs. These scenarios would be treated identically by diplotelmic models, although the thermodynamic limit suggests contrasting decay regimes. We therefore conclude that the diplotelmic model should be discarded in favor of model schemes that consider continuous variation in peat properties and processes.
Abstract:
In mid-March 2005, a rare lower stratospheric polar vortex filamentation event was observed simultaneously by the JPL lidar at Mauna Loa Observatory, Hawaii, and by the EOS MLS instrument onboard the Aura satellite. The event coincided with the beginning of the spring 2005 final warming. On 16 March, the filament was observed by lidar around 0600 UT between 415 K and 455 K, and by MLS six hours earlier. It was seen on both the lidar and MLS profiles as a layer of enhanced ozone, peaking at 1.7 ppmv in a region where the climatological values are usually around or below 1 ppmv. Ozone profiles measured by lidar and MLS were compared to profiles from the Chemical Transport Model MIMOSA-CHIM. The agreement between lidar, MLS, and the model is excellent considering the difference in the sampling techniques. MLS was also able to identify the filament at another location north of Hawaii.
Abstract:
This study puts forward a method to model and simulate the complex system of a hospital on the basis of multi-agent technology. Hospital agents with intelligent and coordinative characteristics were designed, the message object was defined, and the operating mechanisms of the model for autonomous activities and coordination were also designed. In addition, an Ontology library and a Norm library were introduced using semiotic methods and theory to broaden the approach to system modelling. Swarm was used to develop the multi-agent-based simulation system, which is useful for producing guidelines that help a hospital improve its organization and management, optimize working procedures, improve the quality of medical care and reduce medical costs.
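By way of illustration only, the fragment below sketches the agent/message-object structure described above in Python (the study itself used Swarm); the agent types, message fields and coordination rule are hypothetical stand-ins, not the authors' design.

```python
from dataclasses import dataclass

@dataclass
class Message:
    """A minimal message object exchanged between hospital agents."""
    sender: str
    receiver: str
    content: str

class DepartmentAgent:
    """An autonomous department agent that coordinates by exchanging messages."""
    def __init__(self, name, capacity):
        self.name, self.capacity, self.queue = name, capacity, []

    def receive(self, msg: Message):
        self.queue.append(msg)

    def step(self):
        # Autonomous activity: process one queued request per step if capacity allows.
        if self.queue and self.capacity > 0:
            msg = self.queue.pop(0)
            self.capacity -= 1
            return Message(self.name, msg.sender, f"admitted: {msg.content}")
        return None

# Coordination mechanism: a patient request is routed to a department agent.
radiology = DepartmentAgent("radiology", capacity=2)
radiology.receive(Message("patient-17", "radiology", "CT scan request"))
print(radiology.step())
```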
Abstract:
We present a statistical analysis of the time evolution of ground magnetic fluctuations in three (12–48 s, 24–96 s and 48–192 s) period bands during nightside auroral activations. We use an independently derived auroral activation list composed of both substorms and pseudo-breakups to provide an estimate of the activation times of nightside aurora during periods with comprehensive ground magnetometer coverage. One hundred eighty-one events in total are studied to demonstrate the statistical nature of the time evolution of magnetic wave power during the ∼30 min surrounding auroral activations. We find that the magnetic wave power is approximately constant before an auroral activation, starts to grow up to 90 s prior to the optical onset time, maximizes a few minutes after the auroral activation, then decays slightly to a new, and higher, constant level. Importantly, magnetic ULF wave power always remains elevated after an auroral activation, whether it is a substorm or a pseudo-breakup. We subsequently divide the auroral activation list into events that formed part of ongoing auroral activity and events that had little preceding geomagnetic activity. We find that the evolution of wave power in the ∼10–200 s period band essentially behaves in the same manner through auroral onset, regardless of event type. The absolute power across ULF wave bands, however, displays a power law-like dependency throughout a 30 min period centered on auroral onset time. We also find evidence of a secondary maximum in wave power at high latitudes ∼10 min following isolated substorm activations. Most significantly, we demonstrate that magnetic wave power levels persist after auroral activations for ∼10 min, which is consistent with recent findings of wave-driven auroral precipitation during substorms. This suggests that magnetic wave power and auroral particle precipitation are intimately linked and key components of the substorm onset process.
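A minimal sketch of how wave power in fixed period bands can be extracted from a ground magnetometer time series, assuming 1 s cadence data and a Butterworth band-pass filter; the filter choice and the synthetic signal are illustrative assumptions, not the authors' processing chain.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_power(b_field, dt, t_min, t_max):
    """Mean wave power of a magnetometer component in a period band
    [t_min, t_max] seconds, via a Butterworth band-pass filter."""
    nyq = 0.5 / dt
    low, high = (1.0 / t_max) / nyq, (1.0 / t_min) / nyq
    b, a = butter(3, [low, high], btype="bandpass")
    filtered = filtfilt(b, a, b_field)
    return np.mean(filtered ** 2)

# Synthetic example: 1 s cadence data with a 60 s oscillation plus noise.
dt = 1.0
t = np.arange(0, 1800, dt)
bx = 5.0 * np.sin(2 * np.pi * t / 60.0) + np.random.randn(t.size)

for t_min, t_max in [(12, 48), (24, 96), (48, 192)]:
    print(f"{t_min}-{t_max} s band power: {band_power(bx, dt, t_min, t_max):.2f} nT^2")
```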
Abstract:
Global warming is expected to enhance fluxes of fresh water between the surface and atmosphere, causing wet regions to become wetter and dry regions drier, with serious implications for water resource management. Defining the wet and dry regions as the upper 30% and lower 70% of the precipitation totals across the tropics (30° S–30° N) each month, we combine observations and climate model simulations to understand changes in the wet and dry regions over the period 1850–2100. Observed decreases in precipitation over dry tropical land (1950–2010) are also simulated by coupled atmosphere–ocean climate models (−0.3%/decade), with trends projected to continue into the 21st century. Discrepancies between observations and simulations over wet land regions since 1950 exist, relating to decadal fluctuations in the El Niño–Southern Oscillation, the timing of which is not represented by the coupled simulations. When atmosphere-only simulations are instead driven by observed sea surface temperature, they are able to adequately represent this variability over land. Global distributions of precipitation trends are dominated by spatial changes in atmospheric circulation. However, the tendency for already wet regions to become wetter (precipitation increases with warming by 3% K−1 over wet tropical oceans) and the driest regions drier (precipitation decreases of −2% K−1 over dry tropical land regions) emerges over the 21st century in response to the substantial surface warming.
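A minimal sketch of the wet/dry partition described above: for one month of tropical precipitation totals, grid cells above the 70th percentile form the wet region (the upper 30%) and the rest form the dry region. The synthetic gamma-distributed field is an illustrative assumption, not the observational data used in the study.

```python
import numpy as np

def wet_dry_means(precip):
    """Split one month of tropical precipitation totals (a flat array of
    grid-cell values) into the wettest 30% and driest 70% and return the
    mean total of each region."""
    threshold = np.percentile(precip, 70)   # 70th percentile separates the two regions
    wet = precip[precip >= threshold]
    dry = precip[precip < threshold]
    return wet.mean(), dry.mean()

# Illustrative synthetic field: gamma-distributed monthly totals (mm).
rng = np.random.default_rng(0)
monthly_totals = rng.gamma(shape=2.0, scale=50.0, size=10000)
wet_mean, dry_mean = wet_dry_means(monthly_totals)
print(f"wet-region mean: {wet_mean:.1f} mm, dry-region mean: {dry_mean:.1f} mm")
```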
Abstract:
For an increasing number of applications, mesoscale modelling systems now aim to better represent urban areas. The complexity of processes resolved by urban parametrization schemes varies with the application. The concept of fitness-for-purpose is therefore critical for both the choice of parametrizations and the way in which the scheme should be evaluated. A systematic and objective model response analysis procedure (Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm) is used to assess the fitness of the single-layer urban canopy parametrization implemented in the Weather Research and Forecasting (WRF) model. The scheme is evaluated regarding its ability to simulate observed surface energy fluxes and the sensitivity to input parameters. Recent amendments are described, focussing on features which improve its applicability to numerical weather prediction, such as a reduced and physically more meaningful list of input parameters. The study shows a high sensitivity of the scheme to parameters characterizing roof properties in contrast to a low response to road-related ones. Problems in partitioning of energy between turbulent sensible and latent heat fluxes are also emphasized. Some initial guidelines to prioritize efforts to obtain urban land-cover class characteristics in WRF are provided.
Abstract:
We use a state-of-the-art ocean general circulation and biogeochemistry model to examine the impact of changes in ocean circulation and biogeochemistry in governing the change in ocean carbon-13 and atmospheric CO2 at the last glacial maximum (LGM). We examine 5 different realisations of the ocean's overturning circulation produced by a fully coupled atmosphere-ocean model under LGM forcing and suggested changes in the atmospheric deposition of iron and phytoplankton physiology at the LGM. Measured changes in carbon-13 and carbon-14, as well as a qualitative reconstruction of the change in ocean carbon export are used to evaluate the results. Overall, we find that while a reduction in ocean ventilation at the LGM is necessary to reproduce carbon-13 and carbon-14 observations, this circulation results in a low net sink for atmospheric CO2. In contrast, while biogeochemical processes contribute little to carbon isotopes, we propose that most of the change in atmospheric CO2 was due to such factors. However, the lesser role for circulation means that when all plausible factors are accounted for, most of the necessary CO2 change remains to be explained. This presents a serious challenge to our understanding of the mechanisms behind changes in the global carbon cycle during the geologic past.
Abstract:
Sea ice friction models are necessary to predict the nature of interactions between sea ice floes. These interactions are of interest on a range of scales, for example, to predict loads on engineering structures in icy waters or to understand the basin-scale motion of sea ice. Many models use Amontons' friction law due to its simplicity. More advanced models allow for hydrodynamic lubrication and refreezing of asperities; however, modeling these processes leads to greatly increased complexity. In this paper we propose, by analogy with rock physics, that a rate- and state-dependent friction law allows us to incorporate memory (and thus the effects of lubrication and bonding) into ice friction models without a great increase in complexity. We support this proposal with experimental data on both the laboratory (∼0.1 m) and ice tank (∼1 m) scales. These experiments show that the effects of static contact under normal load can be incorporated into a friction model. We find the parameters for a first-order rate and state model to be A = 0.310, B = 0.382, and μ0 = 0.872. Such a model then allows us to make predictions about the nature of memory effects in moving ice-ice contacts.
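For concreteness, the sketch below evaluates a first-order rate- and state-dependent friction law of the Dieterich ("ageing") form using the A, B and μ0 values quoted above; the reference velocity V0 and the state-evolution distance Dc are illustrative placeholders, since the abstract does not report them, and the exact law fitted in the paper may differ.

```python
import numpy as np

# First-order rate-and-state friction (Dieterich "ageing" form):
#   mu = mu0 + A*ln(V/V0) + B*ln(V0*theta/Dc),   dtheta/dt = 1 - V*theta/Dc
# A, B and mu0 are the values quoted in the abstract; V0 and Dc are
# illustrative placeholders, not values from the paper.
A, B, MU0 = 0.310, 0.382, 0.872
V0, DC = 1e-3, 1e-3   # reference velocity (m/s) and state-evolution distance (m): assumed

def friction(v, theta):
    return MU0 + A * np.log(v / V0) + B * np.log(V0 * theta / DC)

def evolve_state(theta, v, dt):
    # Explicit Euler step of the ageing-law state evolution.
    return theta + dt * (1.0 - v * theta / DC)

# Velocity-step "experiment": slide at V0, then step the velocity up tenfold.
theta, dt = DC / V0, 1e-3
for step, v in enumerate([V0] * 2000 + [10 * V0] * 2000):
    theta = evolve_state(theta, v, dt)
    if step % 1000 == 0:
        print(f"t={step*dt:5.1f} s  V={v:.0e} m/s  mu={friction(v, theta):.3f}")
```

Because B > A here, the friction coefficient drops below μ0 at the new steady state after the velocity step, i.e. the contact is velocity weakening once the state variable has re-equilibrated.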
Abstract:
Medium-range flood forecasting activities, driven by various meteorological forecasts ranging from high-resolution deterministic forecasts to low spatial resolution ensemble prediction systems, share a major challenge in the appropriateness and design of performance measures. In this paper possible limitations of some traditional hydrological and meteorological prediction quality and verification measures are identified. Some simple modifications are applied in order to circumvent the problem of autocorrelation dominating river discharge time series and in order to create a benchmark model enabling decision makers to evaluate the forecast quality and the model quality. Although the performance period is quite short, the advantage of a simple cost-loss function as a measure of forecast quality can be demonstrated.
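A minimal sketch of a static cost-loss evaluation of the kind referred to above, computing the relative economic value of a warning system from a 2×2 contingency table for a user with a given cost/loss ratio; the counts and the Richardson-style formulation are illustrative assumptions, not the measure implemented in the paper.

```python
def relative_value(hits, misses, false_alarms, correct_negatives, cost_loss_ratio):
    """Relative economic value of a forecast system for a user with a given
    cost/loss ratio, from a 2x2 contingency table (static cost-loss model;
    a sketch, not the formulation used in the paper)."""
    n = hits + misses + false_alarms + correct_negatives
    h, m, f = hits / n, misses / n, false_alarms / n
    o = h + m                                  # climatological event frequency
    r = cost_loss_ratio                        # C/L, between 0 and 1
    e_climate = min(r, o)                      # best of "always protect" / "never protect"
    e_perfect = o * r                          # protect only when the event occurs
    e_forecast = (h + f) * r + m               # protect whenever the event is forecast
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# Illustrative contingency table for flood-warning days (made-up counts).
print(f"relative value = {relative_value(20, 5, 15, 160, cost_loss_ratio=0.3):.2f}")
```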
Abstract:
The CWRF is developed as a climate extension of the Weather Research and Forecasting model (WRF) by incorporating numerous improvements in the representation of physical processes and integration of external (top, surface, lateral) forcings that are crucial to climate scales, including interactions between land, atmosphere, and ocean; convection and microphysics; cloud, aerosol, and radiation; and system consistency throughout all process modules. This extension inherits all WRF functionalities for numerical weather prediction while enhancing the capability for climate modeling. As such, CWRF can be applied seamlessly to weather forecasting and climate prediction. The CWRF is built with a comprehensive ensemble of alternative parameterization schemes for each of the key physical processes, including surface (land, ocean), planetary boundary layer, cumulus (deep, shallow), microphysics, cloud, aerosol, and radiation, and their interactions. This facilitates the use of an optimized physics ensemble approach to improve weather or climate prediction along with a reliable uncertainty estimate. The CWRF also emphasizes the societal service capability to provide impact-relevant information by coupling with detailed models of terrestrial hydrology, coastal ocean, crop growth, air quality, and a recently expanded interactive water quality and ecosystem model. This study provides a general CWRF description and basic skill evaluation based on a continuous integration for the period 1979–2009 as compared with that of WRF, using a 30-km grid spacing over a domain that includes the contiguous United States plus southern Canada and northern Mexico. In addition to advantages of greater application capability, CWRF improves performance in radiation and terrestrial hydrology over WRF and other regional models. Precipitation simulation, however, remains a challenge for all of the tested models.
Abstract:
During April and May 2010 the ash cloud from the eruption of the Icelandic volcano Eyjafjallajökull caused widespread disruption to aviation over northern Europe. The location and impact of the eruption meant that a wealth of observations of the ash cloud was obtained, which can be used to assess modelling of the long-range transport of ash in the troposphere. The UK FAAM (Facility for Airborne Atmospheric Measurements) BAe-146-301 research aircraft overflew the ash cloud on a number of days during May. The aircraft carries a downward-looking lidar which detected the ash layer through the backscatter of the laser light. In this study ash concentrations derived from the lidar are compared with simulations of the ash cloud made with NAME (Numerical Atmospheric-dispersion Modelling Environment), a general purpose atmospheric transport and dispersion model. The simulated ash clouds are compared to the lidar data to determine how well NAME simulates the horizontal and vertical structure of the ash clouds. Comparison between the ash concentrations derived from the lidar and those from NAME is used to define the fraction of ash emitted in the eruption that is transported over long distances compared to the total emission of tephra. In making these comparisons, possible position errors in the simulated ash clouds are identified and accounted for. The ash layers seen by the lidar considered in this study were thin, with typical depths of 550–750 m. The vertical structure of the ash cloud simulated by NAME was generally consistent with the observed ash layers, although the layers in the simulated ash clouds that are identified with observed ash layers are about twice the depth of the observed layers. The structure of the simulated ash clouds was sensitive to the profile of ash emissions that was assumed. In terms of horizontal and vertical structure, the best results were obtained by assuming that the emission occurred at the top of the eruption plume, consistent with the observed structure of eruption plumes. However, early in the period, when the intensity of the eruption was low, assuming that the emission of ash was uniform with height gives better guidance on the horizontal and vertical structure of the ash cloud. Comparison of the lidar concentrations with those from NAME shows that 2–5% of the total mass erupted by the volcano remained in the ash cloud over the United Kingdom.
Abstract:
The political economy literature on agriculture emphasizes influence over political outcomes via lobbying conduits in general, political action committee contributions in particular, and the pervasive view that political preferences with respect to agricultural issues are inherently geographic. In this context, ‘interdependence’ in Congressional vote behaviour manifests itself in two dimensions: one is the intensity with which neighboring vote propensities influence one another, and the second is the geographic extent of voter influence. We estimate these facets of dependence from data on a Congressional vote on the 2001 Farm Bill, using routine Markov chain Monte Carlo procedures and, in particular, Bayesian model averaging. In so doing, we develop a novel procedure to examine both the reliability and the consequences of different model representations for measuring both the ‘scale’ and the ‘scope’ of spatial (geographic) co-relations in voting behaviour.
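To make the 'scale' and 'scope' notions concrete, the sketch below builds a row-standardised spatial weight matrix from district coordinates with a distance cutoff (the scope) and forms a spatial lag of vote propensities whose coefficient ρ would measure the intensity of neighbour influence (the scale). The coordinates, cutoff and ρ are illustrative assumptions; the paper's MCMC and Bayesian model averaging machinery is not reproduced here.

```python
import numpy as np

def weight_matrix(coords, cutoff):
    """Row-standardised spatial weights: districts within `cutoff` of one
    another (the geographic scope) are treated as neighbours."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = ((d > 0) & (d <= cutoff)).astype(float)
    row_sums = w.sum(axis=1, keepdims=True)
    return np.divide(w, row_sums, out=np.zeros_like(w), where=row_sums > 0)

# Illustrative district centroids and vote propensities (made up).
rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(50, 2))
propensity = rng.uniform(0, 1, size=50)

W = weight_matrix(coords, cutoff=2.5)
rho = 0.4                                  # the "scale" of neighbour influence (assumed)
spatial_lag = rho * W @ propensity         # neighbour-weighted propensities
print(spatial_lag[:5])
```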