925 results for Long-term data
Abstract:
The purpose of this paper is to propose hybrid capital securities as a new approach to compensating senior bank executives and risk-takers, instead of the cash or equity-based compensation currently adopted by the industry. The global financial turmoil indicated that misaligned pay-for-performance compensation arrangements encouraged management short-termism and rewarded excessive risk-taking behaviour in the Anglo-Saxon system. Rather than regulating specific instruments and processes, we believe it is much more efficient to overhaul the compensation scheme to align it with risk management and governance. This empirical paper investigates the European hybrid market using data from the Merrill Lynch Global Index System from 2000 to 2010. Our paper contributes to both the literature and practice by designing a structured scheme that ties executives' interests to the long-term performance of the bank, the goal of regulators and of the economy at large, and consequently reduces the probability of future bank failures.
Abstract:
Measurements of the ionospheric E region during total solar eclipses in the period 1932-1999 have been used to investigate the fraction of Extreme Ultraviolet and soft X-ray radiation, phi, that is emitted from the limb corona and chromosphere. The relative apparent sizes of the Moon and the Sun are different for each eclipse, and techniques are presented which correct the measurements and therefore allow direct comparisons between different eclipses. The results show that the fraction of ionising radiation emitted by the limb corona has a clear solar cycle variation and that the underlying trend shows this fraction has been increasing since 1932. Data from the SOHO spacecraft are used to study the effects of short-term variability, and it is shown that the observed long-term rise in phi has a negligible probability of being a chance occurrence.
Abstract:
We test the method of Lockwood et al. [1999] for deriving the coronal source flux from the geomagnetic aa index and show it to be accurate to within 12% for annual means and 4.5% for averages over a sunspot cycle. Using data from four solar constant monitors during 1981-1995, we find a linear relationship between this magnetic flux and the total solar irradiance. From this correlation, we show that the 131% rise in the mean coronal source field over the interval 1901-1995 corresponds to a rise in the average total solar irradiance of ΔI = 1.65 ± 0.23 W m⁻².
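A minimal sketch of the kind of flux-irradiance regression described above, under synthetic data; the numbers below only stand in for the 1981-1995 annual means used in the paper and are not the authors' values.

```python
# Hypothetical sketch: fit a linear flux-irradiance relation from annual means and
# translate a fractional rise in the mean flux into an irradiance rise.
import numpy as np

rng = np.random.default_rng(1)
flux = np.linspace(3.0, 5.0, 15)                      # coronal source flux, arbitrary units (synthetic)
tsi = 1365.5 + 0.3 * flux + rng.normal(0, 0.05, 15)   # total solar irradiance, W m^-2 (synthetic)

slope, intercept = np.polyfit(flux, tsi, 1)           # least-squares linear fit

# A 131% rise in the mean flux (as quoted above) maps onto an irradiance rise via the slope.
delta_tsi = slope * (1.31 * flux.mean())
print(f"Implied rise in average TSI: {delta_tsi:.2f} W m^-2")
```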
Abstract:
More and more households are purchasing electric vehicles (EVs), and this will continue as we move towards a low carbon future. There are various projections as to the rate of EV uptake, but all predict an increase over the next ten years. Charging these EVs will produce one of the biggest loads on the low voltage network. To manage the network, we must not only take into account the number of EVs taken up, but where on the network they are charging, and at what time. To simulate the impact on the network from high, medium and low EV uptake (as outlined by the UK government), we present an agent-based model. We initialise the model to assign an EV to a household based on either random distribution or social influences - that is, a neighbour of an EV owner is more likely to also purchase an EV. Additionally, we examine the effect of peak behaviour on the network when charging is at day-time, night-time, or a mix of both. The model is implemented on a neighbourhood in south-east England using smart meter data (half hourly electricity readings) and real life charging patterns from an EV trial. Our results indicate that social influence can increase the peak demand on a local level (street or feeder), meaning that medium EV uptake can create higher peak demand than currently expected.
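To make the social-influence mechanism concrete, here is a minimal, self-contained sketch of that part of an agent-based uptake model (not the paper's implementation); the household count, adoption probabilities and toy neighbourhood structure are illustrative assumptions.

```python
# Toy agent-based assignment of EVs to households with a social-influence effect:
# a household adjacent to an existing EV owner is more likely to adopt.
import random

N_HOUSEHOLDS = 200      # households on one feeder (assumed)
TARGET_UPTAKE = 0.3     # "medium" uptake scenario (assumed)
SOCIAL_BOOST = 3.0      # neighbours of an owner are this much more likely to adopt (assumed)
BASE_P = 0.02           # baseline adoption probability per round (assumed)

owners = [False] * N_HOUSEHOLDS

def neighbours(i):
    """Adjacent households on the same street (toy neighbourhood structure)."""
    return [j for j in (i - 1, i + 1) if 0 <= j < N_HOUSEHOLDS]

# Seed a few initial adopters at random, then grow uptake with social influence.
for i in random.sample(range(N_HOUSEHOLDS), 5):
    owners[i] = True

while sum(owners) < TARGET_UPTAKE * N_HOUSEHOLDS:
    for i in range(N_HOUSEHOLDS):
        if owners[i]:
            continue
        p = BASE_P * (SOCIAL_BOOST if any(owners[j] for j in neighbours(i)) else 1.0)
        if random.random() < p:
            owners[i] = True

print(f"EV uptake: {sum(owners)} of {N_HOUSEHOLDS} households")
```

Because adoption clusters around existing owners, streets with early adopters reach high local EV density (and hence higher local peak charging demand) well before the neighbourhood-wide uptake target is met.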
Abstract:
A one-dimensional surface energy-balance lake model, coupled to a thermodynamic model of lake ice, is used to simulate variations in the temperature of and evaporation from three Estonian lakes: Karujärv, Viljandi and Kirjaku. The model is driven by daily climate data, derived by cubic-spline interpolation from monthly mean data, and was run for periods of 8 years (Kirjaku) up to 30 years (Viljandi). Simulated surface water temperature is in good agreement with observations: mean differences between simulated and observed temperatures range from −0.8°C to +0.1°C. The simulated duration of snow and ice cover is comparable with observations. However, the model generally underpredicts ice thickness and overpredicts snow depth. Sensitivity analyses suggest that the model results are robust across a wide range (0.1–2.0 m⁻¹) of lake extinction coefficient: surface temperature differs by less than 0.5°C between extreme values of the extinction coefficient. The model results are more sensitive to snow and ice albedos. However, changing the snow (0.2–0.9) and ice (0.15–0.55) albedos within realistic ranges does not improve the simulations of snow depth and ice thickness. The underestimation of ice thickness is correlated with the overestimation of snow cover, since a thick snow layer insulates the ice and limits ice formation. The overestimation of snow cover results from the assumption that all simulated winter precipitation falls as snow, a direct consequence of using daily climate data derived by interpolation from mean monthly data.
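A brief sketch of the monthly-to-daily interpolation step described above, assuming SciPy is available; the monthly temperatures are made-up illustrative values, not data from the Estonian lakes.

```python
# Derive a daily forcing series from monthly mean climate data by cubic-spline
# interpolation (illustrative values only).
import numpy as np
from scipy.interpolate import CubicSpline

month_mid_day = np.array([15, 46, 74, 105, 135, 166,
                          196, 227, 258, 288, 319, 349])          # mid-month day of year
monthly_mean_temp = np.array([-5.1, -4.8, -1.0, 4.6, 11.2, 15.8,
                              17.9, 16.5, 11.7, 6.3, 1.0, -3.2])  # deg C, made up

spline = CubicSpline(month_mid_day, monthly_mean_temp)
days = np.arange(1, 366)
daily_temp = spline(days)   # daily series of the kind used to drive the lake model

# Note: days before 15 and after 349 are extrapolated here; a full implementation
# would wrap the series across year boundaries.
print(daily_temp[:5])
```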
Abstract:
1. Species’ distributions are likely to be affected by a combination of environmental drivers. We used a data set of 11 million species occurrence records over the period 1970–2010 to assess changes in the frequency of occurrence of 673 macro-moth species in Great Britain. Groups of species with different predicted sensitivities showed divergent trends, which we interpret in the context of land-use and climatic changes.
2. A diversity of responses was revealed: 260 moth species declined significantly, whereas 160 increased significantly. Overall, frequencies of occurrence declined, mirroring trends in less species-rich, yet more intensively studied taxa.
3. Geographically widespread species, which were predicted to be more sensitive to land use than to climate change, declined significantly in southern Britain, where the cover of urban and arable land has increased.
4. Moths associated with low nitrogen and open environments (based on their larval host plant characteristics) declined most strongly, which is also consistent with a land-use change explanation.
5. Some moths that reach their northern (leading edge) range limit in southern Britain increased, whereas species restricted to northern Britain (trailing edge) declined significantly, consistent with a climate change explanation.
6. Not all species of a given type behaved similarly, suggesting that complex interactions between species’ attributes and different combinations of environmental drivers determine frequency of occurrence changes.
7. Synthesis and applications. Our findings are consistent with large-scale responses to climatic and land-use changes, with some species increasing and others decreasing. We suggest that land-use change (e.g. habitat loss, nitrogen deposition) and climate change are both major drivers of moth biodiversity change, acting independently and in combination. Importantly, the diverse responses revealed in this species-rich taxon show that multifaceted conservation strategies are needed to minimize negative biodiversity impacts of multiple environmental changes. We suggest that habitat protection, management and ecological restoration can mitigate combined impacts of land-use change and climate change by providing environments that are suitable for existing populations and also enable species to shift their ranges.
Abstract:
A dynamical wind-wave climate simulation covering the North Atlantic Ocean and spanning the whole 21st century under the A1B scenario has been compared with a set of statistical projections using atmospheric variables or large-scale climate indices as predictors. As a first step, the performance of all statistical models has been evaluated for the present-day climate; namely, they have been compared with a dynamical wind-wave hindcast in terms of winter Significant Wave Height (SWH) trends and variance, as well as with altimetry data. For the projections, it has been found that statistical models that use wind speed as a predictor are able to capture a larger fraction of the winter SWH inter-annual variability (68% on average) and of the long-term changes projected by the dynamical simulation. Conversely, regression models using climate indices, sea level pressure and/or pressure gradient as predictors account for a smaller SWH variance (from 2.8% to 33%) and do not reproduce the dynamically projected long-term trends over the North Atlantic. Investigating the wind-sea and swell components separately, we have found that the combination of two regression models, one for wind-sea waves and another for the swell component, can significantly improve the wave-field projections obtained from single regression models over the North Atlantic.
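For illustration, a small sketch of how the explained winter SWH variance of the two families of statistical models could be compared; all series below are synthetic stand-ins for co-located winter means, not the study's data.

```python
# Compare how much winter SWH variance is captured by a wind-speed regression
# versus a climate-index regression (synthetic illustrative data).
import numpy as np

rng = np.random.default_rng(0)
n_winters = 100
wind = 10 + rng.normal(0, 1.5, n_winters)                        # winter-mean 10 m wind speed (m/s)
index = rng.normal(0, 1.0, n_winters)                            # large-scale climate index (e.g. NAO-like)
swh = 0.4 * wind + 0.15 * index + rng.normal(0, 0.3, n_winters)  # winter-mean SWH (m)

def r_squared(y, x):
    """Fraction of the variance of y captured by a simple linear regression on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

print("R^2 with wind-speed predictor   :", round(r_squared(swh, wind), 2))
print("R^2 with climate-index predictor:", round(r_squared(swh, index), 2))
```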
Abstract:
This in vitro study evaluated the microtensile bond strength of a resin composite to Er:YAG-prepared dentin after long-term storage and thermocycling. Eighty bovine incisors were selected and their roots removed. The crowns were ground to expose superficial dentin. The samples were randomly divided according to cavity preparation method (I: Er:YAG laser; II: carbide bur). Subsequently, an etch-and-rinse adhesive system was applied and the samples were restored with a resin composite. The samples were subdivided according to the time of water storage (WS) and the number of thermocycles (TC) performed: A) 24 hours WS/no TC; B) 7 days WS/500 TC; C) 1 month WS/2,000 TC; D) 6 months WS/12,000 TC. The teeth were sectioned into sticks with a cross-sectional area of 1.0 mm², which were loaded in tension in a universal testing machine. The data were subjected to two-way ANOVA and to Scheffé and Fisher's tests at a 5% significance level. In general, the bur-prepared group displayed higher microtensile bond strength values than the laser-treated group. After one month of water storage and 2,000 thermocycles, the performance of the tested adhesive system on Er:YAG-laser-irradiated dentin was negatively affected (Group IC), while adhesion in the bur-prepared group decreased only after six months of water storage combined with 12,000 thermocycles (Group IID). It may be concluded that adhesion to the Er:YAG laser cavity preparation was more affected by the methods used to simulate degradation of the adhesive interface.
Abstract:
Background: the Mini Nutritional Assessment (MNA) is a multidimensional method of nutritional evaluation that allows the diagnosis of malnutrition and of the risk of malnutrition in elderly people; it is important to mention that this method has not been well studied in Brazil. Objective: to evaluate the use of the MNA in elderly people living in long-term care institutions. Design: cross-sectional study. Participants: 89 people (≥ 60 years), 64.0% of them men. The mean age was 73.7 ± 9.1 years (72.8 ± 8.9 years for men and 75.3 ± 9.3 years for women). Setting: long-term care institutions for elderly people located in the Southeast of Brazil. Methods: sensitivity, specificity, and positive and negative predictive values were calculated, and a ROC curve was constructed to verify the accuracy of the MNA. The variable used as the reference standard for the nutritional diagnosis of the elderly people was the corrected arm muscle area, because it provides an estimate of a person's muscle reserve and is considered a good indicator of malnutrition in elderly people. Results: sensitivity was 84.0%, specificity was 36.0%, the positive predictive value was 77.0%, and the negative predictive value was 47.0%; the area under the ROC curve was 0.71 (71.0%). Conclusion: the MNA showed accuracy and sensitivity in the diagnosis of malnutrition and of the risk of malnutrition in institutionalized elderly groups in the Southeastern region of Brazil; however, it presented low specificity.
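A short sketch of how the reported screening statistics (sensitivity, specificity, predictive values, ROC area) are computed from a reference classification and test scores; the ten records and the 23.5 cut-off below are illustrative assumptions, not the study's data.

```python
# Screening-test statistics from a 2x2 table plus ROC area, treating the corrected
# arm muscle area classification as the reference standard and a low MNA score as
# a positive test (illustrative data only).
import numpy as np
from sklearn.metrics import roc_auc_score

reference = np.array([1, 1, 0, 1, 0, 0, 1, 1, 0, 1])             # 1 = malnourished by reference standard
mna_score = np.array([16, 15, 24, 18, 26, 23, 14, 17, 22, 19])   # lower MNA score = worse status
mna_positive = (mna_score < 23.5).astype(int)                    # commonly used MNA cut-off (assumed here)

tp = np.sum((mna_positive == 1) & (reference == 1))
tn = np.sum((mna_positive == 0) & (reference == 0))
fp = np.sum((mna_positive == 1) & (reference == 0))
fn = np.sum((mna_positive == 0) & (reference == 1))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
auc = roc_auc_score(reference, -mna_score)   # negate: lower score indicates the positive class

print(sensitivity, specificity, ppv, npv, auc)
```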
Abstract:
Nitric oxide synthase (NOS) inhibitors are largely used to evaluate the contribution of NO to pulmonary allergy, but contrasting data have been reported. In this study, pharmacological, biochemical and pharmacokinetic assays were performed to compare the effects of acute and long-term treatment of BALB/c mice with the non-selective NOS inhibitor L-NAME in ovalbumin (OVA)-challenged mice. Acute L-NAME treatment (50 mg/kg, gavage) significantly reduced the eosinophil number in bronchoalveolar lavage fluid (BALF). The inducible NOS (iNOS) inhibitor aminoguanidine (20 mg/kg/day in the drinking water) also significantly reduced the eosinophil number in BALF. In contrast, 3-week L-NAME treatment (50 and 150 mg/kg/day in the drinking water) significantly increased the pulmonary eosinophil influx. The constitutive NOS (cNOS) activity in brain and lungs was reduced by both acute and 3-week L-NAME treatments. The pulmonary iNOS activity was reduced by acute L-NAME (or aminoguanidine), but unaffected by 3-week L-NAME treatment. Acute L-NAME (or aminoguanidine) treatment was more effective in reducing NOx levels than 3-week L-NAME treatment. The pharmacokinetic study revealed that L-NAME is not bioavailable when given orally. After acute L-NAME intake, serum concentrations of the metabolite Nω-nitro-L-arginine decreased from 30 min to 24 h. In the 3-week L-NAME treatment, the Nω-nitro-L-arginine concentration was close to the detection limit. In conclusion, 3-week treatment with L-NAME yields low serum Nω-nitro-L-arginine concentrations, causing preferential inhibition of cNOS activity. Therefore, the potentiation of eosinophil influx by 3-week L-NAME treatment may reflect removal of protective cNOS-derived NO, with no interference with the ongoing inflammation due to iNOS-derived NO. (c) 2008 Elsevier Ltd. All rights reserved.
Abstract:
In many data sets from clinical studies there are patients insusceptible to the occurrence of the event of interest. Survival models which ignore this fact are generally inadequate. The main goal of this paper is to describe an application of the generalized additive models for location, scale and shape (GAMLSS) framework to the fitting of long-term survival models. In this work, the number of competing causes of the event of interest follows the negative binomial distribution. In this way, some well-known models found in the literature are characterized as particular cases of our proposal. The model is conveniently parameterized in terms of the cured fraction, which is then linked to covariates. We explore the use of the gamlss package in R as a powerful tool for inference in long-term survival models. The procedure is illustrated with a numerical example. (C) 2009 Elsevier Ireland Ltd. All rights reserved.
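For orientation, one common way to write a negative binomial cure-rate model of this type (a sketch of a standard parameterization, which may differ in notation from the paper): if the number of competing causes N has a negative binomial distribution with mean θ and dispersion φ, and F(t) is the common distribution function of the latent event times, then the population survival function and the cured fraction are

\[
S_{\mathrm{pop}}(t) = \bigl(1 + \phi\,\theta\,F(t)\bigr)^{-1/\phi},
\qquad
p_0 = \lim_{t\to\infty} S_{\mathrm{pop}}(t) = (1 + \phi\,\theta)^{-1/\phi},
\]

so the cured fraction p0 is the natural quantity to link to covariates; letting φ → 0 recovers the Poisson (promotion-time) cure model, and φ = 1 gives a geometric-cause model, illustrating how known models arise as particular cases.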
Abstract:
In this paper we extend the long-term survival model proposed by Chen et al. [Chen, M.-H., Ibrahim, J.G., Sinha, D., 1999. A new Bayesian model for survival data with a surviving fraction. Journal of the American Statistical Association 94, 909-919] via the generating function of a real sequence introduced by Feller [Feller, W., 1968. An Introduction to Probability Theory and Its Applications, third ed., vol. 1, Wiley, New York]. A direct consequence of this new formulation is the unification of the long-term survival models proposed by Berkson and Gage [Berkson, J., Gage, R.P., 1952. Survival curve for cancer patients following treatment. Journal of the American Statistical Association 47, 501-515] and Chen et al. (see citation above). Also, we show that the long-term survival function formulated in this paper satisfies the proportional hazards property if, and only if, the number of competing causes related to the occurrence of an event of interest follows a Poisson distribution. Furthermore, a more flexible model than the one proposed by Yin and Ibrahim [Yin, G., Ibrahim, J.G., 2005. Cure rate models: a unified approach. The Canadian Journal of Statistics 33, 559-570] is introduced and, motivated by Feller's results, a very useful competing index is defined. (c) 2008 Elsevier B.V. All rights reserved.
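The unification can be sketched with standard cure-rate algebra (generic notation, not necessarily the paper's): if N is the number of competing causes with probability generating function G_N, and each latent cause has survival function S(t) with F(t) = 1 - S(t), then the population survival function is

\[
S_{\mathrm{pop}}(t) = \mathbb{E}\bigl[S(t)^{N}\bigr] = G_N\bigl(S(t)\bigr).
\]

Taking N Bernoulli with success probability 1 - p0 gives the Berkson-Gage mixture model S_pop(t) = p0 + (1 - p0) S(t), while N Poisson with mean θ gives the Chen et al. model S_pop(t) = exp{-θ F(t)}, whose cumulative hazard θ F(t) scales proportionally in θ, consistent with the proportional hazards characterization stated above.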
Abstract:
We present a new climatology of atmospheric aerosols (primarily pyrogenic and biogenic) for the Brazilian tropics on the basis of a high-quality data set of spectral aerosol optical depth and directional sky radiance measurements from Aerosol Robotic Network (AERONET) Cimel Sun-sky radiometers at more than 15 sites distributed across the Amazon basin and adjacent Cerrado region. This network is the only long-term project (with a record including observations from more than 11 years at some locations) ever to have provided ground-based, remotely sensed column aerosol properties for this critical region. Distinctive features of the Amazonian-area aerosol are presented by partitioning the region into three aerosol regimes: southern Amazonian forest, Cerrado, and northern Amazonian forest. The monitoring sites generally include measurements from the interval 1999-2006, but some sites have measurement records that date back to the initial days of the AERONET program in 1993. Seasonal time series of aerosol optical depth (AOD), Ångström exponent, and columnar-averaged microphysical properties of the aerosol derived from sky radiance inversion techniques (single-scattering albedo, volume size distribution, fine-mode fraction of AOD, etc.) are described and contrasted for the defined regions. During the wet season, occurrences of mineral dust penetrating deep into the interior were observed.
Abstract:
The studied sector of the central Ribeira Fold Belt (SE Brazil) comprises metatexites, diatexites, charnockites and blastomylonites. This study integrates petrological and thermochronological data in order to constrain the thermotectonic and geodynamic evolution of this Neoproterozoic-Ordovician mobile belt during Western Gondwana amalgamation. New data indicate that after an earlier collision stage at ~610 Ma (zircon U-Pb age), peak metamorphism and lower-crust partial melting, coeval with the main regional high-grade D1 thrust deformation, occurred at 572-562 Ma (zircon U-Pb ages). The overall average cooling rate was low (<5 °C/Ma) from 750 to 250 °C (at ~455 Ma; biotite-WR Rb-Sr age), but disparate cooling paths indicate differential uplift between distinct lithotypes: (a) metatexites and blastomylonites show an overall stable 3-5 °C/Ma cooling rate; (b) charnockites and associated rocks remained at T > 650 °C during sub-horizontal D2 shearing until ~510-470 Ma (garnet-WR Sm-Nd ages) (1-2 °C/Ma), being then rapidly exhumed/cooled (8-30 °C/Ma) during post-orogenic D3 deformation, with late granite emplacement at ~490 Ma (zircon U-Pb age). Cooling rates based on garnet-biotite Fe-Mg diffusion are broadly consistent with the geochronological cooling rates: (a) metatexites cooled faster at high temperatures (6 °C/Ma) and more slowly at low temperatures (0.1 °C/Ma), with cooling rates decreasing with time; (b) charnockites show low cooling rates (2 °C/Ma) near metamorphic peak conditions and high cooling rates (120 °C/Ma) at lower temperatures, with cooling rates increasing during retrogression. The charnockite thermal evolution and the extensive production of granitoid melts in the area imply that high geothermal gradients were sustained for a long period of time (50-90 Ma). This thermal anomaly most likely reflects upwelling of asthenospheric mantle and magma underplating coupled with long-term generation of high-HPE (heat-producing element) granitoids. These factors must have sustained elevated crustal geotherms for ~100 Ma, promoting widespread charnockite generation at middle to lower crustal levels. (C) 2010 Elsevier B.V. All rights reserved.
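As a rough consistency check using only the numbers quoted above, with ~565 Ma taken as the midpoint of the 572-562 Ma peak-metamorphism interval, the average post-peak cooling rate is approximately

\[
\frac{\Delta T}{\Delta t} \approx \frac{(750 - 250)\ ^{\circ}\mathrm{C}}{(565 - 455)\ \mathrm{Ma}} \approx 4.5\ ^{\circ}\mathrm{C/Ma},
\]

in line with the stated overall rate of <5 °C/Ma.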
Abstract:
Gradual changes in world development have brought energy issues back into high profile. An ongoing challenge for countries around the world is to balance development gains against their effects on the environment. Energy management is the key factor of any sustainable development program. All aspects of development in agriculture, power generation, social welfare and industry in Iran are crucially related to energy and its revenue. Forecasting end-use natural gas consumption is an important factor for efficient system operation and a basis for planning decisions. In this thesis, particle swarm optimization (PSO) is used to forecast long-run natural gas consumption in Iran. Gas consumption data in Iran for the previous 34 years are used to predict the consumption for the coming years. Four linear and nonlinear models are proposed, and six factors, namely Gross Domestic Product (GDP), population, National Income (NI), temperature, Consumer Price Index (CPI) and yearly Natural Gas (NG) demand, are investigated.
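A compact sketch of how PSO can fit a linear demand model of this kind; the synthetic 34-year history, factor weights, swarm size and PSO constants are illustrative assumptions, not the thesis's models or values.

```python
# Particle swarm optimization fitting the coefficients of a linear demand model
# y_hat = X @ w by minimizing the sum of squared forecast errors (synthetic data).
import numpy as np

rng = np.random.default_rng(0)

# Toy history: 34 years of six normalized factors (GDP, population, NI,
# temperature, CPI, lagged NG demand) and observed gas consumption.
X = rng.random((34, 6))
y = X @ np.array([2.0, 1.5, 0.8, -0.5, 0.6, 1.2]) + rng.normal(0, 0.1, 34)

def sse(weights):
    """Sum of squared forecast errors of the linear model."""
    return float(np.sum((y - X @ weights) ** 2))

# PSO parameters (typical textbook values).
n_particles, n_dims, n_iter = 30, 6, 200
w, c1, c2 = 0.7, 1.5, 1.5

pos = rng.uniform(-3, 3, (n_particles, n_dims))   # candidate coefficient vectors
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([sse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([sse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("Fitted coefficients:", np.round(gbest, 2))
print("SSE:", round(sse(gbest), 3))
```

The same loop can fit a nonlinear model by swapping the objective function, which is why PSO is attractive when several competing linear and nonlinear demand formulations are being screened.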