955 results for Chlorophyll a, standard deviation
Abstract:
A relatively simple, selective, precise and accurate high-performance liquid chromatography (HPLC) method, based on a reaction of phenylisothiocyanate (PITC) with glucosamine (GL) in alkaline media, was developed and validated to determine glucosamine hydrochloride permeating through human skin in vitro. It is usually problematic to develop an accurate assay for chemicals traversing skin because the excellent barrier properties of the tissue ensure that only low amounts of the material pass through the membrane, and skin components may leach out of the tissue to interfere with the analysis. In addition, in the case of glucosamine hydrochloride, chemical instability adds further complexity to assay development. The assay, utilising the PITC-GL reaction, was refined by optimizing the reaction temperature, reaction time and PITC concentration. The reaction produces a phenylthiocarbamyl-glucosamine (PTC-GL) adduct which was separated on a reverse-phase (RP) column packed with 5 μm ODS (C-18) Hypersil particles using a diode array detector (DAD) at 245 nm. The mobile phase was methanol-water-glacial acetic acid (10:89.96:0.04 v/v/v, pH 3.5) delivered to the column at 1 ml min⁻¹ and the column temperature was maintained at 30 °C. Using a saturated aqueous solution of glucosamine hydrochloride, in vitro permeation studies were performed at 32 ± 1 °C over 48 h using human epidermal membranes prepared by a heat-separation method and mounted in Franz-type diffusion cells with a diffusional area of 2.15 ± 0.1 cm². The optimum derivatisation conditions for reaction temperature, reaction time and PITC concentration were found to be 80 °C, 30 min and 1% v/v, respectively. PTC-Gal and GL adducts eluted at 8.9 and 9.7 min, respectively. The detector response was found to be linear in the concentration range 0-1000 μg ml⁻¹. The assay was robust, with intra- and inter-day precisions (expressed as a percentage relative standard deviation, %R.S.D.) < 12. Intra- and inter-day accuracy (expressed as a percentage relative error, %RE) was ≤ -5.60 and ≤ -8.00, respectively. Using this assay, it was found that GL-HCl permeates through human skin with a flux of 1.497 ± 0.42 μg cm⁻² h⁻¹, a permeability coefficient of (5.66 ± 1.6) × 10⁻⁶ cm h⁻¹ and a lag time of 10.9 ± 4.6 h.
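The flux, permeability coefficient and lag time quoted above are the standard outputs of a Franz-cell permeation experiment. A minimal sketch of that analysis, assuming hypothetical cumulative-permeation data and the usual steady-state treatment (flux from the slope of cumulative amount versus time, lag time from the x-intercept, permeability coefficient as flux divided by an assumed donor concentration):

```python
# Minimal sketch (not from the paper): how steady-state flux, permeability
# coefficient and lag time are typically derived from Franz-cell data.
# The cumulative-amount values and donor concentration below are hypothetical.
import numpy as np

t = np.array([12, 18, 24, 30, 36, 42, 48], dtype=float)   # sampling times, h
q = np.array([1.5, 10.0, 19.5, 28.0, 37.5, 46.0, 55.0])   # cumulative amount, ug/cm^2
c_donor = 264_000.0                                        # assumed donor concentration, ug/cm^3

# In the steady-state region, cumulative amount vs time is linear: the slope is
# the flux J and the x-intercept is the lag time.
slope, intercept = np.polyfit(t, q, 1)
flux = slope                      # ug cm^-2 h^-1
lag_time = -intercept / slope     # h
kp = flux / c_donor               # permeability coefficient, cm h^-1

print(f"flux = {flux:.2f} ug/cm^2/h, lag = {lag_time:.1f} h, kp = {kp:.2e} cm/h")
```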
Abstract:
The usefulness of motor subtypes of delirium is unclear due to inconsistency in subtyping methods and a lack of validation with objective measures of activity. The activity of 40 patients was measured over 24 h with a discrete accelerometer-based activity monitor. The continuous wavelet transform (CWT) with various mother wavelets was applied to accelerometry data from three randomly selected patients with DSM-IV delirium who were readily divided into hyperactive, hypoactive, and mixed motor subtypes. A classification tree used the periods of overall movement, as measured by the discrete accelerometer-based monitor, as the determining factors by which to classify these delirious patients. The data used to create the classification tree were based upon the minimum, maximum, standard deviation, and number of coefficient values generated over a range of scales by the CWT. The classification tree was subsequently used to define the remaining motoric subtypes. The use of a classification system shows how delirium subtypes can be categorized in relation to overall motoric behavior. The classification system was also implemented to successfully define other patient motoric subtypes. Motor subtypes of delirium defined by observed ward behavior differ in electronically measured activity levels.
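As an illustration only, the pipeline sketched below (synthetic activity traces, an arbitrary threshold for the "number of coefficients" feature, and standard library defaults are all assumptions, not the study's settings) extracts CWT summary statistics from an accelerometry signal and feeds them to a classification tree:

```python
# Illustrative sketch only: CWT summary statistics (min, max, std, count) as
# features for a classification tree, mirroring the features named in the
# abstract. All data here are synthetic.
import numpy as np
import pywt
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def cwt_features(signal, scales=np.arange(1, 33), wavelet="morl"):
    """Summary statistics of CWT coefficients over a range of scales."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet)
    c = np.abs(coeffs)
    # "Number of coefficient values" is read here as the count of large
    # coefficients; the threshold is an arbitrary choice for illustration.
    return np.array([c.min(), c.max(), c.std(), (c > c.mean()).sum()])

# Synthetic activity traces: hyperactive (0), hypoactive (1), mixed (2)
def synth(level, n=2048):
    return level * np.abs(rng.standard_normal(n)) + 0.1 * rng.standard_normal(n)

X = np.array([cwt_features(synth(lvl)) for lvl in (2.0, 0.2, 1.0) for _ in range(10)])
y = np.repeat([0, 1, 2], 10)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(tree.predict([cwt_features(synth(2.0))]))   # expect the hyperactive-like class 0
```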
Marker placement to describe the wrist movements during activities of daily living in cyclical tasks
Abstract:
Objective. To describe the wrist kinematics during movement through the free range of motion and activities of daily living using a cyclical task. Design. The wrist angles were initially calculated in a calibration trial and then in two selected activities of daily living (jar opening and carton pouring). Background. Existing studies which describe wrist movement do not address the specific application of daily activities. Moreover, the data presented from subject to subject may differ simply because of the non-cyclical nature of upper limb movements. Methods. The coordinates of external markers attached to bone references on the forearm and the dorsal side of the hand were obtained using an optical motion capture system. The wrist angles were derived from free motion trials and successively calculated in four healthy subjects for two specific cyclical daily activities (opening a jar and pouring from a carton). Results. The free motion trials highlighted the interaction between the wrist angles. Both the jar-opening and the carton-pouring activity showed a repetitive pattern for the three angles within the cycle length. In the jar-opening task, the standard deviation for the whole population was 10.8° for flexion-extension, 5.3° for radial-ulnar deviation and 10.4° for pronation-supination. In the carton-pouring task, the standard deviation for the whole population was 16.0° for flexion-extension, 3.4° for radial-ulnar deviation and 10.7° for pronation-supination. Conclusion. Wrist kinematics in healthy subjects can be successfully described by the rotations about the axes of marker-defined coordinate systems during free range of motion and daily activities using cyclical tasks.
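A minimal sketch of how wrist angles can be recovered from marker-defined coordinate systems; the marker positions, frame construction and Euler sequence below are generic assumptions for illustration, not the protocol of this study:

```python
# Hedged sketch: build orthonormal segment frames from three markers each and
# express the relative wrist rotation as three Euler angles. The mapping of the
# ZXY sequence onto flexion-extension / radial-ulnar deviation /
# pronation-supination is an assumption, not the paper's convention.
import numpy as np
from scipy.spatial.transform import Rotation as R

def segment_frame(p_origin, p_axis, p_plane):
    """Right-handed frame: x along p_axis-p_origin, z normal to the marker plane."""
    x = p_axis - p_origin
    x /= np.linalg.norm(x)
    z = np.cross(x, p_plane - p_origin)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)
    return np.column_stack([x, y, z])     # columns are the frame axes

# Hypothetical marker positions (metres) for one motion-capture frame
forearm = segment_frame(np.array([0.0, 0.0, 0.0]),
                        np.array([0.25, 0.0, 0.0]),
                        np.array([0.0, 0.03, 0.0]))
hand = segment_frame(np.array([0.26, 0.0, 0.0]),
                     np.array([0.34, 0.02, 0.01]),
                     np.array([0.26, 0.03, 0.0]))

# Rotation of the hand frame expressed relative to the forearm frame
rel = forearm.T @ hand
fe, rud, ps = R.from_matrix(rel).as_euler("ZXY", degrees=True)
print(f"flexion-extension {fe:.1f} deg, radial-ulnar {rud:.1f} deg, pron-sup {ps:.1f} deg")
```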
Abstract:
A poor representation of cloud structure in a general circulation model (GCM) is widely recognised as a potential source of error in the radiation budget. Here, we develop a new way of representing both horizontal and vertical cloud structure in a radiation scheme. This combines the ‘Tripleclouds’ parametrization, which introduces inhomogeneity by using two cloudy regions in each layer as opposed to one, each with different water content values, with ‘exponential-random’ overlap, in which clouds in adjacent layers are not overlapped maximally, but according to a vertical decorrelation scale. This paper, Part I of two, aims to parametrize the two effects such that they can be used in a GCM. To achieve this, we first review a number of studies for a globally applicable value of fractional standard deviation of water content for use in Tripleclouds. We obtain a value of 0.75 ± 0.18 from a variety of different types of observations, with no apparent dependence on cloud type or gridbox size. Then, through a second short review, we create a parametrization of decorrelation scale for use in exponential-random overlap, which varies the scale linearly with latitude from 2.9 km at the Equator to 0.4 km at the poles. When applied to radar data, both components are found to have radiative impacts capable of offsetting biases caused by cloud misrepresentation. Part II of this paper implements Tripleclouds and exponential-random overlap into a radiation code and examines both their individual and combined impacts on the global radiation budget using re-analysis data.
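As a minimal sketch of the latitude dependence quoted above (a linear interpolation between 2.9 km at the Equator and 0.4 km at the poles, read directly from the abstract; everything else is illustrative):

```python
# Sketch of the latitude-dependent overlap decorrelation scale described above:
# 2.9 km at the Equator decreasing linearly to 0.4 km at the poles.
def decorrelation_scale_km(lat_deg: float) -> float:
    """Cloud-overlap decorrelation scale (km) as a linear function of |latitude|."""
    frac = abs(lat_deg) / 90.0
    return 2.9 + (0.4 - 2.9) * frac

for lat in (0, 30, 60, 90):
    print(lat, round(decorrelation_scale_km(lat), 2))   # 2.9, 2.07, 1.23, 0.4
```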
Abstract:
Normally wind measurements from Doppler radars rely on the presence of rain. During fine weather, insects become a potential radar target for wind measurement. However, it is difficult to separate ground clutter and insect echoes when spectral or polarimetric methods are not available. Archived reflectivity and velocity data from repeated scans provide alternative methods. The probability of detection (POD) method, which maps areas with a persistent signal as ground clutter, is ineffective when most scans also contain persistent insect echoes. We developed a clutter detection method which maps the standard deviation of velocity (SDV) over a large number of scans, and can differentiate insects and ground clutter close to the radar. Beyond the range of persistent insect echoes, the POD method more thoroughly removes ground clutter. A new, pseudo-probability clutter map was created by combining the POD and SDV maps. The new map optimised ground clutter detection without removing insect echoes.
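A schematic illustration of the combined approach (synthetic data; the detection threshold, the SDV scaling and the blending rule below are assumptions rather than the paper's algorithm): compute per-gate POD and SDV over an archive of scans and merge them into a pseudo-probability clutter map.

```python
# Illustrative sketch: ground clutter shows up as a persistent echo (high POD)
# with near-zero, steady velocity (low SDV); insects are persistent but have
# larger, variable velocities. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_scans, n_gates = 200, 500

reflectivity = rng.normal(10.0, 5.0, size=(n_scans, n_gates))   # dBZ, synthetic
velocity = rng.normal(5.0, 3.0, size=(n_scans, n_gates))        # m/s, synthetic
velocity[:, :50] = rng.normal(0.0, 0.2, size=(n_scans, 50))     # near-radar clutter gates

detected = reflectivity > 0.0                 # signal present in a given scan
pod = detected.mean(axis=0)                   # fraction of scans with signal, per gate
sdv = np.where(detected, velocity, np.nan)
sdv = np.nanstd(sdv, axis=0)                  # standard deviation of velocity per gate

# Blend into a pseudo-probability of clutter (assumed combination rule)
clutter_score = pod * np.exp(-sdv / 1.0)
clutter_map = clutter_score > 0.5
print(clutter_map[:60].sum(), clutter_map[60:].sum())   # mostly flags the first 50 gates
```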
Abstract:
The Cambridge Tropospheric Trajectory model of Chemistry and Transport (CiTTyCAT), a Lagrangian chemistry model, has been evaluated using atmospheric chemical measurements collected during the East Atlantic Summer Experiment 1996 (EASE '96). This field campaign was part of the UK Natural Environment Research Council's (NERC) Atmospheric Chemistry Studies in the Oceanic Environment (ACSOE) programme, conducted at Mace Head, Republic of Ireland, during July and August 1996. The model includes a description of gas-phase tropospheric chemistry, and simple parameterisations for surface deposition, mixing from the free troposphere and emissions. The model generally compares well with the measurements and is used to study the production and loss of O3 under a variety of conditions. The mean difference between the hourly O3 concentrations calculated by the model and those measured is 0.6 ppbv with a standard deviation of 8.7 ppbv. Three specific air-flow regimes were identified during the campaign – westerly, anticyclonic (easterly) and south westerly. The westerly flow is typical of background conditions for Mace Head. However, on some occasions there was evidence of long-range transport of pollutants from North America. In periods of anticyclonic flow, air parcels had collected emissions of NOx and VOCs immediately before arriving at Mace Head, leading to O3 production. The level of calculated O3 depends critically on the precise details of the trajectory, and hence on the emissions into the air parcel. In several periods of south westerly flow, low concentrations of O3 were measured which were consistent with deposition and photochemical destruction inside the tropical marine boundary layer.
Abstract:
A key strategy to improve the skill of quantitative predictions of precipitation, as well as of hazardous weather such as severe thunderstorms and flash floods, is to exploit observations of convective activity (e.g. from radar). In this paper, a convection-permitting ensemble prediction system (EPS) aimed at addressing the problems of forecasting localized weather events with relatively short predictability time scales, based on a 1.5 km grid-length version of the Met Office Unified Model, is presented. Particular attention is given to the impact of using predicted observations of radar-derived precipitation intensity in the ensemble transform Kalman filter (ETKF) used within the EPS. Our initial results, based on a 24-member ensemble of forecasts for two summer case studies, show that the convective-scale EPS produces fairly reliable forecasts of temperature, horizontal winds and relative humidity at 1 h lead time, as is evident from inspection of rank histograms. On the other hand, the rank histograms also suggest that the EPS generates too much spread for forecasts of (i) surface pressure and (ii) surface precipitation intensity. This may indicate that, for (i), the surface pressure observation error standard deviation used to generate the rank histograms is too large, and that, for (ii), the excess spread may be the result of non-Gaussian precipitation observation errors. However, further investigations are needed to better understand these findings. Finally, the inclusion of predicted observations of precipitation from radar in the 24-member EPS considered in this paper does not seem to improve the 1 h lead-time forecast skill.
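For readers unfamiliar with the diagnostic, a minimal sketch of how a rank histogram is built (synthetic forecasts and observations; an over-dispersive ensemble is used here so the dome shape associated with too much spread appears):

```python
# Minimal rank-histogram (Talagrand diagram) sketch: for each case, tally the
# observation's rank among the sorted ensemble members. A flat histogram
# suggests a reliable ensemble; a dome shape indicates too much spread.
import numpy as np

rng = np.random.default_rng(2)
n_cases, n_members = 1000, 24

forecasts = rng.normal(0.0, 1.5, size=(n_cases, n_members))  # over-dispersive ensemble
observations = rng.normal(0.0, 1.0, size=n_cases)

# Rank of each observation among its ensemble (0 .. n_members)
ranks = (forecasts < observations[:, None]).sum(axis=1)
hist = np.bincount(ranks, minlength=n_members + 1)

print(hist)   # dome-shaped: observations fall too often in the middle ranks
```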
Abstract:
This paper assesses the relationship between the amount of climate forcing – as indexed by global mean temperature change – and the hydrological response in a sample of UK catchments. It constructs climate scenarios representing different changes in global mean temperature from an ensemble of 21 climate models assessed in the IPCC AR4. The results show a considerable range in impact between the 21 climate models; for example, the change in summer runoff at a 2 °C increase in global mean temperature varies between -40% and +20%. There is evidence of clustering in the results, particularly in projected changes in summer runoff and indicators of low flows, implying that the ensemble mean is not an appropriate generalised indicator of impact, and that the standard deviation of responses does not adequately characterise uncertainty. The uncertainty in hydrological impact is therefore best characterised by considering the shape of the distribution of responses across multiple climate scenarios. For some climate model patterns, and some catchments, there is also evidence that linear climate change forcings produce non-linear hydrological impacts. For most variables and catchments, the effects of climate change are apparent above the effects of natural multi-decadal variability once the increase in global mean temperature exceeds 1 °C, but there are differences between catchments. Based on the scenarios represented in the ensemble, the effect of climate change in northern upland catchments will be seen soonest in indicators of high flows, whereas in southern catchments effects will be apparent soonest in measures of summer and low flows. The uncertainty in response between different climate model patterns is considerably greater than the range due to uncertainty in hydrological model parameterisation.
Abstract:
Urban boundary layers (UBLs) can be highly complex due to the heterogeneous roughness and heating of the surface, particularly at night. Due to a general lack of observations, it is not clear whether canonical models of boundary layer mixing are appropriate in modelling air quality in urban areas. This paper reports Doppler lidar observations of turbulence profiles in the centre of London, UK, as part of the second REPARTEE campaign in autumn 2007. Lidar-measured standard deviation of vertical velocity averaged over 30 min intervals generally compared well with in situ sonic anemometer measurements at 190 m on the BT telecommunications tower. During calm, nocturnal periods, the lidar underestimated turbulent mixing, due mainly to its limited sampling rate. Mixing height derived from the turbulence, and aerosol layer height from the backscatter profiles, showed similar diurnal cycles ranging from c. 300 to 800 m, increasing to c. 200 to 850 m under clear skies. The aerosol layer height was sometimes significantly different from the mixing height, particularly at night under clear skies. For convective and neutral cases, the scaled turbulence profiles resembled canonical results; this was less clear for the stable case. Lidar observations clearly showed enhanced mixing beneath stratocumulus clouds, reaching down on occasion to approximately half the daytime boundary layer depth. On one occasion the nocturnal turbulent structure was consistent with a nocturnal jet, suggesting a stable layer. Given the general agreement between observations and canonical turbulence profiles, mixing timescales were calculated for passive scalars released at street level to reach the BT Tower using existing models of turbulent mixing. It was estimated to take c. 10 min to diffuse up to 190 m, rising to between 20 and 50 min at night, depending on stability. Determination of mixing timescales is important when comparing with physico-chemical processes acting on pollutant species measured simultaneously at both the ground and the BT Tower during the campaign. From the 3-week autumnal dataset there is evidence for occasional stable layers in central London, effectively decoupling surface emissions from air aloft.
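For illustration only (synthetic 1 Hz data; the 30 min interval follows the abstract, everything else is assumed), the half-hourly standard deviation of vertical velocity used in such lidar/sonic comparisons can be computed as:

```python
# Sketch: standard deviation of vertical velocity per 30 min block from a
# synthetic one-day, 1 Hz vertical-velocity record.
import numpy as np

rng = np.random.default_rng(3)
fs = 1                                            # sampling rate, Hz
w = rng.normal(0.0, 0.6, size=24 * 3600 * fs)     # synthetic vertical velocity (m/s)

block = 30 * 60 * fs                              # samples per 30 min interval
sigma_w = w[: w.size // block * block].reshape(-1, block).std(axis=1)
print(sigma_w.shape, round(sigma_w.mean(), 2))    # 48 half-hourly values, ~0.6 m/s
```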
Abstract:
Volatility, or the variability of the underlying asset, is one of the key fundamental components of property derivative pricing and of the application of real option models in development analysis. There has been relatively little work on volatility in terms of its application to property derivatives and real options analysis. Most research on volatility stems from investment performance (Nathakumaran & Newell 1995; Brown & Matysiak 2000; Booth & Matysiak 2001). Historic standard deviation is often used as a proxy for volatility, and there has been a reliance on indices, which are subject to valuation smoothing effects. Transaction prices are considered to be more volatile than the traditional standard deviations of appraisal-based indices. This could lead, arguably, to inefficiencies and mis-pricing, particularly if it is also accepted that changes evolve randomly over time and that future volatility, not an ex-post measure, is the key (Sing 1998). If history does not repeat, or provides an unreliable measure, then estimating model-based (implied) volatility is an alternative approach (Patel & Sing 2000). This paper is the first of two that employ alternative approaches to calculating and capturing volatility in UK real estate for the purposes of applying the measure to derivative pricing and real option models. It draws on a uniquely constructed IPD/Gerald Eve transactions database containing over 21,000 properties over the period 1983-2005. In this first paper the magnitude of historic volatility associated with asset returns is examined by sector and geographic spread. In the subsequent paper the focus will be upon model-based (implied) volatility.
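As a reminder of the proxy discussed above, a minimal sketch of how a historic-standard-deviation volatility estimate is usually computed from a return series (synthetic monthly returns; the square-root-of-time annualisation is the conventional assumption, not something taken from the paper):

```python
# Sketch: historic volatility as the annualised standard deviation of a
# synthetic monthly total-return series.
import numpy as np

rng = np.random.default_rng(4)
monthly_returns = rng.normal(0.007, 0.02, size=23 * 12)   # 1983-2005 length, synthetic

monthly_vol = monthly_returns.std(ddof=1)
annualised_vol = monthly_vol * np.sqrt(12)                 # sqrt-of-time scaling
print(f"annualised historic volatility ~ {annualised_vol:.1%}")
```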
Abstract:
The number of properties to hold to achieve a well-diversified real estate property portfolio presents a puzzle, as the estimated number is considerably higher than that seen in actual portfolios. However, Statman (1987) argues that investors should only increase the number of holdings as long as the marginal benefits of diversification exceed their costs. Using this idea we find that the marginal benefits of diversification in real estate portfolios are so small that investors are probably rational in holding small portfolios, at least as far as the reduction in standard deviation is concerned.
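A worked sketch of the underlying diversification arithmetic (equal weights and a single common pairwise correlation; the numbers are illustrative, not the paper's estimates): the marginal reduction in portfolio standard deviation shrinks rapidly as holdings are added.

```python
# Sketch: equal-weighted portfolio standard deviation versus number of holdings,
# var = sigma^2 / n + (1 - 1/n) * rho * sigma^2. Values of sigma and rho are assumed.
import numpy as np

sigma, rho = 0.10, 0.30     # assumed single-asset risk and average pairwise correlation

def portfolio_sd(n, sigma=sigma, rho=rho):
    """Standard deviation of an equal-weighted portfolio of n like assets."""
    return np.sqrt(sigma**2 / n + (1 - 1 / n) * rho * sigma**2)

for n in (1, 5, 10, 20, 50, 100):
    print(n, round(portfolio_sd(n), 4))
# The drop from 20 to 100 holdings is tiny compared with the cost of acquiring
# and managing 80 more properties -- the Statman-style trade-off in the abstract.
```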
Abstract:
The argument for the inclusion of real estate in the mixed-asset portfolio has concentrated on examining its effect in reducing portfolio risk - the time-series standard deviation (TSSD) - mainly using ex-post time series data. However, the past as such is not really relevant to long-term institutional investors, such as insurance companies and pension funds, who are more concerned with the terminal wealth (TW) of their investments and the variability of this wealth, the terminal wealth standard deviation (TWSD), since it is from the TW of their investment portfolio that policyholders and pensioners will derive their benefits. These kinds of investors, with particular holding-period requirements, will be less concerned about the within-period volatility of their portfolios and more about the possibility that their portfolio returns will fail to finance their liabilities. This variability in TW will be closely linked to the risk of a shortfall in the quantity of assets needed to match the institution’s liabilities. The question remains, therefore, whether real estate can enhance the TW of the mixed-asset portfolio and/or reduce the variability of the TW. This paper uses annual data from the United Kingdom (UK) for the period 1972-2001 to test whether real estate is an asset class that not only reduces ex-post portfolio risk but also enhances portfolio TW and/or reduces the variability of TW.
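A small simulation sketch of the TSSD/TWSD distinction drawn above (synthetic annual returns and an assumed 30-year holding period; not the paper's UK data):

```python
# Sketch: contrast the time-series standard deviation (TSSD) of annual returns
# with the terminal wealth standard deviation (TWSD) computed across simulated
# holding periods. All inputs are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_paths, n_years = 10_000, 30
annual_returns = rng.normal(0.08, 0.12, size=(n_paths, n_years))   # synthetic

tssd = annual_returns.std(ddof=1, axis=1).mean()           # average within-period volatility
terminal_wealth = np.prod(1.0 + annual_returns, axis=1)     # growth of 1 unit over 30 years
twsd = terminal_wealth.std(ddof=1)

print(f"mean TSSD {tssd:.3f}, mean TW {terminal_wealth.mean():.2f}, TWSD {twsd:.2f}")
```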
Abstract:
The statistics of cloud-base vertical velocity simulated by the non-hydrostatic mesoscale model AROME are compared with Cloudnet remote sensing observations at two locations: the ARM SGP site in Central Oklahoma, and the DWD observatory at Lindenberg, Germany. The results show that, as expected, AROME significantly underestimates the variability of vertical velocity at cloud-base compared to observations at their nominal resolution; the standard deviation of vertical velocity in the model is typically 4-6 times smaller than observed, and even more during the winter at Lindenberg. Averaging the observations to the horizontal scale corresponding to the physical grid spacing of AROME (2.5 km) explains 70-80% of the underestimation by the model. Further averaging of the observations in the horizontal is required to match the model values for the standard deviation in vertical velocity. This indicates an effective horizontal resolution for the AROME model of at least 4 times the physically-defined grid spacing. The results illustrate the need for special treatment of sub-grid scale variability of vertical velocities in kilometer-scale atmospheric models, if processes such as aerosol-cloud interactions are to be included in the future.
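A minimal sketch of the averaging argument (synthetic high-resolution vertical velocities; the 2.5 km spacing follows the abstract, the rest is illustrative): block-averaging fine-scale cloud-base w to the model grid scale strongly reduces its standard deviation.

```python
# Sketch: average a synthetic high-resolution vertical-velocity record to
# coarser horizontal scales and compare the resulting standard deviations.
import numpy as np

rng = np.random.default_rng(6)
dx_obs = 0.1                                   # km, nominal observation resolution (assumed)
w_obs = rng.normal(0.0, 0.5, size=100_000)     # synthetic cloud-base w (m/s)

def block_average(x, factor):
    """Average consecutive samples in blocks of `factor`."""
    n = x.size // factor * factor
    return x[:n].reshape(-1, factor).mean(axis=1)

for dx_model in (2.5, 10.0):                   # physical and ~effective grid spacing, km
    w_avg = block_average(w_obs, int(dx_model / dx_obs))
    print(f"dx = {dx_model:4.1f} km  std(w) = {w_avg.std():.3f} m/s "
          f"(ratio {w_obs.std() / w_avg.std():.1f})")
```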
Abstract:
We study the global atmospheric budgets of mass, moisture, energy and angular momentum in the latest reanalysis from the European Centre for Medium-Range Weather Forecasts (ECMWF), ERA-Interim, for the period 1989–2008 and compare with ERA-40. Most of the measures we use indicate that the ERA-Interim reanalysis is superior in quality to ERA-40. In ERA-Interim the standard deviation of the monthly mean global dry mass of 0.7 kg m−2 (0.007%) is slightly worse than in ERA-40, and long time-scale variations in dry mass originate predominantly in the surface pressure field. The divergent winds are improved in ERA-Interim: the global standard deviation of the time-averaged dry mass budget residual is 10 kg m−2 day−1 and the quality of the cross-equatorial mass fluxes is improved. The temporal variations in the global evaporation minus precipitation (E − P) are too large but the global moisture budget residual is 0.003 kg m−2 day−1 with a spatial standard deviation of 0.3 kg m−2 day−1. Both the E − P over ocean and P − E over land are about 15% larger than the 1.1 Tg s−1 transport of water from ocean to land. The top of atmosphere (TOA) net energy losses are improved, with a value of 1 W m−2, but the meridional gradient of the TOA net energy flux is smaller than that from the Clouds and the Earth's Radiant Energy System (CERES) data. At the surface the global energy losses are worse, with a value of 7 W m−2. Over land, however, the energy loss is only 0.5 W m−2. The downwelling thermal radiation at the surface in ERA-Interim of 341 W m−2 is towards the higher end of previous estimates. The global mass-adjusted energy budget residual is 8 W m−2 with a spatial standard deviation of 11 W m−2, and the mass-adjusted atmospheric energy transport from low to high latitudes (the sum for the two hemispheres) is 9.5 PW.
Abstract:
We bridge the properties of the regular triangular, square, and hexagonal honeycomb Voronoi tessellations of the plane to the Poisson-Voronoi case, thus analyzing in a common framework symmetry-breaking processes and the approach to uniform random distributions of tessellation-generating points. We resort to ensemble simulations of tessellations generated by points whose regular positions are perturbed through a Gaussian noise, whose variance is given by the parameter α² times the square of the inverse of the average density of points. We analyze the number of sides, the area, and the perimeter of the Voronoi cells. For all values α > 0, hexagons constitute the most common class of cells, and 2-parameter gamma distributions provide an efficient description of the statistical properties of the analyzed geometrical characteristics. The introduction of noise destroys the triangular and square tessellations, which are structurally unstable, as their topological properties are discontinuous in α = 0. On the contrary, the honeycomb hexagonal tessellation is topologically stable and, experimentally, all Voronoi cells are hexagonal for small but finite noise with α < 0.12. For all tessellations and for small values of α, we observe a linear dependence on α of the ensemble mean of the standard deviation of the area and perimeter of the cells. Already for a moderate amount of Gaussian noise (α > 0.5), memory of the specific initial unperturbed state is lost, because the statistical properties of the three perturbed regular tessellations are indistinguishable. When α > 2, results converge to those of Poisson-Voronoi tessellations. The geometrical properties of n-sided cells change with α until the Poisson-Voronoi limit is reached for α > 2; in this limit the Desch law for perimeters is shown not to be valid and a square-root dependence on n is established. This law allows for an easy link to the Lewis law for areas and agrees with exact asymptotic results. Finally, for α > 1, the ensemble mean of the cell area and perimeter restricted to the hexagonal cells agrees remarkably well with the full ensemble mean; this reinforces the idea that hexagons, beyond their ubiquitous numerical prominence, can be interpreted as typical polygons in 2D Voronoi tessellations.
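A sketch of the kind of ensemble experiment described above, for a single realisation on a finite domain with unbounded boundary cells discarded (the noise definition below follows the abstract's wording literally; the domain size and the value of α are arbitrary choices):

```python
# Illustrative sketch: perturb a triangular lattice of generators (whose Voronoi
# cells form the honeycomb tessellation) with Gaussian noise, then collect side
# numbers and areas of the interior Voronoi cells.
import numpy as np
from scipy.spatial import Voronoi

def honeycomb_points(nx=20, ny=20, a=1.0):
    """Triangular lattice of generators; their Voronoi cells are hexagons."""
    pts = []
    for j in range(ny):
        for i in range(nx):
            pts.append((a * (i + 0.5 * (j % 2)), a * j * np.sqrt(3) / 2))
    return np.array(pts)

def polygon_area(xy):
    """Shoelace formula for a simple polygon given as an (n, 2) vertex array."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

rng = np.random.default_rng(7)
alpha = 0.3
pts = honeycomb_points()
density = pts.shape[0] / (20.0 * 20.0 * np.sqrt(3) / 2)   # approximate points per unit area
# Abstract's definition taken literally: noise variance = alpha^2 * (1/density)^2
pts_noisy = pts + rng.normal(0.0, alpha / density, size=pts.shape)

vor = Voronoi(pts_noisy)
sides, areas = [], []
for region_idx in vor.point_region:
    region = vor.regions[region_idx]
    if -1 in region or len(region) == 0:      # skip unbounded boundary cells
        continue
    sides.append(len(region))
    areas.append(polygon_area(vor.vertices[region]))

print(f"mean number of sides {np.mean(sides):.2f}, std of cell area {np.std(areas):.3f}")
```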