870 results for assessment data


Relevance: 30.00%

Abstract:

It is increasingly accepted that any possible climate change will not only influence mean climate but may also significantly alter climatic variability. A change in the distribution and magnitude of extreme rainfall events (associated with changing variability), such as droughts or flooding, may have a far greater impact on human and natural systems than a changing mean. This issue is of particular importance for environmentally vulnerable regions such as southern Africa. The sub-continent is considered especially vulnerable to, and ill-equipped (in terms of adaptation) for, extreme events, due to a number of factors including extensive poverty, famine, disease and political instability. Rainfall variability and the identification of rainfall extremes are functions of scale, so data of high spatial and temporal resolution are preferred to identify extreme events and accurately predict future variability. The majority of previous climate model verification studies have compared model output with observational data at monthly timescales. In this research, the ability of a state-of-the-art climate model to simulate climate at daily timescales is assessed using satellite-derived rainfall data from the Microwave Infrared Rainfall Algorithm (MIRA). This dataset covers the period from 1993 to 2002 and the whole of southern Africa at a spatial resolution of 0.1° longitude/latitude. This paper concentrates primarily on the ability of the model to simulate the spatial and temporal patterns of present-day rainfall variability over southern Africa; it is not intended to discuss possible future changes in climate, as these have been documented elsewhere. Simulations of current climate from the UK Meteorological Office Hadley Centre's climate model, in both regional and global mode, are first compared to the MIRA dataset at daily timescales. Secondly, the ability of the model to reproduce daily rainfall extremes is assessed, again by comparison with extremes from the MIRA dataset. The results suggest that the model reproduces the number and spatial distribution of rainfall extremes with some accuracy, but that mean rainfall and rainfall variability are underestimated (overestimated) over wet (dry) regions of southern Africa.
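
As an illustration of the kind of daily-timescale analysis described above, the sketch below identifies extreme rainfall days at each grid cell using a wet-day percentile threshold and compares extreme-day counts between a model field and an observational field. The function name, threshold choices, and the synthetic arrays standing in for MIRA and model output are assumptions for illustration, not the paper's actual procedure.

```python
import numpy as np

def extreme_day_counts(daily_rain, percentile=95.0, wet_threshold=1.0):
    """Count days exceeding a percentile threshold at each grid cell.

    daily_rain : array of shape (time, lat, lon) with daily totals in mm.
    The threshold is computed from wet days (>= wet_threshold mm) only,
    so that dry regions are not dominated by zeros.
    """
    wet = np.where(daily_rain >= wet_threshold, daily_rain, np.nan)
    threshold = np.nanpercentile(wet, percentile, axis=0)      # (lat, lon)
    return np.sum(daily_rain > threshold[None, :, :], axis=0)  # (lat, lon)

# Illustrative comparison on synthetic data standing in for observed and
# modelled daily rainfall on a common grid.
rng = np.random.default_rng(0)
obs   = rng.gamma(shape=0.4, scale=8.0, size=(3650, 50, 60))
model = rng.gamma(shape=0.4, scale=6.0, size=(3650, 50, 60))

obs_extremes   = extreme_day_counts(obs)
model_extremes = extreme_day_counts(model)
bias = model_extremes.mean() - obs_extremes.mean()
print(f"mean extreme-day count: obs {obs_extremes.mean():.1f}, "
      f"model {model_extremes.mean():.1f}, bias {bias:+.1f}")
```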

Relevance: 30.00%

Abstract:

We report on the consistency of water vapour line intensities in selected spectral regions between 800 and 12,000 cm−1 under atmospheric conditions, using sun-pointing Fourier transform infrared spectroscopy. Measurements were made across a number of days at both a low- and a high-altitude field site, sampling a relatively moist and a relatively dry atmosphere. Our data suggest that across most of the 800–12,000 cm−1 spectral region, water vapour line intensities in recent spectral line databases are generally consistent with what was observed. However, we find that HITRAN-2008 water vapour line intensities are systematically lower by up to 20% in the 8000–9200 cm−1 spectral interval relative to other spectral regions. This discrepancy is essentially removed when two new linelists (UCL08, a compilation of linelists and ab initio calculations, and one based on recent laboratory measurements by Oudot et al. (2010) [10] in the 8000–9200 cm−1 spectral region) are used. This strongly suggests that the H2O line strengths in the HITRAN-2008 database are indeed underestimated in this spectral region and in need of revision. The calculated global-mean clear-sky absorption of solar radiation is increased by about 0.3 W m−2 when using either the UCL08 or Oudot line parameters in the 8000–9200 cm−1 region instead of HITRAN-2008. We also find that the effect of isotopic fractionation of HDO is evident in the 2500–2900 cm−1 region in the observations.
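
The kind of linelist comparison described above can be illustrated with a small sketch that quantifies the mean relative intensity difference between two linelists inside a given wavenumber window. The arrays and the `systematic_intensity_difference` helper are hypothetical stand-ins; real HITRAN or UCL08 data, and proper matching of lines by position and quantum assignment, would be needed in practice.

```python
import numpy as np

def systematic_intensity_difference(nu, s_ref, s_new, window):
    """Mean relative difference of line intensities inside a wavenumber window.

    nu     : line positions in cm^-1
    s_ref  : intensities from the reference list (e.g. HITRAN-2008)
    s_new  : intensities from the alternative list (e.g. UCL08)
    window : (nu_min, nu_max) interval in cm^-1
    Lines are matched by index here for simplicity only.
    """
    nu, s_ref, s_new = map(np.asarray, (nu, s_ref, s_new))
    mask = (nu >= window[0]) & (nu <= window[1])
    return np.mean((s_new[mask] - s_ref[mask]) / s_ref[mask])

# Synthetic example: intensities in the 8000-9200 cm^-1 interval are 20%
# stronger in the alternative list, mimicking the reported bias.
nu = np.linspace(800.0, 12000.0, 5000)
s_ref = np.full_like(nu, 1.0e-23)
s_new = np.where((nu >= 8000) & (nu <= 9200), 1.2e-23, 1.0e-23)
print(f"relative difference, 8000-9200 cm^-1: "
      f"{systematic_intensity_difference(nu, s_ref, s_new, (8000, 9200)):+.1%}")
```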

Relevance: 30.00%

Abstract:

The Along-Track Scanning Radiometers (ATSRs) provide a long time-series of measurements suitable for the retrieval of cloud properties. This work evaluates the freely-available Global Retrieval of ATSR Cloud Parameters and Evaluation (GRAPE) dataset (version 3) created from the ATSR-2 (1995–2003) and Advanced ATSR (AATSR; 2002 onwards) records. Users are recommended to consider only retrievals flagged as high-quality, where there is good consistency between the measurements and the retrieved state (corresponding to about 60% of converged retrievals over sea, and more than 80% over land). Cloud properties are found to be generally free of any significant spurious trends relating to satellite zenith angle. Estimates of the random error on retrieved cloud properties are suggested to be generally appropriate for optically-thick clouds, and up to a factor of two too small for optically-thin cases. The correspondence between ATSR-2 and AATSR cloud properties is high, but a relative calibration difference between the sensors of order 5–10% at 660 nm and 870 nm limits the potential of the current version of the dataset for trend analysis. As ATSR-2 is thought to have the better absolute calibration, the discussion focusses on this portion of the record. Cloud-top heights from GRAPE compare well to ground-based data at four sites, particularly for shallow clouds. Clouds forming in boundary-layer inversions are typically around 1 km too high in GRAPE due to poorly-resolved inversions in the modelled temperature profiles used. Global cloud fields are compared to satellite products derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) measurements, and a climatology of liquid water content derived from satellite microwave radiometers. In all cases the main reasons for differences are linked to differing sensitivity to, and treatment of, multi-layer cloud systems. The correlation coefficient between GRAPE and the two MODIS products considered is generally high (greater than 0.7 for most cloud properties), except for liquid and ice cloud effective radius, which also show biases between the datasets. For liquid clouds, part of the difference is linked to the choice of wavelengths used in the retrieval. Total cloud cover is slightly lower in GRAPE (0.64) than in the CALIOP dataset (0.66). GRAPE underestimates liquid cloud water path relative to microwave radiometers by up to 100 g m−2 near the Equator and overestimates by around 50 g m−2 in the storm tracks. Finally, potential future improvements to the algorithm are outlined.
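
A minimal sketch of the kind of quality filtering and inter-product comparison described above is given below. The field names, the cost cut-off, and the synthetic collocations are illustrative assumptions, not the actual GRAPE quality flags or data.

```python
import numpy as np

def filtered_correlation(grape_value, other_value, converged, cost):
    """Correlate two cloud-property fields using only high-quality retrievals.

    grape_value, other_value : collocated retrievals of one cloud property
    converged                : bool array, True where the retrieval converged
    cost                     : retrieval cost; low values indicate good
                               consistency between measurements and state
    """
    good = (converged & (cost < 3.0)
            & np.isfinite(grape_value) & np.isfinite(other_value))
    return np.corrcoef(grape_value[good], other_value[good])[0, 1]

# Synthetic collocations standing in for two cloud-top pressure products.
rng = np.random.default_rng(1)
truth = rng.uniform(200.0, 900.0, 10000)
grape = truth + rng.normal(0.0, 60.0, truth.size)
other = truth + rng.normal(0.0, 60.0, truth.size)
converged = rng.random(truth.size) < 0.8
cost = rng.exponential(2.0, truth.size)
print(f"r = {filtered_correlation(grape, other, converged, cost):.2f}")
```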

Relevance: 30.00%

Abstract:

Accumulating data suggest that diets rich in flavanols and procyanidins are beneficial for human health. In this context, there has been great interest in elucidating the systemic levels and metabolic profiles at which these compounds occur in humans. While recent progress has been made, considerable differences and various disagreements still exist with regard to the mammalian metabolites of these compounds, which is largely a consequence of the limited availability of authentic standards that would allow for the directed development and validation of expedient analytical methodologies. In the present study, we developed a method for the analysis of structurally-related flavanol metabolites using a wide range of authentic standards. Applying this method in the context of a human dietary intervention study using comprehensively characterized and standardized flavanol- and procyanidin-containing cocoa, we were able to identify the structurally-related (−)-epicatechin metabolites (SREM) postprandially extant in the systemic circulation of humans. Our results demonstrate that (−)-epicatechin-3′-β-D-glucuronide, (−)-epicatechin-3′-sulfate, and a 3′-O-methyl-(−)-epicatechin-5/7-sulfate are the predominant SREM in humans, and further confirm the relevance of stereochemical configuration in the context of flavanol metabolism. In addition, we identified plausible causes for the previously reported discrepancies regarding flavanol metabolism, consisting to a significant extent of inter-laboratory differences in sample preparation (enzymatic treatment and sample conditioning for HPLC analysis) and detection systems. These findings may thus also aid in establishing consensus on this topic.

Relevance: 30.00%

Abstract:

The Functional Rating Scale Taskforce for pre-Huntington Disease (FuRST-pHD) is a multinational, multidisciplinary initiative with the goal of developing a data-driven, comprehensive, psychometrically sound rating scale for assessing symptoms and functional ability in prodromal and early Huntington disease (HD) gene expansion carriers. The process involves input from numerous sources to identify relevant symptom domains, including HD individuals, caregivers, and experts from a variety of fields, as well as knowledge gained from the analysis of data from ongoing large-scale studies in HD using existing clinical scales. This is an iterative process in which an ongoing series of field tests in prodromal (prHD) and early HD individuals provides the team with data on which to base decisions about which questions should undergo further development or testing and which should be excluded. We report here the development and assessment of the first iteration of interview questions aimed at assessing cognitive symptoms in prHD and early HD individuals.

Relevance: 30.00%

Abstract:

The Functional Rating Scale Taskforce for pre-Huntington Disease (FuRST-pHD) is a multinational, multidisciplinary initiative with the goal of developing a data-driven, comprehensive, psychometrically sound rating scale for assessing symptoms and functional ability in prodromal and early Huntington disease (HD) gene expansion carriers. The process involves input from numerous sources to identify relevant symptom domains, including HD individuals, caregivers, and experts from a variety of fields, as well as knowledge gained from the analysis of data from ongoing large-scale studies in HD using existing clinical scales. This is an iterative process in which an ongoing series of field tests in prodromal (prHD) and early HD individuals provides the team with data on which to base decisions about which questions should undergo further development or testing and which should be excluded. We report here the development and assessment of the first iteration of interview questions aimed at assessing functional impact on day-to-day activities in prHD and early HD individuals.

Relevance: 30.00%

Abstract:

The Functional Rating Scale Taskforce for pre-Huntington Disease (FuRST-pHD) is a multinational, multidisciplinary initiative with the goal of developing a data-driven, comprehensive, psychometrically sound rating scale for assessing symptoms and functional ability in prodromal and early Huntington disease (HD) gene expansion carriers. The process involves input from numerous sources to identify relevant symptom domains, including HD individuals, caregivers, and experts from a variety of fields, as well as knowledge gained from the analysis of data from ongoing large-scale studies in HD using existing clinical scales. This is an iterative process in which an ongoing series of field tests in prodromal (prHD) and early HD individuals provides the team with data on which to base decisions about which questions should undergo further development or testing and which should be excluded. We report here the development and assessment of the first iteration of interview questions aimed at assessing Depression, Anxiety and Apathy in prHD and early HD individuals.

Relevance: 30.00%

Abstract:

The Functional Rating Scale Taskforce for pre-Huntington Disease (FuRST-pHD) is a multinational, multidisciplinary initiative with the goal of developing a data-driven, comprehensive, psychometrically sound rating scale for assessing symptoms and functional ability in prodromal and early Huntington disease (HD) gene expansion carriers. The process involves input from numerous sources to identify relevant symptom domains, including HD individuals, caregivers, and experts from a variety of fields, as well as knowledge gained from the analysis of data from ongoing large-scale studies in HD using existing clinical scales. This is an iterative process in which an ongoing series of field tests in prodromal (prHD) and early HD individuals provides the team with data on which to base decisions about which questions should undergo further development or testing and which should be excluded. We report here the development and assessment of the first iteration of interview questions aimed at assessing the functional impact of motor manifestations in prHD and early HD individuals.

Relevance: 30.00%

Abstract:

The requirement to forecast volcanic ash concentrations was amplified in response to the 2010 Eyjafjallajökull eruption, when ash safety limits for aviation were introduced in the European area. The ability to provide accurate quantitative forecasts relies to a large extent on the source term, i.e. the emission of ash as a function of time and height. This study presents source term estimates of the ash emissions from the Eyjafjallajökull eruption derived with an inversion algorithm which constrains modeled ash emissions with satellite observations of volcanic ash. The algorithm is tested with input from two different dispersion models, run on three different meteorological input data sets. The results are robust to the choice of dispersion model and meteorological data. Modeled ash concentrations are compared quantitatively to independent measurements from three different research aircraft and one surface measurement station. These comparisons show that the models perform reasonably well in simulating the ash concentrations, and simulations using the source term obtained from the inversion are in overall better agreement with the observations (rank correlation = 0.55, Figure of Merit in Time (FMT) = 25–46%) than simulations using simplified source terms (rank correlation = 0.21, FMT = 20–35%). The vertical structures of the modeled ash clouds mostly agree with lidar observations, and the modeled ash particle size distributions agree reasonably well with observed size distributions. There are occasionally large differences between simulations, but the model mean usually outperforms any individual model. The results emphasize the benefits of using an ensemble-based forecast for improved quantification of uncertainties in future ash crises.
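
A minimal sketch of a source-term inversion of this general kind is shown below, assuming a linear source-receptor relationship and Tikhonov regularisation around an a priori emission estimate. It is not the algorithm used in the study; all names, matrices, and values are illustrative.

```python
import numpy as np

def invert_source_term(M, y, a_priori, sigma_obs=1.0, sigma_apriori=1.0):
    """Estimate ash emissions from satellite observations.

    Assumes a linear model y = M @ x, where M maps emissions per
    (height, time) bin to observed column loadings, and solves a
    Tikhonov-regularised least-squares problem around an a priori
    emission estimate. Emissions are kept non-negative by clipping.
    """
    n = a_priori.size
    A = np.vstack([M / sigma_obs, np.eye(n) / sigma_apriori])
    b = np.concatenate([y / sigma_obs, a_priori / sigma_apriori])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(x, 0.0, None)

# Toy example: 200 satellite column observations, 24 emission bins.
rng = np.random.default_rng(2)
M = rng.random((200, 24))
x_true = rng.exponential(1.0, 24)
y = M @ x_true + rng.normal(0.0, 0.5, 200)
x_est = invert_source_term(M, y, a_priori=np.ones(24))
print(f"correlation with true emissions: {np.corrcoef(x_true, x_est)[0, 1]:.2f}")
```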

Relevance: 30.00%

Abstract:

Purpose: NANA is a 3-year project using sensitively-designed technology to improve data collection and to integrate information on nutrition, physical and cognitive function and mental health, in order to identify individuals at risk of under-nourishment and improve the targeting of interventions. This research will also improve our understanding of the interactions between these factors, in order to improve medical treatment and social provision. The toolkit has potential for commercial development for additional segments of the population. Method: This is a multi-disciplinary program involving psychology, nutrition, engineering and software engineering. The first phase is a user needs analysis and will involve consulting a broad cross-section of older people, caregivers, and health professionals to establish which technical approaches would be useful and acceptable. The second phase focuses on the development of an integrated measurement toolkit. There are three inter-related subsections: (i) an iterative program to develop the assessment technology, (ii) techniques for dietary assessment in older people, and (iii) a parallel investigation of measures of cognition and mental health in older people. It includes a full validation of the assessment toolkit, comprising a comparison of the new, integrated assessment with traditional 'pen and paper' methods, with volunteers having the equipment installed in their homes.

Relevance: 30.00%

Abstract:

The estimation of prediction quality is important because, without quality measures, it is difficult to determine the usefulness of a prediction. Currently, methods for ligand binding site residue prediction are assessed in the function prediction category of the biennial Critical Assessment of Techniques for Protein Structure Prediction (CASP) experiment, utilizing the Matthews Correlation Coefficient (MCC) and Binding-site Distance Test (BDT) metrics. However, the assessment of ligand binding site predictions using such metrics requires the availability of solved structures with bound ligands. We have therefore developed a ligand binding site quality assessment tool, FunFOLDQA, which utilizes protein feature analysis to predict ligand binding site quality prior to the experimental solution of the protein structures and their ligand interactions. The FunFOLDQA feature scores were combined using simple linear combinations, multiple linear regression, and a neural network. The neural network produced significantly better results for correlations to both the MCC and BDT scores, according to Kendall's τ, Spearman's ρ and Pearson's r correlation coefficients, when tested on both the CASP8 and CASP9 datasets. The neural network also produced the largest Area Under the Curve (AUC) score when Receiver Operating Characteristic (ROC) analysis was undertaken for the CASP8 dataset. Furthermore, the FunFOLDQA algorithm incorporating the neural network is shown to add value to FunFOLD when both methods are employed in combination. This results in a statistically significant improvement over all of the best server methods, the FunFOLD method (6.43%), and one of the top manual groups (FN293) tested on the CASP8 dataset. The FunFOLDQA method was also found to be competitive with the top server methods when tested on the CASP9 dataset. To the best of our knowledge, FunFOLDQA is the first attempt to develop a method that can be used to assess ligand binding site prediction quality in the absence of experimental data.
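
The combination of feature scores via a neural network can be sketched roughly as below, using scikit-learn and synthetic stand-ins for the FunFOLDQA feature scores and observed MCC values; the network architecture and the data are illustrative assumptions, not the published method.

```python
import numpy as np
from scipy.stats import kendalltau, spearmanr, pearsonr
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for per-prediction feature scores and the observed
# MCC of each ligand binding site prediction.
rng = np.random.default_rng(3)
features = rng.random((500, 5))
mcc = np.clip(features @ np.array([0.3, 0.2, 0.1, 0.25, 0.15])
              + rng.normal(0.0, 0.05, 500), 0.0, 1.0)

X_train, X_test, y_train, y_test = train_test_split(
    features, mcc, test_size=0.3, random_state=0)

# Small feed-forward network combining the feature scores into a single
# predicted quality score, analogous to the neural-network combination.
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
pred = net.predict(X_test)

# Rank and linear correlations between predicted and observed quality.
for name, fn in [("Kendall tau", kendalltau),
                 ("Spearman rho", spearmanr),
                 ("Pearson r", pearsonr)]:
    print(f"{name}: {fn(pred, y_test)[0]:.2f}")
```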

Relevance: 30.00%

Abstract:

CO, O3, and H2O data in the upper troposphere/lower stratosphere (UTLS) measured by the Atmospheric Chemistry Experiment Fourier Transform Spectrometer (ACE-FTS) on Canada's SCISAT-1 satellite are validated using aircraft and ozonesonde measurements. In the UTLS, validation of chemical trace gas measurements is a challenging task due to small-scale variability in the tracer fields, strong gradients of the tracers across the tropopause, and the scarcity of measurements suitable for validation purposes. Validation based on coincidences therefore suffers from geophysical noise. Two alternative methods for the validation of satellite data are introduced which avoid the usual need for coincident measurements: tracer-tracer correlations, and vertical tracer profiles relative to tropopause height. Both are increasingly being used for model validation, as they strongly suppress geophysical variability and thereby provide an "instantaneous climatology". This allows comparison of measurements between non-coincident data sets, which yields information about the precision and a statistically meaningful error assessment of the ACE-FTS satellite data in the UTLS. By defining a trade-off factor, we show that the measurement errors can be reduced by including more measurements obtained over a wider longitude range in the comparison, despite the increased geophysical variability. Applying the methods then yields the following upper bounds on the relative differences in the mean found between the ACE-FTS and SPURT aircraft measurements in the upper troposphere (UT) and lower stratosphere (LS), respectively: for CO ±9% and ±12%, for H2O ±30% and ±18%, and for O3 ±25% and ±19%. The relative differences for O3 can be narrowed down by using a larger dataset obtained from ozonesondes, yielding a high bias in the ACE-FTS measurements of 18% in the UT and relative differences of ±8% for measurements in the LS. When taking into account the smearing effect of the vertically limited spacing between measurements of the ACE-FTS instrument, the relative differences decrease by 5–15% around the tropopause, suggesting a vertical resolution of the ACE-FTS in the UTLS of around 1 km. The ACE-FTS hence offers unprecedented precision and vertical resolution for a satellite instrument, which will allow a new global perspective on UTLS tracer distributions.
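
A rough sketch of the tropopause-relative averaging idea is given below, using synthetic profiles rather than ACE-FTS or SPURT data; the binning choices and the `tropopause_relative_profile` helper are illustrative assumptions.

```python
import numpy as np

def tropopause_relative_profile(altitude_km, tracer, tropopause_km, bin_edges):
    """Average a tracer in bins of altitude relative to the local tropopause.

    altitude_km   : (profile, level) geometric altitudes
    tracer        : (profile, level) mixing ratios
    tropopause_km : (profile,) tropopause height for each profile
    bin_edges     : edges of tropopause-relative altitude bins in km
    Averaging relative to the tropopause suppresses the geophysical
    variability caused by a moving tropopause, giving a quasi-
    "instantaneous climatology" comparable between non-coincident data.
    """
    rel = altitude_km - tropopause_km[:, None]
    means = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (rel >= lo) & (rel < hi)
        means.append(np.nanmean(np.where(mask, tracer, np.nan)))
    return np.array(means)

# Synthetic ozone-like profiles: low values below the tropopause,
# sharply increasing above it.
rng = np.random.default_rng(4)
alt = np.tile(np.arange(5.0, 20.0, 0.5), (300, 1))
tp = rng.normal(11.0, 1.0, 300)
o3 = np.where(alt > tp[:, None], 400.0, 60.0) + rng.normal(0.0, 20.0, alt.shape)
profile = tropopause_relative_profile(alt, o3, tp, np.arange(-5.0, 6.0, 1.0))
print(np.round(profile, 1))
```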

Relevance: 30.00%

Abstract:

Canopy leaf area index (LAI), defined as the single-sided leaf area per unit ground area, is a quantitative measure of canopy foliar area. LAI is a controlling biophysical property of vegetation function, and quantifying LAI is thus vital for understanding energy, carbon and water fluxes between the land surface and the atmosphere. LAI is routinely available from Earth Observation (EO) instruments such as MODIS. However, EO-derived estimates of LAI require validation before they are utilised by the ecosystem modelling community. Previous validation work on the MODIS collection 4 (c4) product suggested considerable error, especially in forested biomes, and as a result the MODIS LAI algorithm was significantly modified for the most recent collection 5 (c5). Because of these changes, the current MODIS LAI product has not been widely validated. We present a validation of the MODIS c5 LAI product over a 121 km² area of mixed coniferous forest in Oregon, USA, based on detailed ground measurements which we have upscaled using high-resolution EO data. Our analysis suggests that c5 shows a much more realistic temporal LAI dynamic than c4 over the site we examined. We find improved spatial consistency between the MODIS c5 LAI product and upscaled in situ measurements. However, our results also suggest that the c5 LAI product underestimates the upper range of upscaled in situ LAI measurements.
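
The upscaling step can be illustrated with the sketch below, which regresses plot-level LAI against a fine-resolution vegetation index and aggregates the resulting map to a coarser grid. The 30 m NDVI scene, plot locations, regression form, and grid sizes are synthetic assumptions, not the study's actual upscaling procedure.

```python
import numpy as np

# Synthetic stand-ins: a 30 m vegetation index scene and a handful of
# field LAI plots collocated with individual fine-resolution pixels.
rng = np.random.default_rng(5)
ndvi_fine = np.clip(rng.normal(0.7, 0.1, (330, 330)), 0.0, 1.0)  # ~10 km x 10 km at 30 m
plot_rows = rng.integers(0, 330, 40)
plot_cols = rng.integers(0, 330, 40)
plot_lai = 8.0 * ndvi_fine[plot_rows, plot_cols] + rng.normal(0.0, 0.3, 40)

# Fit a simple empirical transfer function (field LAI vs. fine-resolution NDVI)
# and apply it to the full scene.
slope, intercept = np.polyfit(ndvi_fine[plot_rows, plot_cols], plot_lai, 1)
lai_fine = slope * ndvi_fine + intercept

# Aggregate the fine-resolution LAI map to a coarser grid (here ~1 km,
# i.e. 33 x 33 fine pixels per coarse cell) for comparison with MODIS.
coarse = lai_fine.reshape(10, 33, 10, 33).mean(axis=(1, 3))
print(f"upscaled LAI: site mean {coarse.mean():.2f}, "
      f"range {coarse.min():.2f}-{coarse.max():.2f}")
```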

Relevance: 30.00%

Abstract:

Cladistic analyses begin with an assessment of variation for a group of organisms and the subsequent representation of that variation as a data matrix. The step of converting observed organismal variation into a data matrix has been described as subjective, contentious, under-investigated, imprecise, unquantifiable, intuitive, a black box, and, at the same time, as ultimately the most influential phase of any cladistic analysis (Pimentel and Riggins, 1987; Bryant, 1989; Pogue and Mickevich, 1990; de Pinna, 1991; Stevens, 1991; Bateman et al., 1992; Smith, 1994; Pleijel, 1995; Wilkinson, 1995; Patterson and Johnson, 1997). Despite the concerns of these authors, primary homology assessment is often perceived as reproducible. In a recent paper, Hawkins et al. (1997) reiterated two points made by a number of these authors: that different interpretations of characters and coding are possible, and that different workers will perceive and define characters in different ways. One reviewer challenged us: did we really think that two people working on the same group would come up with different data sets? The conflicting views regarding the reproducibility of the cladistic character matrix provoke a number of questions. Do the majority of workers consistently follow the same guidelines? Has the theoretical framework informing primary homology assessment been adequately explored? The objective of this study is to classify approaches to primary homology assessment, and to quantify the extent to which different approaches are found in the literature, by examining variation in the way characters are defined and coded in a data matrix.

Relevance: 30.00%

Abstract:

A situation assessment uses reports from sensors to produce hypotheses about a situation at a level of aggregation that is of direct interest to a military commander. A low level of aggregation could mean forming tracks from reports, which is well documented in the tracking literature as track initiation and data association. This paper also discusses higher-level aggregation: assessing the membership of tracks in larger groups. Ideas used in joint tracking and identification are extended, using multi-entity Bayesian networks to model a number of static variables, of which the identity of a target is one. For higher-level aggregation, a scheme for hypothesis management is required. It is shown how an offline clustering of vehicles can be reduced to an assignment problem.
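
As a rough illustration of reducing group membership to an assignment problem, the sketch below assigns tracks to group centres with the Hungarian algorithm (scipy.optimize.linear_sum_assignment). The squared-distance cost matrix is a hypothetical stand-in for the membership likelihoods a Bayesian-network model would supply; it is not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_tracks_to_groups(track_positions, group_centres):
    """Assign each track to a group by minimising total squared distance.

    A toy cost matrix (squared distance from track to group centre) stands
    in for whatever cost a real situation-assessment system would derive
    from its group-membership model.
    """
    diff = track_positions[:, None, :] - group_centres[None, :, :]
    cost = np.sum(diff ** 2, axis=-1)                 # (tracks, groups)
    track_idx, group_idx = linear_sum_assignment(cost)
    return dict(zip(track_idx.tolist(), group_idx.tolist()))

tracks = np.array([[0.0, 0.1], [5.2, 4.9], [10.1, 0.2]])
groups = np.array([[10.0, 0.0], [0.0, 0.0], [5.0, 5.0]])
print(assign_tracks_to_groups(tracks, groups))   # {0: 1, 1: 2, 2: 0}
```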