Abstract:
In collaborative situations, eye gaze is a critical element of behavior which supports and fulfills many activities and roles. In current computer-supported collaboration systems, eye gaze is poorly supported. Even in a state-of-the-art video conferencing system such as the Access Grid, although one can see the face of the user, much of the communicative power of eye gaze is lost. This article gives an overview of some preliminary work that looks towards integrating eye gaze into an immersive collaborative virtual environment and assessing the impact that this would have on interaction between the users of such a system. Three experiments were conducted to assess the efficacy of eye gaze within immersive virtual environments. In each experiment, subjects observed the eye-gaze behavior of an avatar on a large screen. The eye-gaze behavior of that avatar had previously been recorded from a user wearing a head-mounted eye tracker. The first experiment assessed the difference in subjects' ability to judge which objects an avatar is looking at when only head gaze is displayed versus when both eye- and head-gaze data are displayed. The results show that eye gaze is of vital importance to subjects' ability to correctly identify what a person is looking at in an immersive virtual environment. The second experiment examined whether a monocular or binocular eye tracker would be required. This was tested by asking subjects to identify where an avatar was looking from eye direction alone, or from eye direction combined with convergence. This experiment showed that convergence had a significant impact on the subjects' ability to identify where the avatar was looking. The final experiment looked at the effects of stereo and mono viewing of the scene, with subjects again asked to identify where the avatar was looking. This experiment showed no difference in the subjects' ability to detect where the avatar was gazing. This is followed by a description of how the eye-tracking system has been integrated into an immersive collaborative virtual environment and some preliminary results from the use of such a system.
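The monocular-versus-binocular question in the second experiment comes down to whether convergence can be exploited: with two tracked eyes the gaze point can be estimated as the near-intersection of the left- and right-eye gaze rays, whereas a single ray only gives a direction. Below is a minimal geometric sketch of that convergence computation; it is not taken from the article, and the eye positions and gaze directions are hypothetical inputs.

```python
import numpy as np

def convergence_point(p_left, d_left, p_right, d_right):
    """Estimate a 3D gaze point as the midpoint of closest approach between the
    left- and right-eye gaze rays (origin + direction for each eye)."""
    d1 = d_left / np.linalg.norm(d_left)
    d2 = d_right / np.linalg.norm(d_right)
    w0 = p_left - p_right
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:           # near-parallel rays: no usable convergence
        return None
    t = (b * e - c * d) / denom     # distance along the left-eye ray
    s = (a * e - b * d) / denom     # distance along the right-eye ray
    return 0.5 * ((p_left + t * d1) + (p_right + s * d2))

# Hypothetical eyes 6.4 cm apart, both verging on a point about 2 m ahead.
p_l, p_r = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
target = np.array([0.3, 0.1, 2.0])
print(convergence_point(p_l, target - p_l, p_r, target - p_r))
```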
Abstract:
Three simple climate models (SCMs) are calibrated using simulations from atmosphere ocean general circulation models (AOGCMs). In addition to using two conventional SCMs, results from a third, simpler model developed specifically for this study are obtained. An easy-to-implement and comprehensive iterative procedure is applied that optimises the SCM emulation of global-mean surface temperature and total ocean heat content, and, if available in the SCM, of surface temperature over land, over the ocean and in both hemispheres, and of the global-mean ocean temperature profile. The method gives best-fit estimates as well as uncertainty intervals for the different SCM parameters. For the calibration, AOGCM simulations with two different types of forcing scenarios are used: pulse forcing simulations performed with 2 AOGCMs and gradually changing forcing simulations from 15 AOGCMs obtained within the framework of the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. The method is found to work well. For all possible combinations of SCMs and AOGCMs, the emulation of AOGCM results could be improved. The obtained SCM parameters depend on both the AOGCM data and the type of forcing scenario. SCMs with a poor representation of the atmosphere's thermal inertia are better able to emulate AOGCM results from gradually changing forcing than from pulse forcing simulations. Correct simultaneous emulation of both atmospheric temperatures and the ocean temperature profile by the SCMs strongly depends on the representation of the temperature gradient between the atmosphere and the mixed layer. Introducing climate sensitivities that are dependent on the forcing mechanism in the SCMs allows the emulation of AOGCM responses to carbon dioxide and solar insolation forcings equally well. Also, some SCM parameters are found to be very insensitive to the fitting, and the reduction of their uncertainty through the fitting procedure is only marginal, while other parameters change considerably. The very simple SCM is found to reproduce the AOGCM results as well as the other two, comparatively more sophisticated, SCMs.
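To make the calibration idea concrete, the sketch below fits the two parameters of a hypothetical one-box energy-balance model (a stand-in for a "very simple SCM", not the model from the study) to a synthetic AOGCM-like global-mean temperature series by least squares. The real procedure is iterative, fits several diagnostics simultaneously (including ocean heat content), and returns uncertainty intervals as well as best-fit values.

```python
import numpy as np
from scipy.optimize import least_squares

def one_box_ebm(params, forcing, dt=1.0):
    """Integrate a one-box energy balance model: C dT/dt = F(t) - lambda * T."""
    lam, heat_cap = params
    temps = np.zeros_like(forcing)
    for i in range(1, len(forcing)):
        temps[i] = temps[i - 1] + dt * (forcing[i - 1] - lam * temps[i - 1]) / heat_cap
    return temps

def residuals(params, forcing, aogcm_temp):
    """Misfit between the SCM emulation and the AOGCM temperature series."""
    return one_box_ebm(params, forcing) - aogcm_temp

# Hypothetical ramp-and-stabilise forcing and a synthetic "AOGCM" series.
years = np.arange(140)
forcing = np.minimum(years * 0.05, 3.7)   # W m^-2
aogcm_temp = one_box_ebm([1.1, 8.0], forcing) + np.random.normal(0, 0.05, years.size)

fit = least_squares(residuals, x0=[1.5, 5.0], args=(forcing, aogcm_temp),
                    bounds=([0.3, 1.0], [3.0, 30.0]))
print("best-fit feedback parameter and heat capacity:", fit.x)
```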
Abstract:
This paper describes the design and manufacture of the filters and antireflection coatings used in the HIRDLS instrument. The multilayer design of the filters and coatings, the choice of layer materials, and the deposition techniques adopted to ensure adequate layer thickness control are discussed. The spectral assessment of the filters and coatings is carried out using an FTIR spectrometer; some measurement results are presented together with a discussion of measurement accuracy and the identification and avoidance of measurement artifacts. The post-deposition processing of the filters by sawing to size, the writing of an identification code onto the coatings, and the environmental testing of the finished filters are also described.
Abstract:
SCIENTIFIC SUMMARY Globally averaged total column ozone has declined over recent decades due to the release of ozone-depleting substances (ODSs) into the atmosphere. Now, as a result of the Montreal Protocol, ozone is expected to recover from the effects of ODSs as ODS abundances decline in the coming decades. However, a number of factors in addition to ODSs have led to and will continue to lead to changes in ozone. Discriminating between the causes of past and projected ozone changes is necessary, not only to identify the progress in ozone recovery from ODSs, but also to evaluate the effectiveness of climate and ozone protection policy options. Factors Affecting Future Ozone and Surface Ultraviolet Radiation • At least for the next few decades, the decline of ODSs is expected to be the major factor affecting the anticipated increase in global total column ozone. However, several factors other than ODSs will affect the future evolution of ozone in the stratosphere. These include changes in (i) stratospheric circulation and temperature due to changes in long-lived greenhouse gas (GHG) abundances, (ii) stratospheric aerosol loading, and (iii) source gases of highly reactive stratospheric hydrogen and nitrogen compounds. Factors that amplify the effects of ODSs on ozone (e.g., stratospheric aerosols) will likely decline in importance as ODSs are gradually eliminated from the atmosphere. • Increases in GHG emissions can both positively and negatively affect ozone. Carbon dioxide (CO2)-induced stratospheric cooling elevates middle and upper stratospheric ozone and decreases the time taken for ozone to return to 1980 levels, while projected GHG-induced increases in tropical upwelling decrease ozone in the tropical lower stratosphere and increase ozone in the extratropics. Increases in nitrous oxide (N2O) and methane (CH4) concentrations also directly impact ozone chemistry but the effects are different in different regions. • The Brewer-Dobson circulation (BDC) is projected to strengthen over the 21st century and thereby affect ozone amounts. Climate models consistently predict an acceleration of the BDC or, more specifically, of the upwelling mass flux in the tropical lower stratosphere of around 2% per decade as a consequence of GHG abundance increases. A stronger BDC would decrease the abundance of tropical lower stratospheric ozone, increase poleward transport of ozone, and could reduce the atmospheric lifetimes of long-lived ODSs and other trace gases. While faster ascent in the tropical lower stratosphere is a robust feature of chemistry-climate models (CCMs) to date, it has not been confirmed by observations and the responsible mechanisms remain unclear. • Substantial ozone losses could occur if stratospheric aerosol loading were to increase in the next few decades, while halogen levels are high. Stratospheric aerosol increases may be caused by sulfur contained in volcanic plumes entering the stratosphere or from human activities. The latter might include attempts to geoengineer the climate system by enhancing the stratospheric aerosol layer. The ozone losses mostly result from enhanced heterogeneous chemistry on stratospheric aerosols. Enhanced aerosol heating within the stratosphere also leads to changes in temperature and circulation that affect ozone. • Surface ultraviolet (UV) levels will be affected not only by ozone changes but also by the effects of climate change and by air quality changes in the troposphere.
These tropospheric effects include changes in clouds, tropospheric aerosols, surface reflectivity, and tropospheric sulfur dioxide (SO2) and nitrogen dioxide (NO2). The uncertainties in projections of these factors are large. Projected increases in tropospheric ozone are more certain and may lead to reductions in surface erythemal (“sunburning”) irradiance of up to 10% by 2100. Changes in clouds may lead to decreases or increases in surface erythemal irradiance of up to 15% depending on latitude. Expected Future Changes in Ozone Full ozone recovery from the effects of ODSs and return of ozone to historical levels are not synonymous. In this chapter a key target date is chosen to be 1980, in part to retain the connection to previous Ozone Assessments. Noting, however, that decreases in ozone may have occurred in some regions of the atmosphere prior to 1980, 1960 return dates are also reported. The projections reported on in this chapter are taken from a recent compilation of CCM simulations. The ozone projections, which also form the basis for the UV projections, are limited in their representativeness of possible futures since they mostly come from CCM simulations based on a single GHG emissions scenario (scenario A1B of Emissions Scenarios. A Special Report of Working Group III of the Intergovernmental Panel on Climate Change, Cambridge University Press, 2000) and a single ODS emissions scenario (adjusted A1 of the previous (2006) Ozone Assessment). Throughout this century, the vertical, latitudinal, and seasonal structure of the ozone distribution will be different from what it was in 1980. For this reason, ozone changes in different regions of the atmosphere are considered separately. • The projections of changes in ozone and surface clear-sky UV are broadly consistent with those reported on in the 2006 Assessment. • The capability of making projections and attribution of future ozone changes has been improved since the 2006 Assessment. Use of CCM simulations from an increased number of models extending through the entire period of ozone depletion and recovery from ODSs (1960–2100) as well as sensitivity simulations have allowed more robust projections of long-term changes in the stratosphere and of the relative contributions of ODSs and GHGs to those changes. • Global annually averaged total column ozone is projected to return to 1980 levels before the middle of the century and earlier than when stratospheric halogen loading returns to 1980 levels. CCM projections suggest that this early return is primarily a result of GHG-induced cooling of the upper stratosphere because the effects of circulation changes on tropical and extratropical ozone largely cancel. Global (90°S–90°N) annually averaged total column ozone will likely return to 1980 levels between 2025 and 2040, well before the return of stratospheric halogens to 1980 levels between 2045 and 2060. • Simulated changes in tropical total column ozone from 1960 to 2100 are generally small. The evolution of tropical total column ozone in models depends on the balance between upper stratospheric increases and lower stratospheric decreases. The upper stratospheric increases result from declining ODSs and a slowing of ozone destruction resulting from GHG-induced cooling. Ozone decreases in the lower stratosphere mainly result from an increase in tropical upwelling. From 1960 until around 2000, a general decline is simulated, followed by a gradual increase to values typical of 1980 by midcentury. 
Thereafter, although total column ozone amounts decline slightly again toward the end of the century, by 2080 they are no longer expected to be affected by ODSs. Confidence in tropical ozone projections is compromised by the fact that simulated decreases in column ozone to date are not supported by observations, suggesting that significant uncertainties remain. • Midlatitude total column ozone is simulated to evolve differently in the two hemispheres. Over northern midlatitudes, annually averaged total column ozone is projected to return to 1980 values between 2015 and 2030, while for southern midlatitudes the return to 1980 values is projected to occur between 2030 and 2040. The more rapid return to 1980 values in northern midlatitudes is linked to a more pronounced strengthening of the poleward transport of ozone due to the effects of increased GHG levels, and effects of Antarctic ozone depletion on southern midlatitudes. By 2100, midlatitude total column ozone is projected to be above 1980 values in both hemispheres. • October-mean Antarctic total column ozone is projected to return to 1980 levels after midcentury, later than in any other region, and yet earlier than when stratospheric halogen loading is projected to return to 1980 levels. The slightly earlier return of ozone to 1980 levels (2045–2060) results primarily from upper stratospheric cooling and resultant increases in ozone. The return of polar halogen loading to 1980 levels (2050–2070) in CCMs is earlier than in empirical models that exclude the effects of GHG-induced changes in circulation. Our confidence in the drivers of changes in Antarctic ozone is higher than for other regions because (i) ODSs exert a strong influence on Antarctic ozone, (ii) the effects of changes in GHG abundances are comparatively small, and (iii) projections of ODS emissions are more certain than those for GHGs. Small Antarctic ozone holes (areas of ozone <220 Dobson units, DU) could persist to the end of the 21st century. • March-mean Arctic total column ozone is projected to return to 1980 levels two to three decades before polar halogen loading returns to 1980 levels, and to exceed 1980 levels thereafter. While CCM simulations project a return to 1980 levels between 2020 and 2035, most models tend not to capture observed low temperatures and thus underestimate present-day Arctic ozone loss such that it is possible that this return date is biased early. Since the strengthening of the Brewer-Dobson circulation through the 21st century leads to increases in springtime Arctic column ozone, by 2100 Arctic ozone is projected to lie well above 1960 levels. Uncertainties in Projections • Conclusions dependent on future GHG levels are less certain than those dependent on future ODS levels since ODS emissions are controlled by the Montreal Protocol. For the six GHG scenarios considered by a few CCMs, the simulated differences in stratospheric column ozone over the second half of the 21st century are largest in the northern midlatitudes and the Arctic, with maximum differences of 20–40 DU between the six scenarios in 2100. • There remain sources of uncertainty in the CCM simulations. These include the use of prescribed ODS mixing ratios instead of emission fluxes as lower boundary conditions, the range of sea surface temperatures and sea ice concentrations, missing tropospheric chemistry, model parameterizations, and model climate sensitivity. 
• Geoengineering schemes for mitigating climate change by continuous injections of sulfur-containing compounds into the stratosphere, if implemented, would substantially affect stratospheric ozone, particularly in polar regions. Ozone losses observed following large volcanic eruptions support this prediction. However, sporadic volcanic eruptions provide limited analogs to the effects of continuous sulfur emissions. Preliminary model simulations reveal large uncertainties in assessing the effects of continuous sulfur injections. Expected Future Changes in Surface UV. While a number of factors, in addition to ozone, affect surface UV irradiance, the focus in this chapter is on the effects of changes in stratospheric ozone on surface UV. For this reason, clear-sky surface UV irradiance is calculated from ozone projections from CCMs. • Projected increases in midlatitude ozone abundances during the 21st century, in the absence of changes in other factors, in particular clouds, tropospheric aerosols, and air pollutants, will result in decreases in surface UV irradiance. Clear-sky erythemal irradiance is projected to return to 1980 levels on average in 2025 for the northern midlatitudes, and in 2035 for the southern midlatitudes, and to fall well below 1980 values by the second half of the century. However, actual changes in surface UV will be affected by a number of factors other than ozone. • In the absence of changes in other factors, changes in tropical surface UV will be small because changes in tropical total column ozone are projected to be small. By the middle of the 21st century, the model projections suggest that surface UV will be slightly higher than in the 1960s, very close to values in 1980, and slightly lower than in 2000. The projected decrease in tropical total column ozone through the latter half of the century will likely result in clear-sky surface UV remaining above 1960 levels. Average UV irradiance is already high in the tropics due to naturally occurring low total ozone columns and high solar elevations. • The magnitude of UV changes in the polar regions is larger than elsewhere because ozone changes in polar regions are larger. For the next decades, surface clear-sky UV irradiance, particularly in the Antarctic, will continue to be higher than in 1980. Future increases in ozone and decreases in clear-sky UV will occur at slower rates than those associated with the ozone decreases and UV increases that occurred before 2000. In Antarctica, surface clear-sky UV is projected to return to 1980 levels between 2040 and 2060, while in the Arctic this is projected to occur between 2020 and 2030. By 2100, October surface clear-sky erythemal irradiance in Antarctica is likely to be between 5% below and 25% above 1960 levels, with considerable uncertainty. This is consistent with multi-model-mean October Antarctic total column ozone not returning to 1960 levels by 2100. In contrast, by 2100, surface clear-sky UV in the Arctic is projected to be 0–10% below 1960 levels.
Abstract:
This paper presents results for thermal comfort assessment in non-uniform thermal environments. Three types of displacement ventilation (DV) units that created stratified conditions in an environmental test chamber were selected to carry out the thermal comfort assessment: a flat diffuser (DV1), a semi-circular diffuser (DV2), and a floor swirl diffuser (DV3). The CBE (Center for the Built Environment at Berkeley) comfort model was implemented in this study to assess the occupant’s thermal comfort for the three DV types. The CBE model predicted the occupant’s mean skin temperature as well as local skin temperatures very well when compared with measurements found in the literature, while it underestimated the occupant’s core temperature. The predicted thermal sensation and thermal comfort were best for DV2. Therefore, the semi-circular diffuser (DV2) provided better thermal comfort for the occupant in comparison with the other two DV types.
Abstract:
We report preliminary results from studies of biological effects induced by non-thermal levels of non-ionizing electromagnetic radiation. Exponentially growing Saccharomyces cerevisiae yeast cells grown on dry media were exposed to electromagnetic fields in the 200–350 GHz frequency range at low power density to observe possible non-thermal effects on microcolony growth. Exposure to the electromagnetic field was conducted over 2.5 h, and growth rate was assessed every 30 min via time-lapse photography. The data from exposure and control experiments were grouped into large-, medium-, or small-sized microcolonies to assist in the accurate assessment of growth. All three groups showed significant differences in growth between exposed and control microcolonies, and a statistically significant enhanced growth rate was observed at 341 GHz. Possible interaction mechanisms are discussed, taking into account Fröhlich's hypothesis.
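As an illustration of how a growth rate can be extracted from such time-lapse data, the sketch below fits an exponential rate to microcolony areas sampled every 30 minutes. The colony areas and rates are hypothetical, and the study's own analysis may differ (for example, in how colony size is quantified).

```python
import numpy as np

def growth_rate(times_h, areas):
    """Estimate an exponential growth rate (1/h) from time-lapse microcolony
    areas by a linear fit of log(area) against time."""
    slope, _ = np.polyfit(times_h, np.log(areas), 1)
    return slope

# Hypothetical 30-min sampling over 2.5 h for one exposed and one control colony.
t = np.arange(0, 2.51, 0.5)
exposed = 100 * np.exp(0.45 * t)
control = 100 * np.exp(0.40 * t)
print(growth_rate(t, exposed), growth_rate(t, control))
```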
Abstract:
It is increasingly accepted that any possible climate change will not only have an influence on mean climate but may also significantly alter climatic variability. A change in the distribution and magnitude of extreme rainfall events (associated with changing variability), such as droughts or flooding, may have a far greater impact on human and natural systems than a changing mean. This issue is of particular importance for environmentally vulnerable regions such as southern Africa. The sub-continent is considered especially vulnerable to and ill-equipped (in terms of adaptation) for extreme events, due to a number of factors including extensive poverty, famine, disease and political instability. Rainfall variability and the identification of rainfall extremes are a function of scale, so high spatial and temporal resolution data are preferred to identify extreme events and accurately predict future variability. The majority of previous climate model verification studies have compared model output with observational data at monthly timescales. In this research, the ability of a state-of-the-art climate model to simulate climate at daily timescales is assessed using satellite-derived rainfall data from the Microwave Infrared Rainfall Algorithm (MIRA). This dataset covers the period from 1993 to 2002 and the whole of southern Africa at a spatial resolution of 0.1° longitude/latitude. This paper concentrates primarily on the ability of the model to simulate the spatial and temporal patterns of present-day rainfall variability over southern Africa and is not intended to discuss possible future changes in climate, as these have been documented elsewhere. Simulations of current climate from the UK Meteorological Office Hadley Centre’s climate model, in both regional and global mode, are firstly compared to the MIRA dataset at daily timescales. Secondly, the ability of the model to reproduce daily rainfall extremes is assessed, again by a comparison with extremes from the MIRA dataset. The results suggest that the model reproduces the number and spatial distribution of rainfall extremes with some accuracy, but that mean rainfall and rainfall variability are underestimated (overestimated) over wet (dry) regions of southern Africa.
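A concrete, much simplified version of the daily-extremes comparison is sketched below: for each grid cell, count the days exceeding a high percentile of that cell's wet-day rainfall in both the model and the satellite fields, then examine the difference. The array shapes, wet-day threshold, and percentile are illustrative assumptions, not the paper's exact methodology.

```python
import numpy as np

def extreme_day_counts(daily_rain, quantile=0.95):
    """Count, per grid cell, the days exceeding that cell's own high-rainfall
    threshold; `daily_rain` is a (days, lat, lon) array of daily totals in mm."""
    wet = np.where(daily_rain >= 1.0, daily_rain, np.nan)   # ignore dry days
    thresh = np.nanquantile(wet, quantile, axis=0)
    return np.sum(daily_rain >= thresh, axis=0)

# Hypothetical gridded fields: 10 years of daily data on a small lat/lon patch.
rng = np.random.default_rng(0)
model_rain = rng.gamma(shape=0.6, scale=4.0, size=(3650, 20, 20))
mira_rain  = rng.gamma(shape=0.6, scale=5.0, size=(3650, 20, 20))

bias = extreme_day_counts(model_rain) - extreme_day_counts(mira_rain)
print("mean bias in extreme-day counts:", bias.mean())
```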
Abstract:
The Geostationary Earth Radiation Budget Intercomparison of Longwave and Shortwave radiation (GERBILS) was an observational field experiment over North Africa during June 2007. The campaign involved 10 flights by the FAAM BAe-146 research aircraft over southwestern parts of the Sahara Desert and coastal stretches of the Atlantic Ocean. Objectives of the GERBILS campaign included characterisation of mineral dust geographic distribution and physical and optical properties, assessment of the impact upon radiation, validation of satellite remote sensing retrievals, and validation of numerical weather prediction model forecasts of aerosol optical depths (AODs) and size distributions. We provide the motivation behind GERBILS and the experimental design and report the progress made in each of the objectives. We show that mineral dust in the region is relatively non-absorbing (mean single scattering albedo at 550 nm of 0.97) owing to the relatively small fraction of iron oxides present (1–3%), and that detailed spectral radiances are most accurately modelled using irregularly shaped particles. Satellite retrievals over bright desert surfaces are challenging owing to the lack of spectral contrast between the dust and the underlying surface. However, new techniques have been developed which are shown to be in relatively good agreement with AERONET estimates of AOD and with each other. This encouraging result enables relatively robust validation of numerical models which treat the production, transport, and deposition of mineral dust. The dust models themselves are able to represent large-scale synoptically driven dust events to a reasonable degree, but some deficiencies remain both in the Sahara and over the Sahelian region, where cold pool outflow from convective cells associated with the intertropical convergence zone can lead to significant dust production.
Abstract:
Volcanic ash fallout associated with the renewal of explosive activity at Colima represents a serious threat to the surrounding urbanized area. Here we assess the tephra fallout hazard associated with a Plinian eruption scenario. The eruptive history of Volcán de Colima shows that Plinian eruptions occur approximately every 100 years, and the last one, in 1913, represents the largest historic eruption of this volcano. We used this eruption as a reference to discuss volcanic hazard and risk scenarios connected with ash fallout. Tephra fallout deposits are modeled using HAZMAP, a model based on a semi-analytical solution of the advection–diffusion–sedimentation equation for volcanic particles. Based on a statistical study of wind profiles in the Colima region, we first reconstructed ash loading maps and then computed ground load probability maps for different seasons. The results show that a Plinian eruptive scenario at Volcán de Colima could seriously damage more than 10 small towns and ranches, and potentially affect big cities located tens of kilometers from the eruptive center. The probability maps obtained are intended to support risk mitigation strategies.
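For reference, a standard form of the advection–diffusion–sedimentation equation underlying models of this kind is given below; the specific simplifications adopted in HAZMAP's semi-analytical solution (for example, how vertical diffusion and the settling term are treated) may differ from this general statement.

```latex
\frac{\partial C}{\partial t}
  + u\,\frac{\partial C}{\partial x}
  + v\,\frac{\partial C}{\partial y}
  - \frac{\partial \left( w_s C \right)}{\partial z}
  = K_x \frac{\partial^2 C}{\partial x^2}
  + K_y \frac{\partial^2 C}{\partial y^2}
  + K_z \frac{\partial^2 C}{\partial z^2}
```

Here C is the concentration of a given particle class, (u, v) the horizontal wind components, w_s the particle settling velocity, and K_x, K_y, K_z the turbulent diffusion coefficients.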
Abstract:
Background. Meta-analyses show that cognitive behaviour therapy for psychosis (CBT-P) improves distressing positive symptoms. However, it is a complex intervention involving a range of techniques. No previous study has assessed the delivery of the different elements of treatment and their effect on outcome. Our aim was to assess the differential effect of type of treatment delivered on the effectiveness of CBT-P, using novel statistical methodology. Method. The Psychological Prevention of Relapse in Psychosis (PRP) trial was a multi-centre randomized controlled trial (RCT) that compared CBT-P with treatment as usual (TAU). Therapy was manualized, and detailed evaluations of therapy delivery and client engagement were made. Follow-up assessments were made at 12 and 24 months. In a planned analysis, we applied principal stratification (involving structural equation modelling with finite mixtures) to estimate intention-to-treat (ITT) effects for subgroups of participants, defined by qualitative and quantitative differences in receipt of therapy, while maintaining the constraints of randomization. Results. Consistent delivery of full therapy, including specific cognitive and behavioural techniques, was associated with clinically and statistically significant increases in months in remission, and decreases in psychotic and affective symptoms. Delivery of partial therapy involving engagement and assessment was not effective. Conclusions. Our analyses suggest that CBT-P is of significant benefit on multiple outcomes to patients able to engage in the full range of therapy procedures. The novel statistical methods illustrated in this report have general application to the evaluation of heterogeneity in the effects of treatment.
Abstract:
Objective: To identify and assess healthy eating policies at national level which have been evaluated in terms of their impact on awareness of healthy eating, food consumption, health outcomes or cost/benefit. Design: Review of policy documents and their evaluations when available. Setting: European Member States. Subjects: One hundred and twenty-one policy documents reviewed, 107 retained. Results: Of the 107 selected interventions, twenty-two had been evaluated for their impact on awareness or knowledge and twenty-seven for their impact on consumption. Furthermore, sixteen interventions provided an evaluation of health impact, while three actions specifically measured a cost/benefit ratio. The indicators used in these evaluations were in most cases not comparable. Evaluation was more often found for public information campaigns, regulation of meals at schools/canteens and nutrition education programmes. Conclusions: The study highlights the need to develop not only harmonized and verifiable procedures but also indicators for measuring effectiveness and success and for comparing between interventions and countries. EU policies are recommended to provide a set of indicators that may be measured consistently and regularly in all countries. Furthermore, public information campaigns should be accompanied by other interventions, as evaluations may show an impact on awareness and intention, but rarely on consumption patterns and health outcomes.
Abstract:
The performance of flood inundation models is often assessed using satellite-observed data; however, these data have inherent uncertainty. In this study we assess the impact of this uncertainty when calibrating a flood inundation model (LISFLOOD-FP) for a flood event in December 2006 on the River Dee, North Wales, UK. The flood extent is delineated from an ERS-2 SAR image of the event using an active contour model (snake), and water levels at the flood margin are calculated through intersection of the shoreline vector with LiDAR topographic data. Gauged water levels are used to create a reference water surface slope for comparison with the satellite-derived water levels. Residuals between the satellite-observed data points and those from the reference line are spatially clustered into groups of similar values. We show that model calibration achieved using pattern matching of observed and predicted flood extent is negatively influenced by this spatial dependency in the data. By contrast, model calibration using water elevations produces realistic calibrated optimum friction parameters even when spatial dependency is present. To test the impact of removing spatial dependency, a new method of evaluating flood inundation model performance is developed using multiple random subsamples of the water surface elevation data points. By testing for spatial dependency using Moran’s I, multiple subsamples of water elevations that have no significant spatial dependency are selected. The model is then calibrated against these data and the results averaged. This gives a near-identical result to calibration using spatially dependent data, but has the advantage of being a statistically robust assessment of model performance in which we can have more confidence. Moreover, by using the variations found in the subsamples of the observed data, it is possible to assess the effects of observational uncertainty on the assessment of flooding risk.
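The subsampling step can be sketched as follows: compute Moran's I for each random subsample of shoreline water levels and keep only subsamples with weak spatial autocorrelation. The inverse-distance weights, subsample size, and the simple threshold on |I| used here are illustrative assumptions; a formal significance test (e.g., a permutation test against the null expectation of -1/(n-1)) would be used in practice.

```python
import numpy as np

def morans_i(values, coords):
    """Moran's I for point data using inverse-distance weights (zero diagonal)."""
    z = values - values.mean()
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = np.where(d > 0, 1.0 / d, 0.0)
    n, s0 = len(values), w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

def spatially_independent_subsamples(values, coords, n_pts=30, threshold=0.1,
                                     n_tries=1000, seed=0):
    """Draw random subsamples of the water-level points and keep those whose
    Moran's I is small in magnitude (weak spatial dependency)."""
    rng = np.random.default_rng(seed)
    keep = []
    for _ in range(n_tries):
        idx = rng.choice(len(values), size=n_pts, replace=False)
        if abs(morans_i(values[idx], coords[idx])) < threshold:
            keep.append(idx)
    return keep
```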
Abstract:
Accumulating data suggest that diets rich in flavanols and procyanidins are beneficial for human health. In this context, there has been a great interest in elucidating the systemic levels and metabolic profiles at which these compounds occur in humans. While recent progress has been made, there still exist considerable differences and various disagreements with regard to the mammalian metabolites of these compounds, which in turn is largely a consequence of the lack of availability of authentic standards that would allow for the directed development and validation of expedient analytical methodologies. In the present study, we developed a method for the analysis of structurally-related flavanol metabolites using a wide range of authentic standards. Applying this method in the context of a human dietary intervention study using comprehensively characterized and standardized flavanol- and procyanidin-containing cocoa, we were able to identify the structurally-related (−)-epicatechin metabolites (SREM) postprandially extant in the systemic circulation of humans. Our results demonstrate that (−)-epicatechin-3′-β-D-glucuronide, (−)-epicatechin-3′-sulfate, and a 3′-O-methyl(−)-epicatechin-5/7-sulfate are the predominant SREM in humans, and further confirm the relevance of the stereochemical configuration in the context of flavanol metabolism. In addition, we also identified plausible causes for the previously reported discrepancies regarding flavanol metabolism, consisting to a significant extent of inter-laboratory differences in sample preparation (enzymatic treatment and sample conditioning for HPLC analysis) and detection systems. Thus, these findings may also aid in the establishment of consensus on this topic.
Abstract:
The requirement to forecast volcanic ash concentrations was amplified in response to the 2010 Eyjafjallajökull eruption, when ash safety limits for aviation were introduced in the European area. The ability to provide accurate quantitative forecasts relies to a large extent on the source term, which is the emission of ash as a function of time and height. This study presents source term estimations of the ash emissions from the Eyjafjallajökull eruption derived with an inversion algorithm which constrains modeled ash emissions with satellite observations of volcanic ash. The algorithm is tested with input from two different dispersion models, run on three different meteorological input data sets. The results are robust to which dispersion model and meteorological data are used. Modeled ash concentrations are compared quantitatively to independent measurements from three different research aircraft and one surface measurement station. These comparisons show that the models perform reasonably well in simulating the ash concentrations, and simulations using the source term obtained from the inversion are in overall better agreement with the observations (rank correlation = 0.55, Figure of Merit in Time (FMT) = 25–46%) than simulations using simplified source terms (rank correlation = 0.21, FMT = 20–35%). The vertical structures of the modeled ash clouds mostly agree with lidar observations, and the modeled ash particle size distributions agree reasonably well with observed size distributions. There are occasionally large differences between simulations, but the model mean usually outperforms any individual model. The results emphasize the benefits of using an ensemble-based forecast for improved quantification of uncertainties in future ash crises.
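The two skill measures quoted above can be computed as sketched below, assuming Spearman rank correlation and one common definition of the Figure of Merit in Time (the temporal overlap sum(min)/sum(max) of modelled and observed concentrations). The time series are hypothetical, and the study's exact FMT formulation may differ.

```python
import numpy as np
from scipy.stats import spearmanr

def figure_of_merit_in_time(modelled, observed):
    """Figure of Merit in Time (%): overlap of modelled and observed series,
    here taken as sum(min)/sum(max) over all times (one common definition)."""
    m, o = np.asarray(modelled, float), np.asarray(observed, float)
    return 100.0 * np.minimum(m, o).sum() / np.maximum(m, o).sum()

# Hypothetical ash-concentration time series (ug m^-3) at one station.
obs = np.array([0, 5, 40, 120, 90, 30, 10, 0], dtype=float)
mod = np.array([0, 10, 55, 100, 70, 45, 5, 0], dtype=float)

rho, _ = spearmanr(mod, obs)
print("rank correlation:", round(rho, 2),
      "FMT:", round(figure_of_merit_in_time(mod, obs), 1), "%")
```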
Abstract:
Motivation: The ability of a simple method (MODCHECK) to determine the sequence–structure compatibility of a set of structural models generated by fold recognition is tested in a thorough benchmark analysis. Four Model Quality Assessment Programs (MQAPs) were tested on 188 targets from the latest LiveBench-9 automated structure evaluation experiment. We systematically test and evaluate whether the MQAP methods can successfully detect native-like models. Results: We show that, compared with the other three methods tested, MODCHECK is the most reliable method for consistently performing the best top-model selection and for ranking the models. In addition, we show that the choice of model similarity score used to assess a model's similarity to the experimental structure can influence the overall performance of these tools. Although these MQAP methods fail to improve the model selection performance for methods that already incorporate protein three-dimensional (3D) structural information, an improvement is observed for methods that are purely sequence-based, including the best profile–profile methods. This suggests that even the best sequence-based fold recognition methods can still be improved by taking into account the 3D structural information.
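A minimal sketch of the kind of benchmark described above is given below: for each target, the model ranked first by an MQAP is compared against the model that is actually closest to the experimental structure, and a rank correlation between the two score lists is recorded. The scores, the choice of Spearman correlation, and the "top-1 loss" summary are illustrative assumptions rather than the paper's exact evaluation protocol.

```python
import numpy as np
from scipy.stats import spearmanr

def top_model_loss(mqap_scores, similarity_scores):
    """For one target: quality lost by selecting the model the MQAP ranks first
    instead of the model most similar to the native structure (higher = better)."""
    picked = int(np.argmax(mqap_scores))
    return float(similarity_scores.max() - similarity_scores[picked])

def benchmark(targets):
    """Average top-1 loss and mean rank correlation over a list of targets,
    where each target is a pair (mqap_scores, similarity_scores)."""
    losses = [top_model_loss(m, s) for m, s in targets]
    corrs = [spearmanr(m, s)[0] for m, s in targets]
    return float(np.mean(losses)), float(np.mean(corrs))

# Hypothetical target with five fold-recognition models.
mqap = np.array([0.62, 0.80, 0.55, 0.71, 0.40])
sim  = np.array([0.58, 0.74, 0.60, 0.79, 0.35])   # e.g., similarity to the native
print(benchmark([(mqap, sim)]))
```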