12 results for Focused retrieval, Result aggregation, Metrics, Users
in CentAUR: Central Archive University of Reading - UK
Abstract:
Space weather effects on technological systems originate with energy carried from the Sun to the terrestrial environment by the solar wind. In this study, we present results of modeling of solar corona-heliosphere processes to predict solar wind conditions at the L1 Lagrangian point upstream of Earth. In particular we calculate performance metrics for (1) empirical, (2) hybrid empirical/physics-based, and (3) full physics-based coupled corona-heliosphere models over an 8-year period (1995–2002). L1 measurements of the radial solar wind speed are the primary basis for validation of the coronal and heliosphere models studied, though other solar wind parameters are also considered. The models are from the Center for Integrated Space-Weather Modeling (CISM) which has developed a coupled model of the whole Sun-to-Earth system, from the solar photosphere to the terrestrial thermosphere. Simple point-by-point analysis techniques, such as mean-square-error and correlation coefficients, indicate that the empirical coronal-heliosphere model currently gives the best forecast of solar wind speed at 1 AU. A more detailed analysis shows that errors in the physics-based models are predominantly the result of small timing offsets to solar wind structures and that the large-scale features of the solar wind are actually well modeled. We suggest that additional “tuning” of the coupling between the coronal and heliosphere models could lead to a significant improvement of their accuracy. Furthermore, we note that the physics-based models accurately capture dynamic effects at solar wind stream interaction regions, such as magnetic field compression, flow deflection, and density buildup, which the empirical scheme cannot.
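The point-by-point validation metrics named in this abstract (mean-square error and correlation coefficient) can be sketched as follows; the forecast and observation arrays here are synthetic illustrations, not CISM model output or L1 measurements:

```python
import numpy as np

def mse(forecast, observed):
    """Mean-square error between a modelled and an observed time series."""
    forecast, observed = np.asarray(forecast, float), np.asarray(observed, float)
    return float(np.mean((forecast - observed) ** 2))

def correlation(forecast, observed):
    """Pearson correlation coefficient between the two series."""
    return float(np.corrcoef(forecast, observed)[0, 1])

# Illustrative solar wind speeds in km/s (invented values)
obs = [400.0, 420.0, 500.0, 650.0, 600.0, 450.0]
fc  = [410.0, 430.0, 480.0, 700.0, 580.0, 440.0]
print(mse(fc, obs), correlation(fc, obs))
```

As the abstract notes, such point-by-point scores penalize small timing offsets heavily, which is why a model that captures the large-scale structure well can still score poorly on them.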
Abstract:
We investigated whether oxidation alters the self-aggregation of low density lipoprotein (LDL) and the inhibition of such aggregation by albumin. Incubation with copper for different durations produced mildly, moderately, and highly oxidised LDL (having, respectively, ca. 60, 300 and 160 nmol lipid hydroperoxides/mg protein, and electrophoretic mobilities 1.2, 2.6 and 4.4 times that of native LDL). The rate of flow-induced aggregation was the same for native, mildly oxidised and moderately oxidised LDL, but decreased for highly oxidised LDL. The inhibitory effect of albumin (40 mg/ml) on aggregation was reduced by mild oxidation and further reduced by moderate or severe oxidation. The net result of the two effects was that in the presence of albumin, moderately oxidised LDL had the highest rate of aggregation and native the lowest. The reduction in the anti-aggregatory effect of albumin provides a new mechanism by which LDL oxidation might enhance net aggregation in vivo. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
LDL aggregates when exposed to even moderate fluid mechanical stresses in the laboratory, yet its half-life in the circulation is 2-3 days, implying that little aggregation occurs. LDL may be protected from aggregation in vivo by components of plasma, or by a qualitative difference in flows. Previous studies have shown that HDL and albumin inhibit the aggregation induced by vortexing. Using a more reproducible method of inducing aggregation and assessing aggregation both spectrophotometrically and by sedimentation techniques, we showed that at physiological concentrations, albumin is the more effective inhibitor, and that aggregation is substantially but not completely inhibited in plasma. Heat-denatured and fatty-acid-stripped albumin were more effective inhibitors than normal albumin, supporting the idea that hydrophobic interactions are involved. Aggregation of LDL in a model reproducing several aspects of flow in the circulation was 200-fold slower, but was still inhibited by HDL and albumin, suggesting similar mechanisms are involved. Within the sensitivity of our technique, LDL aggregation did not occur in plasma exposed to these flows. Thus, as a result of the characteristics of blood flow and the inhibitory effects of plasma components, particularly albumin, LDL aggregation is unlikely to occur within the circulation.
Abstract:
A new Bayesian algorithm for retrieving surface rain rate from Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) over the ocean is presented, along with validations against estimates from the TRMM Precipitation Radar (PR). The Bayesian approach offers a rigorous basis for optimally combining multichannel observations with prior knowledge. While other rain-rate algorithms have been published that are based at least partly on Bayesian reasoning, this is believed to be the first self-contained algorithm that fully exploits Bayes’s theorem to yield not just a single rain rate, but rather a continuous posterior probability distribution of rain rate. To advance the understanding of theoretical benefits of the Bayesian approach, sensitivity analyses have been conducted based on two synthetic datasets for which the “true” conditional and prior distribution are known. Results demonstrate that even when the prior and conditional likelihoods are specified perfectly, biased retrievals may occur at high rain rates. This bias is not the result of a defect of the Bayesian formalism, but rather represents the expected outcome when the physical constraint imposed by the radiometric observations is weak owing to saturation effects. It is also suggested that both the choice of the estimators and the prior information are crucial to the retrieval. In addition, the performance of the Bayesian algorithm herein is found to be comparable to that of other benchmark algorithms in real-world applications, while having the additional advantage of providing a complete continuous posterior probability distribution of surface rain rate.
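The central idea of this abstract, a full posterior probability distribution of rain rate rather than a single retrieved value, can be illustrated with a toy discretised Bayes update. The prior, forward model, and observation below are invented for illustration and are not the TMI algorithm's actual distributions:

```python
import numpy as np

# Discretised rain-rate grid (mm/h) and an assumed exponential prior
rates = np.linspace(0.0, 20.0, 201)
prior = np.exp(-rates / 4.0)
prior /= prior.sum()

def posterior(tb_obs, tb_model, sigma=2.0):
    """Posterior over rain rate given one brightness-temperature observation.

    tb_model maps each grid rain rate to a modelled brightness temperature;
    a Gaussian observation error of width sigma (K) is assumed.
    """
    likelihood = np.exp(-0.5 * ((tb_obs - tb_model(rates)) / sigma) ** 2)
    post = prior * likelihood
    return post / post.sum()

# Toy forward model: brightness temperature saturates at high rain rates,
# which is the saturation effect that weakens the observational constraint
# (and hence biases retrievals) at the high end.
tb_model = lambda r: 180.0 + 100.0 * (1.0 - np.exp(-r / 5.0))

post = posterior(tb_obs=230.0, tb_model=tb_model)
post_mean = float(np.sum(rates * post))   # posterior-mean estimator
post_map = float(rates[np.argmax(post)])  # MAP estimator
```

Having the whole posterior in hand makes the abstract's point about estimators concrete: the posterior mean and the MAP value are both legitimate single-number summaries of the same distribution and need not agree.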
Abstract:
The Along-Track Scanning Radiometers (ATSRs) provide a long time-series of measurements suitable for the retrieval of cloud properties. This work evaluates the freely-available Global Retrieval of ATSR Cloud Parameters and Evaluation (GRAPE) dataset (version 3) created from the ATSR-2 (1995–2003) and Advanced ATSR (AATSR; 2002 onwards) records. Users are recommended to consider only retrievals flagged as high-quality, where there is a good consistency between the measurements and the retrieved state (corresponding to about 60% of converged retrievals over sea, and more than 80% over land). Cloud properties are found to be generally free of any significant spurious trends relating to satellite zenith angle. Estimates of the random error on retrieved cloud properties are suggested to be generally appropriate for optically-thick clouds, and up to a factor of two too small for optically-thin cases. The correspondence between ATSR-2 and AATSR cloud properties is high, but a relative calibration difference between the sensors of order 5–10% at 660 nm and 870 nm limits the potential of the current version of the dataset for trend analysis. As ATSR-2 is thought to have the better absolute calibration, the discussion focusses on this portion of the record. Cloud-top heights from GRAPE compare well to ground-based data at four sites, particularly for shallow clouds. Clouds forming in boundary-layer inversions are typically around 1 km too high in GRAPE due to poorly-resolved inversions in the modelled temperature profiles used. Global cloud fields are compared to satellite products derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) measurements, and a climatology of liquid water content derived from satellite microwave radiometers. In all cases the main reasons for differences are linked to differing sensitivity to, and treatment of, multi-layer cloud systems.
The correlation coefficient between GRAPE and the two MODIS products considered is generally high (greater than 0.7 for most cloud properties), except for liquid and ice cloud effective radius, which also show biases between the datasets. For liquid clouds, part of the difference is linked to choice of wavelengths used in the retrieval. Total cloud cover is slightly lower in GRAPE (0.64) than the CALIOP dataset (0.66). GRAPE underestimates liquid cloud water path relative to microwave radiometers by up to 100 g m⁻² near the Equator and overestimates by around 50 g m⁻² in the storm tracks. Finally, potential future improvements to the algorithm are outlined.
Abstract:
For decades regulators in the energy sector have focused on facilitating the maximisation of energy supply in order to meet demand through liberalisation and removal of market barriers. The debate on climate change has emphasised a new type of risk in the balance between energy demand and supply: excessively high energy demand brings about significantly negative environmental and economic impacts. This is because if a vast number of users are consuming electricity at the same time, energy suppliers have to activate dirty old power plants with higher greenhouse gas emissions and higher system costs. The creation of a Europe-wide electricity market requires a systematic investigation into the risk of aggregate peak demand. This paper draws on the e-Living Time-Use Survey database to assess the risk of aggregate peak residential electricity demand for European energy markets. Findings highlight in which countries and for what activities the risk of aggregate peak demand is greater. The discussion highlights which approaches energy regulators have started considering to convince users about the risks of consuming too much energy during peak times. These include ‘nudging’ approaches such as the roll-out of smart meters, incentives for shifting the timing of energy consumption, differentiated time-of-use tariffs, regulatory financial incentives and consumption data sharing at the community level.
Abstract:
Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill. Nor is there any agreed protocol for estimating their skill. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to un-initialized climate change projections? and (2) Is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts and comparing different ways to ascribe forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on skill of CMIP5 decadal hindcasts is not the aim of this paper. The results presented are only illustrative of the framework, which would enable such studies. 
However, broad conclusions that are beginning to emerge from the CMIP5 results include (1) Most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted to serve as a starting point to compare prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. The framework also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how to best approach forecast uncertainty.
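The deterministic comparison behind question (1) above, whether initialization adds accuracy over uninitialized projections, is often expressed as a mean-square-error skill score. A minimal sketch with invented numbers (not CMIP5 output):

```python
import numpy as np

def msss(initialized, uninitialized, observed):
    """Mean-square-error skill score of initialized hindcasts relative to
    uninitialized projections: positive values mean initialization helps."""
    init, uninit, obs = (np.asarray(a, float)
                         for a in (initialized, uninitialized, observed))
    mse_init = np.mean((init - obs) ** 2)
    mse_ref = np.mean((uninit - obs) ** 2)
    return float(1.0 - mse_init / mse_ref)

# Synthetic regional temperature anomalies (K) over five hindcast periods
obs    = [0.10, 0.25, 0.30, 0.45, 0.50]
init   = [0.12, 0.20, 0.33, 0.40, 0.55]
uninit = [0.00, 0.10, 0.20, 0.30, 0.40]
score = msss(init, uninit, obs)
```

A score above zero indicates the initialized hindcasts beat the uninitialized reference in this (made-up) region; as the abstract warns, in other regions the same score can go negative, meaning initialization made predictions worse.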
Abstract:
It is now established that certain cognitive processes such as categorisation are tightly linked to the concepts encoded in language. Recent studies have shown that bilinguals with languages that differ in their concepts may show a shift in their cognition towards the L2 pattern primarily as a function of their L2 proficiency. This research has so far focused predominantly on L2 users who started learning the L2 in childhood or early puberty. The current study asks whether similar effects can be found in adult L2 learners. English speakers of L2 Japanese were given an object classification task involving real physical objects, and an online classification task involving artificial novel objects. Results showed a shift towards the L2 pattern, indicating that some degree of cognitive plasticity exists even when a second language is acquired later in life. These results have implications for theories of L2 acquisition and bilingualism, and contribute towards our understanding of the nature of the relationship between language and cognition in the L2 user’s mind.
Abstract:
Satellite data are increasingly used to provide observation-based estimates of the effects of aerosols on climate. The Aerosol-cci project, part of the European Space Agency's Climate Change Initiative (CCI), was designed to provide essential climate variables for aerosols from satellite data. Eight algorithms, developed for the retrieval of aerosol properties using data from AATSR (4), MERIS (3) and POLDER, were evaluated to determine their suitability for climate studies. The primary result from each of these algorithms is the aerosol optical depth (AOD) at several wavelengths, together with the Ångström exponent (AE) which describes the spectral variation of the AOD for a given wavelength pair. Other aerosol parameters which are possibly retrieved from satellite observations are not considered in this paper. The AOD and AE (AE only for Level 2) were evaluated against independent collocated observations from the ground-based AERONET sun photometer network and against “reference” satellite data provided by MODIS and MISR. Tools used for the evaluation were developed for daily products as produced by the retrieval with a spatial resolution of 10 × 10 km2 (Level 2) and daily or monthly aggregates (Level 3). These tools include statistics for L2 and L3 products compared with AERONET, as well as scoring based on spatial and temporal correlations. In this paper we describe their use in a round robin (RR) evaluation of four months of data, one month for each season in 2008. The amount of data was restricted to only four months because of the large effort made to improve the algorithms, and to evaluate the improvement and current status, before larger data sets will be processed. Evaluation criteria are discussed. Results presented show the current status of the European aerosol algorithms in comparison to both AERONET and MODIS and MISR data. 
The comparison leads to a preliminary conclusion that the scores are similar, including those for the references, but the coverage of AATSR needs to be enhanced and further improvements are possible for most algorithms. None of the algorithms, including the references, outperforms all others everywhere. AATSR data can be used for the retrieval of AOD and AE over land and ocean. PARASOL and one of the MERIS algorithms have been evaluated over ocean only and both algorithms provide good results.
Abstract:
Most developers of behavior change support systems (BCSS) employ ad hoc procedures in their designs. This paper presents a novel discussion concerning how analyzing the relationship between attitude toward target behavior, current behavior, and attitude toward change or maintaining behavior can facilitate the design of BCSS. We describe the three-dimensional relationships between attitude and behavior (3D-RAB) model and demonstrate how it can be used to categorize users, based on variations in levels of cognitive dissonance. The proposed model seeks to provide a method for analyzing the user context on the persuasive systems design model, and it is evaluated using existing BCSS. We identified that although designers seem to address the various cognitive states, this is not done purposefully, or in a methodical fashion, which implies that many existing applications are targeting users not considered at the design phase. As a result of this work, it is suggested that designers apply the 3D-RAB model in order to design solutions for targeted users.
Abstract:
The present study examined how achievement goals affect retrieval-induced forgetting. Researchers have suggested that mastery-approach goals (i.e., developing one’s own competence) promote a relational encoding, whereas performance-approach goals (i.e., demonstrating one’s ability in comparison to others) promote item-specific encoding. These different encoding processes may affect the degree to which participants integrate the exemplars within a category and, as a result, we expected that retrieval-induced forgetting may be reduced or eliminated under mastery-approach goals. Three experiments were conducted using a retrieval-practice paradigm with different stimuli, where participants’ achievement goals were manipulated through brief written instructions. A meta-analysis that synthesized the results of the three experiments showed that retrieval-induced forgetting was not statistically significant in the mastery-approach goal condition, whereas it was statistically significant in the performance-approach goal condition. These results suggest that mastery-approach goals eliminate retrieval-induced forgetting, but performance-approach goals do not, demonstrating that motivation factors can influence inhibition and forgetting.
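A meta-analysis synthesizing a handful of experiments, as described above, is commonly an inverse-variance-weighted (fixed-effect) average of the per-experiment effect sizes. The effect sizes and standard errors below are invented placeholders, not the study's data:

```python
import math

def fixed_effect_meta(effects, ses):
    """Inverse-variance-weighted fixed-effect meta-analysis.

    Returns the pooled effect and its standard error."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical retrieval-induced-forgetting effects from three experiments
# under a mastery-approach condition (all near zero)
effects = [0.05, 0.10, -0.02]
ses = [0.12, 0.15, 0.10]
pooled, se = fixed_effect_meta(effects, ses)
z = pooled / se  # |z| < 1.96 would be non-significant at the 5% level
```

Pooling this way gives a single significance test across experiments, which is how a near-zero effect in one condition (here, the hypothetical mastery-approach numbers) can be distinguished from a reliably positive one in another.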
Abstract:
Different treatments that could be implemented in the home environment are evaluated with the objective of reaching a more rational and efficient use of energy. We consider that a detailed knowledge of energy-consuming behaviour is paramount for the development and implementation of new technologies, services and even policies that could result in more rational energy use. The proposed evaluation methodology is based on the development of economic experiments implemented in an experimental economics laboratory, where the behaviour of individuals when making decisions related to energy use in the domestic environment can be tested.