51 results for "analysis of performance"
Abstract:
In the literature on achievement goals, performance-approach goals (striving to do better than others) and performance-avoidance goals (striving to avoid doing worse than others) tend to exhibit a moderate to high correlation, raising questions about whether the 2 goals represent distinct constructs. In the current article, we sought to examine the separability of these 2 goals using a broad factor-analytic approach that attended to issues that have been overlooked or underexamined in prior research. Five studies provided strong evidence for the separation of these 2 goal constructs: Separation was observed not only with exploratory factor analysis across different age groups and countries (Studies 1a and 1b) but also with change analysis (Study 2), ipsative factor analysis (Study 3), within-person analysis (Study 4), and behavioral genetics analysis (Study 5). We conclude by discussing the implications of the present research for the achievement goal literature, as well as the psychological literature in general.
Abstract:
The Tropical Rainfall Measuring Mission 3B42 precipitation estimates are widely used in tropical regions for hydrometeorological research. Recently, version 7 of the product was released. Major revisions to the algorithm involve the radar reflectivity-rainfall rate relationship, surface clutter detection over high terrain, a new reference database for the passive microwave algorithm, and a higher quality gauge analysis product for monthly bias correction. To assess the impacts of the improved algorithm, we compare the version 7 and the older version 6 products with data from 263 rain gauges in and around the northern Peruvian Andes. The region covers humid tropical rainforest, tropical mountains, and arid to humid coastal plains. We find that the version 7 product has a significantly lower bias and an improved representation of the rainfall distribution. We further evaluated the performance of the version 6 and 7 products as forcing data for hydrological modelling, by comparing the simulated and observed daily streamflow in 9 nested Amazon river basins. We find that the improvement in the precipitation estimation algorithm translates to an increase in the model Nash-Sutcliffe efficiency, and a reduction in the percent bias between the observed and simulated flows by 30 to 95%.
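The two evaluation scores named in this abstract are simple to compute. A minimal sketch (function names and test values are illustrative, not the paper's data):

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1.0 is a perfect fit; values <= 0 mean the simulation is no better
    than predicting the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def percent_bias(obs, sim):
    """Percent bias between observed and simulated flows; positive
    values indicate the simulation underestimates total flow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)
```

Comparing these two scores for model runs forced by the version 6 and version 7 products is the kind of evaluation the abstract describes.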
Abstract:
The network paradigm has been highly influential in spatial analysis in the globalisation era. As economies across the world have become increasingly integrated, so-called global cities have come to play a growing role as central nodes in the networked global economy. The idea that a city’s position in global networks benefits its economic performance has resulted in a competitive policy focus on promoting the economic growth of cities by improving their network connectivity. However, in spite of the attention being given to boosting city connectivity little is known about whether this directly translates to improved city economic performance and, if so, how well connected a city needs to be in order to benefit from this. In this paper we test the relationship between network connectivity and economic performance between 2000 and 2008 for cities with over 500,000 inhabitants in Europe and the USA to inform European policy.
Abstract:
There are now considerable expectations that semi-distributed models are useful tools for supporting catchment water quality management. However, insufficient attention has been given to evaluating the uncertainties inherent to this type of model, especially those associated with the spatial disaggregation of the catchment. The Integrated Nitrogen in Catchments model (INCA) is subjected to an extensive regionalised sensitivity analysis in application to the River Kennet, part of the groundwater-dominated upper Thames catchment, UK. The main results are: (1) model output was generally insensitive to land-phase parameters, very sensitive to groundwater parameters, including initial conditions, and significantly sensitive to in-river parameters; (2) INCA was able to produce good fits simultaneously to the available flow, nitrate and ammonium in-river data sets; (3) representing parameters as heterogeneous over the catchment (206 calibrated parameters) rather than homogeneous (24 calibrated parameters) produced a significant improvement in fit to nitrate but no significant improvement to flow and caused a deterioration in ammonium performance; (4) the analysis indicated that calibrating the flow-related parameters first, then calibrating the remaining parameters (as opposed to calibrating all parameters together) was not a sensible strategy in this case; (5) even the parameters to which the model output was most sensitive suffered from high uncertainty due to spatial inconsistencies in the estimated optimum values, parameter equifinality and the sampling error associated with the calibration method; (6) soil and groundwater nutrient and flow data are needed to reduce uncertainty in initial conditions, residence times and nitrogen transformation parameters, and long-term historic data are needed so that key responses to changes in land-use management can be assimilated.
The results indicate the general difficulty of reconciling the questions which catchment nutrient models are expected to answer with typically limited data sets and limited knowledge about suitable model structures. The results demonstrate the importance of analysing semi-distributed model uncertainties prior to model application, and illustrate the value and limitations of using Monte Carlo-based methods for doing so. (c) 2005 Elsevier B.V. All rights reserved.
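Regionalised sensitivity analysis of the kind applied to INCA here partitions Monte Carlo parameter samples into "behavioural" and "non-behavioural" sets and compares the marginal distribution of each parameter across the two sets. A minimal sketch with a hypothetical two-parameter model (the model, acceptance band and sample size are invented, not the INCA application):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

# Hypothetical model: output is dominated by p2, with only a weak
# dependence on p1 (stand-ins for a sensitive groundwater parameter
# versus an insensitive land-phase parameter).
def model(p1, p2):
    return 0.1 * p1 + 2.0 * p2

n = 5000
p1 = rng.uniform(0.0, 1.0, n)
p2 = rng.uniform(0.0, 1.0, n)
out = model(p1, p2)

# "Behavioural" runs fall inside an (arbitrary) acceptance band.
behavioural = (out > 1.0) & (out < 1.6)

# Sensitivity = Kolmogorov-Smirnov distance between the behavioural and
# non-behavioural marginal distributions of each parameter.
d1 = ks_2samp(p1[behavioural], p1[~behavioural]).statistic
d2 = ks_2samp(p2[behavioural], p2[~behavioural]).statistic
# d2 >> d1: the output is sensitive to p2 and insensitive to p1.
```

A large KS distance means the acceptance criterion constrains that parameter strongly, i.e. the output is sensitive to it; a small distance flags the kind of insensitive land-phase parameters reported in result (1).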
Abstract:
Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared-difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were correlated poorly with the observations. At the scale of the trend, the predictions and observations shared a common surface. 
The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well-suited to field-scale management of soil nitrogen, but suited poorly to management at finer spatial scales. This information was not apparent with a non-spatial validation. (c) 2007 Elsevier B.V. All rights reserved.
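The scale-dependence this abstract describes can be illustrated without the full REML machinery: aggregating observations and predictions to a coarser support and recomputing their correlation shows agreement improving with scale when the two share a trend but not the fine-scale variation. All values below are synthetic and illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)

def block_average(field, b):
    """Average a 1-D transect over non-overlapping blocks of length b."""
    n = (len(field) // b) * b
    return field[:n].reshape(-1, b).mean(axis=1)

# Synthetic transect: observations and predictions share a coarse trend,
# but the fine-scale variation of the observations is absent from the
# predictions (the situation the abstract describes).
x = np.linspace(0.0, 1.0, 240)
trend = 40.0 + 20.0 * x                      # mineral-N, arbitrary units
obs = trend + rng.normal(0.0, 8.0, x.size)   # trend + fine-scale noise
pred = trend + rng.normal(0.0, 2.0, x.size)  # trend + little noise

r_fine = np.corrcoef(obs, pred)[0, 1]                  # point support
r_coarse = np.corrcoef(block_average(obs, 40),
                       block_average(pred, 40))[0, 1]  # coarse support
# r_coarse > r_fine: the model and data agree on the trend, not on the
# fine-scale variation, so validation statistics depend on scale.
```

This is the behaviour the linear mixed model formalises: good agreement at the scale of the trend, poor correlation at the fine scale.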
Abstract:
Water quality models generally require a relatively large number of parameters to define their functional relationships, and since prior information on parameter values is limited, these are commonly defined by fitting the model to observed data. In this paper, the identifiability of water quality parameters and the associated uncertainty in model simulations are investigated. A modification to the water quality model 'Quality Simulation Along River Systems' is presented in which an improved flow component is used within the existing water quality model framework. The performance of the model is evaluated in an application to the Bedford Ouse river, UK, using a Monte-Carlo analysis toolbox. The essential framework of the model proved to be sound, and calibration and validation performance was generally good. However, some supposedly important water quality parameters associated with algal activity were found to be completely insensitive, and hence non-identifiable, within the model structure, while others (nitrification and sedimentation) had optimum values at or close to zero, indicating that those processes were not detectable from the data set examined. (C) 2003 Elsevier Science B.V. All rights reserved.
Abstract:
Discrepancies among recent global Earth albedo anomaly data obtained from climate models, space observations and ground observations call for a new and better Earth reflectance measurement technique. The SALEX (Space Ashen Light Explorer) instrument is a space-based visible and IR instrument for precise estimation of the global Earth albedo, which measures the ashen light reflected off the shadowed side of the Moon from low Earth orbit. The instrument consists of a conventional 2-mirror telescope feeding a pair of channels: a 3-mirror visible imager and an IR bolometer. The performance of this unique multi-channel optical system is sensitive to stray light contamination because of the complex optical train, which incorporates several reflecting and refracting elements, associated mounts and the payload's mechanical enclosure. This is further aggravated by the very bright and extended observation target (i.e. the Moon). In this paper, we report the details of an extensive stray light analysis, including ghosts and cross-talk, leading to the optimum set of stray light precautions for the highest attainable signal-to-noise ratio.
Abstract:
A limitation of small-scale dairy systems in central Mexico is that traditional feeding strategies are less effective when nutrient availability varies through the year. In the present work, a linear programming (LP) model that maximizes income over feed cost was developed and used to evaluate two strategies: the traditional one used by small-scale dairy producers in Michoacan State, based on fresh lucerne, maize grain and maize straw; and an alternative strategy proposed by the LP model, based on ryegrass hay, maize silage and maize grain. Biological and economic efficiency were evaluated for both strategies. Results obtained with the traditional strategy agree with previously published work. The alternative strategy did not improve upon the performance of the traditional strategy because of the low metabolizable protein content of the maize silage considered by the model. However, the study recommends improving forage quality to increase the efficiency of small-scale dairy systems, rather than relying on concentrate supplementation.
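On the cost side, an income-over-feed-cost LP of the kind described reduces to a standard diet problem: choose feed quantities that meet nutrient requirements at minimum cost. A sketch with scipy.optimize.linprog, where every number (feed compositions, prices, requirements) is invented for illustration and none comes from the study:

```python
from scipy.optimize import linprog

# Hypothetical feeds: fresh lucerne, maize grain, maize straw (kg DM/day).
cost    = [0.12, 0.20, 0.05]   # $/kg DM, illustrative prices
energy  = [9.0, 13.5, 6.5]     # MJ metabolizable energy per kg DM
protein = [180.0, 90.0, 35.0]  # g metabolizable protein per kg DM

# Invented daily requirements for one cow: >= 160 MJ ME and >= 1500 g MP.
# linprog minimizes subject to A_ub @ x <= b_ub, so ">=" rows are negated.
A_ub = [[-e for e in energy], [-p for p in protein]]
b_ub = [-160.0, -1500.0]
bounds = [(0.0, 12.0)] * 3     # cap each feed at 12 kg DM/day

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
# res.x is the least-cost ration; res.fun is its daily cost.
```

Maximizing income over feed cost at a given milk yield adds a fixed income term to the objective, so it is equivalent to minimizing feed cost subject to the production requirements.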
Abstract:
Background: The objective was to evaluate the efficacy and tolerability of donepezil (5 and 10 mg/day) compared with placebo in alleviating manifestations of mild to moderate Alzheimer's disease (AD). Method: A systematic review of individual patient data from Phase II and III double-blind, randomised, placebo-controlled studies of up to 24 weeks and completed by 20 December 1999. The main outcome measures were the ADAS-cog, the CIBIC-plus, and reports of adverse events. Results: A total of 2376 patients from ten trials were randomised to either donepezil 5 mg/day (n = 821), 10 mg/day (n = 662) or placebo (n = 893). Cognitive performance was better in patients receiving donepezil than in patients receiving placebo. At 12 weeks the differences in ADAS-cog scores were 5 mg/day-placebo: -2.1 [95% confidence interval (CI), -2.6 to -1.6; p < 0.001], 10 mg/day-placebo: -2.5 (-3.1 to -2.0; p < 0.001). The corresponding results at 24 weeks were -2.0 (-2.7 to -1.3; p < 0.001) and -3.1 (-3.9 to -2.4; p < 0.001). The difference between the 5 and 10 mg/day doses was significant at 24 weeks (p = 0.005). The odds ratios (OR) of improvement on the CIBIC-plus at 12 weeks were: 5 mg/day-placebo 1.8 (1.5 to 2.1; p < 0.001), 10 mg/day-placebo 1.9 (1.5 to 2.4; p < 0.001). The corresponding values at 24 weeks were 1.9 (1.5 to 2.4; p = 0.001) and 2.1 (1.6 to 2.8; p < 0.001). Donepezil was well tolerated; adverse events were cholinergic in nature and generally of mild severity and brief in duration. Conclusion: Donepezil (5 and 10 mg/day) provides meaningful benefits in alleviating deficits in cognitive and clinician-rated global function in AD patients relative to placebo. Increased improvements in cognition were indicated for the higher dose. Copyright © 2004 John Wiley & Sons, Ltd.
Abstract:
Polarized epithelial cells are responsible for the vectorial transport of solutes and have a key role in maintaining body fluid and electrolyte homeostasis. Such cells contain structurally and functionally distinct plasma membrane domains. Brush border and basolateral membranes of renal and intestinal epithelial cells can be separated using a number of different separation techniques, which allow their different transport functions and receptor expressions to be studied. In this communication, we report a proteomic analysis of these two membrane segments, apical and basolateral, obtained from the rat renal cortex isolated by two different methods: differential centrifugation and free-flow electrophoresis. The study was aimed at assessing the nature of the major proteins isolated by these two separation techniques. Two analytical strategies were used: separation by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) at the protein level or by cation-exchange high-performance liquid chromatography (HPLC) after proteolysis (i.e., at the peptide level). Proteolytic peptides derived from the proteins present in gel pieces or from HPLC fractions after proteolysis were sequenced by on-line liquid chromatography-tandem mass spectrometry (LC-MS/MS). Several hundred proteins were identified in each membrane section. In addition to proteins known to be located at the apical and basolateral membranes, several novel proteins were also identified. In particular, a number of proteins with putative roles in signal transduction were identified in both membranes. To our knowledge, this is the first reported study to try and characterize the membrane proteome of polarized epithelial cells and to provide a data set of the most abundant proteins present in renal proximal tubule cell membranes.
Abstract:
Phenolic compounds in wastewaters are difficult to treat using conventional biological techniques such as activated sludge processes, because of their bio-toxic and recalcitrant properties and the high volumes released from various chemical, pharmaceutical and other industries. In the current work, a modified heterogeneous advanced Fenton process (AFP) is presented as a novel methodology for the treatment of phenolic wastewater. The modified AFP, which combines hydrodynamic cavitation generated using a liquid whistle reactor with the AFP, is a promising technology for wastewaters containing high organic content. The presence of hydrodynamic cavitation in the treatment scheme intensifies the Fenton process by generating additional free radicals. Also, the turbulence produced during the hydrodynamic cavitation process increases mass transfer rates and provides better contact between the pseudo-catalyst surfaces and the reactants. A multivariate design of experiments has been used to ascertain the influence of hydrogen peroxide dosage and iron catalyst loading on the oxidation performance of the modified AFP. Higher TOC removal rates were achieved with increased concentrations of hydrogen peroxide. In contrast, the effect of catalyst loading on the TOC removal rate was less important under the conditions used in this work, although there is an optimum value of this parameter. The concentration of iron species in the reaction solution was measured at 105 min, and its relationship with the catalyst loading and hydrogen peroxide level is presented.
Abstract:
This paper analyzes the performance of enhanced relay-enabled distributed coordination function (ErDCF) for wireless ad hoc networks under transmission errors. The idea of ErDCF is to use high data rate nodes to work as relays for the low data rate nodes. ErDCF achieves higher throughput and reduces energy consumption compared to IEEE 802.11 distributed coordination function (DCF) in an ideal channel environment. However, there is a possibility that this expected gain may decrease in the presence of transmission errors. In this work, we modify the saturation throughput model of ErDCF to accurately reflect the impact of transmission errors under different rate combinations. It turns out that the throughput gain of ErDCF can still be maintained under reasonable link quality and distance.
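Saturation-throughput analyses of DCF variants such as ErDCF typically build on Bianchi's Markov-chain model, whose core is a fixed point between the per-slot transmission probability tau and the conditional collision probability p. A sketch of that fixed point for plain DCF only (the relay stages of ErDCF and the transmission-error terms are not modelled here; W and m are the usual contention-window parameters):

```python
def solve_tau(n, W=32, m=5, iters=100):
    """Solve Bianchi's saturation fixed point for n stations by bisection.

    tau(p) = 2(1-2p) / ((1-2p)(W+1) + p*W*(1-(2p)^m))
    p(tau) = 1 - (1-tau)^(n-1)

    W is the minimum contention window, m the maximum backoff stage.
    """
    def f(tau):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        tau_of_p = 2.0 * (1.0 - 2.0 * p) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m))
        return tau - tau_of_p

    lo, hi = 1e-9, 0.999  # f(lo) < 0 and f(hi) > 0 bracket the root
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return lo
```

Throughput then follows from tau and the slot durations; modelling transmission errors, as the paper does, further discounts the success probability by the channel error rate.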
Abstract:
In this paper we present an error analysis for a Monte Carlo algorithm for evaluating bilinear forms of matrix powers. An Almost Optimal Monte Carlo (MAO) algorithm for solving this problem is formulated. Results for the structure of the probability error are presented, and the construction of robust and interpolation Monte Carlo algorithms is discussed. Results are presented comparing the performance of the Monte Carlo algorithm with that of a corresponding deterministic algorithm. The two algorithms are tested on a well balanced matrix, and then the effects of perturbing this matrix, by small and large amounts, are studied.
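A bilinear form v^T A^k u can be estimated by weighted random walks. The sketch below draws the start index and transitions with probabilities proportional to the absolute values of the entries, the importance-sampling choice underlying MAO-style algorithms; the matrix, vectors and walk count are illustrative, not the paper's test problems:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_bilinear_form(A, v, u, k, n_walks=20000):
    """Monte Carlo estimate of v^T A^k u via weighted random walks.

    The start index i is drawn with probability |v_i| / sum|v|; each step
    moves i -> j with probability |a_ij| / sum_j |a_ij|, and the walk
    weight carries the ratio of true entries to sampling probabilities,
    so each walk's score is an unbiased estimate of the bilinear form.
    """
    n = A.shape[0]
    p0 = np.abs(v) / np.abs(v).sum()
    P = np.abs(A) / np.abs(A).sum(axis=1, keepdims=True)
    total = 0.0
    for _ in range(n_walks):
        i = rng.choice(n, p=p0)
        w = v[i] / p0[i]
        for _ in range(k):
            j = rng.choice(n, p=P[i])
            w *= A[i, j] / P[i, j]
            i = j
        total += w * u[i]
    return total / n_walks

A = np.array([[0.5, 0.25], [0.25, 0.5]])
v = np.array([1.0, 1.0])
u = np.array([1.0, 1.0])
est = mc_bilinear_form(A, v, u, k=2)  # exact value: v^T A^2 u = 1.125
```

For a non-negative matrix this probability choice makes the walk weights nearly constant, which is what keeps the probability error small; perturbing the matrix away from this well balanced case, as the paper studies, inflates the variance.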
Abstract:
This paper analyzes the delay performance of the Enhanced relay-enabled Distributed Coordination Function (ErDCF) for wireless ad hoc networks under ideal conditions and in the presence of transmission errors. Relays are nodes capable of supporting high data rates for other, low data rate nodes. In an ideal channel, ErDCF achieves higher throughput and reduced energy consumption compared to the IEEE 802.11 Distributed Coordination Function (DCF), and this gain is still maintained in the presence of errors. Relays are also expected to reduce delay; however, the delay behavior of ErDCF under transmission errors has not been characterized. In this work, we present the impact of transmission errors on delay. It turns out that under transmission errors of sufficient magnitude to increase packet drops, packet delay is reduced. This is due to an increase in the probability of failure: the packet drop time increases as a result, reflecting the throughput degradation.