904 results for "Uncertainty bias"
Abstract:
By an analysis of the exchange of carriers through a semiconductor junction, a general relationship for the nonequilibrium population of the interface states in Schottky barrier diodes has been derived. Based on this relationship, an analytical expression for the ideality factor, valid over the whole range of applied bias, has been given. This quantity exhibits two different behaviours depending on the value of the applied bias with respect to a critical voltage. This voltage, which depends on the properties of the interfacial layer, constitutes a new parameter that completes the characterization of these junctions. A simple interpretation of the different behaviours of the ideality factor has been given in terms of the nonequilibrium charging properties of interface states, which in turn explains why apparently different approaches have given rise to similar results. Finally, the relevance of our results has been considered for the determination of the density of interface states from nonideal current-voltage characteristics and for the evaluation of the effects of the interfacial layer thickness in metal-insulator-semiconductor tunnelling diodes.
Abstract:
The assessment of spatial uncertainty in the prediction of nutrient losses by erosion associated with landscape models is an important tool for soil conservation planning. The purpose of this study was to evaluate the spatial and local uncertainty in predicting depletion rates of soil nutrients (P, K, Ca, and Mg) by soil erosion under green and burnt sugarcane harvesting scenarios, using sequential Gaussian simulation (SGS). A regular grid with equidistant intervals of 50 m (626 points) was established in the 200-ha study area, in Tabapuã, São Paulo, Brazil. The rate of soil depletion (SD) was calculated from the relation between the nutrient concentration in the sediments and the chemical properties of the original soil for all grid points. The data were subjected to descriptive statistical and geostatistical analysis. The mean SD rate for all nutrients was higher in the slash-and-burn scenario than in the green cane harvest scenario (Student's t-test, p<0.05). In both scenarios, nutrient loss followed the order: Ca>Mg>K>P. The SD rate was highest in areas with greater slope. Lower uncertainties were associated with the areas of higher SD and steeper slopes. Spatial uncertainties were highest in areas of transition between concave and convex landforms.
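The depletion rate described above can be sketched as a simple enrichment ratio between sediment and original-soil concentrations; the function name and the illustrative concentrations below are our assumptions, not values from the study:

```python
def soil_depletion_rate(sediment_conc, soil_conc):
    """Depletion rate as the ratio of nutrient concentration in the
    eroded sediment to that in the original soil (dimensionless)."""
    return sediment_conc / soil_conc

# Hypothetical concentrations (mg/kg) at one grid point: (sediment, soil)
rates = {nutrient: soil_depletion_rate(sed, soil)
         for nutrient, (sed, soil) in {"Ca": (420.0, 300.0),
                                       "Mg": (96.0, 80.0),
                                       "K": (55.0, 50.0),
                                       "P": (10.5, 10.0)}.items()}
# With these numbers the loss ordering is Ca > Mg > K > P
```

In this sketch the ordering Ca > Mg > K > P simply falls out of the chosen concentrations; the study derives it from measurements at the 626 grid points.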
Abstract:
Approaching or looming sounds (L-sounds) have been shown to selectively increase visual cortex excitability [Romei, V., Murray, M. M., Cappe, C., & Thut, G. Preperceptual and stimulus-selective enhancement of low-level human visual cortex excitability by sounds. Current Biology, 19, 1799-1805, 2009]. These cross-modal effects start at an early, preperceptual stage of sound processing and persist with increasing sound duration. Here, we identified individual factors contributing to cross-modal effects on visual cortex excitability and studied the persistence of effects after sound offset. To this end, we probed the impact of different L-sound velocities on phosphene perception post-sound as a function of individual auditory versus visual preference/dominance, using single-pulse TMS over the occipital pole. We found that the boosting of phosphene perception by L-sounds continued for several tens of milliseconds after the end of the L-sound and was temporally sensitive to the different L-sound profiles (velocities). In addition, we found that this effect depended on an individual's preferred sensory modality (auditory vs. visual) as determined through a divided attention task (attentional preference), but not on their simple threshold detection level per sensory modality. Whereas individuals with "visual preference" showed enhanced phosphene perception irrespective of L-sound velocity, those with "auditory preference" showed differential peaks in phosphene perception whose delays after sound offset followed the different L-sound velocity profiles. These novel findings suggest that looming signals modulate visual cortex excitability beyond sound duration, possibly to support prompt identification of and reaction to potentially dangerous approaching objects. The observed interindividual differences favor the idea that, unlike early effects, this late L-sound impact on visual cortex excitability is influenced by cross-modal attentional mechanisms rather than low-level sensory processes.
Abstract:
Hepatitis A virus (HAV), the prototype of the genus Hepatovirus, has several unique biological characteristics that distinguish it from other members of the Picornaviridae family. Among these, the need for an intact eIF4G factor for the initiation of translation results in an inability to shut down host protein synthesis by a mechanism similar to that of other picornaviruses. Consequently, HAV must inefficiently compete for the cellular translational machinery, and this may explain its poor growth in cell culture. In this context of virus/cell competition, HAV has strategically adopted a naturally highly deoptimized codon usage with respect to that of its cellular host. With the aim of optimizing its codon usage, the virus was adapted to propagate in cells with impaired protein synthesis, in order to make tRNA pools more available to the virus. A significant loss of fitness was the immediate response to the adaptation process; this fitness was, however, later recovered and was associated with a re-deoptimization rather than an optimization of codon usage, specifically in the capsid coding region. These results exclude translational selection and instead suggest fine-tuning of translation kinetics as the underlying mechanism of the codon usage bias in this specific genome region. Additionally, the results provide clear evidence of the Red Queen dynamics of evolution, since the virus has evolved to re-adapt its codon usage to the changing cellular environment in order to recover the original fitness.
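Codon usage bias of the kind discussed above is commonly quantified via relative synonymous codon usage (RSCU); the minimal sketch below (two synonym families only, with a hypothetical toy sequence) is our illustration, not the study's method:

```python
from collections import Counter

# Two illustrative synonymous codon families (subset of the genetic code)
SYNONYMS = {"Leu2": ["TTA", "TTG"], "Phe": ["TTT", "TTC"]}

def rscu(seq):
    """RSCU = observed codon count / mean count within its synonym family.
    Values below 1 mark under-used (deoptimized) codons, above 1 over-used."""
    codons = Counter(seq[i:i + 3] for i in range(0, len(seq) - 2, 3))
    out = {}
    for family in SYNONYMS.values():
        mean = sum(codons[c] for c in family) / len(family)
        for c in family:
            out[c] = codons[c] / mean if mean else 0.0
    return out

usage = rscu("TTATTATTATTGTTTTTC")  # 3x TTA, 1x TTG, 1x TTT, 1x TTC
# TTA is over-represented (RSCU 1.5) and TTG under-represented (RSCU 0.5)
```

A deoptimized gene, in these terms, is one that preferentially uses the codons that are rare in the host's own highly expressed genes.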
Abstract:
The pursuit of high response rates to minimise the threat of nonresponse bias continues to dominate decisions about resource allocation in survey research. Yet a growing body of research has begun to question this practice. In this study, we use previously unavailable data from a new sampling frame based on population registers to assess the value of different methods designed to increase response rates on the European Social Survey in Switzerland. Using sampling data provides information about both respondents and nonrespondents, making it possible to examine how changes in response rates resulting from the use of different fieldwork methods relate to changes in the composition and representativeness of the responding sample. We compute an R-indicator to assess representativity with respect to the sampling register variables, and find little improvement in the sample composition as response rates increase. We then examine the impact of response rate increases on the risk of nonresponse bias based on Maximal Absolute Bias (MAB) and coefficients of variation between subgroup response rates, alongside the associated costs of different types of fieldwork effort. The results show that increases in response rate help to reduce MAB, while only small improvements to sample representativity are gained by varying the type of effort. These findings lend further support to research that has called into question the value of extensive investment in procedures aimed at reaching response rate targets, and underline the need for more tailored fieldwork strategies aimed both at reducing survey costs and at minimising the risk of bias.
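The R-indicator and MAB mentioned above have standard closed forms: R(rho) = 1 - 2*S(rho), where S is the standard deviation of the estimated response propensities, and MAB = (1 - R)/(2*r_bar), where r_bar is the mean propensity. A minimal sketch with hypothetical subgroup propensities (the numbers are ours, not the Swiss ESS data):

```python
import statistics

def r_indicator(propensities):
    """R-indicator: R = 1 - 2 * S(rho), S being the standard deviation of the
    estimated response propensities. R = 1 means a fully representative sample."""
    return 1.0 - 2.0 * statistics.pstdev(propensities)

def maximal_absolute_bias(propensities):
    """Upper bound on nonresponse bias: MAB = (1 - R) / (2 * mean propensity)."""
    return (1.0 - r_indicator(propensities)) / (2.0 * statistics.mean(propensities))

# Hypothetical subgroup response propensities before/after extra fieldwork effort
before = [0.35, 0.50, 0.65]
after = [0.55, 0.65, 0.75]  # response rates rise and the spread narrows
mab_before = maximal_absolute_bias(before)
mab_after = maximal_absolute_bias(after)
# Higher, more even propensities shrink the maximal absolute bias
```

The sketch mirrors the paper's finding: raising the overall response rate lowers the MAB bound even when the relative spread of subgroup propensities, and hence representativity, changes little.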
Abstract:
As the list of states adopting the HWTD continues to grow, there is a need to evaluate how results are utilized. AASHTO T 324 does not standardize the analysis and reporting of test results. Furthermore, processing and reporting of the results among manufacturers is not uniform. This is partly due to the variation among agency reporting requirements: some include only the midpoint rut depth, while others include the average across the entire length of the wheel track. To eliminate bias in reporting, statistical analysis was performed on over 150 test runs on gyratory specimens. Measurement location was found to be a source of significant variation in the HWTD, likely due to the nonuniform wheel speed across the specimen, the geometry of the specimen, and the air void profile. Eliminating this source of bias when reporting results is feasible, though it depends upon the average rut depth at the final pass. When reporting rut depth at the final pass, it is suggested that, for poor-performing samples, the measurement locations near the interface of the adjoining gyratory specimens be averaged; this is necessary due to the wheel lipping on the mold. For all other samples it is reasonable to eliminate only the 3 locations furthest from the gear house. For multi-wheel units, wheel side was also found to be significant for poor- and good-performing samples. After eliminating the suggested measurements from the analysis, the wheel was no longer a significant source of variation.
Abstract:
In the vast majority of bottom-up proteomics studies, protein digestion is performed using only mammalian trypsin. Although it is clearly the best enzyme available, the sole use of trypsin rarely leads to complete sequence coverage, even for abundant proteins. It is commonly assumed that this is because many tryptic peptides are either too short or too long to be identified by RPLC-MS/MS. We show through in silico analysis that 20-30% of the total sequence of three proteomes (Schizosaccharomyces pombe, Saccharomyces cerevisiae, and Homo sapiens) is expected to be covered by Large post-Trypsin Peptides (LpTPs) with M(r) above 3000 Da. We then established a size exclusion chromatography method to fractionate complex yeast tryptic digests into pools of peptides based on size. We found that secondary digestion of LpTPs followed by LC-MS/MS analysis leads to a significant increase in identified proteins and a 32-50% relative increase in average sequence coverage compared to trypsin digestion alone. Application of the developed strategy to the phosphoproteomes of S. pombe and of a human cell line identified a significant fraction of novel phosphosites. Overall, our data indicate that specific targeting of LpTPs can complement standard bottom-up workflows to reveal a largely neglected portion of the proteome.
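The in silico analysis described above can be sketched as a simple tryptic digest (cleave after K or R, but not before P) with a residue-length cutoff standing in as a rough proxy for M(r) 3000 (about 27 residues at an average ~111 Da per residue); the helper names and the cutoff are our assumptions:

```python
import re

def tryptic_peptides(protein):
    """In-silico trypsin digest: cleave C-terminal to K or R, except before P."""
    return [p for p in re.split(r"(?<=[KR])(?!P)", protein) if p]

def large_peptide_fraction(protein, min_len=27):
    """Fraction of the sequence residing in peptides of >= min_len residues;
    ~27 residues is a crude length proxy for M(r) 3000."""
    peptides = tryptic_peptides(protein)
    return sum(len(p) for p in peptides if len(p) >= min_len) / len(protein)

peps = tryptic_peptides("MKRPGK")  # → ["MK", "RPGK"]; the R-P bond is not cleaved
```

Run over a whole proteome FASTA, the second function gives the proteome-level fraction that, per the abstract, comes out at 20-30% for the three organisms studied.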
Abstract:
Total disc replacement (TDR) clinical success has been reported to be related to the residual motion of the operated level. Thus, accurate measurement of TDR range of motion (ROM) is of utmost importance. One commonly used tool for measuring ROM is the Oxford Cobbometer. Little is known, however, about its accuracy (precision and bias) in measuring TDR angles. The aim of this study was to assess the ability of the Cobbometer to accurately measure radiographic TDR angles. An anatomically accurate synthetic L4-L5 motion segment was instrumented with a CHARITE artificial disc. The TDR angle and anatomical position between L4 and L5 were fixed to prohibit motion while the motion segment was radiographically imaged at various degrees of rotation and elevation, representing a sample of possible patient placement positions. An experienced observer made ten readings of the TDR angle using the Cobbometer at each position. The Cobbometer readings were analyzed to determine measurement accuracy at each position. Furthermore, analysis of variance was used to study rotation and elevation of the motion segment as treatment factors. Cobbometer TDR angle measurements were most accurate (highest precision and lowest bias) at the centered position (95.5%), which placed the TDR directly in line with the x-ray beam source without any rotation. In contrast, the lowest accuracy (75.2%) was observed in the most rotated and off-centered view. A difference as high as 4 degrees between readings at any individual position, and as high as 6 degrees between all the positions, was observed. Furthermore, the Cobbometer was unable to detect the expected trend in TDR angle projection with changing position. Although the Cobbometer has been reported to be reliable in different clinical applications, it lacks the accuracy needed to measure TDR angles and ROM. More accurate ROM measurement methods need to be developed to help surgeons and researchers assess the radiological success of TDRs.
Abstract:
BACKGROUND: Practicing physicians are faced with many medical decisions daily. These are mainly influenced by personal experience but should also consider patient preferences and the scientific evidence reflected by a constantly increasing number of medical publications and guidelines. With the objective of optimal medical treatment, the concept of evidence-based medicine is founded on these three aspects. It should be considered that, without knowledge of the methodological background, there is a high risk of misinterpreting evidence, leading to medical errors and adverse effects. OBJECTIVES: This article explains the concept of systematic error (bias) and its importance. Causes and effects, as well as methods to minimize bias, are discussed. This information should impart a deeper understanding, leading to a better assessment of studies and implementation of their recommendations in daily medical practice. CONCLUSION: Developed by the Cochrane Collaboration, the risk of bias (RoB) tool is an instrument for assessing the potential for bias in controlled trials. Easy handling, short processing time, high transparency of judgements, and an easily comprehensible graphical presentation of findings are among its strengths. The German translation of the RoB tool is attached to this article; this should facilitate its applicability for non-experts and, moreover, support evidence-based medical decision-making.
Abstract:
It is estimated that around 230 people die each year due to radon (222Rn) exposure in Switzerland. 222Rn occurs mainly in closed environments such as buildings and originates primarily from the subjacent ground. It therefore depends strongly on geology and shows substantial regional variations. Correct identification of these regional variations would permit a substantial reduction of the population's 222Rn exposure through appropriate construction of new buildings and mitigation of existing ones. Prediction of indoor 222Rn concentrations (IRC) and identification of 222Rn-prone areas is, however, difficult, since IRC depend on a variety of variables such as building characteristics, meteorology, geology, and anthropogenic factors. The present work aims at the development of predictive models and the understanding of IRC in Switzerland, taking into account a maximum of information in order to minimize the prediction uncertainty. The predictive maps will be used as a decision-support tool for 222Rn risk management. The construction of these models is based on different data-driven statistical methods, in combination with geographical information systems (GIS). In a first phase, we performed univariate analysis of IRC for different variables, namely detector type, building category, foundation, year of construction, average outdoor temperature during measurement, altitude, and lithology. All variables showed significant associations with IRC. Buildings constructed after 1900 showed significantly lower IRC compared to earlier constructions, and we observed a further drop in IRC after 1970. In addition, we found an association of IRC with altitude. With regard to lithology, we observed the lowest IRC in sedimentary rocks (excluding carbonates) and sediments, and the highest IRC in the Jura carbonates and igneous rock. The IRC data were systematically analyzed for potential bias due to spatially unbalanced sampling of measurements.
In order to facilitate the modeling and the interpretation of the influence of geology on IRC, we developed an algorithm based on k-medoids clustering which permits the definition of geological classes that are coherent in terms of IRC. We performed a soil gas 222Rn concentration (SRC) measurement campaign in order to determine the predictive power of SRC with respect to IRC, and found that the use of SRC for IRC prediction is limited. The second part of the project was dedicated to predictive mapping of IRC using models which take into account the multidimensionality of the process of 222Rn entry into buildings. We used kernel regression and ensemble regression trees for this purpose and could explain up to 33% of the variance of the log-transformed IRC across Switzerland, a good performance compared to former attempts at IRC modeling in Switzerland. As predictor variables we considered geographical coordinates, altitude, outdoor temperature, building type, foundation, year of construction, and detector type. Ensemble regression trees such as random forests make it possible to determine the role of each IRC predictor in a multidimensional setting. We found spatial information such as geology, altitude, and coordinates to have a stronger influence on IRC than building-related variables such as foundation type, building type, and year of construction. Based on kernel estimation, we developed an approach to determine the local probability of IRC exceeding 300 Bq/m3. In addition, we developed a confidence index in order to provide an estimate of the uncertainty of the map. All methods allow easy creation of tailor-made maps for different building characteristics. Our work is an essential step towards a 222Rn risk assessment which accounts at the same time for different architectural situations as well as geological and geographical conditions. For the communication of 222Rn hazard to the population, we recommend making use of the probability map based on kernel estimation.
The communication of 222Rn hazard could, for example, be implemented via a web interface where users specify the characteristics and coordinates of their home in order to obtain the probability of exceeding a given IRC, with a corresponding index of confidence. Taking into account the health effects of 222Rn, our results have the potential to substantially improve the estimation of the effective dose from 222Rn delivered to the Swiss population.
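The local exceedance probability mentioned above, P(IRC > 300 Bq/m3), can be sketched under a lognormal assumption for indoor radon; the lognormal form and the parameter values below are our illustration, not the kernel estimates from the study:

```python
import math

def exceedance_probability(mu_log, sigma_log, threshold=300.0):
    """P(IRC > threshold) under a lognormal model of indoor radon:
    1 - Phi((ln t - mu) / sigma), with mu and sigma on the log scale."""
    z = (math.log(threshold) - mu_log) / sigma_log
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))

# Hypothetical local parameters, e.g. as a kernel fit might return them:
# median IRC of 75 Bq/m3 and a log-scale spread of 0.9
p = exceedance_probability(mu_log=math.log(75.0), sigma_log=0.9)
```

A web interface of the kind described would evaluate such a probability at the user's coordinates and building characteristics, alongside the confidence index.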
Abstract:
Winter weather in Iowa is often unpredictable and can have an adverse impact on traffic flow. The Iowa Department of Transportation (Iowa DOT) attempts to lessen the impact of winter weather events on traffic speeds with various proactive maintenance operations. In order to assess the performance of these maintenance operations, it would be beneficial to develop a model for expected speed reduction based on weather variables and normal maintenance schedules. Such a model would allow the Iowa DOT to identify situations in which speed reductions were much greater than or less than would be expected for a given set of storm conditions, and make modifications to improve efficiency and effectiveness. The objective of this work was to predict speed changes relative to baseline speed under normal conditions, based on nominal maintenance schedules and winter weather covariates (snow type, temperature, and wind speed), as measured by roadside weather stations. This allows for an assessment of the impact of winter weather covariates on traffic speed changes, and estimation of the effect of regular maintenance passes. The researchers chose events from Adair County, Iowa, and fit a linear model incorporating the covariates mentioned previously. A Bayesian analysis was conducted to estimate the values of the parameters of this model. Specifically, the analysis produces a distribution for the parameter value that represents the impact of maintenance on traffic speeds. The effect of maintenance is not a constant, but rather a value that the researchers have some uncertainty about, and this distribution represents what they know about the effects of maintenance. Similarly, examinations of the distributions for the effects of winter weather covariates are possible. Plots of observed and expected traffic speed changes allow a visual assessment of the model fit. Future work involves expanding this model to incorporate many events at multiple locations.
This would allow for assessment of the impact of winter weather maintenance across various situations, and eventually identify locations and times in which maintenance could be improved.
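The Bayesian treatment of the maintenance effect described above can be sketched with a simplified conjugate normal-normal update for a single parameter (the full study fits a linear model with weather covariates); the function and the speed-recovery numbers below are hypothetical:

```python
def posterior_normal(prior_mean, prior_var, obs, obs_var):
    """Conjugate normal-normal update for one effect parameter (e.g., the
    change in traffic speed attributable to a maintenance pass), assuming
    a known observation variance. Returns the posterior mean and variance."""
    precision = 1.0 / prior_var + len(obs) / obs_var
    mean = (prior_mean / prior_var + sum(obs) / obs_var) / precision
    return mean, 1.0 / precision

# Hypothetical observed speed recoveries (mph) after maintenance passes,
# with a vague prior centered at zero effect
post_mean, post_var = posterior_normal(0.0, 25.0, [3.1, 2.4, 4.0, 2.8], 4.0)
# The posterior distribution, not a point value, expresses the remaining
# uncertainty about the maintenance effect
```

This captures the paper's point that the maintenance effect is reported as a distribution: the data pull the posterior mean toward the observed recoveries while the posterior variance shrinks below the prior's.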
Abstract:
We study the determinants of political myopia in a rational model of electoral accountability where the key elements are informational frictions and uncertainty. We build a framework where political ability is ex-ante unknown and policy choices are not perfectly observable. On the one hand, elections improve accountability and allow voters to keep well-performing incumbents. On the other, politicians invest too little in costly policies with future returns in an attempt to signal high ability and increase their reelection probability. Contrary to the conventional wisdom, uncertainty reduces political myopia and may, under some conditions, increase social welfare. We use the model to study how political rewards can be set so as to maximise social welfare, and the desirability of imposing a one-term limit on governments. The predictions of our theory are consistent with a number of stylised facts and with a new empirical observation documented in this paper: aggregate uncertainty, measured by economic volatility, is associated with better fiscal discipline in a panel of 20 OECD countries.