878 results for "Uncertainty in generation"
Abstract:
Innovation is a critical factor in ensuring commercial success within the area of medical technology. Biotechnology and healthcare developments require huge financial and resource investment, in-depth research and clinical trials. Consequently, these developments involve a complex multidisciplinary structure, which is inherently full of risks and uncertainty. In this context, early technology assessment and 'proof of concept' is often sporadic and unstructured. Existing methodologies for managing the feasibility stage of medical device development are predominantly suited to the later phases of development and favour detail in optimisation, validation and regulatory approval. During these early phases, feasibility studies are normally conducted to establish whether a technology is potentially viable. However, it is not clear how this technology viability is currently measured. This paper aims to address this gap through the development of a technology confidence scale explicitly appropriate to the feasibility phase of medical device design. These guidelines were developed from analysis of three recent innovation studies within the medical device industry.
Abstract:
The optimization of dialogue policies using reinforcement learning (RL) is now an accepted part of the state of the art in spoken dialogue systems (SDS). Yet the commonly used training algorithms for SDS still require a large number of dialogues, so most systems rely on artificial data generated by a user simulator, with optimization performed off-line before the system is released to real users. Gaussian Processes (GPs) for RL have recently been applied to dialogue systems. One advantage of GPs is that they provide an explicit measure of uncertainty in the value function estimates computed during learning. In this paper, a class of novel learning strategies is described which uses this uncertainty to control exploration on-line. Comparisons between several exploration schemes show that significant improvements in learning speed can be obtained and that rapid and safe on-line optimisation is possible, even on a complex task. Copyright © 2011 ISCA.
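The uncertainty-driven exploration idea can be illustrated far more simply than in the GP-SARSA setting with a toy bandit, where each action keeps a Gaussian value estimate and the learner acts on an optimistic mean-plus-uncertainty score. All rewards and parameters below are invented for illustration; this is a sketch of the principle, not the paper's algorithm or task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-action task; the true mean rewards are unknown to the learner.
true_rewards = np.array([0.2, 0.5, 0.1, 0.8, 0.4])
n_actions = len(true_rewards)

# Per-action Gaussian posterior (mean, variance) under a conjugate model with
# known noise variance -- a cheap stand-in for the GP value estimate.
noise_var = 0.25
mean = np.zeros(n_actions)   # prior mean
var = np.ones(n_actions)     # prior variance (high = uncertain)

def select_action(beta=2.0):
    """Uncertainty-driven exploration: pick the action whose optimistic
    value estimate (mean + beta * std) is largest."""
    return int(np.argmax(mean + beta * np.sqrt(var)))

for _ in range(500):
    a = select_action()
    r = true_rewards[a] + rng.normal(0.0, np.sqrt(noise_var))
    # Conjugate Gaussian update: variance shrinks as evidence accumulates,
    # so exploration automatically focuses on still-uncertain actions.
    precision = 1.0 / var[a] + 1.0 / noise_var
    mean[a] = (mean[a] / var[a] + r / noise_var) / precision
    var[a] = 1.0 / precision

best = int(np.argmax(mean))
```

Because unexplored actions retain high variance, the optimistic score forces each one to be tried until its uncertainty shrinks, after which the learner settles on the genuinely best action.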
Abstract:
The desire to seek new and unfamiliar experiences is a fundamental behavioral tendency in humans and other species. In economic decision making, novelty seeking is often rational, insofar as uncertain options may prove valuable and advantageous in the long run. Here, we show that, even when the degree of perceptual familiarity of an option is unrelated to choice outcome, novelty nevertheless drives choice behavior. Using functional magnetic resonance imaging (fMRI), we show that this behavior is specifically associated with striatal activity, in a manner consistent with computational accounts of decision making under uncertainty. Furthermore, this activity predicts interindividual differences in susceptibility to novelty. These data indicate that the brain uses perceptual novelty to approximate choice uncertainty in decision making, which in certain contexts gives rise to a newly identified and quantifiable source of human irrationality.
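One computational account consistent with this finding is a "novelty bonus" added to option values (in the spirit of novelty-bonus models from the RL literature). The sketch below, with invented numbers, shows how such a bonus shifts softmax choice toward an otherwise equally valued but unfamiliar option; it is not the authors' fitted model.

```python
import numpy as np

def choice_probs(values, familiarity, novelty_bonus=0.5, temperature=1.0):
    """Softmax choice where each option's value receives a bonus that
    decays with familiarity -- one simple way to let perceptual novelty
    stand in for choice uncertainty. All parameters are illustrative."""
    bonus = novelty_bonus * (1.0 - np.asarray(familiarity))  # novel -> full bonus
    u = (np.asarray(values) + bonus) / temperature
    e = np.exp(u - u.max())          # numerically stable softmax
    return e / e.sum()

# Two options with identical learned value; option 1 is perceptually novel.
p = choice_probs(values=[1.0, 1.0], familiarity=[1.0, 0.0])
```

Even with equal expected outcomes, the novel option is chosen more often, which is the behavioral signature the abstract describes.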
Abstract:
An analysis is made of the conditions for the generation of superfluorescence pulses in an inverted medium of electron-hole pairs in a semiconductor. It is shown that strong optical amplification in laser semiconductor amplifiers characterised by αL ≫ 1 (α is the small-signal gain and L is the amplifier length) leads to suppression of phase relaxation of the medium during the initial stages of evolution of superfluorescence and to formation of a macroscopic dipole from electron-hole pairs. Cooperative emission of radiation in this system results in generation of a powerful ultrashort pulse of the optical gain, which interacts coherently with the semiconductor medium. It is shown that coherent pulsations of the optical field, observed earlier by the author in Q-switched semiconductor lasers, are the result of superfluorescence and of the coherent interaction between the optical field and the medium.
Abstract:
Vibration and acoustic analysis at higher frequencies faces two challenges: computing the response without using an excessive number of degrees of freedom, and quantifying its uncertainty due to small spatial variations in geometry, material properties and boundary conditions. Efficient models make use of the observation that when the response of a decoupled vibro-acoustic subsystem is sufficiently sensitive to uncertainty in such spatial variations, the local statistics of its natural frequencies and mode shapes saturate to universal probability distributions. This holds irrespective of the causes that underlie these spatial variations and thus leads to a nonparametric description of uncertainty. This work deals with the identification of uncertain parameters in such models by using experimental data. One of the difficulties is that both experimental errors and modeling errors, due to the nonparametric uncertainty that is inherent to the model type, are present. This is tackled by employing a Bayesian inference strategy. The prior probability distribution of the uncertain parameters is constructed using the maximum entropy principle. The likelihood function that is subsequently computed takes the experimental information, the experimental errors and the modeling errors into account. The posterior probability distribution, which is computed with the Markov Chain Monte Carlo method, provides a full uncertainty quantification of the identified parameters, and indicates how well their uncertainty is reduced, with respect to the prior information, by the experimental data. © 2013 Taylor & Francis Group, London.
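A minimal sketch of the inference loop described above, on a synthetic one-parameter problem: an exponential prior (the maximum-entropy distribution for a positive parameter with known mean), a Gaussian likelihood lumping experimental and modeling errors together, and random-walk Metropolis sampling of the posterior. All numbers are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setting: identify a positive stiffness-like parameter theta
# from noisy "experimental" responses.
theta_true = 2.0
data = theta_true + rng.normal(0.0, 0.3, size=20)   # synthetic measurements

def log_prior(theta):
    """Exponential(mean=5) prior: maximum entropy for a positive
    parameter with a specified mean (illustrative choice)."""
    return -theta / 5.0 if theta > 0 else -np.inf

def log_like(theta):
    # Experimental and modeling errors lumped into one Gaussian term.
    return -0.5 * np.sum((data - theta) ** 2) / 0.3 ** 2

def log_post(theta):
    return log_prior(theta) + log_like(theta)

# Random-walk Metropolis: the retained samples quantify the parameter
# uncertainty that remains after the data are taken into account.
samples, theta = [], 1.0
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

post = np.array(samples[1000:])   # discard burn-in
```

The posterior spread (`post.std()`) is much smaller than the prior's, which is exactly the "how much did the experiment reduce the uncertainty" question the abstract raises.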
Abstract:
A V-shaped solar cell module consists of two tilted mono-crystalline cells [J. Li, China Patent No. 200410007708.6 (March, 2004)]. The angle included between the two tilted cells is 90 degrees. The two cells were fabricated using polished silicon wafers; a scheme of both-side polished wafers has been proposed to reduce optical loss. Compared to solar cells arranged in a planar way, the V-shaped structure enhances external quantum efficiency and leads to a 15% increase in generated photocurrent density. The following three kinds of trapped photons are suggested to contribute to the increase: (1) infrared photons converted from visible photons through a transformation mechanism, (2) photons reflected from the top contact metal, and (3) residual reflection which cannot be eliminated by an antireflection coating.
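A geometric-optics sketch of why a 90-degree V helps: light reflected off one tilted face strikes the opposite face, so a photon gets two absorption chances instead of one. This is a generic light-trapping argument with an assumed surface reflectance R, not the paper's quantitative model.

```python
def absorbed_fraction_planar(R):
    """Planar cell: one pass, reflectance R is simply lost."""
    return 1.0 - R

def absorbed_fraction_v_groove(R):
    """90-degree V: light reflected from one face lands on the other,
    so only the doubly reflected fraction R*R escapes."""
    return 1.0 - R * R
```

For an assumed residual reflectance of R = 0.1, the single-pass loss of 10% drops to 1% after the second bounce, which is the qualitative reason mechanisms (2) and (3) above contribute to the photocurrent gain.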
Abstract:
Robots must plan and execute tasks in the presence of uncertainty. Uncertainty arises from sensing errors, control errors, and uncertainty in the geometry of the environment. The last, which is called model error, has received little previous attention. We present a framework for computing motion strategies that are guaranteed to succeed in the presence of all three kinds of uncertainty. The motion strategies comprise sensor-based gross motions, compliant motions, and simple pushing motions.
Abstract:
X. Wang, J. Yang, R. Jensen and X. Liu, 'Rough Set Feature Selection and Rule Induction for Prediction of Malignancy Degree in Brain Glioma,' Computer Methods and Programs in Biomedicine, vol. 83, no. 2, pp. 147-156, 2006.
Abstract:
A neural network system, NAVITE, for incremental trajectory generation and obstacle avoidance is presented. Unlike other approaches, the system is effective in unstructured environments. Multimodal information from visual and range data is used for obstacle detection and to eliminate uncertainty in the measurements. Optimal paths are computed without explicitly optimizing cost functions, thereby reducing computational expense. Simulations of a planar mobile robot (including the dynamic characteristics of the plant) on obstacle-free and obstacle-avoidance trajectories are presented. The system can be extended to incorporate global map information into the local decision-making process.
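The idea of combining visual and range data to reduce measurement uncertainty can be sketched with standard inverse-variance weighting; this is a generic minimum-variance estimator, not the NAVITE network itself, and the sensor variances below are invented.

```python
import numpy as np

def fuse(measurements, variances):
    """Inverse-variance fusion of multimodal estimates of the same
    quantity: the fused estimate weights each sensor by its precision,
    and the fused variance is never larger than the best sensor's."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(w * np.asarray(measurements, dtype=float)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused, fused_var

# Vision says the obstacle is at 2.0 m (noisy); sonar says 1.8 m (tighter).
d, v = fuse([2.0, 1.8], [0.04, 0.01])
```

The fused distance lands closer to the more reliable range reading, and its variance is smaller than either sensor's alone.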
Abstract:
The last 30 years have seen Fuzzy Logic (FL) emerge as a method either complementing or challenging stochastic methods, the traditional approach to modelling uncertainty. But the circumstances under which FL or stochastic methods should be used remain disputed, because the areas of application of statistical and FL methods overlap, with differing opinions on when each method should be used. Practically relevant case studies comparing the two methods are lacking. This work compares stochastic and FL methods for the assessment of spare capacity, using pharmaceutical high purity water (HPW) utility systems as an example. The goal of this study was to find the most appropriate method for modelling uncertainty in industrial-scale HPW systems. The results provide evidence that stochastic methods are superior to FL methods for simulating uncertainty in chemical plant utilities, including HPW systems, in the typical cases where extreme events (for example, peaks in demand) or day-to-day variation, rather than average values, are of interest. The average production output or other statistical measures may, for instance, be of interest in the assessment of workshops. Furthermore, the results indicate that a stochastic model should be used only if a deterministic simulation shows it to be necessary. Consequently, this thesis concludes that either deterministic or stochastic methods should be used to simulate uncertainty in chemical plant utility systems, and by extension some process systems, because extreme events and the modelling of day-to-day variation are important in capacity extension projects. Other reasons why stochastic HPW models are preferred to FL HPW models include:
1. The computer code for stochastic models is typically less complex than for FL models, reducing code maintenance and validation issues.
2. In many respects FL models are similar to deterministic models. The need for an FL model over a deterministic model is therefore questionable for industrial-scale HPW systems as presented here (as well as other similar systems), since the latter requires simpler models.
3. An FL model may be difficult to "sell" to an end-user, as its results represent "approximate reasoning", a definition of which is, however, lacking.
4. Stochastic models may be applied, with relatively minor modifications, to other systems, whereas FL models may not. For instance, the stochastic HPW model could be used to model municipal drinking water systems, whereas the FL HPW model could not. This is because the FL and stochastic model philosophies of a HPW system are fundamentally different: the stochastic model treats schedule and volume uncertainties as random phenomena described by statistical distributions based on estimated or historical data, while the FL model simulates schedule uncertainties based on estimated operator behaviour, e.g. operator tiredness and working schedules. In a municipal drinking water distribution system, the notion of "operator" breaks down.
5. Stochastic methods can account for uncertainties that are difficult to model with FL. The FL HPW system model does not account for dispensed-volume uncertainty, as there appears to be no reasonable way to capture it with FL, whereas the stochastic model includes volume uncertainty.
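A minimal stochastic (Monte Carlo) sketch of the spare-capacity question: simulate daily HPW demand as a random number of dispensing events with random volumes, then read off the peak (99th-percentile) demand that a deterministic average-value model would miss. All rates, volumes and the capacity figure are invented for illustration, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical HPW utility: daily demand is a compound of schedule
# uncertainty (how many dispensing events) and volume uncertainty
# (how much each event draws).
days = 10_000
events_per_day = rng.poisson(lam=12, size=days)              # schedule uncertainty
daily_demand = np.array([
    rng.normal(loc=200.0, scale=40.0, size=n).clip(min=0).sum()  # litres/event
    for n in events_per_day
])

capacity = 4500.0                        # litres/day (assumed plant capacity)
mean_demand = daily_demand.mean()        # what a deterministic model reports
p99_demand = np.quantile(daily_demand, 0.99)   # the peak that sizes the plant
shortfall_prob = (daily_demand > capacity).mean()
```

The average day sits comfortably under capacity, yet the 99th-percentile day approaches it; this gap between mean and extreme is precisely why the abstract argues for stochastic over average-value modelling in capacity decisions.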
Abstract:
This paper studies the multiplicity-correction effect of standard Bayesian variable-selection priors in linear regression. Our first goal is to clarify when, and how, multiplicity correction happens automatically in Bayesian analysis, and to distinguish this correction from the Bayesian Ockham's-razor effect. Our second goal is to contrast empirical-Bayes and fully Bayesian approaches to variable selection through examples, theoretical results and simulations. Considerable differences between the two approaches are found. In particular, we prove a theorem that characterizes a surprising asymptotic discrepancy between fully Bayes and empirical Bayes. This discrepancy arises from a different source than the failure to account for hyperparameter uncertainty in the empirical-Bayes estimate. Indeed, even at the extreme, when the empirical-Bayes estimate converges asymptotically to the true variable-inclusion probability, the potential for a serious difference remains. © Institute of Mathematical Statistics, 2010.
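The automatic multiplicity correction can be seen directly in the prior: under a uniform (Beta(1,1)) prior on the common inclusion probability, the prior mass of any particular model shrinks combinatorially as the number of candidate variables grows, whereas a fixed-inclusion-probability prior applies no such penalty. A small sketch of that contrast (a textbook illustration of the setup, not the paper's theorem):

```python
from math import comb

def beta_binomial_model_prior(k, p):
    """Prior mass of one specific model with k of p candidate variables,
    after integrating a Beta(1,1) prior over the inclusion probability:
    1 / ((p + 1) * C(p, k))."""
    return 1.0 / ((p + 1) * comb(p, k))

def fixed_prior(k, p, w=0.5):
    """Prior mass of the same model when every variable is included
    independently with fixed probability w: no multiplicity correction."""
    return w**k * (1.0 - w)**(p - k)

def inclusion_penalty(k, p):
    """Prior odds of a model with k+1 vs k variables; equals
    (k + 1) / (p - k), so it shrinks as more candidates p are added."""
    return beta_binomial_model_prior(k + 1, p) / beta_binomial_model_prior(k, p)
```

Adding noise variables makes each extra inclusion progressively harder to justify a priori under the fully Bayesian prior, while the fixed-probability prior keeps the odds constant regardless of p.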
Abstract:
Aquifer denitrification is among the most poorly constrained fluxes in global and regional nitrogen budgets. The few direct measurements of denitrification in groundwaters provide limited information about its spatial and temporal variability, particularly at the scale of whole aquifers. Uncertainty in estimates of denitrification may also lead to underestimates of its effect on isotopic signatures of inorganic N, and thereby confound the inference of N source from these data. In this study, our objectives are to quantify the magnitude and variability of denitrification in the Upper Floridan Aquifer (UFA) and evaluate its effect on N isotopic signatures at the regional scale. Using dual noble gas tracers (Ne, Ar) to generate physical predictions of N2 gas concentrations for 112 observations from 61 UFA springs, we show that excess (i.e. denitrification-derived) N2 is highly variable in space and inversely correlated with dissolved oxygen (O2). Negative relationships between O2 and δ15N-NO3 across a larger dataset of 113 springs, well-constrained isotopic fractionation coefficients, and strong 15N:18O covariation further support inferences of denitrification in this uniquely organic-matter-poor system. Despite relatively low average rates, denitrification accounted for 32% of estimated aquifer N inputs across all sampled UFA springs. Back-calculations of source δ15N-NO3 based on denitrification progression suggest that isotopically enriched nitrate (NO3-) in many springs of the UFA reflects groundwater denitrification rather than urban- or animal-derived inputs. © Author(s) 2012.
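The two calculations described above, excess (denitrification-derived) N2 and the Rayleigh back-calculation of source δ15N, can be sketched as below. The linear solubility fit and the fractionation factor are illustrative assumptions, not the paper's calibrated values.

```python
import math

def excess_n2(measured_n2, recharge_temp_c, excess_air_n2):
    """All quantities in mg N2/L. Excess N2 is what remains after
    subtracting the physically expected N2: air-saturated water at the
    (noble-gas-derived) recharge temperature, plus excess air. The
    equilibrium solubility here is a rough linear fit for illustration."""
    equilibrium_n2 = 17.0 - 0.25 * recharge_temp_c
    return measured_n2 - equilibrium_n2 - excess_air_n2

def source_delta15n(measured_delta15n, fraction_no3_remaining, epsilon=-15.0):
    """Rayleigh back-calculation of the source nitrate delta15N (permil):
    delta_source = delta_measured - epsilon * ln(f), with an assumed
    enrichment factor epsilon (negative for denitrification)."""
    return measured_delta15n - epsilon * math.log(fraction_no3_remaining)

# A spring where half the nitrate has been denitrified: the measured
# delta15N is enriched relative to the true source signature.
src = source_delta15n(measured_delta15n=14.0, fraction_no3_remaining=0.5)
```

This is the logic behind the abstract's closing point: a high measured δ15N can reflect in-aquifer denitrification progression rather than an enriched (urban or animal) source.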
Abstract:
Based on extensive research on reinforcing steel corrosion in concrete in the past decades, it is now possible to estimate the effect of the progression of reinforcement corrosion in concrete infrastructure on its structural performance. There are still areas of considerable uncertainty in the models and in the data available, however. This paper uses a recently developed model for reinforcement corrosion in concrete to improve the estimation process and to indicate the practical implications. In particular, stochastic models are used to estimate the time likely to elapse for each phase of the whole corrosion process: initiation, corrosion-induced concrete cracking, and structural strength reduction. It was found that, for practical flexural structures subject to chloride attacks, corrosion initiation may start quite early in their service life. It was also found that, once the structure is considered to be unserviceable due to corrosion-induced cracking, there is considerable remaining service life before the structure can be considered to have become unsafe. The procedure proposed in the paper has the potential to serve as a rational tool for practitioners, operators, and asset managers to make decisions about the optimal timing of repairs, strengthening, and/or rehabilitation of corrosion-affected concrete infrastructure. Timely intervention has the potential to prolong the service life of infrastructure.
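The three-phase stochastic estimate can be sketched with a Monte Carlo model in which each phase duration is lognormal. The parameters below are illustrative placeholders, not the paper's calibrated models, but they reproduce the qualitative finding that substantial service life remains between "unserviceable" (cracked) and "unsafe" (strength loss).

```python
import numpy as np

rng = np.random.default_rng(3)

# Durations of the three corrosion phases, in years (assumed lognormals).
n = 100_000
t_initiation = rng.lognormal(mean=np.log(15.0), sigma=0.4, size=n)  # to initiation
t_cracking   = rng.lognormal(mean=np.log(8.0),  sigma=0.5, size=n)  # init -> cracking
t_strength   = rng.lognormal(mean=np.log(20.0), sigma=0.5, size=n)  # cracking -> unsafe

t_unserviceable = t_initiation + t_cracking   # corrosion-induced cracking
t_unsafe = t_unserviceable + t_strength       # structural strength reduction

# Median remaining service life after the structure becomes unserviceable.
remaining_life = np.median(t_unsafe - t_unserviceable)
```

Sampling whole trajectories rather than mean durations also yields full distributions for each milestone, which is what supports repair-timing decisions under uncertainty.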