991 results for uncertainty-functions
Abstract:
In this paper, a Lyapunov function candidate is introduced for multivariable systems with inner delays, without assuming a priori stability of the nondelayed subsystem. Using this Lyapunov function, a controller is derived. The controller is based on an input-output description of the original system, which facilitates practical application of the proposed approach.
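For context, a common Lyapunov-Krasovskii candidate for a linear system with an internal state delay h, dx/dt = A0 x(t) + A1 x(t - h), takes the form below. This is the standard construction for such systems, not necessarily the specific candidate introduced in the paper:

    V(x_t) = x(t)^T P x(t) + \int_{t-h}^{t} x(s)^T Q x(s) ds,   with P = P^T > 0 and Q = Q^T > 0.

Stability conditions then follow from requiring dV/dt < 0 along system trajectories, typically expressed as linear matrix inequalities in P and Q.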
Abstract:
http://www.archive.org/details/theparishpriesto00heusuoft
Abstract:
We investigate the problem of learning disjunctions of counting functions, which are general cases of parity and modulo functions, with equivalence and membership queries. We prove that, for any prime number p, the class of disjunctions of integer-weighted counting functions with modulus p over the domain Z_q^n (or Z^n) for any given integer q ≥ 2 is polynomial time learnable using at most n + 1 equivalence queries, where the hypotheses issued by the learner are disjunctions of at most n counting functions with weights from Z_p. The result is obtained through learning linear systems over an arbitrary field. In general a counting function may have a composite modulus. We prove that, for any given integer q ≥ 2, over the domain Z_2^n, the class of read-once disjunctions of Boolean-weighted counting functions with modulus q is polynomial time learnable with only one equivalence query, and the class of disjunctions of log log n Boolean-weighted counting functions with modulus q is polynomial time learnable. Finally, we present an algorithm for learning graph-based counting functions.
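As a concrete illustration, the sketch below evaluates a disjunction of integer-weighted counting functions with modulus p on inputs from Z_q^n. It uses one common definition of a counting function (1 iff the weighted sum of the input coordinates falls in a designated residue class mod p), which may differ in detail from the paper's; the weights, targets, and input are illustrative.

    # Minimal sketch: evaluating a disjunction of weighted counting functions mod p.
    def counting_function(x, weights, target, p):
        """Return 1 if sum_i weights[i]*x[i] == target (mod p), else 0."""
        return int(sum(w * xi for w, xi in zip(weights, x)) % p == target)

    def disjunction(x, terms, p):
        """terms is a list of (weights, target) pairs; the disjunction is their OR."""
        return int(any(counting_function(x, w, t, p) for w, t in terms))

    # Example over Z_3^4 with modulus p = 5 (hypothetical weights and targets).
    terms = [([1, 2, 0, 4], 3), ([2, 2, 1, 0], 4)]
    print(disjunction([1, 0, 2, 1], terms, p=5))   # second term fires, so this prints 1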
Abstract:
Co-release of the inhibitory neurotransmitter GABA and the neuropeptide substance-P (SP) from single axons is a conspicuous feature of the basal ganglia, yet its computational role, if any, has not been resolved. In a new learning model, co-release of GABA and SP from axons of striatal projection neurons emerges as a highly efficient way to compute the uncertainty responses that are exhibited by dopamine (DA) neurons when animals adapt to probabilistic contingencies between rewards and the stimuli that predict their delivery. Such uncertainty-related dopamine release appears to be an adaptive phenotype, because it promotes behavioral switching at opportune times. Understanding the computational linkages between SP and DA in the basal ganglia is important, because Huntington's disease is characterized by massive SP depletion, whereas Parkinson's disease is characterized by massive DA depletion.
Abstract:
Before choosing, it helps to know both the expected value signaled by a predictive cue and the associated uncertainty that the reward will be forthcoming. Recently, Fiorillo et al. (2003) found that dopamine (DA) neurons of the SNc exhibit sustained responses related to the uncertainty that a cue will be followed by reward, in addition to phasic responses related to reward prediction errors (RPEs). This suggests that cue-dependent anticipations of the timing, magnitude, and uncertainty of rewards are learned and reflected in components of the DA signals broadcast by SNc neurons. What is the minimal local circuit model that can explain such multifaceted reward-related learning? A new computational model shows how learned uncertainty responses emerge robustly on single trials along with phasic RPE responses, such that both types of DA responses exhibit the empirically observed dependence on conditional probability, expected value of reward, and time since onset of the reward-predicting cue. The model includes three major pathways for computing: immediate expected values of cues, timed predictions of reward magnitudes (and RPEs), and the uncertainty associated with these predictions. The first two model pathways refine those previously modeled by Brown et al. (1999). A third, newly modeled, pathway is formed by medium spiny projection neurons (MSPNs) of the matrix compartment of the striatum, whose axons co-release GABA and a neuropeptide, substance P, both at synapses with GABAergic neurons in the SNr and with the dendrites (in SNr) of DA neurons whose somas are in ventral SNc. Co-release enables efficient computation of sustained DA uncertainty responses that are a non-monotonic function of the conditional probability that a reward will follow the cue. The new model's incorporation of a striatal microcircuit allowed it to reveal that variability in striatal cholinergic transmission can explain observed differences, between monkeys, in the amplitude of the non-monotonic uncertainty function. Involvement of matriceal MSPNs and striatal cholinergic transmission implies a relation between uncertainty in the cue-reward contingency and action-selection functions of the basal ganglia. The model synthesizes anatomical, electrophysiological, and behavioral data regarding the midbrain DA system in a novel way, by relating the ability to compute uncertainty, in parallel with other aspects of reward contingencies, to the unique distribution of SP inputs in ventral SN.
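For intuition about the non-monotonic shape (an illustration of the general point, not the model's specific circuit computation): if a cue is followed by a reward of magnitude r with conditional probability p, the variance of the outcome is

    Var(R) = E[R^2] - (E[R])^2 = r^2 p - (r p)^2 = r^2 p (1 - p),

which vanishes at p = 0 and p = 1 and peaks at p = 0.5, matching the inverted-U dependence of the sustained DA response on conditional probability reported by Fiorillo et al. (2003).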
Abstract:
A neural network system, NAVITE, for incremental trajectory generation and obstacle avoidance is presented. Unlike other approaches, the system is effective in unstructured environments. Multimodal information from visual and range data is used for obstacle detection and to eliminate uncertainty in the measurements. Optimal paths are computed without explicitly optimizing cost functions, thereby reducing computational expense. Simulations of a planar mobile robot (including the dynamic characteristics of the plant) in obstacle-free and obstacle avoidance trajectories are presented. The system can be extended to incorporate global map information into the local decision-making process.
Abstract:
Adaptive Resonance Theory (ART) models are real-time neural networks for category learning, pattern recognition, and prediction. Unsupervised fuzzy ART and supervised fuzzy ARTMAP networks synthesize fuzzy logic and ART by exploiting the formal similarity between the computations of fuzzy subsethood and the dynamics of ART category choice, search, and learning. Fuzzy ART self-organizes stable recognition categories in response to arbitrary sequences of analog or binary input patterns. It generalizes the binary ART 1 model, replacing the set-theoretic intersection (∩) with the fuzzy intersection (∧), or component-wise minimum. A normalization procedure called complement coding leads to a symmetric theory in which the fuzzy intersection and the fuzzy union (∨), or component-wise maximum, play complementary roles. A geometric interpretation of fuzzy ART represents each category as a box that increases in size as weights decrease. This paper analyzes fuzzy ART models that employ various choice functions for category selection. One such function minimizes total weight change during learning. Benchmark simulations compare the performance of fuzzy ARTMAP systems that use different choice functions.
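A minimal sketch of the standard fuzzy ART ingredients mentioned above: complement coding and the familiar choice function T_j = |I ∧ w_j| / (α + |w_j|), where ∧ is the component-wise minimum and |·| the city-block (L1) norm. This shows the default choice function only; the alternative choice functions analyzed in the paper are not reproduced here, and the example values are illustrative.

    import numpy as np

    def complement_code(a):
        """Complement coding: map a in [0,1]^M to I = (a, 1 - a) in [0,1]^(2M)."""
        a = np.asarray(a, dtype=float)
        return np.concatenate([a, 1.0 - a])

    def choice(I, w, alpha=0.001):
        """Standard fuzzy ART choice function T = |I ^ w| / (alpha + |w|)."""
        return np.minimum(I, w).sum() / (alpha + w.sum())

    def match(I, w):
        """Vigilance quantity |I ^ w| / |I|; the category is accepted if this >= rho."""
        return np.minimum(I, w).sum() / I.sum()

    # Example: one input and two candidate category weight vectors (illustrative values).
    I = complement_code([0.2, 0.7])
    w1 = complement_code([0.25, 0.65])
    w2 = complement_code([0.8, 0.1])
    print(choice(I, w1), choice(I, w2))   # w1 wins the category competition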
Abstract:
Advanced Research Projects Agency (ONR N00014-92-J-4015); National Science Foundation (IRI-90-24877); Office of Naval Research (N00014-91-J-1309)
Abstract:
Wind energy is the energy source that contributes most to the renewable energy mix of European countries. While there are good wind resources throughout Europe, the intermittency of the wind represents a major problem for the deployment of wind energy into the electricity networks. To ensure grid security, a Transmission System Operator today needs, for each kilowatt of wind energy, either an equal amount of spinning reserve or a forecasting system that can predict the amount of energy that will be produced from wind over a period of 1 to 48 hours. In the range from 5 m/s to 15 m/s, a wind turbine's production increases with the third power of the wind speed. For this reason, a Transmission System Operator requires an accuracy of 1 m/s for wind speed forecasts in this wind speed range. Forecasting wind energy with a numerical weather prediction model in this context forms the background of this work. The author's goal was to present a pragmatic solution to this specific problem in the "real world". This work therefore has to be seen in a technical context and hence neither provides nor intends to provide a general overview of the benefits and drawbacks of wind energy as a renewable energy source. In the first part of this work, the accuracy requirements of the energy sector for wind speed predictions from numerical weather prediction models are described and analysed. A unique set of numerical experiments has been carried out in collaboration with the Danish Meteorological Institute to investigate the forecast quality of an operational numerical weather prediction model for this purpose. The results of this investigation revealed that the accuracy requirements for wind speed and wind power forecasts from today's numerical weather prediction models can only be met at certain times. This means that the uncertainty of the forecast quality becomes a parameter that is as important as the wind speed and wind power itself. Quantifying the uncertainty of a forecast valid for tomorrow requires an ensemble of forecasts. In the second part of this work, such an ensemble of forecasts was designed and verified for its ability to quantify the forecast error. This was accomplished by correlating the measured error and the forecasted uncertainty of area-integrated wind speed and wind power in Denmark and Ireland. A correlation of 93% was achieved in these areas. This method cannot by itself satisfy the accuracy requirements of the energy sector. By knowing the uncertainty of the forecasts, however, the focus can be put on the accuracy requirements at times when it is possible to accurately predict the weather. Thus, this result presents a major step forward in making wind energy a compatible energy source in the future.
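A minimal sketch of the verification idea described above, i.e. correlating forecasted uncertainty with measured forecast error. The synthetic data, variable names, and the use of ensemble spread as the uncertainty measure are illustrative assumptions, not the thesis's exact procedure or results.

    import numpy as np

    # Hypothetical data: ensemble wind-speed forecasts (times x members) and observations.
    rng = np.random.default_rng(0)
    n_times, n_members = 200, 25
    truth = np.clip(8.0 + 3.0 * rng.standard_normal(n_times), 0.0, None)   # "observed" wind speed, m/s
    spread_true = 0.5 + 1.5 * rng.random(n_times)                          # per-time forecast uncertainty
    ensemble = truth[:, None] + spread_true[:, None] * rng.standard_normal((n_times, n_members))

    forecast = ensemble.mean(axis=1)        # ensemble-mean forecast
    uncertainty = ensemble.std(axis=1)      # forecasted uncertainty (ensemble spread)
    error = np.abs(forecast - truth)        # measured forecast error

    # Correlation between forecasted uncertainty and measured error,
    # analogous to the area-integrated verification reported in the thesis.
    print(np.corrcoef(uncertainty, error)[0, 1])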
Abstract:
The last 30 years have seen Fuzzy Logic (FL) emerge as a method that either complements or challenges stochastic methods as the traditional way of modelling uncertainty. But the circumstances under which FL or stochastic methods should be used remain disputed, because the areas of application of statistical and FL methods overlap, with differences in opinion as to when which method should be used. What is lacking are practically relevant case studies comparing the two methods. This work compares stochastic and FL methods for the assessment of spare capacity using the example of pharmaceutical high purity water (HPW) utility systems. The goal of this study was to find the most appropriate method for modelling uncertainty in industrial-scale HPW systems. The results provide evidence suggesting that stochastic methods are superior to FL methods for simulating uncertainty in chemical plant utilities, including HPW systems, in typical cases where extreme events, for example peaks in demand, or day-to-day variation rather than average values are of interest. The average production output or other statistical measures may, for instance, be of interest in the assessment of workshops. Furthermore, the results indicate that the stochastic model should be used only if found necessary by a deterministic simulation. Consequently, this thesis concludes that either deterministic or stochastic methods should be used to simulate uncertainty in chemical plant utility systems, and by extension some process systems, because extreme events or the modelling of day-to-day variation are important in capacity extension projects. Other reasons supporting the suggestion that stochastic HPW models are preferable to FL HPW models include: 1. The computer code for stochastic models is typically less complex than that for FL models, thus reducing code maintenance and validation issues. 2. In many respects FL models are similar to deterministic models. Thus the need for an FL model over a deterministic model is questionable in the case of industrial-scale HPW systems as presented here (as well as other similar systems), since the latter requires simpler models. 3. An FL model may be difficult to "sell" to an end-user, as its results represent "approximate reasoning", a definition of which is, however, lacking. 4. Stochastic models may be applied with relatively minor modifications to other systems, whereas FL models may not. For instance, the stochastic HPW system model could be used to model municipal drinking water systems, whereas the FL HPW model should not or could not be used on such systems. This is because the FL and stochastic model philosophies of a HPW system are fundamentally different. The stochastic model treats schedule and volume uncertainties as random phenomena described by statistical distributions based on either estimated or historical data. The FL model, on the other hand, simulates schedule uncertainties based on estimated operator behaviour, e.g. tiredness of the operators and their working schedule. But in a municipal drinking water distribution system the notion of an "operator" breaks down. 5. Stochastic methods can account for uncertainties that are difficult to model with FL. The FL HPW system model does not account for dispensed volume uncertainty, as there appears to be no reasonable way to account for it with FL, whereas the stochastic model includes volume uncertainty.
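A minimal sketch of the stochastic (Monte Carlo) approach favoured above, applied to a toy spare-capacity question: how often does hourly HPW demand exceed production capacity when occasional demand peaks occur? The capacity figure and demand distributions are illustrative assumptions, not data from the thesis.

    import numpy as np

    # Toy Monte Carlo assessment of spare capacity in a high purity water (HPW) utility.
    rng = np.random.default_rng(1)
    n_sims = 100_000

    capacity = 12.0                                   # production capacity, m3/h (assumed)
    base_demand = rng.normal(7.0, 1.5, n_sims)        # routine draw-off, m3/h (assumed distribution)
    peak_events = rng.random(n_sims) < 0.05           # occasional simultaneous batch demands (assumed rate)
    peak_extra = rng.uniform(3.0, 8.0, n_sims) * peak_events

    demand = np.clip(base_demand + peak_extra, 0.0, None)
    shortfall_probability = (demand > capacity).mean()
    print(f"P(demand exceeds capacity) ≈ {shortfall_probability:.3%}")

Because the quantity of interest is a tail event (demand peaks), sampling the full demand distribution rather than its average is what makes the stochastic treatment informative here.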
Abstract:
The class of all Exponential-Polynomial-Trigonometric (EPT) functions is classical and equal to the Euler-d'Alembert class of solutions of linear differential equations with constant coefficients. The class of non-negative EPT functions defined on [0, ∞) was discussed in Hanzon and Holland (2010), of which EPT probability density functions are an important subclass. EPT functions can be represented as c e^{Ax} b, where A is a square matrix, b a column vector, and c a row vector, the triple (A, b, c) being a minimal realization of the EPT function. The minimal triple is unique only up to a basis transformation. Here the class of 2-EPT probability density functions on R is defined and shown to be closed under a variety of operations. The class is also generalised to include mixtures with a point mass at zero. This class coincides with the class of probability density functions with rational characteristic functions. It is illustrated that the Variance Gamma density is a 2-EPT density under a parameter restriction. A discrete 2-EPT process is a process which has stochastically independent 2-EPT random variables as increments. It is shown that the distribution of the minimum and maximum of such a process is an EPT density mixed with a point mass at zero. The Laplace transforms of these distributions correspond to the discrete-time Wiener-Hopf factors of the discrete-time 2-EPT process. A distribution of daily log-returns, observed over the period 1931-2011 from a prominent US index, is approximated with a 2-EPT density function. Without the non-negativity condition, it is illustrated how this problem is transformed into a discrete-time rational approximation problem. The rational approximation software RARL2 is used to carry out this approximation. The non-negativity constraint is then imposed via a convex optimisation procedure after the unconstrained approximation. Sufficient and necessary conditions are derived to characterise infinitely divisible EPT and 2-EPT functions. Infinitely divisible 2-EPT density functions generate 2-EPT Lévy processes. An asset's log returns can be modelled as a 2-EPT Lévy process. Closed-form pricing formulae are then derived for European Options with specific times to maturity. Formulae for discretely monitored Lookback Options and 2-Period Bermudan Options are also provided. Certain Greeks, including Delta and Gamma, of these options are also computed analytically. MATLAB scripts are provided for calculations involving 2-EPT functions. Numerical option pricing examples illustrate the effectiveness of the 2-EPT approach to financial modelling.
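For concreteness, an EPT function in the minimal-realization form f(x) = c e^{Ax} b can be evaluated directly with a matrix exponential. The triple below is an illustrative example, not one taken from the thesis (the thesis itself provides MATLAB scripts; this is a Python sketch of the same representation).

    import numpy as np
    from scipy.linalg import expm

    def ept(x, A, b, c):
        """Evaluate an EPT function f(x) = c * expm(A x) * b for scalar x >= 0."""
        return float(c @ expm(A * x) @ b)

    # Illustrative minimal triple (A, b, c): f(x) = x e^{-x}, the Gamma(2,1) density on [0, inf).
    A = np.array([[-1.0, 1.0],
                  [ 0.0, -1.0]])
    b = np.array([0.0, 1.0])
    c = np.array([1.0, 0.0])
    print(ept(1.5, A, b, c))   # ≈ 1.5 * exp(-1.5)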
Abstract:
One problem with most three-dimensional (3D) scalar data visualization techniques is that they often fail to depict the uncertainty that comes with the 3D scalar data, and thus fail to faithfully present the data and risk misleading users' interpretations, conclusions, or even decisions. This thesis therefore focuses on the study of uncertainty visualization in 3D scalar data; we seek to create better uncertainty visualization techniques and to identify the advantages and disadvantages of state-of-the-art uncertainty visualization techniques. To do this, we address three specific hypotheses: (1) the proposed Texture uncertainty visualization technique enables users to better identify scalar/error data, and provides reduced visual overload and more appropriate brightness than four state-of-the-art uncertainty visualization techniques, as demonstrated using a perceptual effectiveness user study. (2) The proposed Linked Views and Interactive Specification (LVIS) uncertainty visualization technique enables users to better search max/min scalar and error data than four state-of-the-art uncertainty visualization techniques, as demonstrated using a perceptual effectiveness user study. (3) The proposed Probabilistic Query uncertainty visualization technique, in comparison to traditional Direct Volume Rendering (DVR) methods, enables radiologists/physicians to better identify possible alternative renderings relevant to a diagnosis and the classification probabilities associated with the materials appearing in these renderings; this leads to improved decision support for diagnosis, as demonstrated in the domain of medical imaging. Each hypothesis is tested by following a unified framework consisting of three main steps: the first main step is uncertainty data modeling, which clearly defines and generates certain types of uncertainty associated with given 3D scalar data. The second main step is uncertainty visualization, which transforms the 3D scalar data and the associated uncertainty generated in the first step into two-dimensional (2D) images for insight, interpretation, or communication. The third main step is evaluation, which transforms the 2D images generated in the second step into quantitative scores according to specific user tasks and statistically analyzes the scores. As a result, the quality of each uncertainty visualization technique is determined.
Abstract:
We have identified a patient with a number of neutrophil dysfunctions. The patient was a female baby who lived for 8 months. During her life, she developed severe bacterial infections and showed omphalitis, impaired wound healing, and a pronounced leukocytosis. She was not a patient with leukocyte adhesion deficiency, because all leukocyte CD18 complex proteins were expressed at normal levels. Yet, neutrophil polarization and chemotaxis to platelet-activating factor, leukotriene B4, or formyl-methionyl-leucyl-phenylalanine (FMLP) were completely absent. We found a strong defect in actin polymerization in response to chemotactic stimuli, but only a retarded or even normal reaction with other stimuli. This indicates that the cellular dysfunctions were not due to an intrinsic defect in actin metabolism. Instead, the regulation of actin polymerization with chemotactic stimuli seemed to be defective. We concentrated on FMLP-induced responses in the patient's neutrophils. Functions dependent on activation of complement receptor type 3, such as aggregation or adherence to endothelial cells, were normally induced. Binding to serum-coated coverslips was normal in cell number; however, spreading was not observed. Exocytosis from the specific granules was readily induced. In contrast, FMLP failed to induce a respiratory burst activity or degranulation of the azurophil granules. FMLP induced a normal increase in free intracellular Ca2+, but a decreased formation of diglycerides (especially the 1-O-alkyl,2-acyl compounds). Thus, we have described a patient whose neutrophils show a severe defect in functional activation via chemotaxin receptors, resulting in a selective absence of NADPH oxidase activity, exocytosis from the azurophil granules, and actin polymerization. Our findings show that actin polymerization for neutrophil spreading and locomotion is regulated differently from that for phagocytosis. Also, the release of azurophil and specific granule contents is clearly shown to be regulated in a different way.
Abstract:
The mammalian protein POT1 binds to telomeric single-stranded DNA (ssDNA), protecting chromosome ends from being detected as sites of DNA damage. POT1 is composed of an N-terminal ssDNA-binding domain and a C-terminal protein interaction domain. With regard to the latter, POT1 heterodimerizes with the protein TPP1 to foster binding to telomeric ssDNA in vitro and binds the telomeric double-stranded-DNA-binding protein TRF2. We sought to determine which of these functions (ssDNA, TPP1, or TRF2 binding) was required to protect chromosome ends from being detected as DNA damage. Using separation-of-function POT1 mutants deficient in one of these three activities, we found that binding to TRF2 is dispensable for protecting telomeres but fosters robust loading of POT1 onto telomeric chromatin. Furthermore, we found that the telomeric ssDNA-binding activity and binding to TPP1 are required in cis for POT1 to protect telomeres. Mechanistically, binding of POT1 to telomeric ssDNA and association with TPP1 inhibit the localization of RPA, which can function as a DNA damage sensor, to telomeres.
Abstract:
B-lymphocytes have traditionally been thought to contribute to immunity and autoimmune disease through terminal differentiation into plasma cells that secrete antibody. However, studies in mice and recent clinical studies have demonstrated that genetically altered B-cell function and B-cell-targeted therapies can significantly affect autoimmune diseases that were predominantly thought to be T-cell-mediated. B-cell depletion in mouse models of disease has also led to the identification of alternative B-cell effector functions that regulate normal immune responses and autoimmune disease. This review highlights multiple B-cell effector mechanisms, including the promotion of cellular immunity, the negative regulation of immune responses, and the production of pathogenic antibodies.