976 results for Nonlinear Decision Functions


Relevance:

30.00%

Abstract:

The prefrontal cortex (PFC) and orbitofrontal cortex (OFC) appear to be associated with both executive functions and olfaction. However, few data relate olfactory processing to executive functions in humans. The present study aimed at exploring the role of olfaction in executive functioning, making a distinction between primary and more cognitive aspects of olfaction. Three executive tasks of similar difficulty were used: one to assess hot executive functions (Iowa Gambling Task, IGT) and two as measures of cold executive functioning (Stroop Colour and Word Test, SCWT, and Wisconsin Card Sorting Test, WCST). Sixty-two healthy participants were included: 31 with normosmia and 31 with hyposmia. Olfactory abilities were assessed using the "Sniffin' Sticks" test, and olfactory threshold, odour discrimination and odour identification measures were obtained. All participants were female, aged between 18 and 60. Results showed that participants with hyposmia displayed worse performance in decision making (IGT; Cohen's d = 0.91) and cognitive flexibility (WCST; Cohen's d between 0.54 and 0.68) than those with normosmia. Multiple regression adjusted for the covariates age and education level showed a positive association between odour identification and the cognitive inhibition response (SCWT interference; Beta = 0.29; p = .034). Odour discrimination capacity was not a predictor of cognitive executive performance. Our results suggest that both hot and cold executive functions are associated with higher-order olfactory functioning in humans. These results robustly support the hypothesis that olfaction and executive measures share a common neural substrate in the PFC and OFC, and suggest that olfaction might be a reliable cognitive marker in psychiatric and neurological disorders.

Relevance:

30.00%

Abstract:

Modeling the concentration-response function has become extremely popular in ecotoxicology over the last decade, since a model captures the full response pattern of a given substance. However, reliable modeling is data-demanding, which conflicts with the current trend in ecotoxicology of reducing, for cost and ethical reasons, the amount of data produced during an experiment. It is therefore crucial to determine experimental designs in a cost-effective manner. In this paper, we propose to use the theory of locally D-optimal designs to determine the set of concentrations to be tested so that the parameters of the concentration-response function can be estimated with high precision. We illustrate this approach by determining the locally D-optimal designs for estimating the toxicity of the herbicide dinoseb to daphnids and algae. The results show that the number of concentrations to be tested is often equal to the number of parameters and often related to their interpretation, i.e. the test concentrations are located close to the parameter values. Furthermore, the results show that the locally D-optimal design often has the minimal number of support points and is not very sensitive to small changes in the nominal values of the parameters. In order to reduce the experimental cost and the use of test organisms, especially in long-term studies, reliable nominal values may therefore be fixed based on prior knowledge and literature research instead of preliminary experiments.
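By way of illustration only (a minimal sketch, not the authors' code), the snippet below computes a locally D-optimal two-point design for a hypothetical two-parameter log-logistic concentration-response model by maximizing the determinant of the Fisher information evaluated at nominal parameter values; the model form, nominal values and candidate grid are assumptions made for the example.

```python
# Minimal sketch: locally D-optimal design for a hypothetical 2-parameter
# log-logistic concentration-response model
#   f(c; EC50, b) = 1 / (1 + (c / EC50)**b)
# assuming homoscedastic Gaussian errors, so the information matrix is
# proportional to the sum of outer products of the gradient of f.
import itertools
import numpy as np

def gradient(c, ec50, b):
    """Gradient of f with respect to (EC50, b) at concentration c."""
    r = (c / ec50) ** b
    f = 1.0 / (1.0 + r)
    df_dec50 = b * r / ec50 * f ** 2          # d f / d EC50
    df_db = -r * np.log(c / ec50) * f ** 2    # d f / d b
    return np.array([df_dec50, df_db])

def log_det_information(concs, ec50, b):
    """log det of the (normalized) Fisher information for a candidate design."""
    M = sum(np.outer(g, g) for g in (gradient(c, ec50, b) for c in concs))
    sign, logdet = np.linalg.slogdet(M)
    return logdet if sign > 0 else -np.inf

# Nominal parameter values taken from prior knowledge (hypothetical numbers).
ec50_nom, b_nom = 0.5, 2.0
candidates = np.logspace(-2, 1, 60)           # candidate test concentrations

# For a 2-parameter model the locally D-optimal design often has 2 support
# points; search all candidate pairs for the one maximizing log det M.
best = max(itertools.combinations(candidates, 2),
           key=lambda pair: log_det_information(pair, ec50_nom, b_nom))
print("D-optimal support points (approx.):", best)
```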

Relevance:

30.00%

Abstract:

We study steady-state correlation functions of nonlinear stochastic processes driven by external colored noise. We present a methodology that provides explicit expressions for correlation functions that simultaneously approximate the short- and long-time regimes. The non-Markovian nature of the problem is reduced to an effective Markovian formulation, and the nonlinearities are treated systematically by means of double expansions in high and low frequencies. We also derive some exact expressions for the coefficients of these expansions, valid for arbitrary noise, by means of a generalization of projection-operator techniques.
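For concreteness (a standard textbook example rather than anything taken from the paper), external colored noise is often modeled as an Ornstein-Uhlenbeck process, whose stationary correlation function and white-noise limit are

```latex
\langle \eta(t)\,\eta(t') \rangle \;=\; \frac{D}{\tau}\, e^{-|t-t'|/\tau},
\qquad
\lim_{\tau \to 0} \frac{D}{\tau}\, e^{-|t-t'|/\tau} \;=\; 2D\,\delta(t-t'),
```

where τ is the noise correlation time and D its intensity; the Markovian (white-noise) case is recovered as τ → 0.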

Relevance:

30.00%

Abstract:

Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is not observable. There are several estimation methodologies that deal with estimation problems of latent variables. One appeared to be particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, namely the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts. Each is written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps both in the asset price and in the variance process. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of the stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps both in the mean and in the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, either a constant or some function of state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if the S&P500 index is to be modelled by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any other ground for comparison, there is no way to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to carry out that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter shows that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question immediately arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward because of the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, because of the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
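To make the ECF idea concrete, here is a minimal sketch, on a toy Gaussian model rather than the stochastic volatility jump-diffusion models of the thesis, of estimation by matching the model characteristic function to the empirical one in a weighted integrated squared sense; the weighting function, simulated data and optimizer are illustrative assumptions, not the thesis's actual procedure.

```python
# Minimal sketch of the continuous empirical characteristic function (ECF)
# idea on a toy model: parameters are chosen so that the model characteristic
# function matches the empirical one under a weighted integrated squared
# distance.  All names and numbers here are illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.normal(loc=0.5, scale=2.0, size=2000)     # observed "returns"

def empirical_cf(u, data):
    return np.mean(np.exp(1j * u * data))

def model_cf(u, mu, sigma):
    # Closed-form characteristic function of the toy (Gaussian) model.
    return np.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)

def objective(theta, data):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    def integrand(u):
        diff = empirical_cf(u, data) - model_cf(u, mu, sigma)
        return np.abs(diff) ** 2 * np.exp(-u ** 2)   # Gaussian weight keeps the integral finite
    return quad(integrand, -8.0, 8.0, limit=100)[0]

res = minimize(objective, x0=np.array([0.0, 0.0]), args=(x,), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"ECF estimates: mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
```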

Relevance:

30.00%

Abstract:

Rhythmic activity plays a central role in neural computations and brain functions ranging from homeostasis to attention, as well as in neurological and neuropsychiatric disorders. Despite this pervasiveness, little is known about the mechanisms by which the frequency and power of oscillatory activity are modulated, and about how they reflect the inputs received by neurons. Numerous studies have reported input-dependent fluctuations in peak frequency and power (as well as couplings across these features), but it remains unresolved what mediates these spectral shifts among neural populations. Extending previous findings on stochastic nonlinear systems and experimental observations, we provide analytical insights into the oscillatory responses of neural populations to stimulation of either endogenous or exogenous origin. Using a deceptively simple yet sparse and randomly connected network of neurons, we show how spiking inputs can reliably modulate the peak frequency and power expressed by synchronous neural populations without any changes in circuitry. Our results reveal that a generic, nonlinear, input-induced mechanism can robustly mediate these spectral fluctuations, and thus provide a framework in which inputs to the neurons bidirectionally regulate both the frequency and power expressed by synchronous populations. Theoretical and computational analysis shows that the ensuing spectral fluctuations reflect the underlying dynamics of the input stimuli driving the neurons. Our results provide insights into a generic mechanism supporting the spectral transitions observed across cortical networks and spanning multiple frequency bands.
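As a minimal illustration of the spectral readout discussed above (not the authors' network model), the sketch below estimates peak frequency and peak power from a synthetic noisy oscillation using a standard power spectral density estimate; the signal and its parameters are assumptions made for the example.

```python
# Minimal sketch: extracting peak frequency and peak power from a noisy
# oscillatory "population" signal.  The signal here is synthetic (a 12 Hz
# oscillation plus noise), standing in for the network activity in the text.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                   # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 12.0 * t) + 0.8 * rng.standard_normal(t.size)

freqs, psd = welch(signal, fs=fs, nperseg=2048)   # Welch power spectral density
peak = np.argmax(psd)
print(f"peak frequency ~ {freqs[peak]:.1f} Hz, peak power ~ {psd[peak]:.3g}")
```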

Relevance:

30.00%

Abstract:

A parametric procedure for the blind inversion of nonlinear channels is proposed, based on a recent method of blind source separation in nonlinear mixtures. Experiments show that the proposed algorithms perform efficiently, even in the presence of hard distortion. The method, based on the minimization of the output mutual information, requires knowledge of the log-derivative of the input distribution (the so-called score function). Each algorithm consists of three adaptive blocks: one devoted to adaptive estimation of the score function, and two other blocks estimating the inverses of the linear and nonlinear parts of the channel, (quasi-)optimally adapted using the estimated score functions. This paper is mainly concerned with the nonlinear part, for which we propose two parametric models, the first based on a polynomial model and the second on a neural network, while [14, 15] proposed non-parametric approaches.
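As one possible (non-adaptive) stand-in for the score-estimation block described above, the sketch below estimates the score function ψ(x) = d/dx log p(x) from samples via a Gaussian kernel density estimate, whose derivative is available in closed form; the bandwidth rule and test data are illustrative assumptions, not the paper's adaptive estimator.

```python
# Minimal sketch: estimate the score function psi(x) = d/dx log p(x) from
# samples with a Gaussian kernel density estimate.
import numpy as np

def kde_score(samples, x, bandwidth=None):
    """Score function estimate psi(x) = p'(x) / p(x) from a Gaussian KDE."""
    samples = np.asarray(samples, dtype=float)
    if bandwidth is None:                      # Silverman's rule of thumb
        bandwidth = 1.06 * samples.std() * samples.size ** (-1 / 5)
    u = (x[:, None] - samples[None, :]) / bandwidth
    k = np.exp(-0.5 * u ** 2)                  # unnormalized Gaussian kernel
    p = k.sum(axis=1)                          # density (up to a constant)
    dp = (-u / bandwidth * k).sum(axis=1)      # derivative of the density
    return dp / p                              # constants cancel in the ratio

rng = np.random.default_rng(0)
s = rng.normal(size=5000)                      # toy "input" samples
grid = np.linspace(-2, 2, 9)
print(np.round(kde_score(s, grid), 2))         # for a standard Gaussian, psi(x) ~ -x
```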

Relevance:

30.00%

Abstract:

Although sources in general nonlinear mixtures are not separable using only statistical independence, a special and realistic case of nonlinear mixtures, the post-nonlinear (PNL) mixture, is separable by choosing a suited separating system. A natural approach is then based on estimating the separating system parameters by minimizing an independence criterion, like the estimated source mutual information. This class of methods requires higher (than second) order statistics and cannot separate Gaussian sources. However, the use of (weak) priors, like source temporal correlation or nonstationarity, leads to other source separation algorithms, which are able to separate Gaussian sources and, for a few of them, can even work with second-order statistics. Recently, modeling time-correlated sources by Markov models, we proposed very efficient algorithms based on minimization of the conditional mutual information. Currently, using the prior of temporally correlated sources, we investigate the feasibility of inverting PNL mixtures with non-bijective nonlinearities, like quadratic functions. In this paper, we review the main ICA and BSS results for nonlinear mixtures, present PNL models and algorithms, and finish with advanced results using temporally correlated sources.
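For readers unfamiliar with the PNL model, here is a minimal generative sketch (illustrative only, not taken from the paper): sources are first mixed linearly and each mixture then passes through its own invertible componentwise nonlinearity, x = f(As). The sources, mixing matrix and distortions below are arbitrary choices.

```python
# Minimal sketch of a post-nonlinear (PNL) mixture x = f(A s): a linear
# mixture followed by componentwise invertible nonlinearities.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1000
s = np.vstack([rng.uniform(-1, 1, n_samples),            # source 1: uniform
               np.sign(rng.standard_normal(n_samples))
               * rng.exponential(1.0, n_samples)])        # source 2: Laplacian-like
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])                                # linear mixing matrix
z = A @ s                                                 # linear stage
x = np.vstack([np.tanh(z[0]),                             # invertible componentwise
               z[1] + 0.1 * z[1] ** 3])                   # nonlinear distortions
print(x.shape)                                            # observed PNL mixtures
```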

Relevance:

30.00%

Abstract:

Machine Learning for geospatial data: algorithms, software tools and case studies. The thesis is devoted to the analysis, modeling and visualisation of spatial environmental data using machine learning algorithms. In a broad sense, machine learning can be considered a subfield of artificial intelligence; it is mainly concerned with the development of techniques and algorithms that allow computers to learn from data. In this thesis, machine learning algorithms are adapted to learn from spatial environmental data and to make spatial predictions. Why machine learning? In a few words, most machine learning algorithms are universal, adaptive, nonlinear, robust and efficient modeling tools. They can find solutions to classification, regression and probability density modeling problems in high-dimensional geo-feature spaces, composed of geographical space and additional relevant spatially referenced features. They are well suited to implementation as predictive engines in decision support systems, for purposes of environmental data mining including pattern recognition, modeling and prediction as well as automatic data mapping. Their efficiency is competitive with geostatistical models in low-dimensional geographical spaces, but they are indispensable in high-dimensional geo-feature spaces. The most important and popular machine learning algorithms and models of interest for geo- and environmental sciences are presented in detail, from a theoretical description of the concepts to software implementation. The main algorithms and models considered are the following: the multilayer perceptron (a workhorse of machine learning), general regression neural networks, probabilistic neural networks, self-organising (Kohonen) maps, Gaussian mixture models, radial basis function networks, and mixture density networks. This set of models covers machine learning tasks such as classification, regression and density estimation. Exploratory data analysis (EDA) is an initial and very important part of data analysis.
In this thesis the concepts of exploratory spatial data analysis (ESDA) are considered using both the traditional geostatistical approach, i.e. experimental variography, and machine learning. Experimental variography is a basic tool for the geostatistical analysis of anisotropic spatial correlations and helps to detect the presence of spatial patterns, at least those describable by two-point statistics. A machine learning approach to ESDA is presented by applying the k-nearest neighbors (k-NN) method, which is simple and has very good interpretation and visualization properties. An important part of the thesis deals with a current hot topic, namely the automatic mapping of geospatial data. The general regression neural network (GRNN) is proposed as an efficient model to solve this task. The performance of the GRNN model is demonstrated on the Spatial Interpolation Comparison (SIC) 2004 data, where the GRNN model significantly outperformed all other approaches, especially under emergency conditions. The thesis consists of four chapters and has the following structure: theory, applications, software tools, and how-to-do-it examples. An important part of the work is a collection of software tools, Machine Learning Office. The Machine Learning Office tools were developed during the last 15 years and have been used both for many teaching courses, including international workshops in China, France, Italy, Ireland and Switzerland, and for fundamental and applied research projects. The case studies considered cover a wide spectrum of real-life low- and high-dimensional geo- and environmental problems, such as air, soil and water pollution by radionuclides and heavy metals, classification of soil types and hydro-geological units, decision-oriented mapping with uncertainties, and natural hazard (landslides, avalanches) assessments and susceptibility mapping. Complementary tools useful for exploratory data analysis and visualisation were developed as well. The software is user friendly and easy to use.
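As a pointer to how the GRNN mentioned above makes predictions (a generic textbook formulation, not the Machine Learning Office implementation), its output is a Gaussian-kernel weighted average of the training targets, in Nadaraya-Watson form; the data and smoothing parameter below are illustrative assumptions.

```python
# Minimal sketch of a general regression neural network (GRNN) prediction:
# a Gaussian-kernel weighted average of training targets.
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.3):
    """GRNN / Nadaraya-Watson prediction at the query points."""
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # kernel weights
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, size=(200, 2))        # e.g. spatial coordinates
y_train = np.sin(3 * x_train[:, 0]) + x_train[:, 1] + 0.1 * rng.standard_normal(200)
x_query = rng.uniform(0, 1, size=(5, 2))
print(grnn_predict(x_train, y_train, x_query))
```

The smoothing parameter sigma plays the role of the GRNN's single tuning constant and is typically chosen by cross-validation.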

Relevance:

30.00%

Abstract:

This work describes a simulation tool being developed at UPC to predict the microwave nonlinear behavior of planar superconducting structures with very few restrictions on the geometry of the planar layout. The software is intended to be applicable to most structures used in planar HTS circuits, including line, patch, and quasi-lumped microstrip resonators. The tool combines Method of Moments (MoM) algorithms for general electromagnetic simulation with Harmonic Balance algorithms to take into account the nonlinearities in the HTS material. The Method of Moments code is based on a discretization of the Electric Field Integral Equation in Rao-Wilton-Glisson basis functions. The multilayer dyadic Green's function is used in a Sommerfeld integral formulation. The Harmonic Balance algorithm has been adapted to this application, in which the nonlinearity is distributed and compatibility with the MoM algorithm is required. Tests of the algorithm on TM010 disk resonators agree with closed-form equations for both the fundamental and the third-order intermodulation currents. Simulations of hairpin resonators show good qualitative agreement with previously published results, but it is found that a finer meshing would be necessary to obtain correct quantitative results. Possible improvements are suggested.
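To make the third-order intermodulation figure of merit concrete (a generic memoryless illustration, unrelated to the MoM/Harmonic Balance code itself), the sketch below drives a weak cubic nonlinearity with two tones and reads the IM3 products at 2f1 − f2 and 2f2 − f1 off the spectrum; the nonlinearity coefficient and tone frequencies are arbitrary.

```python
# Minimal sketch: third-order intermodulation (IM3) products generated by a
# memoryless cubic nonlinearity driven by two tones f1 and f2.  The IM3
# products appear at 2*f1 - f2 and 2*f2 - f1.
import numpy as np

fs, n = 8192.0, 8192                     # one-second record, integer-bin tones
t = np.arange(n) / fs
f1, f2 = 1000.0, 1100.0
v_in = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)
v_out = v_in + 0.05 * v_in ** 3          # weakly nonlinear (cubic) response

spectrum = np.abs(np.fft.rfft(v_out)) / n
for f in (f1, f2, 2 * f1 - f2, 2 * f2 - f1):
    k = int(round(f))                    # 1 Hz bins, so bin index == frequency
    print(f"{f:7.0f} Hz : amplitude {2 * spectrum[k]:.4f}")
```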

Relevance:

30.00%

Abstract:

The objective of the dissertation is to increase understanding and knowledge in the field where group decision support system (GDSS) and technology selection research overlap in the strategic sense. The purpose is to develop pragmatic, unique and competent management practices and processes for strategic technology assessment and selection from the whole company's point of view. The combination of the GDSS and technology selection is approached from the points of view of the core competence concept, the lead user method, and different technology types. In this research the aim is to find out how the GDSS contributes to the technology selection process, what aspects should be considered when selecting technologies to be developed or acquired, and what advantages and restrictions the GDSS has in the selection processes. These research objectives are discussed on the basis of experiences and findings in real-life selection meetings. The research has been mainly carried out with constructive, case study research methods. The study contributes novel ideas to the present knowledge and prior literature in the GDSS and technology selection arena. Academic and pragmatic research has been conducted in four areas: 1) the potential benefits of the group support system with the lead user method, where the need assessment process is positioned as information gathering for the selection of wireless technology development projects; 2) integrated technology selection and core competencies management processes, both in theory and in practice; 3) potential benefits of the group decision support system in the technology selection processes of different technology types; and 4) linkages between technology selection and R&D project selection in innovative product development networks. New knowledge and understanding has been created on the practical utilization of the GDSS in technology selection decisions. The study demonstrates that technology selection requires close cooperation between different departments, functions, and strategic business units in order to gather the best knowledge for the decision making. The GDSS is shown to be an effective way to promote communication and co-operation between the selectors. The constructs developed in this study have been tested in many industry fields, for example in the information and communication, forest, telecommunication, metal, software, and miscellaneous industries, as well as in non-profit organizations. The pragmatic results in these organizations are some of the most relevant proofs that confirm the scientific contribution of the study, according to the principles of the constructive research approach.

Relevance:

30.00%

Abstract:

The objective of the thesis is to structure and model the factors that contribute to and can be used in evaluating project success. The purpose of this thesis is to enhance the understanding of three research topics. The goal setting process, success evaluation and decision-making process are studied in the context of a project, a business unit and its business environment. To achieve the objective, three research questions are posed. These are 1) how to set measurable project goals, 2) how to evaluate project success and 3) how to affect project success with managerial decisions. The main theoretical contribution comes from deriving a synthesis of these research topics, which have mostly been discussed apart from each other in prior research. The research strategy of the study has features from at least the constructive, nomothetical, and decision-oriented research approaches. This strategy guides the theoretical and empirical part of the study. Relevant concepts and a framework are composed on the basis of the prior research contributions within the problem area. A literature review is used to derive constructs of factors within the framework. They are related to project goal setting, success evaluation, and decision making. On the basis of this, the case study method is applied to complement the framework. The empirical data includes one product development program, three construction projects, as well as one organization development, hardware/software, and marketing project in their contexts. In two of the case studies the analytic hierarchy process is used to formulate a hierarchical model that returns a numerical evaluation of the degree of project success. It has its origin in the solution idea, which in turn has its foundation in the notion of project success. The achieved results are condensed in the form of a process model that integrates project goal setting, success evaluation and decision making. The process of project goal setting is analysed as a part of an open system that includes a project, the business unit and its competitive environment. Four main constructs of factors are suggested. First, the project characteristics and requirements are clarified. The second and the third constructs comprise the components of client/market segment attractiveness and sources of competitive advantage. Together they determine the competitive position of a business unit. Fourth, the relevant goals and the situation of a business unit are clarified to stress their contribution to the project goals. Empirical evidence is gained on the exploitation of increased knowledge and on the reaction to changes in the business environment during a project to ensure project success. The relevance of a successful project to a company or a business unit tends to increase the higher the reference level of project goals is set. However, normal performance, or sometimes performance below this normal level, is intentionally accepted. Success measures make project success quantifiable. There are result-oriented, process-oriented and resource-oriented success measures. The study also links result measurements to enablers that portray the key processes. The success measures can be classified into success domains determining the areas on which success is assessed. Empirical evidence is gained on six success domains: strategy, project implementation, product, stakeholder relationships, learning situation and company functions.
However, some project goals, like safety, can be assessed using success measures that belong to two success domains. For example, a safety index is used for assessing occupational safety during a project, which is related to project implementation. Product safety requirements, in turn, are connected to the product characteristics and thus to the product-related success domain. Strategic success measures can be used to weave the project phases together. Empirical evidence on their static nature is gained. In order-oriented projects the project phases are often contractually divided between different suppliers or contractors. A project from the supplier's perspective can represent only a part of the "whole project" viewed from the client's perspective. Therefore static success measures are mostly used within the contractually agreed project scope and duration. Evidence is also acquired on the dynamic use of operational success measures. They help to focus on the key issues during each project phase. Furthermore, it is shown that the original success domains and success measures, their weights and target values can change dynamically. New success measures can replace the old ones to correspond better with the emphasis of the particular project phase. This adjustment concentrates on the key decision milestones. In conclusion, the study suggests a combination of static and dynamic success measures. Their linkage to an incentive system can make the project management proactive, enable fast feedback and enhance the motivation of the personnel. It is argued that the sequence of effective decisions is closely linked to the dynamic control of project success. According to the definition used, effective decisions aim at adequate decision quality and decision implementation. The findings support the view that project managers construct and use a chain of key decision milestones to evaluate and affect success during a project. These milestones can be seen as a part of the business processes. Different managers prioritise the key decision milestones to a varying degree. Divergent managerial perspectives, power, responsibilities and involvement during a project offer some explanation for this. Finally, the study introduces the use of Hard Gate and Soft Gate decision milestones. The managers may use the former milestones to provide decision support on result measurements and ad hoc critical conditions. In the latter milestones they may make an intermediate success evaluation also on the basis of other types of success measures, like process and resource measures.
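Because the analytic hierarchy process is central to two of the case studies, a minimal sketch of its core computation may help (a generic formulation with made-up comparison values, not the thesis's actual model): priority weights are obtained as the principal eigenvector of a reciprocal pairwise comparison matrix, together with a consistency check.

```python
# Minimal sketch of the analytic hierarchy process (AHP) core step: derive
# priority weights from a reciprocal pairwise comparison matrix via its
# principal eigenvector, and report the consistency ratio.
import numpy as np

# Pairwise comparisons of three hypothetical success criteria on Saaty's 1-9 scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                       # normalized priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)           # consistency index
ri = 0.58                                      # Saaty's random index for n = 3
print("weights:", np.round(weights, 3), " consistency ratio:", round(ci / ri, 3))
```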

Relevance:

30.00%

Abstract:

A method for dealing with monotonicity constraints in optimal control problems is used to generalize some results in the context of monopoly theory, also extending the generalization to a large family of principal-agent programs. Our main conclusion is that many results on diverse economic topics, achieved under assumptions of continuity and piecewise differentiability in connection with the endogenous variables of the problem, still remain valid after replacing such assumptions by two minimal requirements.

Relevance:

30.00%

Abstract:

Extreme weight conditions (EWC) groups along a continuum may share some biological risk factors and intermediate neurocognitive phenotypes. A core cognitive trait in EWC appears to be executive dysfunction, with a focus on decision making, response inhibition and cognitive flexibility. Differences between individuals in these areas are likely to contribute to the differences in vulnerability to EWC. The aim of the study was to investigate whether there is a common pattern of executive dysfunction in EWC while comparing anorexia nervosa patients (AN), obese subjects (OB) and healthy eating/weight controls (HC).

Relevance:

30.00%

Abstract:

A Wiener system is a linear time-invariant filter, followed by an invertible nonlinear distortion. Assuming that the input signal is an independent and identically distributed (iid) sequence, we propose an algorithm for estimating the input signal only by observing the output of the Wiener system. The algorithm is based on minimizing the mutual information of the output samples, by means of a steepest descent gradient approach.
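To fix the structure under discussion (a forward-model sketch only; the paper's contribution is the estimation algorithm, which is not reproduced here), a Wiener system filters the iid input with an LTI filter and then applies an invertible memoryless distortion. The filter taps and distortion below are illustrative assumptions.

```python
# Minimal sketch of a Wiener system: an iid input passes through a linear
# time-invariant filter and then through an invertible memoryless nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, 5000)                    # iid input sequence (unobserved)
h = np.array([1.0, 0.7, -0.3, 0.1])             # LTI filter impulse response
z = np.convolve(s, h, mode="full")[: s.size]    # linear stage
x = np.tanh(1.5 * z)                            # invertible distortion (observed output)
print(x[:5])
```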

Relevance:

30.00%

Abstract:

We all make decisions of varying levels of importance every day. Because making a decision implies that there are alternative choices to be considered, almost every decision involves some conflict or dissatisfaction. Traditional economic models assume that a person weighs the positive and negative outcomes of each option and, based on these inferences, determines which option is best for that particular situation. However, individuals often act as irrational agents and tend to deviate from these rational choices. They instead evaluate the outcomes' subjective value: when facing a risky choice leading to losses, people are inclined to prefer risk over certainty, while when facing a risky choice leading to gains, they often avoid taking risks and choose the most certain option. It is now assumed that decision making is balanced between deliberative and emotional components. Distinct neural regions underpin these factors: the deliberative pathway, which corresponds to executive functions, implies the activation of the prefrontal cortex, while the emotional pathway tends to activate the limbic system. These circuits appear to be altered in individuals with ADHD, resulting, among other things, in impaired decision-making capacities. Their impulsive and inattentive behaviors are likely to be the cause of their irrational attitude towards risk taking. A possible solution is to administer a drug treatment to these individuals, with the knowledge that it might have several side effects; an alternative treatment relying on cognitive rehabilitation might, however, be appropriate. This project therefore aimed to investigate whether an intensive working memory training could have a spillover effect on decision making in adults with ADHD and in age-matched healthy controls. We designed a decision-making task in which participants had to select an amount to gamble, with a one-in-three chance of winning four times the chosen amount; in the other cases they could lose their investment. Their performance was recorded using electroencephalography before and after a one-month Dual N-Back training, and possible near and far transfer effects were investigated. Overall, we found that performance during the gambling task was modulated by personality factors and by the severity of symptoms at the pretest session. At posttest, we found that all individuals demonstrated an improvement on the Dual N-Back and on similar untrained dimensions. In addition, we discovered that not only did the adults with ADHD show a stable decrease in symptomatology, as evaluated by the CAARS inventory, but this reduction was also detected in the control sample. Event-Related Potential (ERP) data are in favor of a change within prefrontal and parietal cortices. These results suggest that cognitive remediation can be effective in adults with ADHD and in healthy controls. An important complement to this work would be an examination of the data with regard to the attentional networks, which could strengthen the conclusion that complex programs covering the remediation of several dimensions of executive functions are not required: a single working memory training can be sufficient.