843 results for Reasoning (Psychology).
Abstract:
Dissertation submitted to obtain the Degree of Doctor in Informatics
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation presented to obtain the Degree of Doctor in Informatics Engineering at Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
Optimization seeks the best possible value of an objective function; continuous optimization is optimization over real intervals. There are many global and local search techniques. Global search techniques attempt to find the global optimum of the optimization problem, whereas local search techniques, which are more commonly used, seek a locally optimal solution within a region of the search space. In Continuous Constraint Satisfaction Problems (CCSPs), constraints are viewed as relations between variables, and the computations are supported by interval analysis. The continuous constraint programming framework provides branch-and-prune algorithms for covering the solution sets of the constraints with sets of interval boxes, which are Cartesian products of intervals. These algorithms begin with an initial crude cover of the feasible space (the Cartesian product of the initial variable domains), which is recursively refined by interleaving pruning and branching steps until a stopping criterion is satisfied. In this work, we seek a convenient way to combine the advantages of CCSP branch-and-prune with local search for global optimization, applied locally over each pruned branch of the CCSP. We apply local search techniques from continuous optimization over the pruned boxes produced by the CCSP techniques, mainly steepest descent with different characteristics such as penalty calculation and step length. We implement two main local search algorithms. We use “Procure”, a constraint reasoning and global optimization framework, to implement our techniques, and we present our results over a set of benchmarks.
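To make the interplay concrete, below is a minimal sketch of projected steepest descent restricted to a single pruned interval box, with a simple backtracking step length. It is illustrative only: the function names and the backtracking rule are assumptions, not the Procure implementation or the thesis's penalty-based variants.

```python
import numpy as np

def steepest_descent_in_box(f, grad_f, box, x0, step=1.0, tol=1e-8, max_iter=1000):
    """Projected steepest descent restricted to one pruned interval box.

    box: list of (lo, hi) pairs, i.e. the Cartesian product of intervals.
    A backtracking line search halves the step until f decreases.
    """
    lo = np.array([b[0] for b in box])
    hi = np.array([b[1] for b in box])
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(max_iter):
        g = grad_f(x)
        t = step
        while t > tol:
            x_new = np.clip(x - t * g, lo, hi)  # project back into the box
            if f(x_new) < f(x):
                break
            t *= 0.5  # backtrack: shrink the step length
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical usage on one box produced by branch-and-prune:
f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
grad = lambda x: np.array([2 * (x[0] - 1.0), 2 * (x[1] + 2.0)])
print(steepest_descent_in_box(f, grad, [(0.0, 2.0), (-3.0, 0.0)], x0=[0.5, -0.5]))
```

Running the local search independently inside each box returned by the pruning phase, then keeping the best local optimum found, is the basic combination the abstract describes.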
Abstract:
This work studies the combination of safe and probabilistic reasoning through the hybridization of Monte Carlo integration techniques with continuous constraint programming. In continuous constraint programming, variables range over continuous domains (represented as intervals) together with constraints over them (relations between variables), and the goal is to find values for those variables that satisfy all the constraints (consistent scenarios). Constraint programming “branch-and-prune” algorithms produce safe enclosures of all consistent scenarios. Dedicated algorithms for probabilistic constraint reasoning compute the probability of sets of consistent scenarios, which requires the calculation of an integral over these sets (quadrature). In this work we propose to extend the “branch-and-prune” algorithms with Monte Carlo integration techniques to compute such probabilities. This approach can be useful in robotics for localization problems, where traditional approaches are based on probabilistic techniques that search for the most likely scenario, which may not satisfy the model constraints. We show how to apply our approach to cope with this problem and provide functionality in real time.
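As an illustration of the proposed hybridization, the following sketch estimates the probability mass of the consistent scenarios by standard Monte Carlo quadrature over the interval boxes returned by branch-and-prune. All names are hypothetical, and the sketch assumes the boxes are pairwise disjoint; it is not the thesis's actual algorithm.

```python
import random

def box_volume(box):
    """Volume of an interval box given as a list of (lo, hi) pairs."""
    v = 1.0
    for lo, hi in box:
        v *= hi - lo
    return v

def mc_probability(boxes, satisfies, density, n_samples=10000):
    """Monte Carlo estimate of the probability of the consistent scenarios.

    boxes:     interval boxes produced by branch-and-prune
    satisfies: predicate testing whether a sample point meets all constraints
    density:   joint probability density over the variables
    """
    total = 0.0
    for box in boxes:
        vol = box_volume(box)
        acc = 0.0
        for _ in range(n_samples):
            x = [random.uniform(lo, hi) for lo, hi in box]
            if satisfies(x):
                acc += density(x)  # integrand: density restricted to feasible points
        # Sample mean of the integrand times box volume approximates the integral.
        total += vol * acc / n_samples
    return total
```

Sampling only inside the pruned boxes, rather than over the whole initial domain, is what the constraint phase buys: samples are not wasted on regions already proven inconsistent.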
Abstract:
Ontologies formalized by means of Description Logics (DLs) and rules in the form of Logic Programs (LPs) are two prominent formalisms in the field of Knowledge Representation and Reasoning. While DLs adhere to the Open World Assumption and are suited for taxonomic reasoning, LPs implement reasoning under the Closed World Assumption, so that default knowledge can be expressed. However, for many applications it is useful to have a means of reasoning over an open domain and expressing rules with exceptions at the same time. Hybrid MKNF knowledge bases make such a means available by formalizing DLs and LPs in a common logic, the Logic of Minimal Knowledge and Negation as Failure (MKNF). Since rules and ontologies are used in open environments such as the Semantic Web, inconsistencies cannot always be avoided. This poses a problem due to the Principle of Explosion, which holds in classical logics. Paraconsistent logics offer a solution to this issue by assigning meaningful models even to contradictory sets of formulas. Consequently, paraconsistent semantics for DLs and LPs have been investigated intensively. Our goal is to apply the paraconsistent approach to the combination of DLs and LPs in hybrid MKNF knowledge bases. In this thesis, a new six-valued semantics for hybrid MKNF knowledge bases is introduced, extending the three-valued approach by Knorr et al., which is based on the well-founded semantics for logic programs. Additionally, a procedural way of computing paraconsistent well-founded models for hybrid MKNF knowledge bases by means of an alternating fixpoint construction is presented, and it is proven that the algorithm is sound and complete w.r.t. the model-theoretic characterization of the semantics. Moreover, it is shown that the new semantics is faithful w.r.t. well-studied paraconsistent semantics for DLs and LPs, respectively, and maintains the efficiency of the approach it extends.
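For readers unfamiliar with alternating fixpoint constructions, the sketch below shows the classical two-step alternation (Van Gelder's Γ² operator) computing the well-founded model of a plain normal logic program. It is a deliberately simplified stand-in, assuming ground rules encoded as (head, positive body, negative body) triples; the thesis's six-valued paraconsistent construction for hybrid MKNF knowledge bases is considerably richer.

```python
def gamma(rules, s):
    """Gelfond-Lifschitz operator: least model of the reduct of the program w.r.t. s."""
    # Keep rules whose negative body is disjoint from s; drop their negative literals.
    reduct = [(head, pos) for (head, pos, neg) in rules if not (set(neg) & s)]
    model = set()
    changed = True
    while changed:  # naive iteration to the least model of the positive reduct
        changed = False
        for head, pos in reduct:
            if head not in model and set(pos) <= model:
                model.add(head)
                changed = True
    return model

def well_founded(rules, atoms):
    """Alternating fixpoint: gamma is antimonotone, so gamma∘gamma is monotone."""
    true = set()
    while True:
        upper = gamma(rules, true)       # overestimate: possibly-true atoms
        new_true = gamma(rules, upper)   # underestimate: surely-true atoms
        if new_true == true:
            return true, atoms - upper   # (true atoms, false atoms); the rest undefined
        true = new_true

# Example program: p :- not q.  q :- not p.  r.  s :- not r.
rules = [("p", [], ["q"]), ("q", [], ["p"]), ("r", [], []), ("s", [], ["r"])]
print(well_founded(rules, {"p", "q", "r", "s"}))  # ({'r'}, {'s'}); p, q undefined
```

The mutually negating rules for p and q leave both undefined, which is exactly the three-valued behavior the paraconsistent semantics generalizes to six truth values.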
Abstract:
Machine ethics is an interdisciplinary field of inquiry that emerges from the need to imbue autonomous agents with the capacity for moral decision-making. While some approaches provide implementations in Logic Programming (LP) systems, they have not exploited LP-based reasoning features that appear essential for moral reasoning. This PhD thesis aims at investigating further the appropriateness of LP, notably a combination of LP-based reasoning features, including techniques available in LP systems, to machine ethics. Moral facets, as studied in moral philosophy and psychology, that are amenable to computational modeling are identified and mapped to appropriate LP concepts for representing and reasoning about them. The main contributions of the thesis are twofold. First, novel approaches are proposed for employing tabling in contextual abduction and updating, individually and combined, plus an LP approach to counterfactual reasoning; the latter is implemented on top of the aforementioned combined abduction and updating technique with tabling. They are all important to model various issues of the aforementioned moral facets. Second, a variety of LP-based reasoning features are applied to model the identified moral facets, through moral examples taken from the morality literature. These applications include: (1) Modeling moral permissibility according to the Doctrines of Double Effect (DDE) and Triple Effect (DTE), demonstrating deontological and utilitarian judgments via integrity constraints (in abduction) and preferences over abductive scenarios; (2) Modeling moral reasoning under uncertainty of actions, via abduction and probabilistic LP; (3) Modeling moral updating (which allows other, possibly overriding, moral rules to be adopted by an agent, on top of those it currently follows) via the integration of tabling in contextual abduction and updating; and (4) Modeling moral permissibility and its justification via counterfactuals, where counterfactuals are used for formulating DDE.
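A very rough sketch of the abductive machinery underlying items (1) and its preference mechanism: enumerate scenarios over a set of abducibles, discard those violating integrity constraints, and rank the survivors by a preference relation. The example names (divert, harm_intended_as_means, and so on) and the scoring are invented for illustration and do not reproduce the thesis's LP encoding of DDE/DTE.

```python
from itertools import combinations

def abduce(abducibles, consequences, integrity_constraints, preference):
    """Enumerate abductive scenarios, filter them by integrity constraints,
    then rank the surviving scenarios by a preference relation."""
    scenarios = []
    for r in range(len(abducibles) + 1):
        for chosen in combinations(sorted(abducibles), r):
            facts = consequences(set(chosen))
            if all(ic(facts) for ic in integrity_constraints):
                scenarios.append((set(chosen), facts))
    return sorted(scenarios, key=lambda s: preference(s[1]))

# Toy trolley-style scenario (hypothetical names):
# diverting saves five people but harms one as a side effect.
def consequences(chosen):
    facts = set(chosen)
    if "divert" in facts:
        facts |= {"five_saved", "one_harmed_side_effect"}
    else:
        facts |= {"five_harmed"}
    return facts

# DDE-style integrity constraint: harm must never be intended as a means.
ics = [lambda f: "harm_intended_as_means" not in f]
# Utilitarian-flavored preference: fewer people harmed is better.
pref = lambda f: (5 if "five_harmed" in f else 0) + (1 if "one_harmed_side_effect" in f else 0)

for hypotheses, facts in abduce({"divert"}, consequences, ics, pref):
    print(hypotheses, "->", sorted(facts))
```

Integrity constraints here play the deontological role (ruling scenarios out absolutely), while the preference relation plays the utilitarian role (ordering what remains), mirroring the division of labor the abstract describes.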
Abstract:
This thesis justifies the need for and develops a new integrated model of practical reasoning and argumentation. After framing the work in terms of what is reasonable rather than what is rational (chapter 1), I apply the model for practical argumentation analysis and evaluation provided by Fairclough and Fairclough (2012) to a paradigm case of unreasonable individual practical argumentation provided by mass murderer Anders Behring Breivik (chapter 2). The application shows that by following the model, Breivik is relatively easily able to conclude that his reasoning to mass murder is reasonable – which is understood to be an unacceptable result. Causes for the model to allow such a conclusion are identified as conceptual confusions ingrained in the model, a tension in how values function within the model, and a lack of creativity from Breivik. Distinguishing between dialectical and dialogical, reasoning and argumentation, for individual and multiple participants, chapter 3 addresses these conceptual confusions and helps lay the foundation for the design of a new integrated model for practical reasoning and argumentation (chapter 4). After laying out the theoretical aspects of the new model, it is then used to re-test Breivik’s reasoning in light of a developed discussion regarding the motivation for the new place and role of moral considerations (chapter 5). The application of the new model shows ways that Breivik could have been able to conclude that his practical argumentation was unreasonable and is thus argued to have improved upon the Fairclough and Fairclough model. It is acknowledged, however, that since the model cannot guarantee a reasonable conclusion, improving the critical creative capacity of the individual using it is also of paramount importance (chapter 6). The thesis concludes by discussing the contemporary importance of improving practical reasoning and by pointing to areas for further research (chapter 7).
Abstract:
OBJECTIVE: To characterize eating habits and possible risk factors associated with eating disorders among psychology students, a group at risk for eating disorders. METHOD: This is a cross-sectional study. The Bulimic Investigatory Test Edinburgh (BITE), the Eating Attitudes Test (EAT-26), the Body Shape Questionnaire (BSQ), and a questionnaire addressing related issues were applied. The Statistical Package for the Social Sciences (SPSS) 11.0 was used in the analysis. The study population was composed of 175 female students, with a mean age of 21.2 years (SD = 3.6). RESULTS: A positive result was detected on the EAT-26 in 6.9% of cases (95% CI: 3.6-11.7%). The prevalence of increased symptoms and of severe symptoms, according to the BITE questionnaire, was 5% (95% CI: 2.4-9.5%) and 2.5% (95% CI: 0.7-6.3%), respectively. According to the findings, 26.29% of the students presented abnormal eating behavior. The students with moderate/severe BSQ scores presented dissatisfaction with body weight. CONCLUSION: The results indicate that attention must be given to eating behavior risks within this group. Particular attention is warranted with respect to these future professionals, whose practice may be jeopardized if they themselves present established symptoms or precursory behavior.
Abstract:
"Published online: 29 March 2016"
Abstract:
Preface: My thesis consists of three essays in which I consider equilibrium asset prices and investment strategies when the market is likely to experience crashes and possibly sharp windfalls. Although each part is written as an independent and self-contained article, the papers share a common behavioral approach in representing investors' preferences regarding extremal returns. Investors' utility is defined over their relative performance rather than over their final wealth position, a method first proposed by Markowitz (1952b) and by Kahneman and Tversky (1979), which I extend to incorporate preferences over extremal outcomes. With the failure of traditional expected utility models to reproduce the observed stylized features of financial markets, the Prospect Theory of Kahneman and Tversky (1979) offered the first significant alternative to the expected utility paradigm by considering that people focus on gains and losses rather than on final positions. Under this setting, Barberis, Huang, and Santos (2000) and McQueen and Vorkink (2004) were able to build representative-agent optimization models whose solutions reproduce some of the observed risk premium and excess volatility. Research in behavioral finance is relatively new and much of its potential remains unexplored. The three essays composing my thesis use and extend this setting to study investor behavior and investment strategies in a market where crashes and sharp windfalls are likely to occur. In the first paper, the preferences of a representative agent relative to time-varying positive and negative extremal thresholds are modelled and estimated. A new utility function that reconciles expected utility maximization with tail-related performance measures is proposed. The model estimation shows that the representative agent's preferences reveal a significant level of crash aversion and lottery-pursuit. Assuming a single-risky-asset economy, the proposed specification is able to reproduce some of the distributional features exhibited by financial return series. The second part proposes and illustrates a preference-based asset allocation model taking investors' crash aversion into account. Using the skewed t distribution, optimal allocations are characterized as a trade-off between the distribution's first four moments. The specification highlights the preference for odd moments and the aversion to even moments. Optimal portfolios are analyzed qualitatively in terms of firm characteristics, and in a setting that reflects real-time asset allocation a systematic over-performance is obtained relative to the aggregate stock market. Finally, in my third article, dynamic option-based investment strategies are derived and illustrated for investors exhibiting downside loss aversion. The problem is solved in closed form when the stock market exhibits stochastic volatility and jumps. The specification of downside loss-averse utility functions allows the corresponding terminal wealth profiles to be expressed as options on the stochastic discount factor, contingent on the loss-aversion level. The dynamic strategies therefore reduce to the replicating portfolio using exchange-traded and well-selected options, and the risky stock.
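The second essay's "preference for odd moments and aversion to even moments" can be motivated by a standard fourth-order Taylor expansion of expected utility around mean wealth, sketched below. This is textbook intuition under generic smoothness assumptions, not the thesis's skewed-t specification.

```latex
% Fourth-order expansion of E[U(W)] around \mu = E[W];
% the first-order term vanishes because E[W - \mu] = 0.
\mathbb{E}[U(W)] \approx U(\mu)
  + \tfrac{1}{2}\,U''(\mu)\,\underbrace{\mathbb{E}[(W-\mu)^2]}_{\text{variance}}
  + \tfrac{1}{6}\,U'''(\mu)\,\underbrace{\mathbb{E}[(W-\mu)^3]}_{\text{skewness}}
  + \tfrac{1}{24}\,U^{(4)}(\mu)\,\underbrace{\mathbb{E}[(W-\mu)^4]}_{\text{kurtosis}}
```

With risk-averse, prudent, and temperate preferences (U'' < 0, U''' > 0, U⁽⁴⁾ < 0), the agent dislikes the even moments (variance, kurtosis) and likes the odd one (skewness), which is the trade-off the allocation model characterizes.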
Abstract:
BACKGROUND: Cognitive remediation is now widely accepted as an effective treatment for patients with schizophrenia. In French-speaking countries, the techniques used in cognitive remediation for patients with schizophrenia have been adapted from those used for patients with brain injury. As cognitive impairment is a core feature of schizophrenia, the Département de psychiatrie du CHUV in Lausanne (DP-CHUV) set out to develop a cognitive remediation program for patients with a schizophrenia spectrum disorder (Recos-Vianin, 2007). Numerous studies show that the specific cognitive deficits differ greatly from one patient to another. Consequently, Recos aims at providing individualized cognitive remediation therapy. In this feasibility trial, we measured the benefits of this individualized therapy for patients with schizophrenia. Before treatment, the patients were evaluated with a large battery of cognitive tests in order to determine which of the five specific training modules - verbal memory, visuospatial memory and attention, working memory, selective attention, reasoning - could provide the best benefit given their deficits. OBJECTIVES: The study was designed to evaluate the benefits of the Recos program by comparing cognitive functioning before and after treatment. METHOD: Twenty-eight patients with schizophrenia spectrum disorders (schizophrenia [n=18], schizoaffective disorder [n=5], schizotypal disorder [n=4], schizophreniform disorder [n=1], DSM-IV-TR) participated in between one and three of the cognitive modules. The choice of training module was based on the results of the cognitive tests obtained during the first evaluation. The patients participated in 20 training sessions per module (one session per week). At the end of the training period, the cognitive functioning of each patient was reevaluated using the same neuropsychological battery. RESULTS: The results showed a greater improvement in the cognitive functions that were specifically trained compared to those that were not. However, improvement was observed in both trained and untrained functions, suggesting an indirect cognitive gain. CONCLUSION: In our view, the great heterogeneity of the cognitive deficits observed in schizophrenia calls for a detailed neuropsychological investigation as well as individualized cognitive remediation therapy. These preliminary results need to be confirmed with a larger sample of patients.