926 results for Backtrack programming.


Relevance:

20.00%

Publisher:

Abstract:

Since asset returns are recognized as not being normally distributed, a line of research on portfolio higher moments soon emerged. To account for the uncertainty and vagueness of portfolio returns as well as of higher-moment risks, we propose in this paper a new portfolio selection model employing fuzzy sets. A fuzzy multi-objective linear program (MOLP) for portfolio optimization is formulated using the marginal impacts of assets on portfolio higher moments, which are modelled by trapezoidal fuzzy numbers. Through a consistent centroid-based ranking of fuzzy numbers, the fuzzy MOLP is transformed into an MOLP that is then solved by the maximin method. By taking portfolio higher moments into account, the approach enables investors to optimize not only the normal risk (variance) but also the asymmetric risk (skewness) and the risk of fat tails (kurtosis). An illustrative example demonstrates the efficiency of the proposed methodology compared to previous portfolio optimization models.
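
As an illustration of the centroid-based ranking step, the Python sketch below (our own minimal example; the paper's exact ranking procedure may differ) scores trapezoidal fuzzy numbers (a, b, c, d), with support [a, d] and core [b, c], by a standard centroid formula and ranks them accordingly.

def trapezoid_centroid(a: float, b: float, c: float, d: float) -> float:
    """x-coordinate of the centroid of the trapezoidal membership function."""
    if a == b == c == d:                 # degenerate (crisp) number
        return a
    return (a + b + c + d - (c * d - a * b) / ((c + d) - (a + b))) / 3.0

def rank_fuzzy(numbers):
    """Sort trapezoidal fuzzy numbers by centroid (larger ranks first)."""
    return sorted(numbers, key=lambda t: trapezoid_centroid(*t), reverse=True)

# e.g. two hypothetical fuzzy estimates of an asset's marginal impact on skewness
print(rank_fuzzy([(0.10, 0.15, 0.20, 0.30), (0.05, 0.20, 0.25, 0.28)]))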

Relevance:

20.00%

Publisher:

Abstract:

High failure and drop-out rates in introductory programming courses continue to be a significant concern for computer science disciplines, despite extensive research attempting to address the issue. In this study, we include the three entities of the didactic triangle (instructors, students and curriculum) to explore the learning difficulties that students encounter when studying introductory programming. We first explore students' perceptions of the barriers and affordances to learning programming: a survey of introductory programming students gathers their feedback on the topics and associated learning resources in the course. The instructors' perceptions are included by analyzing the current teaching materials and assessment tools used in the course. As a result, an ADRI-based approach is proposed to address the problems identified in the teaching and learning processes of an introductory programming course.

Relevance:

20.00%

Publisher:

Abstract:

A classical approach to two- and multi-stage optimization problems under uncertainty is scenario analysis. To this end, the uncertainty in some of the problem data is modelled by random vectors with finite, stage-specific supports. Each of these realizations represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we review some of the popular methods and propose a new adaptive strategy that aims to better track the progress of the algorithm. Numerical experiments on examples of multistage stochastic linear problems suggest that most existing techniques can exhibit premature convergence to a suboptimal solution, or converge to the optimal solution but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review the existing techniques and propose the idea of replacing the quadratic term with a linear term. Although this method remains to be tested, we expect that it will reduce some of the numerical and theoretical difficulties of progressive hedging.
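
For readers unfamiliar with progressive hedging, the minimal Python sketch below (our own toy example, not the thesis's implementation) runs the basic loop on a single first-stage variable with quadratic scenario costs, so the augmented-Lagrangian subproblems have closed-form solutions; the penalty parameter rho is exactly the tuning choice discussed above, and all data here are made up.

import numpy as np

d = np.array([1.0, 2.0, 4.0])        # scenario data (equiprobable)
p = np.full(len(d), 1.0 / len(d))    # scenario probabilities
rho = 1.0                            # penalty parameter (the key tuning choice)

w = np.zeros(len(d))                 # scenario multipliers
x_bar = 0.0                          # nonanticipative (implementable) estimate

for it in range(500):
    # scenario subproblems: argmin_x (x - d_s)^2 + w_s*x + (rho/2)*(x - x_bar)^2
    x = (2.0 * d - w + rho * x_bar) / (2.0 + rho)
    x_bar = float(p @ x)                   # project onto nonanticipativity
    w += rho * (x - x_bar)                 # multiplier update
    if np.max(np.abs(x - x_bar)) < 1e-6:   # small primal residual: consensus reached
        break

print(f"consensus x = {x_bar:.4f} after {it + 1} iterations")  # tends to the mean of d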

Relevance:

20.00%

Publisher:

Abstract:

This thesis examines all the phases that lead to the creation of a generic video game, applying them to build a 3D game with Unity from scratch. It analyses the concept phase, the design of the environments as well as of the implemented algorithms, the production phase and thus the writing of the code, and finally the tests that were carried out.

Relevance:

20.00%

Publisher:

Abstract:

Pitch Estimation, also known as Fundamental Frequency (F0) estimation, has been a popular research topic for many years and is still investigated nowadays. The goal of Pitch Estimation is to find the pitch or fundamental frequency of a digital recording of speech or musical notes. It plays an important role because it is the key to identifying which notes are being played and at what time. Pitch Estimation of real instruments is a very hard task to address: each instrument has its own physical characteristics, which are reflected in different spectral characteristics; recording conditions vary from studio to studio; and background noises must be considered. This dissertation presents a novel approach to the problem of Pitch Estimation, using Cartesian Genetic Programming (CGP). We take advantage of evolutionary algorithms, in particular CGP, to explore and evolve complex mathematical functions that act as classifiers. These classifiers are used to identify the pitches of piano notes in an audio signal. To help with the codification of the problem, we built a highly flexible CGP Toolbox, generic enough to encode different kinds of programs. The encoded evolutionary algorithm is the 1 + λ strategy, and the value of λ can be chosen. The toolbox is very simple to use: settings such as the mutation probability and the number of runs and generations are configurable. The Cartesian representation of CGP can take multiple forms and is able to encode function parameters. It can handle different types of fitness functions, both minimization and maximization of f(x), and has a useful system of callbacks. We trained 61 classifiers corresponding to 61 piano notes. A training set of audio signals was used for each classifier: half were signals with the same pitch as the classifier (true positive signals) and the other half were signals with different pitches (true negative signals). F-measure was used as the fitness function. Signals with the same pitch as the classifier that were correctly identified count as true positives; signals with the same pitch that were not identified count as false negatives; signals with a different pitch that were not identified count as true negatives; and signals with a different pitch that were identified count as false positives. Our first approach was to evolve classifiers for identifying artificial signals created by mathematical functions: sine, sawtooth and square waves. Our function set is basically composed of filtering operations on vectors and of arithmetic operations with constants and vectors. All the classifiers correctly identified the true positive signals and did not identify the true negative signals. We then moved to real audio recordings. For testing the classifiers, we picked audio signals different from the ones used during the training phase. For a first approach the obtained results were very promising, but could be improved. We made slight changes to our approach, and the number of false positives was reduced by 33% compared to the first approach. We then applied the evolved classifiers to polyphonic audio signals, and the results indicate that our approach is a good starting point for addressing the problem of Pitch Estimation.
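
The F-measure fitness described above can be made concrete with a short sketch (our own illustration, not the toolbox's code); classify stands in for an evolved CGP classifier, and the toy signals and names are hypothetical.

def f_measure_fitness(classify, positives, negatives):
    """F-measure of classify over same-pitch and different-pitch signals."""
    tp = sum(1 for sig in positives if classify(sig))   # same pitch, correctly flagged
    fn = len(positives) - tp                            # same pitch, missed
    fp = sum(1 for sig in negatives if classify(sig))   # different pitch, wrongly flagged
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# toy usage: a "classifier" that flags signals whose peak value exceeds 0.5
toy_positives = [[0.9, 0.8], [0.7, 0.6]]
toy_negatives = [[0.1, 0.2], [0.8, 0.1]]
print(f_measure_fitness(lambda s: max(s) > 0.5, toy_positives, toy_negatives))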

Relevance:

20.00%

Publisher:

Abstract:

The Positive Youth Development (PYD) perspective is a strength-based conceptualization of youth. It highlights the importance of mutually beneficial relationships between youth and their environment in developing the "Five Cs", key assets that include character. Character has long been a subject of programming because of its focus on helping children lead moral, empathic, and prosocial lives. There are, however, many limitations in character research, including poorly operationalized definitions of character; a failure to examine the developmental and broader social context in which character exists; and a lack of evaluation of more practical character programming. The goal of this dissertation was to address these gaps in knowledge and inform the character education programming literature. The first study examined the relationships among age, gender, the school social context, and character. Moral character was negatively associated with grade, and being a girl was positively associated with moral character. The relationships between positive peer interactions at school and character (fairness, integrity) were stronger among students who reported low initial moral character when positive peer interactions were high. In the second study, the Build Character: Build Success Program, a character education program, was evaluated over six months to examine its effects on character behaviours, victimization, and school climate. No program effects were found for students in grades 1 to 3, but a slight decrease in victimization was found in one experimental school for students in grades 4 to 8. This lack of general program effects may be due to the short-term nature of the intervention, which may not have been long enough to produce measurable behaviour change. Implementation data indicated that teachers did not teach all program elements, which may also have influenced the results of the program evaluation. The present dissertation contributes to knowledge about character and its programming by introducing new measures to operationalize character, discovering developmental patterns in character in school-aged children, highlighting gender differences in character, examining character within its broad social context, and evaluating short-term character education programming.

Relevance:

20.00%

Publisher:

Abstract:

Code patterns, including programming patterns and design patterns, are good references for programming-language feature improvement and software re-engineering. However, to our knowledge, no existing research has attempted to detect code patterns based on code clone detection technology. In this study, we build upon previous work and propose to detect and analyze code patterns from a collection of open source projects using NiPAT technology. Because design patterns are most closely associated with object-oriented languages, we chose Java and Python projects for our study. The tool we use for detecting patterns is NiPAT, a pattern detection tool, based on the NiCad clone detector, that was originally developed for the TXL programming language. We extend NiPAT to the Java and Python programming languages. We then identify all the patterns from the pattern report and classify them into several categories. At the end of the study, we analyze all the patterns and compare the differences between the Java and Python patterns.
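
As a toy illustration of the general idea behind clone-detection-based pattern mining (this is neither the NiPAT nor the NiCad algorithm, just our own sketch), the Python snippet below groups functions whose structure is identical once identifier names and literals are abstracted away; groups with repeated members are crude candidates for recurring code patterns.

import ast
from collections import defaultdict

def structural_fingerprint(func: ast.FunctionDef) -> str:
    """Dump the AST of a function with names and literals blanked out."""
    class Normalize(ast.NodeTransformer):
        def visit_FunctionDef(self, node):   # function name -> placeholder
            node.name = "_fn"
            self.generic_visit(node)
            return node
        def visit_arg(self, node):           # parameter names -> placeholder
            node.arg = "_arg"
            return node
        def visit_Name(self, node):          # variable names -> placeholder
            return ast.copy_location(ast.Name(id="_id", ctx=node.ctx), node)
        def visit_Constant(self, node):      # literal values -> placeholder
            return ast.copy_location(ast.Constant(value="_const"), node)
    normalized = Normalize().visit(ast.parse(ast.unparse(func)))
    return ast.dump(normalized)

def clone_classes(source: str) -> dict:
    """Group top-level functions in source code by structural fingerprint."""
    groups = defaultdict(list)
    for node in ast.parse(source).body:
        if isinstance(node, ast.FunctionDef):
            groups[structural_fingerprint(node)].append(node.name)
    # groups with two or more members show repeated structure: candidate patterns
    return {fp: names for fp, names in groups.items() if len(names) > 1}

example = """
def total_price(items):
    s = 0
    for it in items:
        s = s + it
    return s

def total_weight(boxes):
    w = 0
    for b in boxes:
        w = w + b
    return w
"""
print(list(clone_classes(example).values()))   # both functions share one clone class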

Relevance:

20.00%

Publisher:

Abstract:

Process systems design, operation and synthesis problems under uncertainty can readily be formulated as two-stage stochastic mixed-integer linear and nonlinear (nonconvex) programming (MILP and MINLP) problems. These problems, with a scenario-based formulation, lead to large-scale MILPs/MINLPs that are well structured. The first part of the thesis proposes a new finitely convergent cross decomposition method (CD), in which Benders decomposition (BD) and Dantzig-Wolfe decomposition (DWD) are combined in a unified framework to improve the solution of scenario-based two-stage stochastic MILPs. This method alternates between DWD iterations and BD iterations, where DWD restricted master problems and BD primal problems yield a sequence of upper bounds, and BD relaxed master problems yield a sequence of lower bounds. A variant of CD that adds multiple columns per iteration of the DWD restricted master problem and multiple cuts per iteration of the BD relaxed master problem, called multicolumn-multicut CD, is then developed to improve solution time. Finally, an extended cross decomposition method (ECD) for solving two-stage stochastic programs with risk constraints is proposed. In this approach, CD at the first level and DWD at the second level are used to solve the original problem to optimality. ECD has a computational advantage over a bilevel decomposition strategy or over solving the monolithic problem with an MILP solver. The second part of the thesis develops a joint decomposition approach combining Lagrangian decomposition (LD) and generalized Benders decomposition (GBD) to efficiently solve stochastic mixed-integer nonlinear nonconvex programming problems to global optimality, without the need for explicit branch-and-bound search. In this approach, LD subproblems and GBD subproblems are systematically solved in a single framework. The relaxed master problem, obtained from the reformulation of the original problem, is solved only when necessary. A convexification of the relaxed master problem and a domain-reduction procedure are integrated into the decomposition framework to improve solution efficiency. Using case studies taken from renewable-resource and fossil-fuel based applications in process systems engineering, it is shown that these novel decomposition approaches have significant benefits over classical decomposition methods and state-of-the-art MILP/MINLP global optimization solvers.
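
Since Benders decomposition is one of the building blocks combined above, a minimal sketch of the classical Benders (L-shaped) loop on a toy two-stage stochastic LP may help; this is our own illustration under stated assumptions, not the thesis's cross decomposition, which alternates such Benders iterations with Dantzig-Wolfe iterations. It assumes scipy is available, and the toy problem (capacity x at unit cost 1, unmet demand penalized at 10 per unit, so the scenario recourse Q_s(x) = 10*max(d_s - x, 0) has a closed-form value and subgradient) is ours.

import numpy as np
from scipy.optimize import linprog

d = np.array([1.0, 2.0, 3.0])            # demand scenarios, equiprobable
p = np.full(len(d), 1.0 / len(d))

cuts = []                                 # list of (a, b): theta >= a + b*x
lb, ub = -np.inf, np.inf
x_k = 0.0

for k in range(20):
    # master: min x + theta  s.t.  theta >= a + b*x for all cuts, x >= 0, theta >= 0
    A_ub = [[b, -1.0] for (a, b) in cuts]
    b_ub = [-a for (a, b) in cuts]
    res = linprog(c=[1.0, 1.0], A_ub=A_ub or None, b_ub=b_ub or None,
                  bounds=[(0, None), (0, None)], method="highs")
    x_k, theta_k = res.x
    lb = res.fun                                  # master value is a lower bound

    # scenario "subproblems": value and subgradient of Q_s at x_k
    q = 10.0 * np.maximum(d - x_k, 0.0)           # recourse costs per scenario
    g = np.where(d > x_k, -10.0, 0.0)             # subgradients dQ_s/dx
    Q, G = float(p @ q), float(p @ g)             # expected value and slope
    ub = min(ub, x_k + Q)                         # feasible point gives an upper bound

    if ub - lb < 1e-8:
        break
    cuts.append((Q - G * x_k, G))                 # optimality cut: theta >= Q(x_k) + G*(x - x_k)

print(f"x* = {x_k:.4f}, cost = {ub:.4f}, iterations = {k + 1}")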

Relevance:

20.00%

Publisher:

Abstract:

Lung cancer is a leading cause of cancer-related death worldwide. Early diagnosis of cancer has been demonstrated to be greatly helpful for curing the disease effectively. Microarray technology provides a promising approach to exploiting gene profiles for cancer diagnosis. In this study, the authors propose a gene expression programming (GEP)-based model to predict lung cancer from microarray data. The authors use two gene selection methods to extract the significant lung-cancer-related genes, and accordingly propose different GEP-based prediction models. Prediction performance evaluations and comparisons between the authors' GEP models and three representative machine learning methods, support vector machine, multi-layer perceptron and radial basis function neural network, were conducted thoroughly on real microarray lung cancer datasets. Reliability was assessed by cross-dataset validation. The experimental results show that the GEP model using fewer feature genes outperformed the other models in terms of accuracy, sensitivity, specificity and area under the receiver operating characteristic curve. It is concluded that the GEP model is a better solution to lung cancer prediction problems.
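
A minimal sketch (not the authors' code) of the evaluation protocol the abstract describes: accuracy, sensitivity, specificity and ROC AUC for a binary classifier, computed with scikit-learn on synthetic stand-in data; a logistic regression is used purely as a placeholder for the evolved GEP model.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# synthetic stand-in for a microarray dataset (samples x genes, binary labels)
X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # placeholder model
y_pred = clf.predict(X_te)
y_score = clf.predict_proba(X_te)[:, 1]

tn, fp, fn, tp = confusion_matrix(y_te, y_pred).ravel()
print("accuracy   ", (tp + tn) / (tp + tn + fp + fn))
print("sensitivity", tp / (tp + fn))    # true positive rate
print("specificity", tn / (tn + fp))    # true negative rate
print("ROC AUC    ", roc_auc_score(y_te, y_score))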

Relevance:

20.00%

Publisher:

Abstract:

This study looked at the reasons why Vanier College students in computer programming encounter difficulties in their learning process. Factors such as prior academic background, prior computer experience, mother tongue, and learning styles were examined to see how they play a role in students' success in programming courses. The initial research hypotheses were the following: (1) computer science students using understanding and integrating succeed better than students using following, coding, or problem solving; (2) students using problem solving succeed better than those who use participating and enculturation; (3) students who use coding perform better than those who prefer participating and enculturation. In addition, this study hoped to examine whether there is a gender difference in how students learn programming.