900 results for Procedural Programming
Abstract:
This work presents and describes the procedural methodology, and the reflections behind it, that sustain the creation of an interactive digital artefact in which some of the constituent two-dimensional elements are manipulated so as to give the player the illusion of three-dimensionality. In 1992, Id Software's Wolfenstein 3D introduced a visual reference to three-dimensionality using 2D technology: through a system of image scaling and positioning, it conveyed a sense of three dimensions in what was then a first-person game, one in which the player experiences a visual field meant to reproduce our experience of the tactile world in the relations it establishes between spaces and objects. These conceptual goals are reproduced through Processing, a programming language built on Java, and they seek, on the one hand, to convey this apparent illusion of three-dimensionality and, on the other, not to build a digital artefact played in the first person, but one that offers the player a visual experience covering the whole space in which they are allowed to move, exposing the adversities they must overcome in order to progress. To make this possible, the player assumes the role of a character and, through interaction with the artefact, gradually builds a visual narrative intended to engage them with the theme represented. The theme is a representation of the search for the sarcophagus of the 18th Dynasty pharaoh Tutankhamun (1332-1323 BC) by the British explorer Howard Carter (1874-1939), whose 1922 expedition in the Valley of the Kings remains to this day the most celebrated archaeological discovery related to Ancient Egypt. Throughout this dissertation, topics are addressed that seek solutions both in the technical and technological field, through programming and its language, and in the visual and aesthetic field, aiming at a conscious connection with the theme to be represented and experienced.
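The heart of this kind of illusion is perspective scaling: a sprite's on-screen size falls off in inverse proportion to its distance from the viewer. A minimal Python sketch of that projection rule (illustrative only, with an assumed focal length; the dissertation's artefact itself is built in Processing):

```python
# Perspective scaling: the on-screen size of a 2D sprite shrinks in
# inverse proportion to its distance, which is what sells the 3D illusion.

def projected_size(world_size: float, distance: float, focal_length: float = 300.0) -> float:
    """Return the on-screen size (pixels) of a sprite of `world_size` units
    located `distance` units from the viewer (pinhole-style projection).
    The focal length of 300 is an arbitrary illustrative choice."""
    if distance <= 0:
        raise ValueError("distance must be positive")
    return world_size * focal_length / distance

# A sprite 64 units tall appears half as tall each time its distance doubles.
for d in (150, 300, 600, 1200):
    print(d, round(projected_size(64, d), 1))
```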
Abstract:
The aim of this note is to formulate an envelope theorem for vector convex programs. This version corrects an earlier work, "The envelope theorem for multiobjective convex programming via contingent derivatives" by Jiménez Guerra et al. (2010) [3]. We first propose a necessary and sufficient condition that allows us to restate the main result proved in the cited paper. Second, we introduce a new Lagrange multiplier in order to obtain an envelope theorem that avoids the aforementioned error.
Abstract:
The aim of this paper is to extend the classical envelope theorem from scalar to vector differential programming. The result obtained allows us to measure the quantitative behaviour of a certain set of optimal values (not necessarily a singleton), characterized as minima attained when the objective function is composed with a positive function, under changes in any of the parameters appearing in the constraints. We show that the sensitivity of the program depends on a Lagrange multiplier and its sensitivity.
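For orientation, the scalar case being generalized here is the classical envelope theorem; in one standard formulation (not the paper's vector-valued statement), for the perturbed convex program

$$ V(b) \;=\; \min_{x} \,\{\, f(x) \;:\; g_i(x) \le b_i,\; i = 1,\dots,m \,\} $$

the value function satisfies

$$ \frac{\partial V}{\partial b_i}(b) \;=\; -\lambda_i^*(b), $$

where $\lambda^*(b)$ is the Lagrange multiplier vector at the optimum. The multiplier itself measures the sensitivity of the optimal value to the constraint parameters, which is exactly the quantity the vector extension generalizes.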
Abstract:
Sequence problems are among the most challenging interdisciplinary topics of our time. They are ubiquitous in science and daily life and occur, for example, in the form of DNA sequences encoding all the information of an organism, as text (natural or formal), or in the form of a computer program. Accordingly, sequence problems arise in many variations in computational biology (drug development), coding theory, data compression, and quantitative and computational linguistics (e.g. machine translation). In recent years, several proposals have appeared to formulate sequence problems such as the closest string problem (CSP) and the farthest string problem (FSP) as Integer Linear Programming Problems (ILPP). In this talk, we present a general novel approach to reduce the size of the ILPP by grouping isomorphic columns of the string matrix together. The approach is of practical use, since the solution of sequence problems is very time consuming, in particular when the sequences are long.
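The reduction rests on the observation that two columns of the string matrix with the same pattern of letter repetitions induce identical constraints, so one block of ILP variables can serve the whole group, weighted by its multiplicity. A small Python sketch of that grouping step (hypothetical helper names, not the talk's implementation):

```python
from collections import Counter

def canonical_column(column):
    """Map a column of the string matrix to a canonical form: two columns are
    isomorphic iff they show the same pattern of equal letters, e.g. 'AAB'
    and 'CCG' both canonicalize to (0, 0, 1)."""
    first_seen = {}
    return tuple(first_seen.setdefault(ch, len(first_seen)) for ch in column)

def group_columns(strings):
    """Group isomorphic columns together; each group needs only one block of
    ILP variables, weighted by its multiplicity in the model."""
    columns = zip(*strings)  # transpose the string matrix
    return Counter(canonical_column(col) for col in columns)

# Three strings of length 6 collapse to a handful of column classes.
print(group_columns(["ACCGTT", "ACGGTA", "TCCGTA"]))
```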
Development of new scenario decomposition techniques for linear and nonlinear stochastic programming
Abstract:
A classical approach to handling two- and multi-stage optimization problems under uncertainty is scenario analysis. The uncertainty in some of the problem data is modelled by random vectors with finite, stage-specific supports; each realization represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. Among scenario decomposition techniques, the progressive hedging algorithm is one of the most popular methods for solving multi-stage stochastic programming problems. Despite its complete decomposition by scenario, the efficiency of progressive hedging is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to track the progress of the algorithm more closely. Numerical experiments on examples of multi-stage stochastic linear problems suggest that most existing techniques either converge prematurely to a suboptimal solution or converge to the optimal solution at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. For the handling of the quadratic term, we review existing techniques and propose replacing the quadratic term with a linear one. Although our method remains to be tested, we expect it to reduce some of the numerical and theoretical difficulties of progressive hedging.
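For context, the loop that the penalty parameter steers can be sketched on a toy problem where each scenario subproblem has a closed form. This is a schematic illustration of progressive hedging with a fixed penalty, not the adaptive strategy proposed in the thesis:

```python
import numpy as np

# Toy progressive hedging on: minimize E_s[(x - a_s)^2] over a scalar
# first-stage decision x. Each scenario subproblem is solved in closed form,
# so the structure (solve per scenario, average, update multipliers) is
# visible without an LP/QP solver. All data below are made up.
a = np.array([1.0, 3.0, 8.0])      # scenario data
p = np.array([0.5, 0.3, 0.2])      # scenario probabilities
rho = 1.0                          # penalty parameter (tuned adaptively in the thesis)
w = np.zeros_like(a)               # scenario multipliers
x_bar = 0.0

for it in range(200):
    # Scenario subproblems: argmin_x (x - a_s)^2 + w_s*x + (rho/2)(x - x_bar)^2
    x = (2.0 * a - w + rho * x_bar) / (2.0 + rho)
    x_bar_new = float(p @ x)       # implementable (non-anticipative) point
    w += rho * (x - x_bar_new)     # multiplier update
    if abs(x_bar_new - x_bar) < 1e-10:
        x_bar = x_bar_new
        break
    x_bar = x_bar_new

print(x_bar, float(p @ a))  # both approach 3.0, the expected-value minimizer
```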
Abstract:
This thesis examines all the phases involved in making a generic video game, applying them to create a 3D game from scratch with Unity. It analyses the ideation; the design of the environments as well as of the implemented algorithms; the production, that is, the writing of the code; and finally the tests carried out.
Abstract:
Pitch Estimation, also known as Fundamental Frequency (F0) estimation, has been a popular research topic for many years and is still investigated today. The goal of Pitch Estimation is to find the pitch, or fundamental frequency, of a digital recording of speech or musical notes. It plays an important role because it is the key to identifying which notes are being played and at what time. Pitch Estimation of real instruments is a very hard task to address. Each instrument has its own physical characteristics, which are reflected in different spectral characteristics. Furthermore, recording conditions can vary from studio to studio, and background noise must be considered. This dissertation presents a novel approach to the problem of Pitch Estimation using Cartesian Genetic Programming (CGP). We take advantage of evolutionary algorithms, in particular CGP, to explore and evolve complex mathematical functions that act as classifiers. These classifiers are used to identify the pitches of piano notes in an audio signal. To help with the codification of the problem, we built a highly flexible CGP toolbox, generic enough to encode different kinds of programs. The encoded evolutionary algorithm is the one known as (1+λ), and the value of λ can be chosen. The toolbox is very simple to use: settings such as the mutation probability and the number of runs and generations are configurable. The Cartesian representation of CGP can take multiple forms and is able to encode function parameters. It can handle different types of fitness functions, both minimization and maximization of f(x), and has a useful system of callbacks. We trained 61 classifiers corresponding to 61 piano notes. A training set of audio signals was used for each classifier: half were signals with the same pitch as the classifier (true positive signals) and the other half were signals with different pitches (true negative signals). The F-measure was used as the fitness function. Signals with the same pitch as the classifier that were correctly identified count as true positives; signals with the same pitch that were not identified count as false negatives; signals with a different pitch that were not identified count as true negatives; and signals with a different pitch that were identified count as false positives. Our first approach was to evolve classifiers for identifying artificial signals created by mathematical functions: sine, sawtooth, and square waves. Our function set is basically composed of filtering operations on vectors and arithmetic operations with constants and vectors. All the classifiers correctly identified the true positive signals and did not identify the true negative signals. We then moved to real audio recordings. For testing the classifiers, we picked audio signals different from the ones used during the training phase. For a first approach, the obtained results were very promising but could be improved. We made slight changes to our approach, and the number of false positives was reduced by 33% compared to the first approach. We then applied the evolved classifiers to polyphonic audio signals, and the results indicate that our approach is a good starting point for addressing the problem of Pitch Estimation.
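The (1+λ) scheme and the F-measure fitness fit in a few lines. The Python sketch below uses plain bit strings instead of CGP graphs, purely to make the loop and the fitness computation concrete (toy data, not the toolbox's encoding):

```python
import random

def f_measure(pred, truth):
    """Harmonic mean of precision and recall over paired boolean labels."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def mutate(bits, p_mut):
    return [b ^ (random.random() < p_mut) for b in bits]

random.seed(1)
target = [random.random() < 0.3 for _ in range(64)]   # hidden "true positives"
parent = [False] * 64
lam, p_mut = 4, 0.02                                  # the lambda of (1+lambda)

for gen in range(500):
    offspring = [mutate(parent, p_mut) for _ in range(lam)]
    best = max(offspring, key=lambda o: f_measure(o, target))
    # (1+lambda): an offspring replaces the parent only if at least as fit,
    # which lets neutral mutations drift (important in CGP).
    if f_measure(best, target) >= f_measure(parent, target):
        parent = best

print(f_measure(parent, target))  # approaches 1.0
```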
Abstract:
The Positive Youth Development (PYD) perspective is a strength-based conceptualization of youth. It highlights the importance of mutually beneficial relationships between youth and their environment in developing the "Five Cs", key assets that include character. Character has long been a subject of programming because of its focus on helping children lead moral, empathic, and prosocial lives. There are, however, many limitations in character research, including poorly operationalized definitions of character, a failure to examine the developmental and broader social context in which character exists, and a lack of evaluation of more practical character programming. The goal of this dissertation was to address these gaps in knowledge and inform the character education programming literature. The first study examined the relationships among age, gender, the school social context, and character. Moral character was negatively associated with grade, and being a girl was positively associated with moral character. The associations between positive peer interactions at school and character (fairness, integrity) were stronger among students who reported low initial moral character. In the second study, the Build Character: Build Success Program, a character education program, was evaluated over six months to examine its effects on character behaviours, victimization, and school climate. No program effects were found for students in grades 1 to 3, but a slight decrease in victimization in one experimental school was found for students in grades 4 to 8. This lack of general program effects may be due to the short-term nature of the intervention, which may not have been long enough to produce measurable behaviour change. Implementation data indicated that teachers did not teach all program elements, which may also have influenced the results of the program evaluation. This dissertation contributes to knowledge about character and its programming by introducing new measures to operationalize character, discovering developmental patterns in character in school-aged children, highlighting gender differences in character, examining character within its broad social context, and evaluating short-term character education programming.
Abstract:
Code patterns, including programming patterns and design patterns, are good references for programming language feature improvement and software re-engineering. However, to our knowledge, no existing research has attempted to detect code patterns based on code clone detection technology. In this study, we build upon previous work and propose to detect and analyze code patterns from a collection of open source projects using NiPAT technology. Because design patterns are most closely associated with object-oriented languages, we chose Java and Python projects for our study. The tool we use for detecting patterns is NiPAT, a pattern detection tool originally developed for the TXL programming language based on the NiCad clone detector. We extend NiPAT to the Java and Python programming languages. Then, we identify all the patterns from the pattern report and classify them into several different categories. At the end of the study, we analyze all the patterns and compare the differences between Java and Python patterns.
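As a rough illustration of the clone-detection idea that such tools build on, the Python sketch below normalizes identifiers and literals before comparing token streams, so fragments that differ only in naming match. This is a schematic stand-in, not the NiCad/NiPAT implementation:

```python
import io
import keyword
import tokenize

def normalized_tokens(source: str):
    """Normalize a Python snippet in the spirit of clone detectors:
    identifiers become ID and literals become LIT, so fragments that
    differ only in naming compare equal."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
            out.append("ID")
        elif tok.type in (tokenize.NUMBER, tokenize.STRING):
            out.append("LIT")
        elif tok.type == tokenize.OP:
            out.append(tok.string)
    return out

a = "total = total + price * qty"
b = "acc = acc + rate * n"
print(normalized_tokens(a) == normalized_tokens(b))  # True: same pattern
```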
Abstract:
Process systems design, operation, and synthesis problems under uncertainty can readily be formulated as two-stage stochastic mixed-integer linear and nonlinear (nonconvex) programming (MILP and MINLP) problems. These problems, with a scenario-based formulation, lead to large-scale MILPs/MINLPs that are well structured. The first part of the thesis proposes a new finitely convergent cross decomposition method (CD), where Benders decomposition (BD) and Dantzig-Wolfe decomposition (DWD) are combined in a unified framework to improve the solution of scenario-based two-stage stochastic MILPs. This method alternates between DWD iterations and BD iterations, where DWD restricted master problems and BD primal problems yield a sequence of upper bounds, and BD relaxed master problems yield a sequence of lower bounds. A variant of CD, which includes multiple columns per iteration of the DW restricted master problem and multiple cuts per iteration of the BD relaxed master problem, called multicolumn-multicut CD, is then developed to improve solution time. Finally, an extended cross decomposition method (ECD) for solving two-stage stochastic programs with risk constraints is proposed. In this approach, CD is used at the first level and DWD at a second level to solve the original problem to optimality. ECD has a computational advantage over a bilevel decomposition strategy or solving the monolithic problem using an MILP solver. The second part of the thesis develops a joint decomposition approach combining Lagrangian decomposition (LD) and generalized Benders decomposition (GBD) to efficiently solve stochastic mixed-integer nonlinear nonconvex programming problems to global optimality, without the need for explicit branch-and-bound search. In this approach, LD subproblems and GBD subproblems are systematically solved in a single framework. The relaxed master problem, obtained from the reformulation of the original problem, is solved only when necessary. A convexification of the relaxed master problem and a domain reduction procedure are integrated into the decomposition framework to improve solution efficiency. Using case studies taken from renewable-resource and fossil-fuel based applications in process systems engineering, it can be seen that these novel decomposition approaches offer significant benefits over classical decomposition methods and state-of-the-art MILP/MINLP global optimization solvers.
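To make the bound sequences concrete, the sketch below runs the Benders (BD) ingredient alone on a toy two-stage problem with a closed-form recourse function; the data are made up, and the DWD side of the full cross decomposition is omitted:

```python
import numpy as np
from scipy.optimize import linprog

# Benders (L-shaped) bounds on a toy two-stage problem.
# First stage: min x + E[Q_s(x)], x in [0, 10].
# Second stage: Q_s(x) = min {3*y : y >= h_s - x, y >= 0} = 3*max(h_s - x, 0).
h, p = np.array([2.0, 5.0]), np.array([0.5, 0.5])

def expected_recourse(x):
    val = 3.0 * np.maximum(h - x, 0.0)
    sub = np.where(x < h, -3.0, 0.0)           # subgradients of Q_s at x
    return float(p @ val), float(p @ sub)

cuts, lb, ub = [], -np.inf, np.inf
x_k = 0.0
for it in range(20):
    q_val, q_sub = expected_recourse(x_k)
    ub = min(ub, x_k + q_val)                  # primal evaluation -> upper bound
    cuts.append((q_val - q_sub * x_k, q_sub))  # cut: theta >= alpha + beta*x
    # Relaxed master over (x, theta): min x + theta s.t. all cuts -> lower bound
    A = [[beta, -1.0] for _, beta in cuts]
    b = [-alpha for alpha, _ in cuts]
    res = linprog([1.0, 1.0], A_ub=A, b_ub=b, bounds=[(0, 10), (0, None)])
    lb, x_k = res.fun, res.x[0]
    if ub - lb < 1e-8:
        break

print(lb, ub)  # both converge to 5.0, attained at x = 5
```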
Abstract:
This study looked at the reasons why Vanier College students in computer programming encounter difficulties in their learning process. Factors such as prior academic background, prior computer experience, mother tongue, and learning styles were examined to see what role they play in students' success in programming courses. The initial research hypotheses were the following: 1. Computer science students using understanding and integrating succeed better than students using following, coding, or problem solving. 2. Students using problem solving succeed better than those who use participating and enculturation. 3. Students who use coding perform better than those who prefer participating and enculturation. In addition, this study hoped to examine whether there is a gender difference in how students learn programming.
Abstract:
A High-Performance Computing (HPC) job dispatcher is critical software that assigns the finite computing resources to submitted jobs. This resource assignment over time is known as the on-line job dispatching problem in HPC systems. That the problem is on-line means that solutions must be computed in real time, and the time they require cannot exceed some threshold without affecting normal system functioning. In addition, a job dispatcher must deal with considerable uncertainty: submission times, the number of requested resources, and job durations. Heuristic-based techniques have been used broadly in HPC systems, at the cost of achieving (sub-)optimal solutions in a short time. However, the scheduling and resource allocation components are separated, which produces decoupled decisions that may cause a performance loss. Optimization-based techniques are less used for this problem, although they can significantly improve the performance of HPC systems at the expense of higher computation time. Nowadays, HPC systems are used for modern applications, such as big data analytics and predictive model building, which in general employ many short jobs. However, this information is unknown at dispatching time, and job dispatchers need to process large numbers of such jobs quickly while ensuring high Quality-of-Service (QoS) levels. Constraint Programming (CP) has been shown to be an effective approach to job dispatching problems. However, state-of-the-art CP-based job dispatchers are unable to meet the challenges of on-line dispatching, such as generating dispatching decisions in a brief period and integrating current and past information about the host system. For these reasons, we propose CP-based dispatchers that are better suited to HPC systems running modern applications, generating on-line dispatching decisions within an appropriate time and making effective use of job duration predictions to improve QoS levels, especially for workloads dominated by short jobs.
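As a rough illustration of the CP formulation style (not the proposed dispatchers), the sketch below models a tiny dispatching instance with OR-Tools CP-SAT, packing jobs into a finite pool of cores over time; job durations and core requests are made-up toy data:

```python
from ortools.sat.python import cp_model

# A minimal CP model of job dispatching: jobs request cores from a finite
# pool, and we pack them over time to minimize total completion time.
jobs = [(3, 4), (5, 8), (2, 2), (4, 6)]   # (duration, requested cores)
CORES, HORIZON = 10, 20

model = cp_model.CpModel()
starts, ends, intervals = [], [], []
for i, (dur, _) in enumerate(jobs):
    s = model.NewIntVar(0, HORIZON, f"start{i}")
    e = model.NewIntVar(0, HORIZON, f"end{i}")
    intervals.append(model.NewIntervalVar(s, dur, e, f"job{i}"))
    starts.append(s)
    ends.append(e)

# Resource constraint: at any time, running jobs use at most CORES cores.
model.AddCumulative(intervals, [c for _, c in jobs], CORES)
model.Minimize(sum(ends))                  # a simple QoS proxy (total flow time)

solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    for i in range(len(jobs)):
        print(f"job{i}: start={solver.Value(starts[i])}")
```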
Abstract:
The thesis deals with standing and justiciability in climate litigation against governments and the private sector. The first part addresses the impacts of climate change on human rights, the major developments in international climate law, and the historical reasons for climate litigation. The second part analyses several cases, divided into categories, and then draws a comparative conclusion for each category. The third part deals with the Italian legal tradition on standing and justiciability, starting from the historical roots of those rules. The fourth part introduces the 'Model Statute' drafted by the International Bar Association, arguing that the 'ratio legis' of this proposal could be implemented in Italy or the EU. The thesis develops arguments, based on the existing legal framework, to help plaintiffs establish standing and justiciability in proceedings pending before Italian courts. It further proposes that 'citizen suits' are consistent with the Italian and EU legal traditions and that the EU could rely on citizen suits to privately enforce its climate law and policies under the 'European Green Deal.'
Abstract:
Embedded systems are increasingly integral to daily life, improving the efficiency of modern Cyber-Physical Systems, which provide access to sensor data and actuators. As modern architectures become increasingly complex and heterogeneous, their optimization becomes a challenging task. Additionally, ensuring platform security is important to avoid harm to individuals and assets. This study primarily addresses challenges in contemporary embedded systems, focusing on platform optimization and security enforcement. The first section applies machine learning methods to efficiently determine the optimal number of cores for a parallel RISC-V cluster, minimizing energy consumption using static source code analysis. Results demonstrate not only that automated platform configuration is viable, but also that there is a moderate performance trade-off when relying solely on static features. The second part addresses the problem of heterogeneous device mapping, which involves assigning each task to the most suitable computational device in a heterogeneous platform for optimal runtime. The contribution of this section lies in novel pre-processing techniques, along with a training framework based on Siamese networks, that enhance the classification performance of DeepLLVM, an advanced approach to task mapping. Importantly, the proposed approaches are independent of the specific deep-learning model used. Finally, this research work addresses the binary exploitation of software running on modern embedded systems. It proposes an architecture for implementing Control-Flow Integrity on embedded platforms with a Root-of-Trust, aiming to enhance security guarantees with limited hardware modifications. The approach enhances the architecture of a modern RISC-V platform for autonomous vehicles with a side-channel communication mechanism that relays control-flow changes executed by the process running on the host core to the Root-of-Trust. This approach has limited impact on performance and is effective in enhancing the security of embedded platforms.
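As a rough sketch of the first contribution's idea, predicting an energy-efficient core count from static code features can be framed as supervised classification. The feature names, labels, and model below are illustrative assumptions, not the study's actual feature set or RISC-V measurements:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical static features per kernel:
# [loop_count, max_loop_depth, branch_count, memory_ops_ratio]
rng = np.random.default_rng(0)
X = rng.random((200, 4))
# Toy labelling rule standing in for measured optimal core counts {1, 2, 4, 8}.
y = np.select([X[:, 0] < 0.3, X[:, 0] < 0.6, X[:, 0] < 0.85], [1, 2, 4], 8)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
kernel_features = [[0.7, 0.4, 0.2, 0.5]]   # static analysis of a new kernel
print("predicted core count:", clf.predict(kernel_features)[0])
```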