675 results for "real world mathematics"


Relevance: 80.00%

Abstract:

The purpose of my research was to examine how community-based organizations in the Niagara region provide programs for children with Autism Spectrum Disorder (ASD) who are considered to represent "extreme" or "severe" cases. A qualitative, comparative case study was conducted that focused on three organizations that provide summer recreation and activity programs, in order to examine the issues these organizations face when determining program structure and staff training, to understand what the threshold for physical activity is in this type of setting, and to learn how the unique needs surrounding these "severe" cases are met while attending the program. Purposeful sampling was employed to select a supervisor and senior staff member from each organization to discuss the training process, program development and implementation, and the resources and strategies used within their organization's community-based program. A comparative analysis of a survey of six mothers whose children are considered "severe" indicated that camp staffs' expectations are unrealistic, whereas the parents and supervisors have more realistic expectations within the "real world" of camp. There is no definition of "severe" or "extreme", and severity is therefore dependent upon the context.

Relevance: 80.00%

Abstract:

A complex network is an abstract representation of an intricate system of interrelated elements where the patterns of connection hold significant meaning. One particular complex network is a social network whereby the vertices represent people and edges denote their daily interactions. Understanding social network dynamics can be vital to the mitigation of disease spread as these networks model the interactions, and thus avenues of spread, between individuals. To better understand complex networks, algorithms which generate graphs exhibiting observed properties of real-world networks, known as graph models, are often constructed. While various efforts to aid with the construction of graph models have been proposed using statistical and probabilistic methods, genetic programming (GP) has only recently been considered. However, determining that a graph model of a complex network accurately describes the target network(s) is not a trivial task as the graph models are often stochastic in nature and the notion of similarity is dependent upon the expected behavior of the network. This thesis examines a number of well-known network properties to determine which measures best allowed networks generated by different graph models, and thus the models themselves, to be distinguished. A proposed meta-analysis procedure was used to demonstrate how these network measures interact when used together as classifiers to determine network, and thus model, (dis)similarity. The analytical results form the basis of the fitness evaluation for a GP system used to automatically construct graph models for complex networks. The GP-based automatic inference system was used to reproduce existing, well-known graph models as well as a real-world network. Results indicated that the automatically inferred models exemplified functional similarity when compared to their respective target networks. This approach also showed promise when used to infer a model for a mammalian brain network.
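The thesis does not list its chosen measures here, but the idea of telling graph structures apart by a network measure can be illustrated with a minimal, hypothetical sketch: the average clustering coefficient, computed on two small hand-built graphs with very different structure.

```python
# Illustrative sketch (not the thesis's code): one network measure, the
# average local clustering coefficient, used to distinguish graph structures.
def avg_clustering(adj):
    """Average local clustering coefficient of an undirected graph,
    given as {node: set(neighbours)}."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # clustering is defined as 0 for degree < 2
        # count edges among v's neighbours (each unordered pair once)
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

# A triangle (fully clustered) versus a star (no triangles at all).
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}

print(avg_clustering(triangle))  # 1.0
print(avg_clustering(star))      # 0.0
```

A measure that separates such graphs cleanly is a candidate classifier feature of the kind the meta-analysis procedure evaluates.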

Relevance: 80.00%

Abstract:

Self-regulation is considered a powerful predictor of behavioral and mental health outcomes during adolescence and emerging adulthood. In this dissertation I address some electrophysiological and genetic correlates of this important skill set in a series of four studies. Across all studies, event-related potentials (ERPs) were recorded as participants responded to tones presented in attended and unattended channels in an auditory selective attention task. In Study 1, examining these ERPs in relation to parental reports on the Behavior Rating Inventory of Executive Function (BRIEF) revealed that an early frontal positivity (EFP) elicited by to-be-ignored/unattended tones was larger in those with poorer self-regulation. As is traditionally found, N1 amplitudes were more negative for the to-be-attended rather than unattended tones. Additionally, N1 latencies to unattended tones correlated with parent ratings on the BRIEF, where shorter latencies predicted better self-regulation. In Study 2 I tested a model of the associations between self-regulation scores and allelic variations in monoamine neurotransmitter genes, and their concurrent links to ERP markers of attentional control. Allelic variations in dopamine-related genes predicted both my ERP markers and self-regulatory variables, and played a moderating role in the association between the two. In Study 3 I examined whether training in Integra Mindfulness Martial Arts, an intervention program which trains elements of self-regulation, would lead to improvement in ERP markers of attentional control and parent-report BRIEF scores in a group of adolescents with self-regulatory difficulties. I found that those in the treatment group amplified their processing of attended relative to unattended stimuli over time and reduced their levels of problematic behaviour, whereas those in the waitlist control group showed little to no change on both of these metrics.
In Study 4 I examined potential associations between self-regulation and attentional control in a group of emerging adults. Both event-related spectral perturbations (ERSPs) and intertrial coherence (ITC) in the alpha and theta range predicted individual differences in self-regulation. Across the four studies I was able to conclude that real-world self-regulation is indeed associated with the neural markers of attentional control. Targeted interventions focusing on attentional control may improve self-regulation in those experiencing difficulties in this regard.

Relevance: 80.00%

Abstract:

Population-based metaheuristics, such as particle swarm optimization (PSO), have been employed to solve many real-world optimization problems. Although it is often sufficient to find a single solution to these problems, there exist cases where identifying multiple, diverse solutions can be beneficial or even required. Some of these problems are further complicated by a change in their objective function over time. This type of optimization is referred to as dynamic, multi-modal optimization. Algorithms which exploit multiple optima in a search space are identified as niching algorithms. Although numerous dynamic niching algorithms have been developed, their performance is often measured solely on their ability to find a single, global optimum. Furthermore, the comparisons often use synthetic benchmarks whose landscape characteristics are generally limited and unknown. This thesis provides a landscape analysis of the dynamic benchmark functions commonly developed for multi-modal optimization. The benchmark analysis results reveal that the mechanisms responsible for dynamism in the current dynamic benchmarks do not significantly affect landscape features, thus suggesting a lack of representation for problems whose landscape features vary over time. This analysis is used in a comparison of current niching algorithms to identify the effects that specific landscape features have on niching performance. Two performance metrics are proposed to measure both the scalability and accuracy of the niching algorithms. The algorithm comparison results demonstrate which algorithms are best suited to a variety of dynamic environments. This comparison also examines each of the algorithms in terms of their niching behaviours, and analyzes the range of, and trade-off between, scalability and accuracy when tuning the algorithms' respective parameters.
These results contribute to the understanding of current niching techniques as well as the problem features that ultimately dictate their success.
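The thesis proposes its own scalability and accuracy metrics, which are not spelled out in this abstract. As a stand-in, here is a minimal sketch of one widely used niching metric, the peak ratio: the fraction of known optima located to within a tolerance by the final population.

```python
# Hypothetical sketch of a common niching metric (not the thesis's metric):
# peak ratio = fraction of known optima found within a tolerance eps.
def peak_ratio(optima, population, eps=0.05):
    found = 0
    for opt in optima:
        # an optimum counts as found if any individual is within eps of it
        if any(abs(x - opt) <= eps for x in population):
            found += 1
    return found / len(optima)

optima = [0.1, 0.3, 0.5, 0.7, 0.9]     # known peaks of a 1-D landscape
population = [0.11, 0.29, 0.52, 0.88]  # final swarm positions

print(peak_ratio(optima, population))  # 0.8 — four of five peaks found
```

Measuring against all known optima, rather than only the global one, is exactly the shift in evaluation the thesis argues for.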

Relevance: 80.00%

Abstract:

The increasing variety and complexity of video games allows players to choose how to behave and represent themselves within these virtual environments. The focus of this dissertation was to examine the connections between the personality traits (specifically, HEXACO traits and psychopathic traits) of video game players and player-created and controlled game-characters (i.e., avatars), and the link between traits and behavior in video games. In Study 1 (n = 198), the connections between player personality traits and behavior in a Massively Multiplayer Online Roleplaying Game (World of Warcraft) were examined. Six behavior components were found (i.e., Player-versus-Player, Social Player-versus-Environment, Working, Helping, Immersion, and Core Content), and each was related to relevant personality traits. For example, Player-versus-Player behaviors were negatively related to Honesty-Humility and positively related to psychopathic traits, and Immersion behaviors (i.e., exploring, role-playing) were positively related to Openness to Experience. In Study 2 (n = 219), the connections between player personality traits and in-game behavior in video games were examined in university students. Four behavior components were found (i.e., Aggressing, Winning, Creating, and Helping), and each was related to at least one personality trait. For example, Aggressing was negatively related to Honesty-Humility and positively related to psychopathic traits. In Study 3 (n = 90), the connections between player personality traits and avatar personality traits were examined in World of Warcraft. Positive player-avatar correlations were observed for all personality traits except Extraversion. Significant mean differences between players and avatars were observed for all traits except Conscientiousness; avatars had higher mean scores on Extraversion and psychopathic traits, but lower mean scores on the remaining traits. 
In Study 4, the connections between player personality traits, avatar traits, and observed behaviors in a life-simulation video game (The Sims 3) were examined in university students (n = 93). Participants created two avatars and used these avatars to play The Sims 3. Results showed that the selection of certain avatar traits was related to relevant player personality traits (e.g., participants who chose the Friendly avatar trait were higher in Honesty-Humility, Emotionality, and Agreeableness, and lower in psychopathic traits). Selection of certain character-interaction behaviors was related to relevant player personality traits (e.g., participants with higher levels of psychopathic traits used more Mean and fewer Friendly interactions). Together, the results of the four studies suggest that individuals generally behave and represent themselves in video games in ways that are consistent with their real-world tendencies.

Relevance: 80.00%

Abstract:

Classical relational databases lack proper ways to manage certain real-world situations, including imprecise or uncertain data. Fuzzy databases overcome this limitation by allowing each entry in a table to be a fuzzy set, where each element of the corresponding domain is assigned a membership degree from the real interval [0, 1]. But this fuzzy mechanism becomes inappropriate for modelling scenarios where data might be incomparable. We are therefore interested in a further generalization of fuzzy databases to L-fuzzy databases, in which the characteristic function of a fuzzy set maps to an arbitrary complete Brouwerian lattice L. From the query-language perspective, the language of fuzzy databases, FSQL, extends the regular Structured Query Language (SQL) by adding fuzzy-specific constructions. In addition, the L-fuzzy query language LFSQL introduces appropriate linguistic operations to define and manipulate inexact data in an L-fuzzy database. This research mainly focuses on defining the semantics of LFSQL. Doing so requires an abstract algebraic theory in which all the properties of, and operations on, L-fuzzy relations can be proved. In our study, we show that the theory of arrow categories forms a suitable framework for that, and we therefore define the semantics of LFSQL in the abstract notion of an arrow category. In addition, we implement the operations of L-fuzzy relations in Haskell and develop a parser that translates algebraic expressions into our implementation.
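The incomparability that motivates the L-fuzzy generalization can be illustrated with a small hypothetical sketch (this is not the thesis's Haskell implementation): membership degrees drawn from the lattice L = [0, 1] x [0, 1], ordered componentwise, where two degrees can genuinely fail to be comparable.

```python
# Illustrative sketch: L-fuzzy membership degrees in the product lattice
# [0,1] x [0,1], ordered componentwise. A single [0,1] degree cannot
# express that (0.8, 0.2) and (0.3, 0.9) are incomparable.
def meet(a, b):  # greatest lower bound, taken componentwise
    return (min(a[0], b[0]), min(a[1], b[1]))

def join(a, b):  # least upper bound, taken componentwise
    return (max(a[0], b[0]), max(a[1], b[1]))

def leq(a, b):   # the lattice order; False both ways means "incomparable"
    return a[0] <= b[0] and a[1] <= b[1]

# A hypothetical L-fuzzy set: degrees of membership in "tall".
tall = {"alice": (0.8, 0.2), "bob": (0.3, 0.9)}
a, b = tall["alice"], tall["bob"]

print(leq(a, b), leq(b, a))  # False False — the two degrees are incomparable
print(meet(a, b))            # (0.3, 0.2)
print(join(a, b))            # (0.8, 0.9)
```

Meet and join are the lattice counterparts of fuzzy AND and OR, which is what LFSQL's linguistic operations ultimately compute over.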

Relevance: 80.00%

Abstract:

Many real-world optimization problems contain multiple (often conflicting) goals to be optimized concurrently, commonly referred to as multi-objective problems (MOPs). Over the past few decades, a plethora of multi-objective algorithms have been proposed, often tested on MOPs possessing two or three objectives. Unfortunately, when tasked with solving MOPs with four or more objectives, referred to as many-objective problems (MaOPs), a large majority of optimizers experience significant performance degradation. The downfall of these optimizers is that simultaneously maintaining a well-spread set of solutions along with appropriate selection pressure to converge becomes difficult as the number of objectives increases. This difficulty is further compounded for large-scale MaOPs, i.e., MaOPs possessing large numbers of decision variables. In this thesis, we explore the challenges of many-objective optimization and propose three new promising algorithms designed to efficiently solve MaOPs. Experimental results demonstrate the proposed optimizers to perform very well, often outperforming state-of-the-art many-objective algorithms.
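The selection pressure mentioned above rests on the standard Pareto dominance relation, which can be sketched in a few lines (an illustrative sketch, not code from the thesis):

```python
# Pareto dominance for minimization: a dominates b if it is no worse in
# every objective and strictly better in at least one.
def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

print(dominates((1, 2), (2, 3)))        # True
print(dominates((1, 3), (2, 2)))        # False — mutually non-dominated
print(dominates((1, 3, 5), (2, 2, 6)))  # False — still non-dominated
```

As objectives are added, randomly drawn solutions are increasingly likely to be mutually non-dominated, so dominance alone supplies ever weaker selection pressure; this is the degradation the abstract describes.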

Relevance: 80.00%

Abstract:

Analyses of trade quotas typically assume that the quota restricts the flow of some nondurable good. Many real-world quotas, however, restrict the stock of durable imports. We consider the cases where (1) anyone is free to export against such quotas and where (2) only those allocated portions of the total quota are free to export against such quotas. Recent econometric investigations of such quotas have focused on the price of the durable as an indicator of tightness induced by the quota. We show why this is an inappropriate indicator and suggest alternatives.

Relevance: 80.00%

Abstract:

"Thesis presented to the Faculté des études supérieures for the degree of Master of Laws (LL.M.)"

Relevance: 80.00%

Abstract:

Technology is a way for science to express its practical side, as well as a means of translating knowledge into real scientific applications, but this process can give rise to a variety of moral and ethical challenges. The field of biotechnology holds out the prospect of great achievements for societies, including revolutionary medical treatments and genetically modified foods that would be safe, accessible, and widely available. Yet few products have made the leap into the consumer's basket. In one of the most promising application domains, agricultural biotechnology, some technologies have not yet fully emerged from the laboratory, and those products currently on the market have been the source of significant controversy. The present study focuses on the case of vaccines made from transgenic plants, which over the past 15 years have struggled to move beyond the proof-of-concept stage. These vaccines are stalled where they should have fulfilled the "golden promise" of providing low-cost, effective inoculation for poor populations in developing countries. The question examined in this essay is why, beyond the process of discovery and conceptualization, such technologies have difficulty reaching maturity, thereby delaying their implantation in contemporary societies. Which particular factors, from a bioethics perspective, will need to be reconsidered if these technologies are to be accepted by consumers, and thus have a positive impact on global health and equitable access to health care?

Relevance: 80.00%

Abstract:

Daniel Weinstock, director of CRÉUM, interviews two professors who were invited to pursue their work at CRÉUM during the summer of 2008. His guests are Lisa Eckenwiler, Associate Professor of Philosophy in the Department of Philosophy and in the Department of Health Administration and Policy at George Mason University, and Chris Macdonald, Associate Professor of Philosophy at Saint Mary's University in Halifax. You will also hear General International, an experimental/avant-garde music band that was formed only a few months ago.

Relevance: 80.00%

Abstract:

The contemporary zeitgeist in face recognition suggests that the recognition process relies essentially on processing the distances between the internal features of the face. It is surprising, however, to note that this hypothesis has never been tested directly in the literature. To do so, 515 photographs of faces were annotated in order to assess the information conveyed by such distances. The results suggest that previous studies that manipulated these distances presented four times more information than real-world inter-feature distances do. Moreover, human observers appear to make little use of the inter-feature distances of real faces when recognizing their peers at various viewing distances (maximum percent correct of 65%). What is more, observer performance is almost perfectly restored when inter-feature distance information is unusable but observers can use the other sources of information present in real faces. We conclude that facial cues other than inter-feature distances, such as feature shape and skin properties, convey the information used by the visual system to perform face recognition.

Relevance: 80.00%

Abstract:

This thesis considers a set of methods that allow statistical learning algorithms to better handle the sequential nature of financial portfolio management problems. We begin with the general problem of composing learning algorithms that must manage sequential tasks, in particular that of efficiently updating training sets in a sequential-validation setting. We enumerate the desiderata that composition primitives must satisfy, and highlight the difficulty of achieving them rigorously and efficiently. We go on to present a set of algorithms that meet these objectives, along with a case study of a complex financial decision-making system using these techniques. We then describe a general method for transforming a non-Markovian sequential decision problem into a supervised learning problem, using a search algorithm based on the K best paths. We treat a portfolio management application in which we train a learning algorithm to directly optimize a Sharpe ratio (or another non-additive criterion incorporating risk aversion). We illustrate the approach with an in-depth experimental study, proposing a neural network architecture specialized for portfolio management and comparing it to several alternatives. Finally, we introduce a functional representation of time series that allows forecasts to be made over a variable horizon while using a progressively revealed information set. The approach is based on Gaussian processes, which provide a full covariance matrix between all points for which a forecast is requested.
This information is put to good use by an algorithm that actively trades price spreads between commodity futures contracts. Out of sample, the proposed approach produces a significant risk-adjusted return, after transaction costs, on a portfolio of 30 assets.
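The Sharpe ratio the thesis optimizes directly is non-additive: it depends on the mean and standard deviation of the entire return sequence, so it cannot be decomposed into a sum of per-step rewards. A minimal sketch (hypothetical returns, not the thesis's data or code):

```python
# Sketch of the (non-annualized) Sharpe ratio over a return sequence.
import statistics

def sharpe(returns, risk_free=0.0):
    """Mean excess return divided by the sample standard deviation of
    excess returns — a whole-sequence, non-additive criterion."""
    excess = [r - risk_free for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

daily = [0.01, -0.005, 0.007, 0.002, -0.001]  # hypothetical daily returns
print(round(sharpe(daily), 3))
```

Because the criterion only exists at the level of the whole trajectory, optimizing it motivates the thesis's reformulation of the sequential decision problem as supervised learning over K-best paths.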

Relevance: 80.00%

Abstract:

This thesis concerns a class of learning algorithms called deep architectures. There are results indicating that shallow, local representations are not sufficient for modelling functions with many factors of variation. We are particularly interested in this kind of data because we hope that an intelligent agent will be able to learn to model it automatically; the hypothesis is that deep architectures are better suited to modelling it. The work of Hinton (2006) was a true breakthrough, as the idea of using an unsupervised learning algorithm, restricted Boltzmann machines, to initialize the weights of a supervised neural network proved crucial for training the most popular deep architecture, namely fully connected artificial neural networks. This idea has been taken up and successfully reproduced in several contexts and with a variety of models. In this thesis, we regard deep architectures as inductive biases. These biases are represented not only by the models themselves, but also by the training methods often used in conjunction with them. We seek to determine the reasons this class of functions generalizes well, the situations in which these functions can be applied, and qualitative descriptions of such functions. The objective of this thesis is to obtain a better understanding of the success of deep architectures. In the first article, we test the agreement between our intuitions, namely that deep networks are necessary to learn well from data with many factors of variation, and the empirical results.
The second article is an in-depth study of the question: why does unsupervised learning help a deep network generalize better? We explore and evaluate several hypotheses attempting to elucidate how these models work. Finally, the third article seeks to characterize, in qualitative terms, the functions modelled by a deep network. These visualizations facilitate the interpretation of the representations and invariances modelled by a deep architecture.

Relevance: 80.00%

Abstract:

Background: Although several pharmacogenetic algorithms for predicting warfarin doses have been published, few studies have compared the validity of these algorithms in real clinical practice. Objective: To evaluate three pharmacogenomic algorithms in a population of patients initiating warfarin therapy who suffer from atrial fibrillation or heart valve problems, and to analyze the performance of the algorithms of Gage et al., of Michaud et al., and of the IWPC in predicting the warfarin dose required to reach a therapeutic INR. Methods: A retrospective cohort design was used to assess the validity of the algorithms in 605 patients who began warfarin therapy at the Institut de Cardiologie de Montréal. The Pearson correlation coefficient and the mean absolute error were used to evaluate the accuracy of the algorithms. The clinical accuracy of the dose predictions was evaluated by computing the number of patients for whom the predicted dose was underestimated, ideally estimated, or overestimated. Finally, multiple linear regression was used to assess the validity of a warfarin dose-prediction model obtained by adding new covariates. Results: The Gage algorithm achieved the highest proportion of explained variation (adjusted R2 = 44%) as well as the lowest mean absolute error (MAE = 1.41 ± 0.06). Moreover, comparing the proportions of patients whose predicted dose fell within 20% of the observed dose confirmed that the Gage algorithm was also the best performer. Conclusion: The model published by Gage in 2008 is the most accurate pharmacogenetic algorithm for predicting therapeutic warfarin doses in our population.
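The two headline validation statistics above can be sketched in a few lines. The data here are hypothetical, invented purely for illustration; they are not the study's patients or doses.

```python
# Illustrative sketch of the study's validation statistics: mean absolute
# error and the proportion of predicted doses within 20% of the observed dose.
def mae(predicted, observed):
    return sum(abs(p - o) for p, o in zip(predicted, observed)) / len(observed)

def within_20_percent(predicted, observed):
    hits = sum(1 for p, o in zip(predicted, observed) if abs(p - o) <= 0.2 * o)
    return hits / len(observed)

obs = [5.0, 3.5, 7.0, 4.0]   # hypothetical observed therapeutic doses (mg/day)
pred = [4.6, 4.5, 6.5, 4.1]  # hypothetical algorithm predictions

print(round(mae(pred, obs), 2))        # 0.5
print(within_20_percent(pred, obs))    # 0.75
```

The within-20% proportion is the "clinical accuracy" criterion: a prediction outside that band counts as an under- or overestimate.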