Abstract:
This thesis develops and evaluates statistical methods for different types of genetic analyses, including quantitative trait loci (QTL) analysis, genome-wide association studies (GWAS), and genomic evaluation. The main contribution of the thesis is to provide novel insights into modeling genetic variance, especially via random effects models. In variance component QTL analysis, a full likelihood model accounting for uncertainty in the identity-by-descent (IBD) matrix was developed. It was found to correct the bias in genetic variance component estimation and to improve the precision, and hence power, of QTL mapping. Double hierarchical generalized linear models, and a non-iterative simplified version, were implemented and applied to fit data for an entire genome. These whole-genome models were shown to perform well in both QTL mapping and genomic prediction. A re-analysis of a publicly available GWAS data set identified significant loci in Arabidopsis that control phenotypic variance rather than the mean, which validated the idea of variance-controlling genes. The work in the thesis is accompanied by R packages available online, including a general statistical tool for fitting random effects models (hglm), an efficient generalized ridge regression for high-dimensional data (bigRR), a double-layer mixed model for genomic data analysis (iQTL), a stochastic IBD matrix calculator (MCIBD), a computational interface for QTL mapping (qtl.outbred), and a GWAS analysis tool for mapping variance-controlling loci (vGWAS).
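The generalized ridge regression mentioned above (bigRR) targets data where markers vastly outnumber individuals. A minimal numpy sketch of the underlying idea, assuming plain ridge regression rather than any package-specific model: for p markers and n individuals with p >> n, the dual form solves an n × n system instead of a p × p one, giving identical coefficients far more cheaply. Dimensions and data below are simulated for illustration.

```python
import numpy as np

def ridge_primal(X, y, lam):
    """Primal solution: beta = (X'X + lam*I_p)^-1 X'y  (p x p system)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def ridge_dual(X, y, lam):
    """Dual solution: beta = X'(XX' + lam*I_n)^-1 y  (n x n system)."""
    n = X.shape[0]
    return X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), y)

rng = np.random.default_rng(0)
n, p = 20, 500                       # few individuals, many SNP markers
X = rng.standard_normal((n, p))
y = X[:, 0] * 2.0 + rng.standard_normal(n) * 0.1

b1 = ridge_primal(X, y, lam=1.0)     # solves a 500 x 500 system
b2 = ridge_dual(X, y, lam=1.0)       # solves a 20 x 20 system
```

For genomic data with tens of thousands of markers, only the dual form remains tractable; the two solutions agree by a standard matrix identity.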
Abstract:
MAPfastR is a software package developed to analyze QTL data from inbred and outbred line-crosses. The package includes a number of modules for fast and accurate QTL analyses. It has been developed in the R language for fast and comprehensive analyses of large datasets. MAPfastR is freely available at: http://www.computationalgenetics.se/?page_id=7.
Abstract:
The purpose of this presentation is to report progress on the research project “The Mapping of Pedagogical Methods in Web-Based Language Teaching” at Högskolan Dalarna (Dalarna University). This project will identify the differences in pedagogical methods that are used in online language classes. Pedagogical method, as defined in this project, is what teachers do to ensure students attain the learning outcomes, for example, planning, designing courses, leading students, knowing students' abilities, implementing activities, etc. So far the members of this project have analyzed the course plans in the language department at Dalarna University and categorized the learning outcomes. A questionnaire was constructed based on the learning outcomes and then either sent out remotely to teachers or completed face to face through interviews. The answers to the questionnaires enabled the project to identify many differences in how language teachers interact with their students, but also in the way of giving feedback, motivating and helping students, and in the types of class activities and materials used. This presentation introduces the progress of the project and identifies the challenges at the language department at Dalarna University. Finally, the advantages and problems of online language proficiency courses will be discussed and suggestions made for future improvement.
Abstract:
When newly immigrated children and young people begin school in Sweden, certain challenges arise. These may result from weak Swedish-language skills and different schooling backgrounds, as well as organizational and pedagogical limitations in the schools. This places demands on school leaders to lead and develop the organization and teachers' competences to meet these pupils' needs. This situation was behind the initiation of the project “New Immigrants and Learning—Competence Development for Teachers and School Principals.” The project ran in schools in four Swedish municipalities; its aim was to develop leadership, organizational and pedagogical skills that would facilitate the schooling and integration of newly arrived pupils. This article aims to describe and discuss a Participant Action Research (PAR) based on a think tank and research circles, drawing special attention to the role of the school leaders. It will also examine whether the research circles and the project overall served to develop educational and intercultural leadership, organizational conditions, collegial learning, pedagogical methods and competence in terms of schooling for this pupil group.
Abstract:
Research objectives: Poker and responsible gambling both draw on the executive functions (EF), which are higher-level cognitive abilities. The main objective of this work was to assess whether online poker players of different ability show different performance in their EF and, if so, which functions are the most discriminating. The secondary objective was to assess whether EF performance can predict the quality of gambling, according to the Gambling Related Cognition Scale (GRCS), the South Oaks Gambling Screen (SOGS) and the Problem Gambling Severity Index (PGSI). Sample and methods: The study design consisted of two stages: 46 Italian active players (41 m, 5 f; age 32 ± 7.1 years; education 14.8 ± 3 years) completed the PGSI in a secure IT web system and uploaded their own hand history files, which were anonymized and then evaluated by two poker experts. 36 of these players (31 m, 5 f; age 33 ± 7.3 years; education 15 ± 3 years) agreed to take part in the second stage: the administration of an extensive neuropsychological test battery by a blinded trained professional. To answer the main research question we collected all final and intermediate scores of the EF tests for each player, together with the scoring of playing ability. To answer the secondary research question, we referred to the GRCS, PGSI and SOGS scores. We determined which variables are good predictors of the playing ability score using statistical techniques able to deal with many regressors and few observations (LASSO, best-subset algorithms and CART). In this context, information criteria and cross-validation errors play a key role in selecting the relevant regressors, whereas significance testing and goodness-of-fit measures can lead to wrong conclusions. Preliminary findings: We found significant predictors of the poker ability score in various tests.
In particular, there are good predictors 1) in some Wisconsin Card Sorting Test items that measure flexibility in choosing problem-solving strategies, strategic planning, modulation of impulsive responding, goal setting and self-monitoring; 2) in those Cognitive Estimates Test variables related to deductive reasoning, problem solving, development of an appropriate strategy and self-monitoring; and 3) in the Emotional Quotient Inventory Short (EQ-i:S) Stress Management score, composed of the Stress Tolerance and Impulse Control scores, and in the Interpersonal score (Empathy, Social Responsibility, Interpersonal Relationship). As for the quality of gambling, some EQ-i:S scale scores provide the best predictors: General Mood for the PGSI; Intrapersonal (Self-Regard, Emotional Self-Awareness, Assertiveness, Independence, Self-Actualization) and Adaptability (Reality Testing, Flexibility, Problem Solving) for the SOGS; and Adaptability for the GRCS. Implications for the field: Through PokerMapper we gathered knowledge and evaluated the feasibility of constructing short tasks/card games in online poker environments for profiling users' executive functions. These card games will be part of an IT system able to dynamically profile EF and provide players with feedback on their expected performance and ability to gamble responsibly at that particular moment. The implementation of such a system in existing gambling platforms could lead to an effective proactive tool for supporting responsible gambling.
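The many-regressors/few-observations setting described above can be illustrated with a minimal LASSO fit via coordinate descent, the kind of sparse selection that zeroes out uninformative predictors. This is a generic numpy sketch on synthetic data, not the study's pipeline; the penalty, dimensions, and effect sizes are invented.

```python
import numpy as np

def soft_threshold(z, g):
    """Soft-thresholding operator used in the LASSO coordinate update."""
    return np.sign(z) * max(abs(z) - g, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Plain coordinate-descent LASSO (features assumed roughly standardized)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            # partial residual with feature j's contribution added back
            r = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return beta

rng = np.random.default_rng(1)
n, p = 30, 10                        # few observations, several candidate predictors
X = rng.standard_normal((n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(n)

beta = lasso_cd(X, y, lam=5.0)       # only the two true predictors survive
```

With few observations, the penalty (chosen here by hand, in practice by cross-validation or an information criterion as the abstract notes) is what keeps spurious regressors out of the model.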
Abstract:
Understanding the genetic basis of traits involved in adaptation remains a major challenge in evolutionary biology. Here, we use genome-wide association mapping with a custom 50 k single nucleotide polymorphism (SNP) array in a natural population of collared flycatchers to examine the genetic basis of clutch size, an important life-history trait in many animal species. We found evidence for an association on chromosome 18 where one SNP significant at the genome-wide level explained 3.9% of the phenotypic variance. We also detected two suggestive quantitative trait loci (QTLs) on chromosomes 9 and 26. Fitness differences among genotypes were generally weak and not significant, although there was some indication of a sex-by-genotype interaction for lifetime reproductive success at the suggestive QTL on chromosome 26. This implies that sexual antagonism may play a role in maintaining genetic variation at this QTL. Our findings provide candidate regions for a classic avian life-history trait that will be useful for future studies examining the molecular and cellular function of, as well as evolutionary mechanisms operating at, these loci.
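A figure like "one SNP explained 3.9% of the phenotypic variance" is the R² of a single-marker regression of phenotype on genotype. A toy sketch of that calculation on simulated data (effect size, allele frequency, and sample size are invented, not the study's):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
maf = 0.3                                       # minor allele frequency
geno = rng.binomial(2, maf, size=n)             # genotypes coded 0/1/2
pheno = 0.4 * geno + rng.standard_normal(n)     # additive effect + noise

# Least-squares slope and R^2 for pheno ~ geno
g = geno - geno.mean()
y = pheno - pheno.mean()
beta = (g @ y) / (g @ g)                        # estimated allele effect
r2 = beta**2 * g.var() / y.var()                # proportion of variance explained
print(f"variance explained: {100 * r2:.1f}%")
```

The same quantity is what GWAS reports scale up across thousands of markers, one regression per SNP.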
Abstract:
A description of a data item's provenance can be provided in different forms, and which form is best depends on the intended use of that description. Because of this, different communities have made quite distinct underlying assumptions in their models for electronically representing provenance. Approaches deriving from the library and archiving communities emphasise agreed vocabulary by which resources can be described and, in particular, assert their attribution (who created the resource, who modified it, where it was stored, etc.). The primary purpose here is to provide intuitive metadata by which users can search for and index resources. In comparison, models for representing the results of scientific workflows have been developed with the assumption that each event or piece of intermediary data in a process' execution can and should be documented, to give a full account of the experiment undertaken. These occurrences are connected together by stating where one derived from, triggered, or otherwise caused another, and so form a causal graph. Mapping between the two approaches would be beneficial in integrating systems and exploiting the strengths of each. In this paper, we specify such a mapping between Dublin Core and the Open Provenance Model. We further explain the technical issues to overcome and the rationale behind the approach, to allow the same method to apply in mapping similar schemes.
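The general shape of such a mapping, flat attribution metadata rewritten as edges of a causal graph, can be sketched in a few lines. To be clear, the term correspondences below are invented for illustration and are not the mapping the paper specifies; the relation names are only loosely borrowed from OPM-style vocabulary (in the real model, for instance, agents control processes rather than artifacts).

```python
# Hypothetical lookup: a few Dublin Core attribution fields -> provenance-style
# edge labels. NOT the paper's actual mapping; for illustration only.
DC_TO_PROVENANCE_EDGE = {
    "dc:creator":     "wasControlledBy",
    "dc:contributor": "wasControlledBy",
    "dc:source":      "wasDerivedFrom",
}

def to_edges(resource, dc_record):
    """Turn a flat DC metadata record into (subject, relation, object) triples."""
    edges = []
    for field, value in dc_record.items():
        if field in DC_TO_PROVENANCE_EDGE:
            edges.append((resource, DC_TO_PROVENANCE_EDGE[field], value))
    return edges

edges = to_edges("report.pdf", {"dc:creator": "alice", "dc:title": "Report"})
```

Fields with no causal reading (such as `dc:title` here) simply stay as descriptive metadata, which is one of the technical issues the paper discusses.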
Abstract:
Diepoxybutane (DEB), a known industrial carcinogen, reacts with DNA primarily at the N7 position of deoxyguanosine residues and creates interstrand cross-links at the sequence 5'-GNC. Since N7-N7 cross-links cause DNA to fragment upon heating, quantitative polymerase chain reaction (QPCR) is being used in this experiment to measure the amount of DEB damage (lesion frequency) at three different targets: mitochondrial (unpackaged) DNA, an open chromatin region, and a closed chromatin region. Initial measurements of DEB damage within these three targets were not consistent because the template DNA was not the limiting reagent in the PCR. Follow-up PCR trials using a limiting amount of DNA are still in progress, although initial experimentation looks promising. Sequencing to confirm the primer targets has so far been successful only for the closed chromatin target, and its sequence does not match the NIH sequence used to design that primer pair. Further sequencing trials need to be conducted on all three targets to ensure that a mitochondrial, an open chromatin, and a closed chromatin region are actually being amplified in this experimental series.
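Lesion frequency in QPCR damage assays is conventionally computed from relative amplification, assuming lesions block the polymerase and are randomly (Poisson) distributed, so the average number of lesions per fragment is the negative log of the treated-to-control amplification ratio. A sketch of that standard calculation; the numbers are invented, not data from this study:

```python
import math

def lesions_per_fragment(amp_treated, amp_control):
    """Poisson zero-class estimate: mean lesions per amplified fragment."""
    return -math.log(amp_treated / amp_control)

def lesions_per_10kb(amp_treated, amp_control, amplicon_bp):
    """Normalize the per-fragment estimate to a 10 kb scale."""
    return lesions_per_fragment(amp_treated, amp_control) * 10_000 / amplicon_bp

# Invented example: treated sample amplifies to 60% of the undamaged control
# on an 8 kb amplicon.
freq = lesions_per_10kb(amp_treated=60.0, amp_control=100.0, amplicon_bp=8000)
print(f"{freq:.2f} lesions / 10 kb")
```

This is also why template DNA must be the limiting reagent, as the abstract notes: if primers or polymerase limit the reaction instead, the amplification ratio no longer reflects lesion frequency.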
Abstract:
The movement of graphics and audio programming towards three dimensions aims to better simulate the way we experience our world. In this project I used methods for coming closer to such simulation via realistic graphics and sound combined with a natural interface. I did most of my work on a Dell OptiPlex with an 800 MHz Pentium III processor and an NVIDIA GeForce 256 AGP Plus graphics accelerator, high-end products in the consumer market as of April 2000. For graphics, I used OpenGL [1], an open-source, multi-platform set of graphics libraries that is relatively easy to use, coded in C. The basic engine I first put together was a system to place objects in a scene and to navigate around the scene in real time. Once I accomplished this, I was able to investigate specific techniques for making parts of a scene more appealing.
Abstract:
Running hydrodynamic models interactively allows both visual exploration and change of model state during simulation. One of the main characteristics of an interactive model is that it should provide immediate feedback to the user, for example responding to changes in model state or view settings. For this reason, such features are usually only available for models with a relatively small number of computational cells, which are used mainly for demonstration and educational purposes. It would be useful if interactive modeling also worked for models typically used in consultancy projects involving large-scale simulations. This poses a number of technical challenges related to the combination of the model itself and the visualisation tools (scalability, implementation of an appropriate API for control of and access to the internal state). While model parallelisation is increasingly addressed by the environmental modeling community, little effort has been spent on developing a high-performance interactive environment. What can we learn from other high-end visualisation domains such as 3D animation, gaming, and virtual globes (Autodesk 3ds Max, Second Life, Google Earth) that also focus on efficient interaction with 3D environments? In these domains high efficiency is usually achieved by the use of computer graphics algorithms such as surface simplification depending on the current view and distance to objects, and efficient caching of aggregated representations of object meshes. We investigate how these algorithms can be re-used in the context of interactive hydrodynamic modeling without significant changes to the model code, allowing model operation on both multi-core CPU personal computers and high-performance computer clusters.
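The view-dependent simplification borrowed from games and virtual globes can be sketched as a distance-to-resolution rule plus block aggregation of the model grid: cells far from the camera are shown at coarser resolution. This is an illustrative toy, not the authors' implementation; the thresholds and grid are invented.

```python
import math

def lod_for_distance(distance, base=100.0, levels=4):
    """Pick a level of detail: halve resolution each time distance doubles past `base`."""
    if distance <= base:
        return 0                                  # full model resolution
    level = int(math.log2(distance / base)) + 1
    return min(level, levels)

def coarsen(grid, level):
    """Aggregate a 2D grid of cell values by averaging 2^level x 2^level blocks."""
    step = 2 ** level
    return [
        [
            sum(grid[i + di][j + dj] for di in range(step) for dj in range(step)) / step**2
            for j in range(0, len(grid[0]), step)
        ]
        for i in range(0, len(grid), step)
    ]

# Invented 8x8 grid of cell values (e.g. water levels)
grid = [[float(i * 8 + j) for j in range(8)] for i in range(8)]
level = lod_for_distance(250.0)      # distant view -> coarser level
small = coarsen(grid, level)         # fewer cells to render
```

The key property is that the model itself keeps computing on the full grid; only the representation handed to the renderer is aggregated, which is what makes the technique applicable without changing model code.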
Abstract:
FGV Direito Rio
Abstract:
The impact of digitization was felt before it could be described and explained. The Mapping Digital Media project is a way of catching up, an ambitious attempt at depicting and understanding the progress and effects of digitization on media and communications systems across the world. The publication of over 50 country reports provides the most comprehensive picture to date on the changes undergone by journalism, news production, and the media as a result of the transition of broadcasting from analog to digital and the advent of the internet. These extensive reports, all sharing the same structure, cover issues such as media consumption, public media, changes in journalism, digital activism, new regulation, and business models. Reports have been published from nine Latin American countries: Mexico, Argentina, Colombia, Peru, Chile, Brazil, Guatemala, Nicaragua, and Uruguay. Given the recent evolution of Brazil’s media landscape and regulation, and its position as a regional reference, few reports have generated as much expectation as the Brazilian one. This excellent text is key to understanding digitization in Brazil, in Latin America, and in the world at large.
Abstract:
In June 2014 Brazil hosted the FIFA World Cup and in August 2016 Rio de Janeiro hosts the Summer Olympics. These two seminal sporting events will draw tens of thousands of air travelers through Brazil's airports, which are currently in the midst of a national modernization program to address years of infrastructure neglect and insufficient capacity. Raising Brazil's major airports to the standards air travelers experience at major airports elsewhere in the world is more than just a case of building or remodeling facilities; processes must also be examined and reworked to enhance traveler experience and satisfaction. This research paper examines the key interface between airports and airline passengers, airport check-in procedures, according to how much value and waste is associated with them. In particular, the paper makes use of a value stream mapping construct for services proposed by Martins, Cantanhede, and Jardim (2010). The uniqueness of this construct is that it attributes to each activity a certain percentage and magnitude of value or waste, which can then be ordered and prioritized for improvement. Working against a fairly commonly expressed notion in Brazil that Brazil's airports are inferior to those of economically advanced countries, the paper examines Rio's two major airports, Galeão International and Santos Dumont, in comparison to Washington D.C.'s Washington National and Dulles International airports. The paper seeks to accomplish three goals:
- Determine whether there are differences in airport passenger check-in procedures between U.S. and Brazilian airports in terms of passenger value
- Present options for Brazilian government or private sector authorities to consider adopting or implementing at Brazilian airports to maximize passenger value
- Validate the Martins et al. construct for use in evaluating airport check-in procedures
Observations and analysis proved surprising in that all airports and service providers follow essentially the same check-in processes but execute them differently, yet achieve similar overall performance in terms of value and waste. Although only a few activities are categorized as completely wasteful (and therefore removed in the revised value stream map of check-in activities), the weighting and categorization of individual activities according to their value (or waste) gives decision-makers a means to prioritize possible corrective actions. Various overall recommendations are presented based on this analysis. Most importantly, this paper demonstrates the viability of using the construct developed by Martins et al. to examine airport operations, as well as its applicability to the study of other service industry processes.
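The prioritization step, attributing each activity a value (or waste) fraction and ranking by wasted time, can be sketched in a few lines. The activities, durations, and value fractions below are invented for illustration and are not the paper's measurements:

```python
# Each tuple: (activity, average minutes, value fraction: 0 = pure waste, 1 = pure value)
activities = [
    ("queue for counter", 12.0, 0.0),
    ("document check",     2.0, 0.8),
    ("baggage drop",       3.0, 0.9),
    ("seat assignment",    1.5, 0.6),
]

def wasted_minutes(item):
    """Time-weighted waste for one activity."""
    _, minutes, value = item
    return minutes * (1.0 - value)

# Rank activities by waste so corrective actions target the worst first
ranked = sorted(activities, key=wasted_minutes, reverse=True)
for name, minutes, value in ranked:
    print(f"{name:20s} waste={minutes * (1 - value):4.1f} min")
```

Ranking by time-weighted waste, rather than by value fraction alone, is what lets a long but partly valuable step outrank a short, fully wasteful one.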
Abstract:
More and more we hear everyday statements like “I'm stressed!” or “Don't worry me more than I already am.” But how can we use technology to combat the stress we deal with daily? One way is to use technology to create objects, systems or applications that can pamper us, preferably while remaining imperceptible to the user; for this we have ubiquitous computing and nurturant technologies. Ubiquitous computing is increasingly discussed, and ways to make the computer more subtle from the user's point of view are a subject of research and development. The use of technology as a source of relaxation and comfort is a strand being explored in the context of nurturant technologies. Accordingly, this thesis focuses on the development of an object and several applications with which we can interact. The object and applications are intended to pamper us and help us relax after a long day at work or in more stressful situations. The object developed employs technologies such as accelerometers, and the applications developed employ communication between computers and web cameras. This thesis begins with a brief introduction to the research areas relevant to this work, such as ubiquitous computing and nurturant technologies, also providing general information on stress and ways to mitigate it. Later, some of the prior work that influenced this thesis is described, as well as the prototypes developed and the experiments performed, ending with a general conclusion and future work.