846 results for Ontology Mapping
Abstract:
Background: qtl.outbred is an extensible interface in the R statistical environment for combining quantitative trait loci (QTL) mapping tools. It is built as an umbrella package that enables outbred genotype probabilities to be calculated and/or imported into the software package R/qtl. Findings: Using qtl.outbred, genotype probabilities for outbred line cross data can be calculated by interfacing with a new and efficient algorithm for analysing arbitrarily large datasets (included in the package), or imported from other sources such as the web-based tool GridQTL. Conclusion: qtl.outbred improves the speed of probability calculation and the ability to analyse large future datasets. The package enables the user to analyse outbred line cross data accurately, with effort similar to that required for inbred line cross data.
Abstract:
This thesis develops and evaluates statistical methods for different types of genetic analyses, including quantitative trait loci (QTL) analysis, genome-wide association studies (GWAS), and genomic evaluation. The main contribution of the thesis is to provide novel insights into modeling genetic variance, especially via random effects models. In variance component QTL analysis, a full likelihood model accounting for uncertainty in the identity-by-descent (IBD) matrix was developed; it was found to correctly adjust for bias in genetic variance component estimation and to gain power, in terms of precision, in QTL mapping. Double hierarchical generalized linear models, and a non-iterative simplified version, were implemented and applied to fit data across an entire genome. These whole-genome models were shown to perform well in both QTL mapping and genomic prediction. A re-analysis of a publicly available GWAS data set identified significant loci in Arabidopsis that control phenotypic variance rather than the mean, validating the idea of variance-controlling genes. The work in the thesis is accompanied by R packages available online, including a general statistical tool for fitting random effects models (hglm), an efficient generalized ridge regression for high-dimensional data (bigRR), a double-layer mixed model for genomic data analysis (iQTL), a stochastic IBD matrix calculator (MCIBD), a computational interface for QTL mapping (qtl.outbred), and a GWAS analysis tool for mapping variance-controlling loci (vGWAS).
Abstract:
MAPfastR is a software package developed to analyze QTL data from inbred and outbred line-crosses. The package includes a number of modules for fast and accurate QTL analyses. It has been developed in the R language for fast and comprehensive analyses of large datasets. MAPfastR is freely available at: http://www.computationalgenetics.se/?page_id=7.
Abstract:
The purpose of this presentation is to introduce the progress of the research project "The Mapping of Pedagogical Methods in Web-Based Language Teaching" at Högskolan Dalarna (Dalarna University). This project will identify the differences in pedagogical methods used for online language classes. A pedagogical method, as defined in this project, is what teachers do to ensure students attain the learning outcomes, for example planning, designing courses, leading students, knowing students' abilities, and implementing activities. So far the members of this project have analyzed the course plans in the language department at Dalarna University and categorized the learning outcomes. A questionnaire was constructed based on the learning outcomes and then either sent to teachers remotely or completed face to face through interviews. The answers to the questionnaires enabled the project to identify many differences not only in how language teachers interact with their students, but also in how they give feedback, motivate and help students, and in the types of class activities and materials used. This presentation introduces the progress of the project and identifies the challenges at the language department at Dalarna University. Finally, the advantages and problems of online language proficiency courses will be discussed and suggestions made for future improvement.
Abstract:
Research objectives: Poker and responsible gambling both entail the use of the executive functions (EF), which are higher-level cognitive abilities. The main objective of this work was to assess whether online poker players of different ability show different performance in their EF and, if so, which functions are the most discriminating. The secondary objective was to assess whether EF performance can predict the quality of gambling, according to the Gambling Related Cognition Scale (GRCS), the South Oaks Gambling Screen (SOGS) and the Problem Gambling Severity Index (PGSI).
Sample and methods: The study design consisted of two stages: 46 Italian active players (41 m, 5 f; age 32±7.1 years; education 14.8±3 years) completed the PGSI in a secure IT web system and uploaded their own hand history files, which were anonymized and then evaluated by two poker experts. 36 of these players (31 m, 5 f; age 33±7.3 years; education 15±3 years) agreed to take part in the second stage: the administration of an extensive neuropsychological test battery by a blinded trained professional. To answer the main research question, we collected all final and intermediate scores of the EF tests for each player, together with the scoring of playing ability. To answer the secondary research question, we referred to GRCS, PGSI and SOGS scores. We determined which variables are good predictors of the playing ability score using statistical techniques able to deal with many regressors and few observations (LASSO, best subset algorithms and CART). In this context, information criteria and cross-validation errors play a key role in the selection of the relevant regressors, while significance testing and goodness-of-fit measures can lead to wrong conclusions.
Preliminary findings: We found significant predictors of the poker ability score in various tests. In particular, there are good predictors 1) in some Wisconsin Card Sorting Test items that measure flexibility in choosing a problem-solving strategy, strategic planning, modulation of impulsive responding, goal setting and self-monitoring; 2) in those Cognitive Estimates Test variables related to deductive reasoning, problem solving, development of an appropriate strategy and self-monitoring; and 3) in the Emotional Quotient Inventory Short (EQ-i:S) Stress Management score, composed of the Stress Tolerance and Impulse Control scores, and in the Interpersonal score (Empathy, Social Responsibility, Interpersonal Relationship). As for the quality of gambling, some EQ-i:S scale scores provide the best predictors: General Mood for the PGSI; Intrapersonal (Self-Regard, Emotional Self-Awareness, Assertiveness, Independence, Self-Actualization) and Adaptability (Reality Testing, Flexibility, Problem Solving) for the SOGS; and Adaptability for the GRCS.
Implications for the field: Through PokerMapper we gathered knowledge and evaluated the feasibility of constructing short tasks/card games in online poker environments for profiling users' executive functions. These card games will be part of an IT system able to dynamically profile EF and provide players with feedback on their expected performance and ability to gamble responsibly at that particular moment. The implementation of such a system in existing gambling platforms could lead to an effective proactive tool for supporting responsible gambling.
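To make the selection setting concrete (many regressors, few observations, cross-validation rather than significance testing), here is a minimal Python sketch using scikit-learn's cross-validated LASSO. The data are synthetic stand-ins; none of the variables correspond to actual EF test scores from the study.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 36, 60                       # few observations, many regressors
X = rng.normal(size=(n, p))         # stand-in for EF test scores
beta = np.zeros(p)
beta[:3] = [1.5, -1.0, 0.8]         # only a few regressors truly matter
y = X @ beta + rng.normal(scale=0.5, size=n)   # stand-in for ability score

# Cross-validated LASSO: the CV error, not p-values, drives the
# choice of penalty and hence which regressors survive.
model = LassoCV(cv=5, random_state=0).fit(X, y)
selected = np.flatnonzero(model.coef_)
print("selected regressors:", selected)
```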
Abstract:
Understanding the genetic basis of traits involved in adaptation is a major challenge in evolutionary biology, but such traits remain poorly understood. Here, we perform genome-wide association mapping with a custom 50 k single nucleotide polymorphism (SNP) array in a natural population of collared flycatchers to examine the genetic basis of clutch size, an important life-history trait in many animal species. We found evidence for an association on chromosome 18, where one SNP significant at the genome-wide level explained 3.9% of the phenotypic variance. We also detected two suggestive quantitative trait loci (QTLs) on chromosomes 9 and 26. Fitness differences among genotypes were generally weak and not significant, although there was some indication of a sex-by-genotype interaction for lifetime reproductive success at the suggestive QTL on chromosome 26, implying that sexual antagonism may play a role in maintaining genetic variation at this QTL. Our findings provide candidate regions for a classic avian life-history trait that will be useful for future studies examining the molecular and cellular function of, as well as evolutionary mechanisms operating at, these loci.
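For readers unfamiliar with the mechanics behind figures like "one SNP explained 3.9% of the phenotypic variance", here is a toy Python sketch of single-SNP association scanning: the phenotype is regressed on each SNP in turn, r² estimates the fraction of phenotypic variance explained, and a Bonferroni correction gives a genome-wide significance threshold. All data below are simulated; nothing comes from the flycatcher study itself.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_birds, n_snps = 500, 1000
G = rng.integers(0, 3, size=(n_birds, n_snps)).astype(float)  # 0/1/2 genotypes
y = 0.3 * G[:, 42] + rng.normal(size=n_birds)                 # one causal SNP

pvals = np.empty(n_snps)
r2 = np.empty(n_snps)
for j in range(n_snps):
    slope, intercept, r, p, se = stats.linregress(G[:, j], y)
    pvals[j], r2[j] = p, r**2        # r^2: share of phenotypic variance

threshold = 0.05 / n_snps            # Bonferroni genome-wide threshold
hits = np.flatnonzero(pvals < threshold)
print("genome-wide hits:", hits, "variance explained:", r2[hits])
```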
Abstract:
Semantic Analysis is a business analysis method designed to capture system requirements. While these requirements may be represented as text, the method also advocates the use of Ontology Charts to formally denote the system's required roles, relationships and forms of communication. Following model-driven engineering techniques, Ontology Charts can be transformed into temporal database schemas, class diagrams and component diagrams, which can then be used to produce software systems. A nice property of these transformations is that the resulting system design models lend themselves to substantial extensions without requiring changes to the existing design models. For example, resulting databases can be extended with new types of data without the need to modify the database schema of the legacy system. Semantic Analysis is not widely used in software engineering, so there is a lack of experts in the field and no design patterns are available. This makes it difficult for analysts to pass organizational knowledge to the engineers. This study describes an implementation that is readily usable by engineers, including an automated technique that can produce a prototype from an Ontology Chart. The use of such tools should enable developers to make use of Semantic Analysis with minimal expertise in ontologies and MDA.
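To illustrate the extensibility claim (new types of data without schema changes), here is a toy Python/SQLite sketch, not the paper's implementation: affordances are stored in a single generic temporal table, so introducing a new affordance type is an INSERT rather than an ALTER TABLE on the legacy schema.

```python
import sqlite3

# One generic table for all affordances: each row records what kind of
# affordance it is, what it depends on, and when it started/finished.
SCHEMA = """
CREATE TABLE affordance (
  id          INTEGER PRIMARY KEY,
  type        TEXT NOT NULL,                      -- e.g. 'Person', 'employs'
  antecedent  INTEGER REFERENCES affordance(id),  -- ontological dependency
  start_time  TEXT NOT NULL,
  finish_time TEXT                                -- NULL while it still holds
);
"""

con = sqlite3.connect(":memory:")
con.execute(SCHEMA)
# A brand-new affordance type needs only new rows, never a schema change:
con.execute("INSERT INTO affordance (type, antecedent, start_time) "
            "VALUES ('Person', NULL, '2024-01-01')")
con.execute("INSERT INTO affordance (type, antecedent, start_time) "
            "VALUES ('employs', 1, '2024-02-01')")
print(con.execute("SELECT type, start_time FROM affordance").fetchall())
```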
Abstract:
A description of a data item's provenance can be provided in different forms, and which form is best depends on the intended use of that description. Because of this, different communities have made quite distinct underlying assumptions in their models for electronically representing provenance. Approaches deriving from the library and archiving communities emphasise an agreed vocabulary by which resources can be described and, in particular, their attribution asserted (who created the resource, who modified it, where it was stored, etc.). The primary purpose here is to provide intuitive metadata by which users can search for and index resources. In comparison, models for representing the results of scientific workflows have been developed with the assumption that each event or piece of intermediary data in a process's execution can and should be documented, to give a full account of the experiment undertaken. These occurrences are connected together by stating where one derived from, triggered, or otherwise caused another, and so form a causal graph. Mapping between the two approaches would be beneficial in integrating systems and exploiting the strengths of each. In this paper, we specify such a mapping between Dublin Core and the Open Provenance Model. We further explain the technical issues to overcome and the rationale behind the approach, to allow the same method to apply in mapping similar schemes.
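As a rough sketch of what a Dublin Core to provenance-graph mapping can look like mechanically, the following Python fragment uses rdflib; the OPM namespace URI and the exact term correspondence are illustrative assumptions here, not the paper's normative mapping.

```python
from rdflib import Graph, Namespace

DC = Namespace("http://purl.org/dc/terms/")
# Assumed namespace and property names in the spirit of the Open
# Provenance Model; the paper's actual mapping may differ.
OPM = Namespace("http://example.org/opm#")
EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.report, DC.creator, EX.alice))   # attribution-style metadata

# One reading of the mapping: dc:creator implies a causal account in which
# an agent controlled the process that generated the artifact.
mapped = Graph()
for artifact, _, agent in g.triples((None, DC.creator, None)):
    process = EX["creation-of-" + str(artifact).rsplit("/", 1)[-1]]
    mapped.add((artifact, OPM.wasGeneratedBy, process))
    mapped.add((process, OPM.wasControlledBy, agent))

print(mapped.serialize(format="turtle"))
```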
Abstract:
Diepoxybutane (DEB), a known industrial carcinogen, reacts with DNA primarily at the N7 position of deoxyguanosine residues and creates interstrand cross-links at the sequence 5'-GNC. Since N7-N7 cross-links cause DNA to fragment upon heating, quantitative polymerase chain reaction (QPCR) is being used in this experiment to measure the amount of DEB damage (lesion frequency) within three different targets: mitochondrial (unpackaged) DNA, an open chromatin region, and a closed chromatin region. Initial measurements of DEB damage within these three targets were not consistent because the template DNA was not the limiting reagent in the PCR. Follow-up PCR trials using a limiting amount of DNA are still in progress, although initial experimentation looks promising. Sequencing of these three targets to confirm the primer targets has only been successfully performed for the closed chromatin target, and the result does not match the NIH sequence used to design that primer pair. Further sequencing trials need to be conducted on all three targets to ensure that mitochondrial, open chromatin, and closed chromatin regions are actually being amplified in this experimental series.
Abstract:
Observational data encodes values of properties associated with a feature of interest, estimated by a specified procedure. For water, the properties are physical parameters like level, volume, flow and pressure, and concentrations and counts of chemicals, substances and organisms. Water property vocabularies have been assembled at project, agency and jurisdictional level. Organizations such as EPA, USGS, CEH, GA and BoM maintain vocabularies for internal use, and may make them available externally as text files. BODC and MMI have harvested many water vocabularies alongside others of interest in their domain, formalized the content using SKOS, and published them through web interfaces. Scope is highly variable both within and between vocabularies. Individual items may conflate multiple concerns (e.g. property, instrument, statistical procedure, units). There is significant duplication between vocabularies. Semantic web technologies provide the opportunity both to publish vocabularies more effectively and to achieve harmonization to support greater interoperability between datasets:
- Models for vocabulary items (property, substance/taxon, process, unit-of-measure, etc.) may be formalized as OWL ontologies, supporting semantic relations between items in related vocabularies;
- By specializing the ontology elements from SKOS concepts and properties, diverse vocabularies may be published through a common interface;
- Properties from standard vocabularies (e.g. OWL, SKOS, PROV-O and VAEM) support mappings between vocabularies having a similar scope;
- Existing items from various sources may be assembled into new virtual vocabularies.
However, there are a number of challenges:
- use of standard properties such as sameAs/exactMatch/equivalentClass requires reasoning support;
- items have been conceptualised as both classes and individuals, complicating the mapping mechanics;
- re-use of items across vocabularies may conflict with expectations concerning URI patterns;
- versioning complicates cross-references and re-use.
This presentation will discuss ways to harness semantic web technologies to publish harmonized vocabularies, and will summarise how many of these challenges may be addressed.
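As a small illustration of the harmonization pattern sketched above, the following Python/rdflib fragment links equivalent property concepts from two vocabularies with skos:exactMatch; the URIs are invented, and in practice following such links in both directions is exactly where the reasoning-support challenge arises.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import SKOS

# Invented URIs standing in for two independently maintained vocabularies
USGS = Namespace("http://example.org/usgs/property/")
BOM = Namespace("http://example.org/bom/property/")

g = Graph()
# The same concept ("stream water level") appears in both vocabularies:
g.add((USGS.gageHeight, SKOS.exactMatch, BOM.waterLevel))

# Without a reasoner, symmetry must be handled by the query itself:
for s, _, o in g.triples((None, SKOS.exactMatch, None)):
    print(f"{s} <-> {o}")
```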
Abstract:
Nowadays, the popularity of the Web encourages the development of hypermedia systems dedicated to e-learning. Nevertheless, most of the available Web teaching systems apply traditional paper-based learning resources presented as HTML pages, making no use of the new capabilities provided by the Web. There is a challenge to develop educative systems that adapt the educative content to the learning style, context and background of each student. Another research issue is the capacity to interoperate on the Web by reusing learning objects. This work presents an approach that addresses these two issues using Semantic Web technologies. The approach models the knowledge of the educative content and the learner's profile with ontologies whose vocabularies are a refinement of those defined in standards available on the Web as reference points to provide semantics. Ontologies enable the representation of metadata concerning simple learning objects and of the rules that define the ways in which they can feasibly be assembled into more complex ones. These complex learning objects can be created dynamically, according to the learner's profile, by intelligent agents that use the ontologies as the source of their beliefs. Interoperability issues were addressed by using an application profile of the IEEE LOM (Learning Object Metadata) standard.
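To make the dynamic-assembly idea concrete, here is a deliberately simplified Python sketch, a stand-in for the ontology-and-agents machinery the abstract describes: LOM-style metadata plus a greedy rule compose simple learning objects into a sequence matching a learner's profile. All field names and objects are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class LearningObject:
    identifier: str
    difficulty: str                  # e.g. "easy", "medium"
    style: str                       # e.g. "visual", "textual"
    prerequisites: tuple = ()

catalog = [
    LearningObject("intro-video", "easy", "visual"),
    LearningObject("intro-text", "easy", "textual"),
    LearningObject("exercises", "medium", "textual", ("intro-text",)),
]

def assemble(profile, catalog):
    """Greedy rule: include objects matching the learner's style whose
    prerequisites have already been included."""
    chosen, included = [], set()
    for lo in catalog:
        if lo.style == profile["style"] and set(lo.prerequisites) <= included:
            chosen.append(lo)
            included.add(lo.identifier)
    return chosen

print([lo.identifier for lo in assemble({"style": "textual"}, catalog)])
# -> ['intro-text', 'exercises']
```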
Abstract:
FGV Direito Rio
Abstract:
The impact of digitization was felt before it could be described and explained. The Mapping Digital Media project is a way of catching up, an ambitious attempt at depicting and understanding the progress and effects of digitization on media and communications systems across the world. The publication of over 50 country reports provides the most comprehensive picture to date on the changes undergone by journalism, news production, and the media as a result of the transition of broadcasting from analog to digital and the advent of the internet. These extensive reports, all sharing the same structure, cover issues such as media consumption, public media, changes in journalism, digital activism, new regulation, and business models. Reports have been published from nine Latin American countries: Mexico, Argentina, Colombia, Peru, Chile, Brazil, Guatemala, Nicaragua, and Uruguay. Given the recent evolution of Brazil’s media landscape and regulation, and its position as a regional reference, few reports have generated as much expectation as the Brazilian one. This excellent text is key to understanding digitization in Brazil, in Latin America, and in the world at large.
Abstract:
In June 2014 Brazil hosted the FIFA World Cup, and in August 2016 Rio de Janeiro hosts the Summer Olympics. These two seminal sporting events will draw tens of thousands of air travelers through Brazil's airports, which are currently in the midst of a national modernization program to address years of infrastructure neglect and insufficient capacity. Raising Brazil's major airports to the standards air travelers experience at major airports elsewhere in the world is more than just a case of building or remodeling facilities; processes must also be examined and reworked to enhance traveler experience and satisfaction. This research paper examines the key interface between airports and airline passengers, airport check-in procedures, according to how much value and waste is associated with them. In particular, the paper makes use of a value stream mapping construct for services proposed by Martins, Cantanhede, and Jardim (2010). The uniqueness of this construct is that it attributes to each activity a certain percentage and magnitude of value or waste, which can then be ordered and prioritized for improvement. Working against a fairly commonly expressed notion in Brazil that Brazil's airports are inferior to those of economically advanced countries, the paper examines Rio's two major airports, Galeão International and Santos Dumont, in comparison with Washington D.C.'s Washington National and Dulles International airports. The paper seeks to accomplish three goals:
- determine whether there are differences in airport passenger check-in procedures between U.S. and Brazilian airports in terms of passenger value;
- present options for Brazilian government or private sector authorities to consider adopting or implementing at Brazilian airports to maximize passenger value;
- validate the Martins et al. construct for use in evaluating airport check-in procedures.
Observations and analysis proved surprising in that all airports and service providers follow essentially the same check-in processes but execute them differently, yet still achieve similar overall performance in terms of value and waste. Although only a few activities are categorized as completely wasteful (and therefore removed in the revised value stream map of check-in activities), the weighting and categorization of individual activities according to their value (or waste) gives decision-makers a means to prioritize possible corrective actions. Various overall recommendations are presented based on this analysis. Most importantly, this paper demonstrates the viability of using the construct developed by Martins et al. to examine airport operations, as well as its applicability to the study of other service industry processes.
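A minimal Python sketch of the prioritization step in the construct as described here: each check-in activity carries a value/waste classification and a magnitude, and the wasteful activities are ranked as candidates for corrective action. The activities and numbers below are invented for illustration, not the paper's data.

```python
# (activity, classification, share of total cycle) -- invented figures
activities = [
    ("queue for counter",        "waste", 0.30),
    ("verify documents",         "value", 0.15),
    ("print boarding pass",      "value", 0.10),
    ("re-enter data in system",  "waste", 0.20),
    ("tag and weigh baggage",    "value", 0.25),
]

# Rank wasteful activities by magnitude: the largest are the first
# candidates for removal or redesign in the revised value stream map.
targets = sorted((a for a in activities if a[1] == "waste"),
                 key=lambda a: -a[2])
for name, _, magnitude in targets:
    print(f"{magnitude:.0%} of cycle is waste: {name}")
```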