926 results for Genomic data integration


Relevance: 30.00%

Abstract:

This research is part of continued efforts to correlate the hydrology of East Fork Poplar Creek (EFPC) and Bear Creek (BC) with the long-term distribution of mercury within the overland, subsurface, and river sub-domains. The main objective of this study was to add a sedimentation module (ECO Lab) capable of simulating the reactive transport and exchange mechanisms of mercury within sediments and porewater throughout the watershed. The enhanced model was then applied to a Total Maximum Daily Load (TMDL) mercury analysis for EFPC. That application used historical precipitation, groundwater levels, river discharges, and mercury concentration data retrieved from government databases and input to the model. The model was executed, with reduced computational time, to predict flow discharges, total mercury concentrations, flow duration curves, and mercury mass rate curves at key monitoring stations under various hydrological and environmental conditions and scenarios. The computational results provided insight into the relationship between discharges and mercury mass rate curves at various stations throughout EFPC, which is important for understanding and supporting the management of mercury contamination and remediation efforts within EFPC.
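For illustration only, a minimal sketch of computing a flow duration curve, one of the model outputs named above, from a discharge time series; the data are synthetic and the Weibull plotting position is an assumption, not taken from the study.

```python
import numpy as np

def flow_duration_curve(discharge):
    """Return exceedance probabilities (%) and flows sorted high to low.

    discharge: 1-D array of observed flows (e.g., daily means, m^3/s).
    """
    q = np.sort(np.asarray(discharge, dtype=float))[::-1]  # descending flows
    rank = np.arange(1, q.size + 1)                        # 1..n
    exceedance = rank / (q.size + 1) * 100                 # Weibull plotting position
    return exceedance, q

# Hypothetical example: synthetic daily discharges for one year
rng = np.random.default_rng(0)
daily_q = rng.lognormal(mean=1.0, sigma=0.6, size=365)
p, q = flow_duration_curve(daily_q)
print(f"Q50 (flow exceeded 50% of the time): {q[np.searchsorted(p, 50)]:.2f} m^3/s")
```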

Relevance: 30.00%

Abstract:

BACKGROUND: Lactococcus garvieae is a bacterial pathogen that affects different animal species in addition to humans. Despite the widespread distribution and emerging clinical significance of L. garvieae in both veterinary and human medicine, there is almost a complete lack of knowledge about the genetic content of this microorganism. In the present study, the genomic content of L. garvieae CECT 4531 was analysed using bioinformatics tools and microarray-based comparative genomic hybridization (CGH) experiments. Lactococcus lactis subsp. lactis IL1403 and Streptococcus pneumoniae TIGR4 were used as reference microorganisms. RESULTS: The combination and integration of in silico analyses and in vitro CGH experiments, performed in comparison with the reference microorganisms, allowed the establishment of an inter-species hybridization framework with a detection threshold based on a sequence similarity of ≥70%. With this threshold value, 267 genes were identified as having an analogue in L. garvieae, most of which (n = 258) have been documented for the first time in this pathogen. Most of these genes are related to ribosomal functions, sugar metabolism or energy conversion systems. Some of the identified genes, such as als and mycA, could be involved in the pathogenesis of L. garvieae infections. CONCLUSIONS: In this study, we identified 267 genes that are potentially present in L. garvieae CECT 4531. Some of the identified genes could be involved in the pathogenesis of L. garvieae infections. These results provide the first insight into the genome content of L. garvieae.
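As a hedged illustration of applying a sequence-similarity detection threshold such as the ≥70% cutoff described above; the gene/contig tuples below are invented and this is not the study's actual hybridization pipeline.

```python
# Hypothetical (query_gene, subject_sequence, percent_identity) alignment results
hits = [
    ("als",  "L_garvieae_contig_12", 84.2),
    ("mycA", "L_garvieae_contig_03", 71.5),
    ("xyzB", "L_garvieae_contig_07", 55.0),  # below threshold, discarded
]

THRESHOLD = 70.0  # detection threshold of the inter-species framework

detected = [(q, s, pid) for q, s, pid in hits if pid >= THRESHOLD]
for query, subject, pid in detected:
    print(f"{query} -> {subject}: {pid:.1f}% identity (called present)")
```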

Relevance: 30.00%

Abstract:

The present study aims to understand whether foreign students, i.e. those of nationalities other than Portuguese, are integrated into schools of the 1st Cycle of Basic Education. With this purpose, a descriptive and phenomenological study was conducted, making use of documental analysis as well as semi-structured interviews and sociometric tests. The interviews and sociometric tests were applied to students attending the 1st to 4th school years in three 1st Cycle of Basic Education schools within a school grouping in Viseu. The data obtained through the interviews allow us to conclude that foreign students, in general, feel integrated both in the school and in the class they belong to. However, the analysis of the sociometric test results reveals otherwise in one case, allowing us to conclude that one of the students is integrated neither in the school nor in the class he is part of.

Relevance: 30.00%

Abstract:

The key to functional operability in the pre-Lisbon PJCCM pillar of the EU is the exchange of intelligence and information amongst the law enforcement bodies of the EU. The twin issues of data protection and data security within what was the EU's third pillar legal framework therefore come to the fore. With the Lisbon Treaty reform of the EU, the increased role of the Commission in PJCCM policy areas, and the integration of the PJCCM provisions with what have traditionally been the pillar I activities of Frontex, the opportunity arises for streamlining the data protection and data security provisions of the law enforcement bodies of the post-Lisbon EU. This is recognised by the Commission in their drafting of an amending regulation for Frontex, when they say that they would prefer "to return to the question of personal data in the context of the overall strategy for information exchange to be presented later this year and also taking into account the reflection to be carried out on how to further develop cooperation between agencies in the justice and home affairs field as requested by the Stockholm programme." The focus of the literature published on this topic has, for the most part, been on the data protection provisions in Pillar I, EC. While the focus of research has recently shifted to the previously Pillar III PJCCM provisions on data protection, a more focused analysis of the interlocking issues of data protection and data security needs to be made in the context of the law enforcement bodies, particularly those which were based in the pre-Lisbon third pillar. This paper contributes to that debate, arguing that a review of both the data protection and data security provisions post-Lisbon is required, not only to reinforce individual rights but also to support inter-agency operability in combating cross-border EU crime. The EC's provisions on data protection, as enshrined in Directive 95/46/EC, do not apply to the legal frameworks covering developments within the third pillar of the EU. Even Council Framework Decision 2008/977/JHA, which is supposed to cover data protection provisions within PJCCM, expressly states that its provisions do not apply to "Europol, Eurojust, the Schengen Information System (SIS)" or to the Customs Information System (CIS). In addition, the post-Treaty of Prüm provisions covering the sharing of DNA profiles, dactyloscopic data and vehicle registration data pursuant to Council Decision 2008/615/JHA are not covered by the provisions of the 2008 Framework Decision. As stated by Hijmans and Scirocco, the regime is "best defined as a patchwork of data protection regimes", with "no legal framework which is stable and unequivocal, like Directive 95/46/EC in the First pillar". Data security issues are also key to the sharing of data in organised crime or counter-terrorism situations. This article critically analyses the current legal framework for data protection and security within the third pillar of the EU.

Relevance: 30.00%

Abstract:

Effective and efficient implementation of intelligent and/or recently emerged networked manufacturing systems requires enterprise-level integration. Networked manufacturing offers several advantages in the current competitive environment, shortening manufacturing cycle times and maintaining production flexibility, thereby achieving several feasible process plans. The first step in this direction is to integrate manufacturing functions such as process planning and scheduling for multiple jobs in a network-based manufacturing system. It is difficult to determine a single plan that meets conflicting objectives simultaneously. This paper describes a mobile-agent-based negotiation approach for integrating manufacturing functions in a distributed manner, and presents its fundamental framework and functions. Moreover, an ontology has been constructed using the Protégé software, which possesses the flexibility to convert knowledge into Extensible Markup Language (XML) schemas of Web Ontology Language (OWL) documents. The generated XML schemas have been used to transfer information throughout the manufacturing network for the intelligent, interoperable integration of product data models and manufacturing resources. To validate the feasibility of the proposed approach, an illustrative example covering varied production environments, including production demand fluctuations, is presented, and the performance and effectiveness of the proposed approach are compared with the evolutionary-algorithm-based Hybrid Dynamic-DNA (HD-DNA) algorithm. The results show that the proposed scheme is effective and reasonably acceptable for the integration of manufacturing functions.
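A minimal sketch of the ontology-to-XML idea described above, here using the rdflib Python library rather than Protégé; the namespace, class names, and property are invented for illustration and are not from the paper.

```python
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

# Hypothetical manufacturing ontology namespace (not from the paper)
MFG = Namespace("http://example.org/manufacturing#")

g = Graph()
g.bind("mfg", MFG)
g.bind("owl", OWL)

# Declare OWL classes for the entities exchanged across the network
for cls in ("ProcessPlan", "Machine", "Job"):
    g.add((MFG[cls], RDF.type, OWL.Class))

# A simple object property linking jobs to the machines that can run them
g.add((MFG.assignedTo, RDF.type, OWL.ObjectProperty))
g.add((MFG.assignedTo, RDFS.domain, MFG.Job))
g.add((MFG.assignedTo, RDFS.range, MFG.Machine))

# Serialize to RDF/XML for transfer over the manufacturing network
print(g.serialize(format="xml"))
```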

Relevance: 30.00%

Abstract:

Aortic valve stenosis (AS) is caused by progressive calcification and fibrosis of the aortic valve. The risk of developing the disease increases with age. Because of rising life expectancy, AS has become a public health problem. AS is fatal in the absence of medical treatment. Currently, surgery is the only treatment for the severe stage of the disease, but nearly 50% of individuals with AS are not eligible for it, mainly due to the presence of comorbidities. Several biological processes have been associated with the disease, but the specific molecular pathways and genes involved in the development and progression of AS are unknown. There is therefore an urgent need to discover susceptibility genes for AS in order to identify at-risk individuals, as well as biomarkers and therapeutic targets that could lead to the development of drugs to reverse or limit disease progression. The objective of this doctoral thesis was to identify the molecular basis of AS. Modern genomic approaches, including candidate gene studies and genome-wide association studies (GWAS), were carried out using DNA collections from a large number of well-characterized AS patients. Complementary transcriptomic studies compared the global gene expression profiles of calcified and non-calcified valves using DNA microarrays and RNA sequencing. A first study identified variations in the NOTCH1 gene and suggests for the first time that a common polymorphism in this gene confers susceptibility to AS. The second study combined, by meta-analysis, two GWAS of patients from Quebec City and Paris (France) with the transcriptomic data. This integrative genomics study confirmed the role of RUNX2 in AS and identified a new susceptibility gene, CACNA1C. The third and fourth studies, on gene expression, provided a better understanding of the molecular basis of calcification of bicuspid aortic valves and thereby identified new therapeutic targets for AS. The data generated by this project form the basis of important future discoveries that will improve treatment options and the quality of life of patients with AS.
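As a hedged illustration of the GWAS meta-analysis step, a minimal fixed-effect inverse-variance sketch; the per-cohort effect sizes are invented, and the thesis does not state that this particular weighting scheme was used.

```python
import numpy as np
from scipy import stats

def inverse_variance_meta(betas, ses):
    """Fixed-effect inverse-variance meta-analysis of per-study effects."""
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2                        # inverse-variance weights
    beta = np.sum(w * betas) / np.sum(w)    # pooled effect size
    se = np.sqrt(1.0 / np.sum(w))           # pooled standard error
    z = beta / se
    p = 2 * stats.norm.sf(abs(z))           # two-sided p-value
    return beta, se, p

# Hypothetical per-SNP effects from two cohorts (e.g., Quebec City and Paris)
beta, se, p = inverse_variance_meta(betas=[0.21, 0.17], ses=[0.06, 0.08])
print(f"pooled beta={beta:.3f}, se={se:.3f}, p={p:.2e}")
```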

Relevance: 30.00%

Abstract:

Abstract: Information and communication technologies (ICTs, henceforth) have become ubiquitous in our society. The plethora of devices competing with the computer, from iPads to the interactive whiteboard, to name a few, has provided teachers and students alike with the ability to communicate and access information with unprecedented ease and speed. It is only logical that schools reflect these changes, given that their purpose is to prepare students for the future. Surprisingly enough, research indicates that ICT integration into teaching activities is still marginal. Many elementary and secondary school teachers are not making effective use of ICTs in their teaching activities or in their assessment practices. The purpose of the current study is a) to describe Quebec ESL teachers' profiles of ICT use in their daily teaching activities; b) to describe teachers' ICT integration and assessment practices; and c) to describe teachers' social representations regarding the utility and relevance of ICT use in their daily teaching activities and assessment practices. To attain these objectives, we based our theoretical framework principally on social representations (SR, henceforth) theory and defined the related constructs deemed fundamental to the current thesis. We collected data from 28 ESL elementary and secondary school teachers working in the public and private sectors. The interview guide used to that end included a range of items to elicit teachers' SR in terms of daily ICT use in teaching activities as well as in assessment practices. In addition, we carried out our data analyses from a textual statistics perspective, a particular mode of content analysis, in order to extract the indicators underlying the teachers' representations. The findings suggest that although almost all participants use a wide range of ICT tools in their practices, ICT implementation is seemingly not exploited to its fullest potential and, correspondingly, is likely to produce limited effects on students' learning. Moreover, none of the interviewees claim to use ICTs in their assessment practices; they still hold to the traditional paper-based assessment (PBA, henceforth) approach to assessing students' learning. Teachers' common discourse reveals a gap between their positive standpoint with regard to ICT integration, on the one hand, and their actual uses of instructional technology, on the other. These results are useful for better understanding how ESL teachers in Quebec currently view their use of ICTs, particularly for evaluation purposes. In fact, they provide a starting point for reconsidering the implementation of ICTs in elementary and secondary schools. They may also open up avenues for the development of a future research program in this regard.

Relevance: 30.00%

Abstract:

If marine management policies and actions are to achieve long-term sustainable use and management of the marine environment and its resources, they need to be informed by data giving the spatial distribution of seafloor habitats over large areas. Broad-scale seafloor habitat mapping is an approach which has the benefit of producing maps covering large extents at a reasonable cost. This approach was first investigated by Roff et al. (2003), who, acknowledging that benthic communities are strongly influenced by the physical characteristics of the seafloor, proposed overlaying mapped physical variables using a geographic information system (GIS) to produce an integrated map of the physical characteristics of the seafloor. In Europe the method was adapted to the marine section of the EUNIS (European Nature Information System) classification of habitat types under the MESH project, and was applied at an operational level in 2011 under the EUSeaMap project. The present study compiled GIS layers for fundamental physical parameters in the northeast Atlantic, including (i) bathymetry, (ii) substrate type, (iii) light penetration depth and (iv) exposure to near-seafloor currents and wave action. Based on analyses of biological occurrences, significant thresholds were fine-tuned for each of the abiotic layers and later used in multi-criteria raster algebra for the integration of the layers into a seafloor habitat map. The final result was a harmonised broad-scale seafloor habitat map with a 250 m pixel size covering four extensive areas, i.e. Ireland, the Bay of Biscay, the Iberian Peninsula and the Azores. The map provided the first comprehensive perception of habitat spatial distribution for the Iberian Peninsula and the Azores, and fed into the initiative for a pan-European map initiated by the EUSeaMap project for the Baltic, North, Celtic and Mediterranean seas.
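A minimal sketch of the multi-criteria raster algebra step, with numpy arrays standing in for the GIS layers; the thresholds and class codes below are invented and do not reproduce the EUSeaMap values.

```python
import numpy as np

# Hypothetical 250 m rasters (2-D arrays aligned on the same grid)
depth = np.array([[5.0, 40.0], [90.0, 300.0]])      # bathymetry, metres
light = np.array([[30.0, 10.0], [1.0, 0.0]])        # % of surface light at seafloor
coarse = np.array([[True, False], [True, False]])   # coarse-substrate mask

# Thresholds fine-tuned against biological occurrences (values invented here)
infralittoral = light >= 1.0   # enough light for algal communities
shallow = depth <= 200.0       # shelf vs. deep sea

# Combine layers into integer habitat codes; np.select takes the first
# matching condition for each cell, so order the conditions by priority.
habitat = np.select(
    [infralittoral & coarse, infralittoral & ~coarse, shallow, ~shallow],
    [1, 2, 3, 4],              # 1..4: hypothetical EUNIS-like broad classes
)
print(habitat)
```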

Relevance: 30.00%

Abstract:

Data sharing between organizations through interoperability initiatives involving multiple information systems is fundamental to promoting the collaboration and integration of services. However, the considerable increase in the exposure of data to additional risks requires special attention to issues related to the privacy of these data. For the Portuguese healthcare sector, where the sharing of health data is nowadays a reality at the national level, data privacy is a central issue that needs solutions according to the agreed level of interoperability between organizations. This context led the authors to study the factors influencing data privacy in a context of interoperability, through qualitative and interpretative research based on the case study method. This article presents the final results of the research, which identifies 10 subdomains of factors influencing data privacy; these should form the basis for the development of a joint protection program targeted at issues associated with data privacy.

Relevance: 30.00%

Abstract:

Nelore is the major beef cattle breed in Brazil, with more than 130 million head. Genome-wide association studies (GWAS) are often used to associate markers and genomic regions with growth and meat quality traits that can be used to assist selection programs. An alternative to traditional GWAS, involving the construction of gene network interactions derived from the results of several GWAS, is the AWM (Association Weight Matrices)/PCIT (Partial Correlation and Information Theory) methodology. With the aim of evaluating the genetic architecture of Brazilian Nelore cattle, we used high-density SNP genotyping data (~770,000 SNPs) from 780 Nelore animals comprising 34 half-sibling families derived from highly disseminated and unrelated sires from across Brazil. The AWM/PCIT methodology was employed to evaluate the genes that participate in a series of eight phenotypes related to growth and meat quality obtained from this Nelore sample.
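As a hedged illustration of the first-order partial correlation at the core of PCIT, a minimal sketch on synthetic data; the full PCIT algorithm additionally applies an information-theory-based tolerance threshold that is not reproduced here.

```python
import numpy as np

def partial_corr(x, y, z):
    """First-order partial correlation of x and y controlling for z."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

# Synthetic association profiles for three genes; z drives both x and y
rng = np.random.default_rng(1)
z = rng.normal(size=200)               # confounding gene
x = 0.8 * z + rng.normal(size=200)
y = 0.8 * z + rng.normal(size=200)

print(f"raw r(x,y)       = {np.corrcoef(x, y)[0, 1]:.2f}")   # spuriously high
print(f"partial r(x,y|z) = {partial_corr(x, y, z):.2f}")     # near zero
```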

Relevance: 30.00%

Abstract:

The research activities involved the application of Geomatic techniques in the Cultural Heritage field, following the development of two themes. The first was the application of high-precision surveying techniques for the restoration and interpretation of relevant monuments and archaeological finds. The main case regards the activities for the generation of a high-fidelity 3D model of the Fountain of Neptune in Bologna. In this work, aimed at the restoration of the artefact, both the geometrical and radiometrical aspects were crucial. The final product was the basis of a 3D information system, a shared tool through which the different professionals involved in the restoration activities contributed in a multidisciplinary approach. The second was the arrangement of 3D databases for a Building Information Modeling (BIM) approach, in a process which involves the generation and management of digital representations of the physical and functional characteristics of historical buildings, towards a so-called Historical Building Information Model (HBIM). A first application was conducted for the church of San Michele in Acerboli in Santarcangelo di Romagna. The survey was performed by the integration of classical and modern Geomatic techniques, and the point cloud representing the church was used for the development of an HBIM model, where the relevant information connected to the building could be stored and georeferenced. A second application regards the domus of Obellio Firmo in Pompeii, also surveyed by the integration of classical and modern Geomatic techniques. A historical analysis permitted the definition of phases and the organization of a database of materials and constructive elements. The goal is to obtain a federated model able to manage the different aspects: documental, analytical and reconstructive.

Relevance: 30.00%

Abstract:

In this thesis we discuss in what ways computational logic (CL) and data science (DS) can jointly contribute to the management of knowledge within the scope of modern and future artificial intelligence (AI), and how technically sound software technologies can be realised along the path. An agent-oriented mindset permeates the whole discussion, stressing the pivotal role of autonomous agents in exploiting both means to reach higher degrees of intelligence. Accordingly, the goals of this thesis are manifold. First, we elicit the analogies and differences between CL and DS, looking for possible synergies and complementarities along four major knowledge-related dimensions, namely representation, acquisition (a.k.a. learning), inference (a.k.a. reasoning), and explanation. In this regard, we propose a conceptual framework through which bridges between these disciplines can be described and designed. We then survey the current state of the art of AI technologies with respect to their capability to support bridging CL and DS in practice. After identifying gaps and opportunities, we propose the notion of the logic ecosystem as a new conceptual, architectural, and technological solution supporting the incremental integration of symbolic and sub-symbolic AI. Finally, we discuss how our notion of the logic ecosystem can be reified into actual software technology and extended in many DS-related directions.

Relevance: 30.00%

Abstract:

The advent of omic data production has opened many new perspectives in the quest to model complexity in biophysical systems. With the capability of characterizing a complex organism through the patterns of its molecular states, observed at different levels through various omics, a new paradigm of investigation is arising. In this thesis, we investigate the links between perturbations of the human organism, described as the ensemble of crosstalk of its molecular states, and health. Machine learning plays a key role within this picture, both in omic data analysis and in model building. We propose and discuss different frameworks developed by the author using machine learning for data reduction, integration, projection onto latent features, pattern analysis, classification and clustering of omic data, with a focus on 1H NMR metabolomic spectral data. The aim is to link different levels of omic observations of molecular states, from the nanoscale to the macroscale, to study perturbations such as diseases and diet, interpreted as changes in molecular patterns. The first part of this work focuses on the fingerprinting of diseases, linking cellular and systemic metabolomics with genomics to assess and predict the downstream effects of perturbations all the way down to the enzymatic network. The second part is a set of frameworks and models, developed with 1H NMR metabolomics at its core, to study the exposure of the human organism to diet and food intake in its full complexity, from epidemiological data analysis to the molecular characterization of food structure.
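A minimal sketch of one such data-reduction/latent-projection step, PCA on binned spectra with scikit-learn; the spectra are synthetic, and PCA is only one of the techniques the thesis could be referring to.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for binned 1H NMR spectra: 40 samples x 120 bins
rng = np.random.default_rng(2)
spectra = rng.normal(size=(40, 120))
spectra[:20, 10:15] += 2.0   # hypothetical metabolite peak in one group

# Standardize the bins, then project samples onto two latent components
X = StandardScaler().fit_transform(spectra)
pca = PCA(n_components=2)
scores = pca.fit_transform(X)            # (40, 2) latent coordinates

print("explained variance ratio:", pca.explained_variance_ratio_)
```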

Relevance: 30.00%

Abstract:

In recent decades, Artificial Intelligence has witnessed multiple breakthroughs in deep learning. In particular, purely data-driven approaches have opened the way to a wide variety of successful applications thanks to the large availability of data. Nonetheless, the integration of prior knowledge is still required to compensate for specific issues like lack of generalization from limited data, fairness, robustness, and biases. In this thesis, we analyze the methodology of integrating knowledge into deep learning models in the field of Natural Language Processing (NLP). We start by remarking on the importance of knowledge integration. We highlight the possible shortcomings of these approaches and investigate the implications of integrating unstructured textual knowledge. We introduce Unstructured Knowledge Integration (UKI) as the process of integrating unstructured knowledge into machine learning models. We discuss UKI in the field of NLP, where knowledge is represented in a natural language format. We identify UKI as a complex process comprising multiple sub-processes, different knowledge types, and knowledge integration properties to guarantee. We remark on the challenges of integrating unstructured textual knowledge and draw connections with well-known research areas in NLP. We provide a unified vision of structured knowledge extraction (KE) and UKI by identifying KE as a sub-process of UKI. We investigate some challenging scenarios where structured knowledge is not a feasible prior assumption and formulate each task from the point of view of UKI. We adopt simple yet effective neural architectures and discuss the challenges of such an approach. Finally, we identify KE as a form of symbolic representation. From this perspective, we remark on the need to define sophisticated UKI processes to verify the validity of knowledge integration. To this end, we foresee frameworks capable of combining symbolic and sub-symbolic representations for learning as a solution.

Relevance: 30.00%

Abstract:

Whole Exome Sequencing (WES) is rapidly becoming the first-tier test in clinics, thanks both to its declining costs and to the development of new platforms that help clinicians in the analysis and interpretation of SNVs and InDels. However, we still know very little about how CNV detection could increase WES diagnostic yield. A plethora of exome CNV callers have been published over the years, all showing good performance towards specific CNV classes and sizes, suggesting that a combination of multiple tools is needed to obtain good overall detection performance. Here we present TrainX, an ML-based method for calling heterozygous CNVs in WES data using EXCAVATOR2 Normalized Read Counts. We select male and female non-pseudo-autosomal chromosome X alignments to construct our dataset and train our model, make predictions on autosomal target regions, and use a hidden Markov model (HMM) to call CNVs. We compared TrainX against a set of CNV tools differing in detection method (GATK4 gCNV, ExomeDepth, DECoN, CNVkit and EXCAVATOR2) and found that our algorithm outperformed them in terms of stability, as we identified both deletions and duplications with good scores (F1-scores of 0.87 and 0.82, respectively) and for sizes down to the minimum resolution of 2 target regions. We also evaluated the method's robustness using a set of WES and SNP array data (n = 251) from the Italian cohort of the Epi25 collaborative, and were able to retrieve all clinical CNVs previously identified by the SNP array. TrainX showed good accuracy in detecting heterozygous CNVs of different sizes, making it a promising tool for use in a diagnostic setting.
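As a hedged sketch of the HMM segmentation idea on normalized read counts, using hmmlearn on synthetic data; TrainX's actual features, state definitions, and parameters are not reproduced here.

```python
import numpy as np
from hmmlearn import hmm

# Synthetic normalized read counts: ~1.0 diploid, ~0.5 deletion, ~1.5 duplication
rng = np.random.default_rng(3)
nrc = np.concatenate([
    rng.normal(1.0, 0.1, 80),   # two copies
    rng.normal(0.5, 0.1, 10),   # heterozygous deletion
    rng.normal(1.0, 0.1, 80),
    rng.normal(1.5, 0.1, 10),   # heterozygous duplication
]).reshape(-1, 1)

# Three hidden states: deletion / diploid / duplication
model = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                        n_iter=100, random_state=0)
model.fit(nrc)
states = model.predict(nrc)     # Viterbi-style state sequence per target region

# Map learned states to copy-number labels by sorting their emission means
order = np.argsort(model.means_.ravel())                 # states by mean, ascending
labels = np.array(["DEL", "DIPLOID", "DUP"])[np.argsort(order)]
print(labels[states])
```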