835 results for Data processing and analysis


Relevance:

100.00%

Publisher:

Abstract:

This paper discusses the results of applied research in the eco-driving domain, based on a huge data set produced by a fleet of Lisbon's public transportation buses over a three-year period. The data set consists of events automatically extracted from the controller area network (CAN) bus and enriched with GPS coordinates, weather conditions, and road information. We apply online analytical processing (OLAP) and knowledge discovery (KD) techniques to deal with the high volume of this data set, to determine the major factors that influence average fuel consumption, and to classify the drivers involved according to their driving efficiency. From this, we identify the most appropriate driving practices and styles. Our findings show that introducing simple practices, such as optimal clutch use, engine rotation, and engine idling, can reduce fuel consumption by 3 to 5 l/100 km on average, meaning a saving of 30 l per bus per day. These findings have been strongly considered in the drivers' training sessions.
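
The driver-classification step described above can be pictured with a minimal sketch like the one below (illustrative only, not the authors' actual OLAP/KD pipeline; the per-trip feature names and synthetic values are hypothetical): a decision tree is trained on CAN-bus-derived trip features and its feature importances indicate which behaviours separate efficient from inefficient driving.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical per-trip features aggregated from CAN-bus events.
    trips = pd.DataFrame({
        "idle_time_pct":    [12.0, 3.5, 9.0, 2.0, 15.0, 4.0],
        "avg_engine_rpm":   [1450, 1150, 1400, 1100, 1500, 1200],
        "clutch_events_km": [1.8, 0.6, 1.5, 0.5, 2.1, 0.7],
        "fuel_l_per_100km": [52.0, 44.0, 50.0, 43.0, 54.0, 45.0],
    })

    # Label a trip as "efficient" when its consumption is below the fleet median.
    trips["efficient"] = trips["fuel_l_per_100km"] < trips["fuel_l_per_100km"].median()

    X = trips[["idle_time_pct", "avg_engine_rpm", "clutch_events_km"]]
    y = trips["efficient"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

    model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
    print(model.score(X_test, y_test))
    print(dict(zip(X.columns, model.feature_importances_)))

In such a sketch the feature importances point to the same kinds of behaviour (idling time, engine speed, clutch use) that the study associates with higher fuel consumption.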

Relevance:

100.00%

Publisher:

Abstract:

An overwhelming problem in mathematics curricula at Higher Education Institutions (HEI), one we have been facing daily over the last decade, is the substantial difference in the mathematical background of our students. It is somewhat frustrating to try to transmit, engage with, and teach subjects and contents when the "audience" is unable to respond to, or even understand, what we are trying to convey. In this sense, the mathematics projects and other didactic strategies developed through the Learning Management System (LMS) Moodle, which include an array of activities combining higher-order thinking skills with mathematical subjects and technology for HE students, appear as remedial but important, proactive, and innovative measures to face and try to overcome these considerable problems. In this paper we present some of these strategies, developed in some organic units of the Polytechnic Institute of Porto (IPP). But how "fruitful" are the endless hours teachers spend developing and implementing these platforms? Do students react to them as we would expect? Do they embrace this opportunity to overcome their difficulties? How do they individually use and interact with the LMS platforms? Can this environment, which provides the teacher with many interesting tools to improve the teaching-learning process, encourage students to reinforce their abilities and knowledge? In what way do they use each available material: videos, interactive tasks, texts, among others? What is the best way to assess students' performance in these online learning environments? Learning Analytics tools provide us with a huge amount of data, but how can we extract "good" and helpful information from it? These and many other questions remain unanswered, but we look forward to getting some help in at least drafting answers to them, because we feel that this "learning analysis", which tackles the path from the objectives to the actual results, is perhaps the only way we have to move forward in the "best" learning and teaching direction.

Relevance:

100.00%

Publisher:

Abstract:

Dissertation submitted in fulfillment of the requirements for the Degree of Master in Biomedical Engineering

Relevance:

100.00%

Publisher:

Abstract:

Companies are increasingly dependent on distributed, web-based software systems to support their businesses, which increases the need to maintain and extend those systems with up-to-date features. The development process to introduce new features therefore needs to be swift and agile, and the supporting software evolution process needs to be safe, fast, and efficient. However, this is usually a difficult and challenging task for developers, due to the lack of support offered by programming environments, frameworks, and database management systems. Changes needed at the code level, in the database model, and in the actual data contained in the database must be planned and developed together and executed in a synchronized way. Even under a careful development discipline, the impact of changing an application's data model is hard to predict: over the lifetime of an application, changes and updates are designed and tested against data that is usually far from the real, production data. Coding DDL and DML SQL scripts to update the database schema and data is thus the usual (and hard) approach taken by developers. Such a manual approach is error-prone and disconnected from the real data in production, because developers may not know the exact impact of their changes. This work aims to improve the maintenance process in the context of the Agile Platform by OutSystems. Our goal is to design and implement new data-model evolution features that ensure safe support for change and a sound migration process. Our solution includes impact analysis mechanisms targeting the data model and the data itself, providing developers with a safe, simple, and guided evolution process.
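
The kind of synchronized schema-and-data change discussed above can be sketched as follows (a generic illustration using SQLite from Python, not the OutSystems Agile Platform mechanism developed in this work; the customer table, its columns, and the impact query are hypothetical): a simple impact check is run first, then the DDL and the corresponding DML migration are executed in a single transaction.

    import sqlite3

    # isolation_level=None -> autocommit mode; we manage the transaction explicitly.
    conn = sqlite3.connect(":memory:", isolation_level=None)
    conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, full_name TEXT)")
    conn.executemany("INSERT INTO customer (full_name) VALUES (?)",
                     [("Ada Lovelace",), ("Alan Turing",), (None,)])

    # Impact analysis: how many rows will the data migration actually touch?
    affected = conn.execute(
        "SELECT COUNT(*) FROM customer WHERE full_name IS NOT NULL").fetchone()[0]
    print(f"rows to migrate: {affected}")

    # Schema change (DDL) and data migration (DML) in one transaction, so the
    # model and the data either evolve together or not at all.
    try:
        conn.execute("BEGIN")
        conn.execute("ALTER TABLE customer ADD COLUMN first_name TEXT")
        conn.execute("ALTER TABLE customer ADD COLUMN last_name TEXT")
        conn.execute("""
            UPDATE customer
            SET first_name = substr(full_name, 1, instr(full_name, ' ') - 1),
                last_name  = substr(full_name, instr(full_name, ' ') + 1)
            WHERE full_name IS NOT NULL""")
        conn.execute("COMMIT")
    except sqlite3.Error:
        conn.execute("ROLLBACK")
        raise

    print(conn.execute("SELECT id, first_name, last_name FROM customer").fetchall())

Keeping the DDL and the DML in one transaction is what prevents the schema and the data from diverging if the migration fails halfway; the row count is only the simplest possible form of impact analysis over the data.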

Relevance:

100.00%

Publisher:

Abstract:

Retail services are a main contributor to municipal budgets and an activity that affects perceived quality of life, especially for those with mobility difficulties (e.g. the elderly and low-income citizens). However, there is evidence of a decline in some of the services market towns provide to their citizens, a decline that has been reported all over the western world, from North America to Australia. The aim of this research was to understand retail decline and shed light on some ways of addressing it, using a case study of Thornbury, a small town in the Southwest of England. Data were collected through two participatory approaches: photo-surveys and multicriteria mapping. The data were interpreted by using participants as analysts, but also by using systems thinking (systems diagramming and social trap theory) for theory building. This research moves away from mainstream economic and town-planning perspectives by making use of methods and concepts from anthropology and visual sociology (photo-surveys), decision-making, and ecological economics (multicriteria mapping and social trap theory). In sum, this research has experimented with different methods, out of their usual context, to analyse retail decline in a small town. It developed a conceptual model of retail decline and identified the existence of conflicting goals and interests, their implications for retail decline, and their causes; most of these potential causes have received little attention in the literature. The research also found that some of the measures commonly used to deal with retail decline may themselves be contributing to its causes. Additionally, it reviewed measures that can be used to deal with retail decline and their implications for policy-making, and reflected on the use of the data collection and analysis methods in the context of small to medium towns.

Relevance:

100.00%

Publisher:

Abstract:

The Fundação de Medicina Tropical Dr. Heitor Vieira Dourado (FMT-HVD), located in Manaus, the capital of the State of Amazonas (Western Brazilian Amazon), is a pioneering institution in this region regarding the syndromic surveillance of acute febrile illness, including arboviral infections. Based on data from patients at the FMT-HVD, we have detected recurrent outbreaks of the four dengue serotypes in Manaus over the past 15 years, with increasing severity of the disease. This endemicity culminated in the simultaneous circulation of all four serotypes in 2011, the first time this has been reported in Brazil. Between 1996 and 2009, 42 cases of yellow fever (YF) were registered in the State of Amazonas, and 71.4% (30/42) were fatal. Since 2010, no cases have been reported. Because the introduction of the yellow fever virus into a large city such as Manaus, which is widely infested by Aedes mosquitoes, may pose a real risk of a yellow fever outbreak, efforts to maintain an appropriate immunization policy for the populace are critical. Manaus has also recently suffered silent outbreaks of Mayaro and Oropouche fevers, most of which were misdiagnosed as dengue fever. The tropical conditions of the State of Amazonas favor the existence of other arboviruses capable of producing human disease. Under this real threat, represented by at least four arboviruses producing human infections in Manaus and in neighboring countries, it is important to develop an efficient public health surveillance strategy, including laboratories that are able to make proper diagnoses of arboviruses.

Relevance:

100.00%

Publisher:

Abstract:

As increasingly sophisticated materials and products are being developed and times-to-market need to be minimized, it is important to make available fast-response characterization tools that use small amounts of sample and are capable of conveying data on the relationships between rheological response, process-induced material structure, and product characteristics. For this purpose, a single/twin-screw mini-extrusion system of modular construction, with well-controlled outputs in the range 30-300 g/h, was coupled to an in-house developed rheo-optical slit die able to measure shear viscosity and normal-stress differences, as well as to perform rheo-optical experiments, namely small-angle light scattering (SALS) and polarized optical microscopy (POM). In addition, the mini-extruder is equipped with ports that allow sample collection, and the extrudate can be further processed into products to be tested later. Here, we present the concept and the experimental set-up [1, 2]. As a typical application, we report on the characterization of the processing of a polymer blend and of the properties of extruded sheets. The morphological evolution of a PS/PMMA industrial blend along the extruder, the flow-induced structures developed, and the corresponding rheological characteristics are presented, together with the mechanical and structural characteristics of the produced sheets. The application of this experimental tool to other material systems is also discussed.
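
For context, shear viscosity measurement in a slit die typically relies on the standard slit-flow relations below (a general textbook data-reduction sketch, not necessarily the exact procedure used by the authors), where h is the slit height, w >> h its width, L the distance over which the pressure drop ΔP is measured, and Q the volumetric flow rate:

    \tau_w = \frac{\Delta P \, h}{2 L}, \qquad
    \dot{\gamma}_a = \frac{6 Q}{w h^{2}}, \qquad
    \eta_a = \frac{\tau_w}{\dot{\gamma}_a}

For non-Newtonian melts, a Weissenberg-Rabinowitsch-type correction is usually applied to the apparent shear rate before the true viscosity is reported.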

Relevance:

100.00%

Publisher:

Abstract:

Master's dissertation in Informatics Engineering

Relevance:

100.00%

Publisher:

Abstract:

Data processing in a flaw detector with a combined multisectional eddy-current transducer and a heterofrequency magnetic field is described, together with the application of this method to detecting flaws in rods and pipes under conditions of significant transverse displacement.

Relevance:

100.00%

Publisher:

Abstract:

Merozoite surface protein-1 (MSP-1, also referred to as P195, PMMSA or MSA 1) is one of the most studied of all malaria proteins. The protein is found in all malaria species investigated, and structural studies on the gene indicate that parts of the molecule are well conserved. Studies on Plasmodium falciparum have shown that the protein is present in a processed form on the merozoite surface, the result of proteolytic cleavage of the large precursor molecule. Recent studies have identified some of these cleavage sites. During invasion of the new red cell, most of the MSP-1 molecule is shed from the parasite surface except for a small C-terminal fragment, which can be detected in ring stages. Analysis of the structure of this fragment suggests that it contains two growth-factor-like domains that may have a functional role.

Relevance:

100.00%

Publisher:

Abstract:

SUMMARY: Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences based on steric interactions with the DNA, and these specific protein-DNA interactions are the basis for many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on chromatin immunoprecipitation followed by a high-throughput DNA sequencing procedure. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and so far unknown artifacts of the method. The sequence tag distribution in the genome does not follow a uniform distribution, and we have found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence tag accumulations will create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool in ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some of the important biological properties of Nuclear Factor I (NFI) DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription, and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors only interact with the DNA wrapped around the nucleosome. We also found multiple loci that indicate possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
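
As a rough illustration of the two analysis ideas highlighted in the summary (filtering artifactual tag hot-spots and unbiased random sampling of tags), the sketch below bins hypothetical mapped tag positions, flags bins whose counts lie far above a Poisson background, and draws a random subsample of the remaining tags; this is a generic sketch, not the filtering or sampling algorithm developed in the thesis, and the bin size, threshold, and data are invented.

    import random
    from collections import Counter
    from math import sqrt

    random.seed(0)
    BIN_SIZE = 1000  # bp; hypothetical

    # Hypothetical mapped tag start positions on one chromosome, with an
    # artificial hot-spot of accumulated tags around position 500,000.
    tags = [random.randrange(0, 1_000_000) for _ in range(10_000)]
    tags += [500_000 + random.randrange(0, BIN_SIZE) for _ in range(2_000)]

    # Count tags per bin and flag bins far above the Poisson expectation.
    counts = Counter(pos // BIN_SIZE for pos in tags)
    mean = len(tags) / (1_000_000 // BIN_SIZE)
    threshold = mean + 5 * sqrt(mean)  # roughly 5 standard deviations above a Poisson mean
    hot_bins = {b for b, c in counts.items() if c > threshold}
    print(f"flagged {len(hot_bins)} hot-spot bin(s)")

    # Remove tags falling into flagged bins, then draw an unbiased random subsample.
    filtered = [pos for pos in tags if pos // BIN_SIZE not in hot_bins]
    subsample = random.sample(filtered, k=5_000)
    print(len(filtered), len(subsample))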

Relevance:

100.00%

Publisher:

Abstract:

We report the generation and analysis of functional data from multiple, diverse experiments performed on a targeted 1% of the human genome as part of the pilot phase of the ENCODE Project. These data have been further integrated and augmented by a number of evolutionary and computational analyses. Together, our results advance the collective knowledge about human genome function in several major areas. First, our studies provide convincing evidence that the genome is pervasively transcribed, such that the majority of its bases can be found in primary transcripts, including non-protein-coding transcripts, and those that extensively overlap one another. Second, systematic examination of transcriptional regulation has yielded new understanding about transcription start sites, including their relationship to specific regulatory sequences and features of chromatin accessibility and histone modification. Third, a more sophisticated view of chromatin structure has emerged, including its inter-relationship with DNA replication and transcriptional regulation. Finally, integration of these new sources of information, in particular with respect to mammalian evolution based on inter- and intra-species sequence comparisons, has yielded new mechanistic and evolutionary insights concerning the functional landscape of the human genome. Together, these studies are defining a path for pursuit of a more comprehensive characterization of human genome function.

Relevance:

100.00%

Publisher:

Abstract:

Proteomics has come a long way from the initial qualitative analysis of the proteins present in a given sample at a given time ("cataloguing") to the large-scale characterization of proteomes, their interactions and their dynamic behavior. Originally enabled by breakthroughs in protein separation and visualization (by two-dimensional gels) and protein identification (by mass spectrometry), the discipline now encompasses a large body of protein and peptide separation, labeling, detection and sequencing tools supported by computational data processing. The decisive mass spectrometric developments and the most recent instrumentation news are briefly mentioned, accompanied by a short review of gel and chromatographic techniques for protein/peptide separation, depletion and enrichment. Special emphasis is placed on quantification techniques: gel-based and label-free techniques are briefly discussed, whereas stable-isotope coding and internal peptide standards are extensively reviewed. Another special chapter is dedicated to software and computing tools for proteomic data processing and validation. A short assessment of the status quo and recommendations for future developments round off this journey through quantitative proteomics.

Relevance:

100.00%

Publisher:

Abstract:

BACKGROUND AND PURPOSE: Stroke registries are valuable tools for obtaining information about stroke epidemiology and management. The Acute STroke Registry and Analysis of Lausanne (ASTRAL) prospectively collects epidemiological, clinical, laboratory and multimodal brain imaging data of acute ischemic stroke patients in the Centre Hospitalier Universitaire Vaudois (CHUV). Here, we provide the design and methods used to create ASTRAL and present baseline data of our patients (2003 to 2008). METHODS: All consecutive patients admitted to CHUV between January 1, 2003 and December 31, 2008 with acute ischemic stroke within 24 hours of symptom onset were included in ASTRAL. Patients arriving beyond 24 hours, or with transient ischemic attack, intracerebral hemorrhage, subarachnoid hemorrhage, or cerebral venous sinus thrombosis, were excluded. Recurrent ischemic strokes were registered as new events. RESULTS: Between 2003 and 2008, 1633 patients and 1742 events were registered in ASTRAL. There was a preponderance of males, even in the elderly. Cardioembolic stroke was the most frequent type of stroke. Most strokes were of minor severity (National Institutes of Health Stroke Scale [NIHSS] score ≤ 4 in 40.8% of patients). Cardioembolic stroke and dissections presented with the most severe clinical picture. There was a significant number of patients with unknown-onset stroke, including wake-up stroke (n=568, 33.1%). Median time from last-well time to hospital arrival was 142 minutes for known-onset and 759 minutes for unknown-onset stroke. The rate of intravenous or intra-arterial thrombolysis between 2003 and 2008 increased from 10.8% to 20.8% in patients admitted within 24 hours of last-well time. Acute brain imaging was performed in 1695 patients (97.3%) within 24 hours. Of the 1358 patients (78%) who underwent acute computed tomography angiography, 717 (52.8%) had significant abnormalities. Of the 1068 supratentorial stroke patients who underwent acute perfusion computed tomography (61.3%), focal hypoperfusion was demonstrated in 786 (73.6%). CONCLUSIONS: This hospital-based prospective registry of consecutive acute ischemic strokes incorporates demographic, clinical, metabolic, acute perfusion, and arterial imaging data. It is characterized by a high proportion of minor and unknown-onset strokes, short onset-to-admission times for known-onset patients, rapidly increasing thrombolysis rates, and significant vascular and perfusion imaging abnormalities in the majority of patients.