938 results for Optimistic data replication system
Abstract:
The opportunity to produce microalgal biomass has attracted interest because of the many possible uses of the biomass, whether for bioenergy production, as a food source, or as a product of carbon dioxide biofixation. In general, large-scale production of cyanobacteria and microalgae is monitored by offline physico-chemical analyses. In this context, the objective of this work was to monitor cell concentration in a raceway photobioreactor for microalgal biomass production using digital data acquisition and process control techniques, through inline acquisition of illuminance, biomass concentration, temperature and pH data. To this end, it was necessary to build a software-based sensor capable of determining microalgal biomass concentration from optical measurements of the intensity of scattered monochromatic radiation, and to develop a mathematical model of microalgal biomass production on the microcontroller, using a natural computing algorithm to fit the model. An autonomous system for recording culture data was designed, built and tested during outdoor pilot-scale cultures of Spirulina sp. LEB 18. A biomass concentration sensor based on measuring transmitted radiation was tested. In a second stage, an optical sensor for Spirulina sp. LEB 18 biomass concentration was conceived, built and tested, based on measuring the intensity of radiation scattered by the cyanobacterial suspension, in a laboratory experiment under controlled conditions of light, temperature and suspension flow. From the light-scattering measurements, a neuro-fuzzy inference system was built to serve as a software sensor of biomass concentration in the culture. Finally, using the culture biomass concentrations over time, the Arduino platform was evaluated for empirical modeling of growth kinetics with the Verhulst equation. Measurements from the optical sensor based on the intensity of monochromatic radiation transmitted through the suspension, used outdoors, showed low correlation between biomass concentration and radiation, even at concentrations below 0.6 g/L. When optical scattering by the culture suspension was investigated, monochromatic radiation at 530 nm scattered at 45° and 90° increased linearly with concentration, with a coefficient of determination of 0.95 in both cases. It was possible to build a software-based biomass concentration sensor using the combined intensities of radiation scattered at 45° and 135°, with a coefficient of determination of 0.99. It is feasible to simultaneously perform inline determination of Spirulina culture process variables and empirical kinetic modeling of the micro-organism's growth with the Verhulst equation on an Arduino microcontroller.
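The kinetic model named above is the Verhulst (logistic) equation. As a minimal illustration of the empirical fitting step, the sketch below fits the logistic curve to biomass measurements with SciPy; the thesis fitted the model with a natural computing algorithm on the microcontroller, so curve_fit and all data values here are stand-ins.

```python
# Minimal sketch: fitting the Verhulst (logistic) growth equation
#   X(t) = Xmax / (1 + (Xmax/X0 - 1) * exp(-mu * t))
# to biomass concentration measurements. All data values are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def verhulst(t, x0, xmax, mu):
    """Logistic growth: biomass X(t) from initial X0, carrying capacity Xmax, rate mu."""
    return xmax / (1.0 + (xmax / x0 - 1.0) * np.exp(-mu * t))

t = np.array([0, 2, 4, 6, 8, 10, 12])                      # days (illustrative)
x = np.array([0.15, 0.22, 0.35, 0.52, 0.71, 0.84, 0.92])   # g/L (illustrative)

params, _ = curve_fit(verhulst, t, x, p0=[0.15, 1.0, 0.3])
x0_fit, xmax_fit, mu_fit = params
print(f"X0={x0_fit:.3f} g/L, Xmax={xmax_fit:.3f} g/L, mu={mu_fit:.3f} 1/day")
```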
Abstract:
Groundwater has a major impact on terrestrial life, domestic needs and climate, and is an essential link in the hydrological cycle. In Canada, for example, more than 30% of the population depends on groundwater for its drinking water supply. These resources are under pressure from factors such as salinization, contamination and depletion. Climate variability and the growing demand on these resources call for better knowledge of groundwater. The main objective of this research project is to exploit terrestrial water storage (TWS) anomaly data from the Gravity Recovery And Climate Experiment (GRACE) mission to locate, quantify and analyze groundwater variations across the Bas-Mackenzie, Saint-Laurent, Nord-Québec and Labrador watersheds, and to analyze the influence of snow accumulation and melt cycles on groundwater level variations. Estimating groundwater variations requires knowledge of the other components of the water balance; these are estimated using outputs of the CLM land-surface model of the Global Land Data Assimilation System (GLDAS). The GRACE data used were acquired between March 2002 and August 2012. The results were evaluated against piezometric records from 1841 wells located in the unconfined aquifers of groundwater monitoring networks in Canada. Specific yield values for the aquifer type at each well, together with monthly water-level variations in the wells, were used to estimate in-situ groundwater anomaly variations. The correlation study between groundwater anomaly variations estimated from the GRACE-GLDAS combination and those derived from in-situ data reveals significant agreement, with values of
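The water-balance separation described above can be written as GWS = TWS − (soil moisture + snow + canopy water). A minimal sketch of that subtraction, with illustrative anomaly values in place of real GRACE and GLDAS/CLM outputs:

```python
# Minimal sketch of the water-balance separation used with GRACE data:
#   GWS anomaly = TWS anomaly (GRACE) - (soil moisture + snow + canopy) anomalies (GLDAS/CLM).
# Arrays are monthly anomalies in equivalent water height (cm); values are illustrative.
import numpy as np

tws   = np.array([3.1, 2.4, -1.0, -2.2])   # GRACE total water storage anomaly
soil  = np.array([1.2, 0.9, -0.4, -0.8])   # GLDAS/CLM soil moisture anomaly
snow  = np.array([1.5, 0.8, -0.3, -0.9])   # snow water equivalent anomaly
canop = np.array([0.1, 0.1,  0.0, -0.1])   # canopy water anomaly

gws = tws - (soil + snow + canop)           # groundwater storage anomaly
print(gws)
```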
Abstract:
In this work, mixed oxides were synthesized by two methods: the polymeric precursor method and gel-combustion. Lanthanum nickelate, lanthanum cobaltate and lanthanum cuprate were synthesized by the polymeric precursor method, heat-treated at 300 °C for 2 hours and calcined at 800 °C for 6 h in air. By the gel-combustion method, oxides were produced using urea and citric acid as fuels; with each fuel the following oxides were formed: lanthanum ferrate, lanthanum cobaltate and lanthanum cobalt ferrate, which were submitted to microwave-assisted combustion at maximum power for up to 10 min. The samples were characterized by thermogravimetric analysis, X-ray diffraction, N2 physisorption (BET method) and scanning electron microscopy. The catalytic depolymerization reactions of poly(methyl methacrylate) were performed in a silica reactor with a catalytic and heating system equipped with a data acquisition system and a gas chromatograph. Among the catalysts synthesized by the polymeric precursor method, lanthanum cuprate performed best for depolymerization of the recycled polymer, reaching 100% conversion in the shortest time (554 min), while for the pure polymer the best was lanthanum nickelate, with 100% conversion in the shortest time (314 min). By the gel-combustion method using urea as fuel, the best result for the pure polymer was obtained with lanthanum ferrate, with 100% conversion in the shortest time (657 min), and for the recycled polymer with lanthanum cobaltate, with 100% conversion in the shortest time (779 min). Using citric acid, the best result for the pure polymer was obtained with lanthanum ferrate, with 100% conversion in the shortest time (821 min), and for the recycled polymer with lanthanum ferrate, with 98.28% conversion in the shortest time (635 min).
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Civil e Ambiental, 2016.
Abstract:
Master's dissertation in Observation and Analysis of the Educational Relationship, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2004.
Abstract:
The use of infrared burners in industrial applications has many technical and operational advantages, for example uniformity of the heat supplied as radiation and convection, with better control of emissions due to the passage of exhaust gases through a macro-porous ceramic bed. This paper presents a commercial infrared burner adapted with an experimental ejector capable of promoting a mixture of liquefied petroleum gas (LPG) and glycerin. By varying the proportion of the two fuels, the performance of the infrared burner was evaluated through an energy balance and atmospheric emission measurements. A two-stage (low heat/high heat) modulating temperature controller with a thermocouple was introduced, using solenoid valves for each fuel. The burner was tested while varying the amount of glycerin fed by a gravity system. For the thermodynamic analysis, the load was estimated with an aluminum plate located at the exit of the combustion gases, and the temperature distribution was measured by a data acquisition system that recorded real-time readings from the attached thermocouples. The burner sustained stable combustion with 15, 20 and 25% glycerin added by mass relative to the LPG, increasing the heat supplied to the plate. The data showed that the first-law efficiency of the infrared burner improved with increasing glycerin addition. The emission levels of the pollutants produced by combustion (CO, NOx, SO2 and HC) met the environmental limits set by CONAMA Resolution No. 382/2006.
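As a rough illustration of the first-law (energy-balance) estimate described above, the sketch below computes efficiency as heat absorbed by the plate divided by fuel energy input; all mass flows, heating values and plate data are hypothetical stand-ins, not the paper's measurements.

```python
# Minimal sketch of a first-law efficiency estimate for a dual-fuel burner:
#   eta = useful heat rate to the plate / total fuel energy input rate.
# LHV values are typical literature figures; flows and plate data are illustrative.
LHV_LPG = 46.0e6                  # J/kg, typical lower heating value of LPG
LHV_GLY = 16.0e6                  # J/kg, typical lower heating value of glycerin
m_lpg, m_gly = 0.10e-3, 0.025e-3  # kg/s fuel mass flows (25% glycerin by mass of LPG)

c_p_al = 900.0                    # J/(kg K), specific heat of aluminum
m_plate, dT, dt = 2.0, 150.0, 120.0   # plate mass (kg), temperature rise (K), time (s)

q_plate = m_plate * c_p_al * dT / dt          # mean heat rate into the plate, W
q_fuel = m_lpg * LHV_LPG + m_gly * LHV_GLY    # fuel energy input rate, W
print(f"First-law efficiency: {q_plate / q_fuel:.1%}")
```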
Abstract:
Data replication is a mechanism to synchronize and integrate data between distributed databases over a computer network. It is an important tool in several situations, such as creating backup systems, load balancing between nodes, distributing information across locations, and integrating heterogeneous systems. Replication reduces network traffic because data remains available locally, even in the event of a temporary network failure. This dissertation describes the development of a generic application for database replication, released as open source software. The application allows data integration between various systems, with particular focus on the integration of heterogeneous data, data fragmentation, cascade replication, data format conversion between replicas, and master/slave and multi-master synchronization, and it is adaptable to a variety of situations.
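A minimal sketch of the general idea, assuming a simple change-log scheme with SQLite stand-ins; the actual application is generic and supports heterogeneous databases, so every table and function name here is hypothetical:

```python
# Minimal sketch of one-way (master -> replica) replication via a change-log table,
# illustrated with SQLite. Table and column names are hypothetical.
import sqlite3

master = sqlite3.connect(":memory:")
replica = sqlite3.connect(":memory:")
for db in (master, replica):
    db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
master.execute("CREATE TABLE change_log (id INTEGER, name TEXT)")

def insert_item(item_id, name):
    # Every write on the master is also recorded in the change log.
    master.execute("INSERT INTO items VALUES (?, ?)", (item_id, name))
    master.execute("INSERT INTO change_log VALUES (?, ?)", (item_id, name))

def replicate():
    # Push logged changes to the replica, then clear the log.
    for row in master.execute("SELECT id, name FROM change_log"):
        replica.execute("INSERT OR REPLACE INTO items VALUES (?, ?)", row)
    master.execute("DELETE FROM change_log")

insert_item(1, "alpha")
replicate()
print(replica.execute("SELECT * FROM items").fetchall())  # [(1, 'alpha')]
```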
Abstract:
Deep Neural Networks (DNNs) have revolutionized a wide range of applications beyond traditional machine learning and artificial intelligence fields, e.g., computer vision, healthcare, natural language processing and others. At the same time, edge devices have become central in our society, generating an unprecedented amount of data which could be used to train data-hungry models such as DNNs. However, the potentially sensitive or confidential nature of gathered data poses privacy concerns when storing and processing them in centralized locations. To this end, decentralized learning decouples model training from the need to directly access raw data, by alternating on-device training and periodic communication. The ability to distill knowledge from decentralized data, however, comes at the cost of more challenging learning settings: coping with heterogeneous hardware and network connectivity, statistical diversity of data, and ensuring verifiable privacy guarantees. This Thesis proposes an extensive overview of the decentralized learning literature, including a novel taxonomy and a detailed description of the most relevant system-level contributions for privacy, communication efficiency, data and system heterogeneity, and poisoning defense. Next, this Thesis presents the design of an original solution to tackle communication efficiency and system heterogeneity, and empirically evaluates it in federated settings. For communication efficiency, an original method, specifically designed for Convolutional Neural Networks, is also described and evaluated against the state of the art. Furthermore, this Thesis provides an in-depth review of recently proposed methods to tackle the performance degradation introduced by data heterogeneity, followed by empirical evaluations on challenging data distributions, highlighting strengths and possible weaknesses of the considered solutions. Finally, this Thesis presents a novel perspective on the use of Knowledge Distillation as a means of optimizing decentralized learning systems in settings characterized by data or system heterogeneity. A vision of relevant future research directions closes the manuscript.
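For readers unfamiliar with the baseline aggregation step such federated systems build on, here is a minimal federated-averaging (FedAvg) sketch; it is a textbook illustration, not one of the thesis's proposed methods:

```python
# Minimal sketch of federated averaging (FedAvg): the server averages client model
# weights, weighting each client by its number of local training samples.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Average a list of weight vectors, weighted by local dataset sizes."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 50, 50]
print(fedavg(clients, sizes))  # weighted mean of the client models: [2.5 3.5]
```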
Abstract:
In the near future, the LHC experiments will continue to be upgraded as the LHC luminosity increases from the design value of 10³⁴ cm⁻²s⁻¹ to 7.5 × 10³⁴ cm⁻²s⁻¹ with the HL-LHC project, to reach 3000 fb⁻¹ of accumulated statistics. After the end of a data-taking period, CERN will enter a long shutdown to improve overall performance by upgrading the experiments and implementing more advanced technologies and infrastructures. In particular, ATLAS will upgrade parts of the detector, the trigger, and the data acquisition system. It will also implement new strategies and algorithms for processing and transferring the data to final storage. This PhD thesis presents a study of a new pattern recognition algorithm for the trigger system, the software that provides the information necessary to select physical events from background data. The idea is to use the well-known Hough Transform as an algorithm for detecting particle trajectories. The effectiveness of the algorithm has already been validated in the past, independently of particle physics applications, for detecting generic shapes in images. Here, a software emulation tool is proposed for the hardware implementation of the Hough Transform, to reconstruct tracks in the ATLAS Trigger and Data Acquisition system. Until now, it has never been implemented in electronics in particle physics experiments, and a hardware implementation would provide overall latency benefits. A comparison between the simulated data and the physical system was performed on a Xilinx UltraScale+ FPGA device.
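A minimal sketch of the Hough transform idea named above, in its textbook line-detection form: each hit votes in (theta, rho) parameter space, and accumulator peaks mark track candidates. This illustrates the algorithm only; the thesis's emulation tool and FPGA implementation are not reproduced here.

```python
# Minimal sketch of the Hough transform for straight tracks: each hit (x, y) votes
# for the (theta, rho) parameters of every line through it; accumulator maxima
# correspond to track candidates. Binning and data are illustrative.
import numpy as np

def hough_lines(hits, n_theta=180, n_rho=200, rho_max=10.0):
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_rho), dtype=int)
    for x, y in hits:
        rho = x * np.cos(thetas) + y * np.sin(thetas)        # one rho per theta
        bins = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (bins >= 0) & (bins < n_rho)
        acc[np.arange(n_theta)[ok], bins[ok]] += 1           # cast votes
    return acc, thetas

# Hits lying on the line y = 0.5 * x (illustrative detector hits).
hits = [(x, 0.5 * x) for x in np.linspace(0.0, 4.0, 10)]
acc, thetas = hough_lines(hits)
i, j = np.unravel_index(acc.argmax(), acc.shape)
print(f"peak votes: {acc[i, j]}, theta = {np.degrees(thetas[i]):.1f} deg")
```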
Abstract:
Cosmic rays are a natural source of high-energy particles of galactic and extragalactic origin. The products of their interaction with the Earth's atmosphere reach the surface, where they are detected by particle physics experiments. This signal must therefore be identified and removed. Experimental setups in particle physics include input-signal selection systems (called triggers) that reject signals below a certain energy threshold. Advances in computing performance now make it possible to replace trigger electronics with software implementations (triggerless) able to select data according to more complex criteria. TriDAS (Triggerless Data Acquisition System) is a triggerless acquisition system developed for the KM3NeT experiment and recently used to manage data acquisition in beam-collision experiments at Jefferson Lab (Newport News, VA). The goal of this work is the definition of a selection algorithm for events generated by cosmic rays and its implementation as a software trigger within TriDAS. We also present software tools developed to build a test environment for this algorithm and to analyze the data produced by TriDAS.
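A minimal sketch of the kind of software trigger described above: a time-coincidence filter over time-stamped hits. The window, threshold and data are illustrative, not the TriDAS algorithm.

```python
# Minimal sketch of a software (triggerless-style) event selection: group time-stamped
# hits into coincidence windows and keep only groups with enough hits, as a
# cosmic-ray candidate filter. Thresholds and hits are illustrative.
def select_events(hits, window_ns=100.0, min_hits=3):
    """hits: time-sorted list of (timestamp_ns, channel). Returns candidate events."""
    events, current = [], []
    for t, ch in hits:
        if current and t - current[0][0] > window_ns:
            if len(current) >= min_hits:
                events.append(current)
            current = []
        current.append((t, ch))
    if len(current) >= min_hits:
        events.append(current)
    return events

hits = [(0.0, 1), (20.0, 2), (35.0, 3), (500.0, 1), (900.0, 2), (905.0, 3), (950.0, 4)]
print(select_events(hits))  # two candidates: the hits near t=0 and near t=900
```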
Abstract:
This research presents several components encompassing the objective of data partitioning and replication management in a distributed GIS database. Modern Geographic Information Systems (GIS) databases are often large and complicated, so data partitioning and replication management problems need to be addressed to develop an efficient and scalable solution. Part of the research studies the patterns of geographical raster data processing and proposes algorithms to improve the availability of such data. These algorithms and approaches target the granularity of geographic data objects as well as data partitioning in geographic databases, to achieve high data availability and Quality of Service (QoS) under distributed data delivery and processing. To achieve this goal, a dynamic, real-time approach is proposed for mosaicking digital images of different temporal and spatial characteristics into tiles. This dynamic approach reuses digital images on demand and generates mosaicked tiles only for the required region, according to user requirements such as resolution, temporal range, and target bands, to reduce storage redundancy and to utilize available computing and storage resources more efficiently. Another part of the research pursued methods for efficiently acquiring GIS data from external heterogeneous databases and Web services, as well as end-user GIS data delivery enhancements, automation, and 3D virtual reality presentation. Vast numbers of computing, network, and storage resources on the Internet are idle or not fully utilized. The proposed "Crawling Distributed Operating System" (CDOS) approach employs such resources and creates benefits for the hosts that lend their CPU, network, and storage resources to be used in a GIS database context. The results of this dissertation demonstrate effective ways to develop a highly scalable GIS database. The approach developed in this dissertation resulted in the creation of the TerraFly GIS database, which is used by the US government, researchers, and the general public to facilitate Web access to remotely sensed imagery and GIS vector information.
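As a minimal illustration of the on-demand tiling idea, the sketch below computes which fixed-size tiles intersect a requested bounding box, so that only those tiles need to be mosaicked; the tile scheme is hypothetical, not TerraFly's.

```python
# Minimal sketch of on-demand tiling: find the fixed-size tiles that overlap a
# requested bounding box. Tile size and the indexing scheme are illustrative.
import math

def tiles_for_bbox(lon_min, lat_min, lon_max, lat_max, tile_deg=1.0):
    """Return (col, row) indices of all tiles overlapping the bounding box."""
    c0, c1 = math.floor(lon_min / tile_deg), math.floor(lon_max / tile_deg)
    r0, r1 = math.floor(lat_min / tile_deg), math.floor(lat_max / tile_deg)
    return [(c, r) for c in range(c0, c1 + 1) for r in range(r0, r1 + 1)]

print(tiles_for_bbox(-80.4, 25.7, -80.1, 25.9))  # Miami-area request -> [(-81, 25)]
```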
Abstract:
High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives to the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations, and the validation of therapeutic targets and drug design. Such analysis, however, depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and produce a comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) the Submission module, which receives raw data derived from Sanger sequencing (e.g. two-hybrid system); (ii) the Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) the Annotation module, which assigns annotations from several databases to the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) the Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather newly identified interactions, protein and metabolite expression/concentration levels, subcellular localization, computed topological metrics, GO biological processes and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or visualized directly on the web. We developed IIS by integrating diverse databases, in response to the need for appropriate tools for the systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
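A minimal sketch of the Interactome module's final step: build a small attributed network, compute a topological metric, and export it for Cytoscape. IIS emits XGMML; since networkx has no XGMML writer, GraphML (also importable by Cytoscape) stands in here, and all node names are hypothetical.

```python
# Minimal sketch: assemble an interaction network with attributes, compute a simple
# topological metric, and export the graph for Cytoscape. Names are hypothetical.
import networkx as nx

g = nx.Graph()
g.add_node("YFG1", localization="nucleus")      # hypothetical protein/gene entries
g.add_node("YFG2", localization="cytoplasm")
g.add_edge("YFG1", "YFG2", evidence="two-hybrid")

print(nx.degree_centrality(g))                  # a simple topological metric
nx.write_graphml(g, "interactome.graphml")      # import this file into Cytoscape
```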
Abstract:
To assess the completeness and reliability of data from the Information System on Live Births (Sinasc), a cross-sectional analysis was performed using a sample of Live Birth Certificates (LBC) from 2009 relating to births in Campinas, Southeast Brazil. For the analysis, hospitals were grouped according to category of service (Unified National Health System, private, or both), 600 LBCs were randomly selected, and the data were collected in LBC copies through mothers' and newborns' hospital records and by telephone interviews. The completeness of the LBCs was evaluated by calculating the percentage of blank fields, and the agreement between original LBCs and copies was evaluated with Kappa and intraclass correlation coefficients. The completeness of the LBCs ranged from 99.8% to 100%. For most items the agreement was excellent. However, agreement was acceptable for marital status, maternal education and newborn's race/color, low for prenatal visits and presence of birth defects, and very low for the number of deceased children. The results showed that the municipal Sinasc is reliable for most of the studied variables. Investment in professional training is suggested to improve the system's capacity to support the planning and implementation of health actions for the benefit of the maternal and child population.
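The agreement statistic used here for categorical fields is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with illustrative labels, not the study's data:

```python
# Minimal sketch: Cohen's kappa between an original certificate field and the
# recollected copy of the same field. Labels are illustrative.
from sklearn.metrics import cohen_kappa_score

original = ["married", "single", "single", "married", "single"]
copy     = ["married", "single", "married", "married", "single"]
print(cohen_kappa_score(original, copy))
```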
Abstract:
Allele frequency distributions and population data for 12 Y-chromosomal short tandem repeats (STRs) included in the PowerPlex® Y System (Promega) were obtained for a sample of 200 healthy unrelated males living in São Paulo State (Southeast Brazil). A total of 192 haplotypes were identified, of which 184 were unique and 8 were found in 2 individuals each. The average gene diversity of the 12 Y-STRs was 0.6746 and the haplotype diversity was 0.9996. Pairwise analysis confirmed that our population is most similar to those of Italy, northern Portugal and Spain, and most distant from that of Japan.
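The diversity figures quoted above follow Nei's unbiased gene/haplotype diversity formula, H = n/(n−1) · (1 − Σ pᵢ²), where pᵢ is the frequency of allele or haplotype i. A minimal sketch with illustrative counts, not the published sample:

```python
# Minimal sketch: Nei's unbiased diversity H = n/(n-1) * (1 - sum(p_i^2)),
# applied to a list of haplotype labels. Counts are illustrative.
from collections import Counter

def haplotype_diversity(haplotypes):
    n = len(haplotypes)
    freqs = [c / n for c in Counter(haplotypes).values()]
    return n / (n - 1) * (1.0 - sum(p * p for p in freqs))

sample = ["H1", "H2", "H3", "H3", "H4", "H5"]  # illustrative haplotype labels
print(f"{haplotype_diversity(sample):.4f}")
```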
Abstract:
The use of computational fluid dynamics simulations for calibrating a flush air data system is described; in particular, the flush air data system of the HYFLEX hypersonic vehicle is used as a case study. The HYFLEX air data system consists of nine pressure ports located flush with the vehicle nose surface, connected to onboard pressure transducers. After appropriate processing, surface pressure measurements can be converted into useful air data parameters. The processing algorithm requires an accurate pressure model, which relates air data parameters to the measured pressures. In the past, such pressure models have been calibrated using combinations of flight data, ground-based experimental results, and numerical simulation. We perform a calibration of the HYFLEX flush air data system using computational fluid dynamics simulations exclusively. The simulations are used to build an empirical pressure model that accurately describes the HYFLEX nose pressure distribution over a range of flight conditions. We believe that computational fluid dynamics provides a quick and inexpensive way to calibrate the air data system and is applicable to a broad range of flight conditions. When tested with HYFLEX flight data, the calibrated system is found to work well. It predicts vehicle angle of attack and angle of sideslip to accuracy levels that generally satisfy flight control requirements. Dynamic pressure is predicted to within the resolution of the onboard inertial measurement unit. We find that wind-tunnel experiments and flight data are not necessary to accurately calibrate the HYFLEX flush air data system for hypersonic flight.
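A minimal sketch of the kind of pressure model and inversion involved, assuming the textbook modified Newtonian relation Cp = Cp_max · cos²θ at each port and a least-squares solve for angle of attack; the port geometry, Cp_max and pressures are illustrative, not the HYFLEX calibration.

```python
# Minimal sketch of FADS-style processing: model each port's pressure coefficient with
# the modified Newtonian relation Cp = Cp_max * cos^2(theta), where theta is the angle
# between the port normal and the freestream, then solve for angle of attack by
# least squares. Geometry and measurements are illustrative.
import numpy as np
from scipy.optimize import least_squares

port_angles = np.radians([0.0, 20.0, -20.0])  # port normals vs. nose axis, pitch plane
cp_max = 1.8                                  # illustrative stagnation-point Cp

def model(alpha):
    # Flow incidence at each port is the port angle minus the angle of attack.
    return cp_max * np.cos(port_angles - alpha) ** 2

alpha_true = np.radians(5.0)
measured = model(alpha_true) + np.random.default_rng(0).normal(0.0, 1e-3, 3)

fit = least_squares(lambda a: model(a[0]) - measured, x0=[0.0])
print(f"estimated angle of attack: {np.degrees(fit.x[0]):.2f} deg")
```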