886 results for Distributed File System
Abstract:
Graduate Program in Mechanical Engineering - FEG
Abstract:
Given the exponential growth of virus propagation across the World Wide Web and its increasing complexity, more sophisticated systems are needed for the extraction of malware fingerprints (a malware fingerprint is the unique information extracted from a malicious program that identifies it, analogous to a human fingerprint). The architecture and protocol proposed here aim to produce more effective fingerprints, using techniques that allow a single fingerprint to cover an entire group of viruses. This efficiency comes from a hybrid fingerprint-extraction approach that combines analysis of the code with analysis of the behavior of the sample. The main targets of the proposed system are polymorphic and metamorphic malware, given the difficulty of creating fingerprints that identify an entire family of such viruses; this difficulty stems from obfuscation techniques whose main objective is to defeat analysis by experts. The parameters chosen for the behavioral analysis are the file system, the Windows registry, RAM dumps, and API calls. For the code analysis, the objective is to divide the virus binary into blocks from which hashes can be extracted; this technique considers each instruction together with its neighborhood, which makes it accurate. In short, this information is used to predict and draw a profile of the virus's actions and then to create a fingerprint based on the degree of kinship between samples (a threshold), with the goal of increasing the ability to detect even viruses that do not belong to the same family.
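The block-hashing idea described above can be sketched as follows. This is a minimal illustration, not the proposed protocol: the block size, the choice of SHA-256, and the Jaccard-style kinship measure are all assumptions made for the example.

```python
import hashlib

def block_fingerprints(data: bytes, block_size: int = 64) -> set:
    """Divide a binary into blocks and hash each block together with
    its neighborhood, so each fingerprint reflects local context."""
    fps = set()
    for i in range(0, len(data), block_size):
        # include the preceding and following block in the hashed window
        window = data[max(0, i - block_size): i + 2 * block_size]
        fps.add(hashlib.sha256(window).hexdigest())
    return fps

def kinship(a: bytes, b: bytes, block_size: int = 64) -> float:
    """Degree of kinship between two samples as the Jaccard similarity
    of their fingerprint sets; samples scoring above a chosen threshold
    would be grouped into one family."""
    fa = block_fingerprints(a, block_size)
    fb = block_fingerprints(b, block_size)
    return len(fa & fb) / len(fa | fb) if fa | fb else 0.0
```

Hashing a block together with its neighborhood is what lets one fingerprint survive small local edits elsewhere in the binary while still being specific to this code region.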
Abstract:
This work presents a study of distributed generation using photovoltaic systems in the context of smart grids. The characteristics of a smart grid and the several aspects this concept involves - distributed generation among them - are discussed, with examples of equipment, such as smart meters, and of national and international projects. The specificities of distributed generation and the rules and standards that apply to this sort of installation are discussed, with a focus on solar energy generation. Regarding photovoltaic systems, the working principles of the panels are presented, along with their main electrical characteristics and the technologies available. Finally, the sizing of a distributed generation system based on photovoltaic panels in a residential plant is studied, and an analysis of the costs and the payback period is made for the specific case under consideration.
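The payback analysis mentioned above can be illustrated with a simple (undiscounted) calculation; all figures in the example are hypothetical, not taken from the study.

```python
def simple_payback_years(investment: float,
                         annual_generation_kwh: float,
                         tariff_per_kwh: float) -> float:
    """Simple payback period: upfront cost divided by the yearly
    value of the energy the photovoltaic system generates."""
    return investment / (annual_generation_kwh * tariff_per_kwh)

# Hypothetical residential system: $12,000 installed, generating
# 4,000 kWh/year against a $0.60/kWh tariff.
years = simple_payback_years(12000, 4000, 0.60)
```

A real sizing study would also discount future cash flows and account for panel degradation and tariff changes, which lengthen the payback period relative to this simple figure.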
Abstract:
Networked control systems (NCSs) are distributed control systems in which sensors, actuators and controllers are physically separated and connected through communication networks. NCSs represent the evolution of networked control architectures, providing greater modularity and control decentralization, easier maintenance and diagnosis, and lower implementation cost. A recent trend in this research topic is the development of NCSs over wireless networks (WNCSs), which enable interoperability between existing wired and wireless systems. This paper presents a feasibility analysis of using a serial-to-wireless converter as a wireless sensor link in an NCS. To support this investigation, performance metrics relevant to wireless control applications, such as jitter, time delay and lost messages, are highlighted and calculated to evaluate the wireless converter's capabilities. In addition, the control performance of a motor control system implemented with the converter is analyzed. Experimental results led to the conclusion that the serial ZigBee device is recommended over Bluetooth, as it provided better metrics for control applications. However, both devices can be used to implement WNCSs, providing transmission rates and closed-loop control times that are acceptable for NCS applications. Moreover, accounting for the wireless device delay in the PID controller discretization can improve the control performance of the system.
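The metrics named above (time delay, jitter, lost messages) can be computed from matched send/receive timestamps. A minimal sketch follows; the dict-of-timestamps interface and the definition of jitter as the standard deviation of delay are assumptions for the example.

```python
from statistics import mean, pstdev

def link_metrics(sent: dict, received: dict) -> dict:
    """Per-link metrics from send/receive timestamps keyed by
    message id: mean delay, jitter (std. dev. of delay), and the
    fraction of messages that never arrived."""
    delays = [received[m] - sent[m] for m in sent if m in received]
    lost = len(sent) - len(delays)
    return {
        "mean_delay": mean(delays),
        "jitter": pstdev(delays),
        "loss_ratio": lost / len(sent),
    }
```

In a WNCS experiment such metrics would be logged per closed control loop and compared against the sampling period the controller was designed for.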
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The aim of this study was to evaluate the efficacy of three rotary instrument systems (K3, Pro Taper and Twisted File) in removing calcium hydroxide residues from root canal walls. Thirty-four human mandibular incisors were instrumented with the Pro Taper System up to the F2 instrument, irrigated with 2.5% NaOCl followed by 17% EDTA, and filled with a calcium hydroxide intracanal dressing. After 7 days, the calcium hydroxide dressing was removed using the following rotary instruments: G1 - NiTi size 25, 0.06 taper, of the K3 System; G2 - NiTi F2, of the Pro Taper System; or G3 - NiTi size 25, 0.06 taper, of the Twisted File System. The teeth were longitudinally grooved on the buccal and lingual root surfaces, split along their long axis, and their apical and cervical canal thirds were evaluated by SEM (×1000). The images were scored and the data were statistically analyzed using the Kruskal-Wallis test. None of the instruments removed the calcium hydroxide dressing completely, either in the apical or cervical thirds, and no significant differences were observed among the rotary instruments tested (p > 0.05).
Abstract:
A mathematical model and numerical simulations are presented to investigate the dynamics of gas, oil and water flow in a pipeline-riser system. The pipeline is modeled as a lumped parameter system with two switchable states: one in which the gas is able to penetrate into the riser, and another in which a liquid accumulation front prevents the gas from penetrating the riser. The riser model considers a distributed parameter system, in which movable nodes are used to evaluate local conditions along the subsystem. Mass transfer effects are modeled using a black oil approximation. The model predicts the liquid penetration length in the pipeline and the liquid level in the riser, so it is possible to determine which type of severe slugging occurs in the system. The method of characteristics is used to simplify the differentiation of the resulting hyperbolic system of equations. The equations are discretized and integrated using an implicit method with a predictor-corrector scheme for the treatment of the nonlinearities. Simulations corresponding to severe slugging conditions are presented and compared to results obtained with the OLGA computer code, showing very good agreement. A description of the types of severe slugging for the three-phase flow of gas, oil and water in a pipeline-riser system with mass transfer effects is presented, as well as a stability map. (C) 2011 Elsevier Ltd. All rights reserved.
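The implicit predictor-corrector treatment of nonlinearities can be illustrated on a scalar ODE. This is a generic sketch (explicit Euler predictor, fixed-point backward-Euler corrector sweeps), not the actual discretization used in the paper.

```python
def implicit_step(f, t, y, dt, corrector_sweeps=3):
    """One backward-Euler step, y_new = y + dt * f(t + dt, y_new),
    solved by an explicit predictor followed by fixed-point corrector
    sweeps that handle the nonlinearity in f."""
    y_new = y + dt * f(t, y)              # predictor: explicit Euler
    for _ in range(corrector_sweeps):     # corrector: fixed-point updates
        y_new = y + dt * f(t + dt, y_new)
    return y_new
```

For dy/dt = -y with dt = 0.1, a few sweeps converge to the backward-Euler value y/(1 + dt), showing how the corrector resolves the implicit equation without a Newton solve.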
Abstract:
The main objective of this final degree project (TFG) was the creation of a distributed video management system using IP video-surveillance cameras. The proposal arose from the idea of offering simultaneous access, both online and offline, to the video streams generated by a network of IP cameras in a given environment. The result is an extensible software infrastructure that offers the user a set of network-camera features while abstracting away internal details. The work comprises three clearly differentiated elements: IP camera integration, video storage, and creation of the distributed video system. IP camera integration connects the computer to the network camera to obtain the image stream it transmits; this communication is established over HTTP (Hypertext Transfer Protocol) through the programming interface (API) these devices provide. The second element, video storage, saves the IP camera's images to video files, enabling later offline playback. Finally, the distributed video system allows simultaneous playback of multiple videos recorded by the IP camera network; videos recorded by other devices are also supported. The software developed has the potential to become a widely used free tool for IP cameras on UNIX systems, as well as the basis for future projects involving these devices.
Abstract:
In the development of computer systems, numerous technologies have become established that must be used in combination and, ideally, synergistically. On the one hand, relational database management systems allow efficient and effective management of persistent, shared, transactional data. On the other hand, object-oriented tools and methods (programming languages, but also analysis and design methodologies) allow effective development of application logic. In this context it is useful to explain what is meant by information system and computer system. Information system: the set of people, technological resources and business procedures whose task is to produce and preserve the information needed to operate and manage the enterprise. Computer system: the set of computing tools used for the automatic processing of information, in order to support the functions of the information system. That is, the computer system collects, processes, stores and exchanges information using information and communication technologies (ICT): computers, peripherals, communication media, programs. The computer system is therefore a component of the information system. The information obtained by processing data must be saved somewhere so that it lasts beyond the processing itself, and this is where computing comes to the rescue. Data is raw informational material, not (yet) processed by its recipient, and can be discovered, searched, collected and produced; it is the raw material we have available, or produce, to build our communication processes. A company's data is its treasure and represents its evolutionary history.
At the beginning of this introduction it was mentioned that several technologies have become established in the development of computer systems and that, in particular, the use of relational database management systems provides effective and efficient management of persistent data. In computing, data persistence is the property of data to outlive the execution of the program that created it; otherwise the data would exist only in RAM and be lost when the computer is switched off. In programming, persistence means the ability of data structures to survive the execution of a single program, which requires saving them to a non-volatile storage device, for example a file system or a database. In this thesis a system was developed that can manage a hierarchical or relational database, allowing the import of data described by a DTD grammar. Chapter 1 examines in more detail what is meant by information system, the client-server model, and data security. Chapter 2 discusses the Java programming language, databases, and XML files. Chapter 3 describes the UML analysis and modeling language with explicit reference to the developed project. Chapter 4 describes the project that was implemented and the technologies and tools used.
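The DTD-driven import described above can be sketched in miniature: flattening XML records into rows for a relational table. This is an illustrative Python fragment (the thesis project itself uses Java), and the tag names are invented for the example.

```python
import xml.etree.ElementTree as ET

def rows_from_xml(xml_text: str, record_tag: str) -> list:
    """Flatten each record element (whose structure a DTD would
    describe) into a dict of column -> value, ready for insertion
    into a relational table."""
    root = ET.fromstring(xml_text)
    return [{field.tag: field.text for field in record}
            for record in root.iter(record_tag)]
```

In the real system the DTD would additionally be used to validate the document and to derive the target table schema before any rows are inserted.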
Abstract:
Communication and coordination are two key aspects of open distributed agent systems, both being responsible for the integrity of the system's behaviour. An infrastructure capable of handling these issues, such as TuCSoN, should be able to exploit the modern technologies and tools provided by fast-moving software engineering contexts. This thesis aims to demonstrate the TuCSoN infrastructure's ability to cope with the new possibilities, in hardware and software, offered by mobile technology. The scenarios to be configured relate to the distributed nature of multi-agent systems, where an agent may be located and run directly on a mobile device. We address the new frontiers of mobile technology represented by smartphones running Google's Android operating system. The analysis and deployment of such a distributed agent-based system first confront qualitative and quantitative considerations about the available resources. The engineering issue at the base of our research is to run TuCSoN within the reduced memory and computing capability of a smartphone, without loss of functionality, efficiency, or integrity for the infrastructure. The thesis proceeds on two fronts simultaneously: the former is the rationalization of the available hardware and software resources; the latter, entirely orthogonal, is the adaptation and optimization of the TuCSoN architecture for an ad hoc client-side release.
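The coordination abstraction at the heart of TuCSoN is the tuple centre; a bare-bones Linda-style tuple space conveys the core idea, although this toy version omits everything that makes TuCSoN distinctive (logic tuples, ReSpecT reactions, distribution).

```python
class TupleSpace:
    """Toy Linda-style tuple space: 'out' inserts a tuple, 'inp'
    withdraws the first tuple matching a pattern (None = wildcard)."""
    def __init__(self):
        self._tuples = []

    def out(self, tup: tuple) -> None:
        self._tuples.append(tup)

    def inp(self, pattern: tuple):
        for i, t in enumerate(self._tuples):
            if len(t) == len(pattern) and all(
                    p is None or p == v for p, v in zip(pattern, t)):
                return self._tuples.pop(i)  # withdraw, not just read
        return None
```

Because agents interact only through the shared space, the same agent code can run on a server or on a resource-constrained smartphone, which is exactly the portability concern the thesis investigates.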
Abstract:
Data deduplication describes a class of approaches that reduce the storage capacity needed to store data or the amount of data that has to be transferred over a network. These approaches detect coarse-grained redundancies within a data set, e.g. a file system, and remove them.

One of the most important applications of data deduplication are backup storage systems, where these approaches are able to reduce the storage requirements to a small fraction of the logical backup data size. This thesis introduces multiple new extensions of so-called fingerprinting-based data deduplication. It starts with the presentation of a novel system design, which allows using a cluster of servers to perform exact data deduplication with small chunks in a scalable way.

Afterwards, a combination of compression approaches for an important, but often overlooked, data structure in data deduplication systems, the so-called block and file recipes, is introduced. Using these compression approaches, which exploit unique properties of data deduplication systems, the size of these recipes can be reduced by more than 92% in all investigated data sets. As file recipes can occupy a significant fraction of the overall storage capacity of data deduplication systems, the compression enables significant savings.

A technique to increase the write throughput of data deduplication systems, based on the aforementioned block and file recipes, is introduced next. The novel Block Locality Caching (BLC) uses properties of block and file recipes to overcome the chunk-lookup disk bottleneck of data deduplication systems, which limits either their scalability or their throughput. The presented BLC overcomes the disk bottleneck more efficiently than existing approaches. Furthermore, it is shown to be less prone to aging effects.

Finally, it is investigated whether large HPC storage systems exhibit redundancies that can be found by fingerprinting-based data deduplication. Over 3 PB of HPC storage data from different data sets have been analyzed; in most data sets, between 20 and 30% of the data can be classified as redundant. According to these results, future work should further investigate how data deduplication can be integrated into future HPC storage systems.

This thesis presents important novel work in different areas of data deduplication research.
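Fingerprinting-based deduplication with file recipes can be sketched in a few lines. This toy version uses fixed-size chunks and an in-memory index, whereas real systems use content-defined chunking and on-disk indexes (hence the chunk-lookup disk bottleneck discussed above).

```python
import hashlib

class DedupStore:
    """Minimal fingerprinting-based deduplication: each unique chunk
    is stored once under its SHA-256 fingerprint; a file 'recipe' is
    the ordered list of fingerprints needed to rebuild the file."""
    def __init__(self, chunk_size: int = 4096):
        self.chunk_size = chunk_size
        self.chunks = {}                       # fingerprint -> chunk bytes

    def write(self, data: bytes) -> list:
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)  # store only unseen chunks
            recipe.append(fp)
        return recipe

    def read(self, recipe: list) -> bytes:
        return b"".join(self.chunks[fp] for fp in recipe)
```

Writing the same backup twice stores its chunks only once; only the (much smaller) recipe grows with each backup generation, which is why the recipe compression described above yields significant savings.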
Abstract:
BACKGROUND: Several adverse consequences are caused by mild perioperative hypothermia. Maintaining normothermia with patient warming systems, today mostly with forced air (FA), has thus become a standard procedure during anesthesia. Recently, a polymer-based resistive patient warming system was developed. We compared the efficacy of a widely distributed FA system with the resistive-polymer (RP) system in a prospective, randomized clinical study. METHODS: Eighty patients scheduled for orthopedic surgery were randomized to either FA warming (Bair Hugger warming blanket #522 and blower #750, Arizant, Eden Prairie, MN) or RP warming (Hot Dog Multi-Position Blanket and Hot Dog controller, Augustine Biomedical, Eden Prairie, MN). Core temperature, skin temperature (head, upper and lower arm, chest, abdomen, back, thigh, and calf), and room temperature (general and near the patient) were recorded continuously. RESULTS: After an initial decrease, core temperatures increased in both groups at comparable rates (FA: 0.33 ± 0.34 °C/h; RP: 0.29 ± 0.35 °C/h; P = 0.6). There was also no difference in the course of mean skin and mean body (core) temperature. FA warming increased the temperature of the environment close to the patient (the workplace of anesthesiologists and surgeons) more than RP warming (24.4 ± 5.2 °C for FA vs 22.6 ± 1.9 °C for RP at 30 minutes; P(AUC) < 0.01). CONCLUSION: RP warming performed as efficiently as FA warming in patients undergoing orthopedic surgery.
Abstract:
This thesis is composed of three life-cycle analysis (LCA) studies of manufacturing to determine cumulative energy demand (CED) and greenhouse gas emissions (GHG). The methods proposed could reduce the environmental impact of three manufacturing processes by reducing their CED. First, industrial symbiosis is proposed and an LCA is performed on both conventional 1 GW-scaled hydrogenated amorphous silicon (a-Si:H)-based single junction and a-Si:H/microcrystalline-Si:H tandem cell solar PV manufacturing plants and on such plants coupled to silane recycling plants. Using a recycling process with a silane loss of only 17 percent, versus 85 percent without recycling, results in CED savings of 81,700 GJ and 290,000 GJ per year for single and tandem junction plants, respectively. This recycling process reduces the cost of raw silane by 68 percent, or approximately $22.6 and $79 million per year for single and tandem 1 GW PV production facilities, respectively. The results show the environmental benefits of silane recycling centered around a-Si:H-based PV manufacturing plants. Second, an open-source self-replicating rapid prototyper or 3-D printer, the RepRap, has the potential to reduce the environmental impact of manufacturing polymer-based products using a distributed manufacturing paradigm, an impact further minimized by the use of PV and by improvements in PV manufacturing. Using 3-D printers for manufacturing provides the ability to ultra-customize products and to change fill composition, which increases material efficiency. An LCA was performed on three polymer-based products to determine the CED and GHG of conventional large-scale production, compared to experimental measurements on a RepRap producing identical products with ABS and PLA. The results of this LCA study indicate that the CED of manufacturing polymer products can possibly be reduced using distributed manufacturing with existing 3-D printers at under 89% fill, and reduced even further with a solar photovoltaic system.
The results indicate that the ability of RepRaps to vary fill has the potential to diminish the environmental impact of many products. Third, one additional way to improve the environmental performance of this distributed manufacturing system is to create the polymer filament feedstock for 3-D printers from post-consumer plastic bottles. An LCA was performed on the recycling of high density polyethylene (HDPE) using the RecycleBot. The results of the LCA showed that distributed recycling has a lower CED than the best-case scenario used for centralized recycling. If this process were applied to the HDPE currently recycled in the U.S., more than 100 million MJ of energy could be conserved per annum, along with significant reductions in GHG. This presents a novel path to a future of distributed manufacturing suited for both the developed and developing world, with reduced environmental impact. From improving manufacturing in the photovoltaic industry with the use of recycling, to recycling and manufacturing plastic products within our own homes, each step reduces the impact on the environment. The three coupled projects presented here show a clear potential to reduce the environmental impact of manufacturing and other processes by implementing complementing systems, which have environmental benefits of their own, in order to achieve a compounding effect of reduced CED and GHG.
Abstract:
The University of Maine Ice Sheet Model was used to study basal conditions during retreat of the Laurentide ice sheet in Maine. Within 150 km of the margin, basal melt rates average ~5 mm a⁻¹ during retreat. They decline over the next 100 km, so areas of frozen bed develop in northern Maine during retreat. By integrating the melt rate over the drainage area typically subtended by an esker, we obtained a discharge at the margin of ~1.2 m³ s⁻¹. While such a discharge could have moved the material in the Katahdin esker, it was likely too low to build the esker in the time available. Additional water from the glacier surface was required. Temperature gradients in the basal ice increase rapidly with distance from the margin. By conducting upward into the ice all of the additional viscous heat produced by any perturbation that increases the depth of flow in a flat conduit in a distributed drainage system, these gradients inhibit the formation of sharply arched conduits in which an esker can form. This may explain why eskers commonly seem to form near the margin and are typically segmented, with later segments overlapping onto earlier ones.
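The discharge figure can be reproduced by integrating the quoted melt rate over a drainage area; the ~7,600 km² area used below is back-calculated for illustration, not a value taken from the paper.

```python
SECONDS_PER_YEAR = 3.156e7

def esker_discharge_m3s(melt_rate_mm_per_year: float, area_km2: float) -> float:
    """Meltwater discharge (m^3/s) from a mean basal melt rate
    integrated over the drainage area subtended by an esker."""
    melt_m_per_s = melt_rate_mm_per_year / 1000.0 / SECONDS_PER_YEAR
    return melt_m_per_s * area_km2 * 1e6  # convert km^2 to m^2
```

With a 5 mm a⁻¹ melt rate, an area of roughly 7,600 km² yields the ~1.2 m³ s⁻¹ margin discharge quoted above.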
Abstract:
This paper describes an ArcView extension that allows police planners to design patrol districts and to evaluate them by displaying various performance measures. It uses a spatially distributed queuing system (the Larson Hypercube) to calculate expected travel times, workloads, preventive patrol frequencies, and other variables; and it allows planners to see the unavoidable tradeoffs among their objectives. Using this tool, planners can experiment with various patrol patterns to find those that best meet their Department's goals. For example, those patrol patterns which are best in terms of average response time don't do as well as others in terms of workload balance, or those that are best in terms of achieving a uniform response time across different parts of the city don't do as well as others in terms of minimizing inter-district dispatches. There is, of course, no perfect solution for this problem: the facts of the situation force us to balance competing goals. Described here is a way of explicitly weighting the alternative objectives.