989 results for data flow diagram
Abstract:
Since the deregulation of the electricity market, the energy sector has seen ever-growing demand for advanced information systems specialized in energy data management. New legislation and future comprehensive data collection systems, such as smart meters and smart grids, bring with them an ever-larger stream of data to be processed. A modern energy information system must be able to meet this challenge and serve customer requirements efficiently without degrading process performance. The system's processes must also be scalable so that future increases in processing needs can be managed. This thesis describes the key components of a modern energy information system related to the management and storage of energy data. It also presents the basic principle of smart meters and the benefits they bring to the energy sector, and outlines visions of how future smart grids could be implemented. The thesis introduces the central concepts related to performance and describes the key performance metrics and performance requirements. There are several methods for evaluating system performance; one of them is described here together with its central principles. Various techniques are used for performance analysis, and of these, system measurement is presented in more detail. The thesis also includes a case study in which two development versions of the process used for importing measurement data are analyzed and their performance characteristics compared. The comparison shows that the new version is clearly faster than the previous one. The case study also determines the number of parallel processes that is optimal for performance and examines the scalability of the process, concluding that the new development version scales linearly.
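The case study compares two development versions of the measurement data import process, determines the optimal number of parallel processes, and checks whether the process scales linearly. A minimal sketch of that kind of speedup and efficiency calculation, using invented timings rather than figures from the thesis:

    # Hypothetical wall-clock times (seconds) for importing the same batch of
    # measurement data with 1, 2, 4 and 8 parallel import processes.
    times = {1: 1200.0, 2: 620.0, 4: 330.0, 8: 175.0}

    baseline = times[1]
    for n in sorted(times):
        speedup = baseline / times[n]      # S(n) = T(1) / T(n)
        efficiency = speedup / n           # E(n) close to 1.0 means near-linear scaling
        print(f"{n} process(es): speedup {speedup:.2f}x, efficiency {efficiency:.2f}")

    # A practical choice for the "optimal" process count is the largest n whose
    # efficiency is still close to 1.0; beyond that, extra processes give
    # diminishing returns.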
Abstract:
This thesis examines the hydrogen network of the Porvoo oil refinery and considers ways in which hydrogen use at the refinery could be made more efficient and the amount of hydrogen routed to the fuel gas network reduced. The starting point of the study is a hydrogen pinch analysis based on the hydrogen balance. The literature part presents the units belonging to the refinery's hydrogen network and briefly describes their operation. It also covers the principle of hydrogen pinch analysis and how real process constraints can be taken into account when carrying it out. The literature part ends with a description of how the stepwise optimization of a hydrogen network proceeds. In the applied part of the work, a flow diagram of the hydrogen network was drawn up, giving a comprehensive picture of hydrogen distribution at the refinery. A simplified version of the flow diagram was prepared, and a hydrogen balance was compiled on its basis. A hydrogen pinch analysis performed on this balance showed that, at the time of the balance, the refinery produced hydrogen in excess. To make hydrogen use more efficient, the fuel gas stream from hydrogen sulfide recovery unit 2 should be minimized or utilized. In addition, the molecular masses used at the design points of the flow meters should be updated to better correspond to the current operating situation and monitored regularly from now on. The calibration of the online analyzers measuring hydrogen content must also be maintained, and a sufficient number of field samples should be taken from the hydrogen network. It should be noted that minimizing hydrogen production at an oil refinery is not always automatically the most economical solution. In some cases, raising the hydrogen partial pressure in the reactor of a hydrogen-consuming unit can increase the unit's productivity enough to compensate for the costs of the increased hydrogen production.
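The pinch analysis builds on a hydrogen balance of the network: how much hydrogen the producing units supply, how much the consuming units need, and how much surplus ends up in the fuel gas network. A minimal sketch of such a balance, with invented flows and purities rather than the refinery's data:

    # Illustrative hydrogen balance for a refinery hydrogen network.
    # Flows in Nm3/h, purities as mole fraction H2 -- invented numbers,
    # not data from the Porvoo refinery.
    sources = {            # producers: (total gas flow, H2 purity)
        "hydrogen_unit":   (60000, 0.99),
        "reformer_offgas": (25000, 0.80),
    }
    sinks = {              # consumers: (total gas flow, required H2 purity)
        "hydrotreater_1":  (40000, 0.95),
        "hydrotreater_2":  (30000, 0.90),
    }

    h2_produced = sum(f * y for f, y in sources.values())
    h2_consumed = sum(f * y for f, y in sinks.values())
    surplus = h2_produced - h2_consumed   # positive => excess H2, typically lost to fuel gas

    print(f"H2 produced: {h2_produced:.0f} Nm3/h")
    print(f"H2 consumed: {h2_consumed:.0f} Nm3/h")
    print(f"Surplus to fuel gas: {surplus:.0f} Nm3/h")

The full pinch analysis additionally orders sources and sinks by purity to locate the pinch; the balance above is only the starting point described in the abstract.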
Abstract:
This work studies data transmission with different modulations, bit rates and amplitude levels, and the results are examined by means of the Bit Error Ratio. Signals were also transmitted in coded form, and the advantages and disadvantages of coding were compared against uncoded data. The data stream travels in an AXMK cable, either alongside the DC supply or in the grounding conductor. The results showed that a higher bit rate did not increase the losses, while the use of coding, on the other hand, reduced the number of bit errors.
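The Bit Error Ratio used to compare the transmission setups is simply the fraction of received bits that differ from the transmitted bits. A minimal sketch of how it is computed, using an invented noisy binary channel rather than the measured AXMK-cable data:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical BPSK-style transmission over an additive-noise channel,
    # shown only to illustrate how the Bit Error Ratio (BER) is computed.
    n_bits = 100_000
    tx_bits = rng.integers(0, 2, n_bits)          # transmitted bits
    symbols = 2 * tx_bits - 1                     # map {0, 1} -> {-1, +1}
    noise = rng.normal(0.0, 0.5, n_bits)          # channel noise
    rx_bits = (symbols + noise > 0).astype(int)   # hard-decision receiver

    ber = np.count_nonzero(tx_bits != rx_bits) / n_bits
    print(f"BER = {ber:.2e}")                     # errored bits / total bits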
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
The number of security violations is increasing, and a security breach could have irreversible impacts on a business. There are several ways to improve an organization's security, but some of them may be difficult to comprehend. This thesis demystifies threat modeling as part of secure system development. Threat modeling enables developers to reveal previously undetected security issues in computer systems. It offers organizations a structured approach for finding and addressing threats against vulnerabilities. When implemented correctly, threat modeling reduces the number of defects and malicious attempts against the target environment. In this thesis the Microsoft Security Development Lifecycle (SDL) is introduced as an effective methodology for reducing defects in the target system. SDL is traditionally meant to be used in software development, but its principles can be partially adapted to IT-infrastructure development. Microsoft's threat modeling methodology is an important part of SDL, and it is utilized in this thesis to find threats in the Acme Corporation's factory environment. Acme Corporation is used as a pseudonym for a company providing high-technology consumer electronics. The target for threat modeling is the IT infrastructure of the factory's manufacturing execution system. Microsoft's threat modeling methodology utilizes the STRIDE mnemonic and data flow diagrams to find threats. Threat modeling in this thesis returned results that were important for the organization. Acme Corporation now has a more comprehensive understanding of the IT infrastructure of its manufacturing execution system. In addition to the vulnerability-related results, threat modeling provided coherent views of the target system. Subject matter experts from different areas can now agree upon the functions and dependencies of the target system. Threat modeling was recognized as a useful activity for improving security.
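As a rough illustration of how STRIDE is applied to data flow diagram elements in Microsoft-style threat modeling, the sketch below enumerates candidate threat categories per element type. The mapping follows the commonly published STRIDE-per-element table, and the example elements are invented; they are not taken from the Acme Corporation model.

    # Illustrative STRIDE-per-element mapping over data flow diagram (DFD) elements.
    STRIDE = {
        "S": "Spoofing",
        "T": "Tampering",
        "R": "Repudiation",
        "I": "Information disclosure",
        "D": "Denial of service",
        "E": "Elevation of privilege",
    }

    # STRIDE categories typically considered for each DFD element type.
    STRIDE_PER_ELEMENT = {
        "external_entity": ["S", "R"],
        "process":         ["S", "T", "R", "I", "D", "E"],
        "data_store":      ["T", "R", "I", "D"],
        "data_flow":       ["T", "I", "D"],
    }

    def enumerate_threats(dfd_elements):
        """Yield (element name, threat category) pairs for a list of
        (name, element_type) tuples describing a DFD."""
        for name, element_type in dfd_elements:
            for code in STRIDE_PER_ELEMENT[element_type]:
                yield name, STRIDE[code]

    # Hypothetical fragment of a manufacturing-execution-system DFD.
    dfd = [
        ("Operator workstation", "external_entity"),
        ("MES application",      "process"),
        ("Production database",  "data_store"),
        ("MES <-> DB traffic",   "data_flow"),
    ]

    for element, threat in enumerate_threats(dfd):
        print(f"{element}: consider {threat}")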
Abstract:
On-chip multiprocessor (OCM) systems are considered the best structures for occupying the space available on today's integrated circuits. In this work we are interested in an architectural model, called the isometric on-chip multiprocessor architecture, which makes it possible to evaluate, predict and optimize OCM systems by relying on an efficient organization of the nodes (processors and memories), and in methodologies that allow these architectures to be used efficiently. In the first part of the thesis we focus on the topology of the model and propose an architecture that allows on-chip memories to be used efficiently and massively. The processors and memories are organized according to an isometric approach that consists of bringing data closer to the processes, rather than optimizing transfers between processors and memories laid out in the conventional way. The architecture is a three-dimensional mesh model. The placement of units in this model is inspired by the crystal structure of sodium chloride (NaCl), in which each processor can access six memories at once and each memory can communicate with as many processors at once. In the second part of our work we focus on a decomposition methodology in which the number of nodes in the model is ideal and can be determined from a matrix specification of the application processed by the proposed model. Knowing that a model's performance depends on the amount of data flow exchanged between its units, and hence on their number, and since our goal is to guarantee good computing performance for the application being processed, we propose to find the ideal number of processors and memories for the system to be built. We also consider the decomposition of the specification of the model to be built, or of the application to be processed, with respect to the load balance of the units. We therefore propose a decomposition approach based on three points: the transformation of the specification or of the application into an incidence matrix whose elements are the data flows between processes and data, a new methodology based on the Cell Formation Problem (CFP), and load balancing of processes across processors and of data across memories. In the third part, still with the aim of designing an efficient and high-performance system, we address the assignment of processors and memories through a two-step methodology. First, we assign units to the nodes of the system, considered here as an undirected graph, and second, we assign values to the edges of this graph. For the assignment, we propose modeling the decomposed applications using a matrix approach and the Quadratic Assignment Problem (QAP). For assigning values to the edges, we propose a gradual perturbation approach in order to find the best combination of assignment cost, while respecting parameters such as temperature, heat dissipation, energy consumption and the area occupied by the chip.
The ultimate goal of this work is to offer architects of on-chip multiprocessor systems a non-traditional methodology and a systematic and effective design-support tool, usable from the functional specification phase of the system onwards.
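As an illustration of the matrix-based decomposition and assignment described above, the sketch below builds a small incidence matrix of data flows between processes and data blocks and evaluates the QAP-style communication cost of one placement of units onto mesh nodes. All names and numbers are invented for illustration.

    import numpy as np

    # Illustrative incidence matrix: rows are processes, columns are data blocks,
    # entries are the data flow (e.g., words exchanged) between them.
    flow = np.array([
        [40,  0, 10],   # process P0
        [ 0, 25,  5],   # process P1
        [15,  0, 30],   # process P2
    ])

    # QAP-style evaluation of one placement: each unit is assigned to a node of a
    # 3-D mesh, and the cost weights every flow by the distance between the nodes
    # hosting the two units involved.
    proc_node = {0: (0, 0, 0), 1: (1, 0, 0), 2: (0, 1, 0)}   # process -> mesh node
    mem_node  = {0: (0, 0, 1), 1: (1, 0, 1), 2: (0, 1, 1)}   # data block -> mesh node

    def manhattan(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    cost = sum(flow[p, d] * manhattan(proc_node[p], mem_node[d])
               for p in range(flow.shape[0])
               for d in range(flow.shape[1]))
    print("communication cost of this placement:", cost)
    # A QAP solver, or the gradual-perturbation search mentioned above, would
    # search over placements to minimize this cost.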
Abstract:
Introduction: noise-induced hearing loss is the most commonly studied harmful effect of noise; however, noise also produces digestive and sleep disorders, changes in cortisol levels, cardiovascular effects and arterial hypertension (HTN), among others. Objective: to determine whether occupational exposure to noise induces arterial hypertension. Materials and methods: the recommendations of the PRISMA method for systematic reviews were followed. A search for studies was carried out in PUBMED using the terms "occupational and noise and hypertension" and applying the following filters: 1) publications between 2005 and 2015; 2) studies published in English; 3) review of titles and abstracts; 4) full-text review, for a final total of 32 studies. All studies were reviewed, analyzed and summarized. Results: the studies concluded that carriers of angiotensin-converting enzyme polymorphisms who were exposed to noise had a greater genetic susceptibility to developing HTN. The studies reported a positive association between noise and HTN. There is controversy about the relationship between HTN, noise and co-exposure to other factors such as heat, shift work, the presence of solvents or lead in the workplace, and physical workload. Conclusions: it is assumed that only noise levels ≥ 85 dBA have negative effects on health, but it has been found that the non-auditory effects of noise occur below this threshold. Recommendations: the use of noise-induced hearing loss among the working population is suggested as a screening method to detect prehypertensive individuals, in order to prevent the development of HTN.
Abstract:
The proposal presented in this thesis is to provide designers of knowledge-based supervisory systems for dynamic systems with a framework that facilitates their tasks and avoids interface problems among tools, data flow and management. The approach is intended to assist both control and process engineers in their tasks. The use of AI technologies to diagnose and perform control loops and, of course, to assist process supervisory tasks such as fault detection and diagnosis, is within the scope of this work. Special effort has been put into the integration of tools for assisting the design of expert supervisory systems. With this aim, the experience of Computer Aided Control Systems Design (CACSD) frameworks has been analysed and used to design a Computer Aided Supervisory Systems (CASSD) framework. In this sense, some basic facilities are required to be available in the proposed framework.
Abstract:
Tungsten carbide/oxide particles have been prepared by the gel precipitation of tungstic acid in the presence of an organic gelling agent [10% ammonium poly(acrylic acid) in water, supplied by Ciba Specialty Chemicals]. The feed solution, a homogeneous mixture of sodium tungstate and ammonium poly(acrylic acid) in water, was dropped from a 1-mm jet into hydrochloric-acid-saturated hexanol/concentrated hydrochloric acid to give particles of a mixture of tungstic acid and poly(acrylic acid), which, after drying in air at 100 °C and heating to 900 °C in argon for 2 h, followed by heating in carbon dioxide for a further 2 h and cooling, gives a mixture of WO, WC, and a trace of NaxWO3, with the carbon for the formation of WC being provided by the thermal carbonization of poly(acrylic acid). The pyrolyzed product is friable and easily broken down in a pestle and mortar to a fine powder, or by ultrasonics in water to form a stable colloid. The temperature of carbide formation by this process (900 °C) is significantly lower than that reported for the commercial preparation of tungsten carbide, typically >1400 °C. In addition, the need for prolonged grinding of the constituents is obviated because the reacting moieties are already in intimate contact on a molecular basis. X-ray diffraction, particle sizing, transmission electron microscopy, surface area, and pore size distribution studies have been carried out, and possible uses are suggested. A flow diagram for the process is described.
Abstract:
We describe the public ESO near-IR variability survey (VVV) scanning the Milky Way bulge and an adjacent section of the mid-plane where star formation activity is high. The survey will take 1929 h of observations with the 4-m VISTA telescope during 5 years (2010-2014), covering ~10⁹ point sources across an area of 520 deg², including 33 known globular clusters and ~350 open clusters. The final product will be a deep near-IR atlas in five passbands (0.9-2.5 μm) and a catalogue of more than 10⁶ variable point sources. Unlike single-epoch surveys that, in most cases, only produce 2-D maps, the VVV variable star survey will enable the construction of a 3-D map of the surveyed region using well-understood distance indicators such as RR Lyrae stars and Cepheids. It will yield important information on the ages of the populations. The observations will be combined with data from MACHO, OGLE, EROS, VST, Spitzer, HST, Chandra, INTEGRAL, WISE, Fermi LAT, XMM-Newton, GAIA and ALMA for a complete understanding of the variable sources in the inner Milky Way. This public survey will provide data available to the whole community and therefore will enable further studies of the history of the Milky Way, its globular cluster evolution, and the population census of the Galactic Bulge and center, as well as investigations of the star-forming regions in the disk. The combined variable star catalogues will have important implications for theoretical investigations of the pulsation properties of stars.
Abstract:
Single-page applications have historically been subject to strong market forces driving fast development and deployment at the expense of quality control and changeable code, which are important factors for maintainability. In this report we develop two functionally equivalent applications using AngularJS and React and compare their maintainability as defined by ISO/IEC 9126. AngularJS and React represent two distinct approaches to web development, with AngularJS being a general framework providing rich base functionality and React a small specialized library for efficient view rendering. The quality comparison was accomplished by calculating the Maintainability Index for each application. Version control analysis was used to determine quality indicators during development and subsequent maintenance, where new functionality was added in two steps. The results show no major differences in maintainability between the initial applications. As more functionality is added, the Maintainability Index decreases faster in the AngularJS application, indicating a steeper increase in complexity compared to the React application. Source code analysis reveals that changes in data flow require significantly larger modifications of the AngularJS application due to its inherent architecture for data flow. We conclude that frameworks are useful when they facilitate development of known requirements but less so when applications and systems grow in size.
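The Maintainability Index exists in several variants; one widely used formulation combines Halstead volume, cyclomatic complexity and lines of code. The sketch below shows that variant purely as an illustration, since the abstract does not state which variant the report uses.

    import math

    def maintainability_index(halstead_volume, cyclomatic_complexity, lines_of_code):
        """One common Maintainability Index variant (Oman & Hagemeister);
        shown as an illustration of how such an index is derived, not as the
        report's exact method."""
        return (171
                - 5.2 * math.log(halstead_volume)
                - 0.23 * cyclomatic_complexity
                - 16.2 * math.log(lines_of_code))

    # Hypothetical per-module metrics for two applications; lower values indicate
    # code that is harder to maintain.
    print(maintainability_index(halstead_volume=1200, cyclomatic_complexity=14, lines_of_code=350))
    print(maintainability_index(halstead_volume=900,  cyclomatic_complexity=9,  lines_of_code=280))

Tracking how this index changes across versions in the repository history is what allows maintainability trends to be compared as functionality is added.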
Abstract:
In the last decades, the oil, gas and petrochemical industries have registered a series of huge accidents. Influenced by this context, companies have felt the need to engage in processes to protect the external environment, which can be understood as an ecological concern. In the particular case of the nuclear industry, sustainable education and training, which depend heavily on the quality and applicability of the knowledge base, have been considered key points in the safe application of this energy source. As a consequence, this research was motivated by the use of the ontology concept as a tool to improve knowledge management in a refinery, through the representation of a fuel gas sweetening plant, combining many pieces of information associated with its normal operation mode. In terms of methodology, this research can be classified as applied and descriptive, in which many pieces of information were analysed, classified and interpreted to create the ontology of a real plant. The DEA plant was modeled according to its process flow diagram, piping and instrumentation diagrams, descriptive documents of its normal operation mode, and the list of all the alarms associated with the instruments, complemented by an unstructured interview with a specialist in the operation of that plant. The ontology was verified by comparing its descriptive diagrams with the original plant documents and by discussing it with other members of the research group. All the concepts applied in this research can be extended to represent other plants in the same refinery or even in other kinds of industry. An ontology can be considered a knowledge base that, because of its formal representation, can be applied as one of the elements in developing tools to navigate through the plant, simulate its behavior, and diagnose faults, among other possibilities.
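As an illustration of how a few concepts of a DEA sweetening plant could be captured in an ontology, the sketch below expresses some classes, instances and relations as RDF triples with rdflib. The class and instance names are invented, and the thesis's actual ontology language and tooling may differ.

    # Minimal sketch: a fragment of a gas-sweetening plant ontology as RDF triples.
    from rdflib import Graph, Literal, Namespace, RDF, RDFS

    DEA = Namespace("http://example.org/dea-plant#")   # hypothetical namespace
    g = Graph()
    g.bind("dea", DEA)

    # Classes: equipment, instruments and alarms of the sweetening plant.
    for cls in (DEA.Equipment, DEA.Instrument, DEA.Alarm):
        g.add((cls, RDF.type, RDFS.Class))

    # A hypothetical absorber column with a level transmitter and a high-level alarm.
    g.add((DEA.AbsorberColumn, RDF.type, DEA.Equipment))
    g.add((DEA.LT_101, RDF.type, DEA.Instrument))
    g.add((DEA.LT_101, DEA.installedOn, DEA.AbsorberColumn))
    g.add((DEA.LAH_101, RDF.type, DEA.Alarm))
    g.add((DEA.LAH_101, DEA.raisedBy, DEA.LT_101))
    g.add((DEA.LAH_101, RDFS.comment, Literal("High level in the absorber column")))

    # Print the resulting triples; a navigation or diagnosis tool would query them.
    for s, p, o in g:
        print(s, p, o)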
Abstract:
The increasing capacity to integrate transistors has made it possible to develop complete systems, with several components, on a single chip; these are called SoCs (Systems-on-Chip). However, the interconnection subsystem can limit the scalability of SoCs, as with buses, or can be an ad hoc solution, as with bus hierarchies. Thus, the ideal interconnection subsystem for SoCs is the Network-on-Chip (NoC). NoCs allow simultaneous point-to-point channels between components and can be reused in other projects. However, NoCs can increase the design complexity, the chip area and the dissipated power. Thus, it is necessary either to modify the way they are used or to change the development paradigm. A system based on a NoC is therefore proposed, in which applications are described as packets and executed in each router between source and destination, without traditional processors. To execute applications regardless of the number of instructions and of the NoC dimensions, the spiral complement algorithm was developed, which finds further destinations until all instructions have been executed. The objective, therefore, is to study the viability of developing that system, named the IPNoSys system. In this study, a cycle-accurate simulation tool was developed in SystemC to simulate the system executing applications, which were implemented in a packet description language also developed for this study. Through the simulation tool, several results were obtained that could be used to evaluate the system's performance. The methodology used to describe an application consists of transforming the high-level application into a data-flow graph that becomes one or more packets. This methodology was applied to three applications: a counter, DCT-2D and float add. The counter was used to evaluate a deadlock solution and to execute a parallel application. The DCT was used for comparison against the STORM platform. Finally, the float add was intended to evaluate the efficiency of a software routine that performs an instruction not implemented in hardware. The simulation results confirm the viability of developing the IPNoSys system. They show that it is possible to execute applications described as packets, sequentially or in parallel, without interruptions caused by deadlock, and also that the execution time of IPNoSys is better than that of the STORM platform.
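As an invented illustration of the described flow, the sketch below turns a tiny data-flow graph into a linear "packet" of instructions executed one hop at a time; it is neither the IPNoSys packet format nor the spiral complement algorithm.

    # Data-flow graph of c = (a + b) * (a - b): node -> set of nodes it depends on.
    from graphlib import TopologicalSorter

    dfg = {
        "add": set(),
        "sub": set(),
        "mul": {"add", "sub"},
    }
    ops = {
        "add": ("+", "a", "b"),
        "sub": ("-", "a", "b"),
        "mul": ("*", "add", "sub"),
    }

    # Linearize the graph; each instruction becomes one entry of the "packet",
    # which a router would execute before forwarding the packet to the next hop.
    packet = [(node, *ops[node]) for node in TopologicalSorter(dfg).static_order()]

    values = {"a": 7, "b": 3}
    for node, op, lhs, rhs in packet:
        x, y = values[lhs], values[rhs]
        values[node] = {"+": x + y, "-": x - y, "*": x * y}[op]
        print(f"hop executes {node}: {lhs} {op} {rhs} = {values[node]}")

    print("result:", values["mul"])   # (7 + 3) * (7 - 3) = 40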
Abstract:
The increase in application complexity has demanded hardware that is ever more flexible and able to achieve higher performance. Traditional hardware solutions have not been successful in meeting these applications' constraints. General-purpose processors have inherent flexibility, since they perform several tasks; however, they cannot reach high performance when compared to application-specific devices. Conversely, since application-specific devices perform only a few tasks, they achieve high performance, although they have less flexibility. Reconfigurable architectures emerged as an alternative to the traditional approaches and have become an area of rising interest over the last decades. The purpose of this new paradigm is to modify the device's behavior according to the application. Thus, it is possible to balance flexibility and performance and to meet the applications' constraints. This work presents the design and implementation of a coarse-grained hybrid reconfigurable architecture for stream-based applications. The architecture, named RoSA, consists of reconfigurable logic attached to a processor. Its goal is to exploit the instruction-level parallelism of intensive data-flow applications to accelerate their execution on the reconfigurable logic. The instruction-level parallelism extraction is done at compile time; thus, this work also presents an optimization phase for the RoSA architecture to be included in the GCC compiler. To design the architecture, this work also presents a methodology based on hardware reuse of datapaths, named RoSE. RoSE aims to visualize the reconfigurable units through reusability levels, which provides area savings and datapath simplification. The architecture presented was implemented in a hardware description language (VHDL) and validated through simulations and prototyping. For performance characterization, some benchmarks were used, and they demonstrated a speedup of 11x in the execution of some applications.