991 results for Data flow
Abstract:
Current technology permits connecting local networks via high-bandwidth telephone lines. Central coordinator nodes may use Intelligent Networks to manage data flow over dialed data lines, e.g. ISDN, and to establish connections between LANs. This dissertation focuses on cost minimization and on establishing operational policies for query distribution over heterogeneous, geographically distributed databases. Based on our study of query distribution strategies, public network tariff policies, and database interface standards, we propose methods for communication cost estimation, strategies for reducing bandwidth allocation, and guidelines for central-to-node communication protocols. We conclude that dialed data lines offer a cost-effective alternative for implementing distributed database query systems, and that existing commercial software can be adapted to support query processing in heterogeneous distributed database systems.
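For illustration only, a minimal sketch of the kind of tariff-based communication cost estimate argued for above; the bandwidth, setup fee and per-minute rate are hypothetical placeholders, not figures from the study:

```python
# Illustrative sketch: estimating the cost of shipping a query result over a
# dialed data line (e.g. one ISDN B-channel). All tariff figures are assumed.
import math

def dialed_line_cost(result_bytes: int,
                     bandwidth_bps: float = 64_000,   # one ISDN B-channel
                     setup_fee: float = 0.10,         # per-call charge (assumed)
                     rate_per_unit: float = 0.05,     # tariff per billing unit (assumed)
                     billing_unit_s: float = 60.0) -> float:
    """Estimate the tariff cost of one query transfer over a dialed line."""
    transfer_time_s = (result_bytes * 8) / bandwidth_bps
    billed_units = math.ceil(transfer_time_s / billing_unit_s)
    return setup_fee + billed_units * rate_per_unit

# Example: a 2 MB intermediate result on a single 64 kbit/s channel.
print(f"estimated cost: {dialed_line_cost(2_000_000):.2f}")
```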
Abstract:
The substantial increase in the number of applications offered over computer networks, as well as in the volume of traffic they forward, has made it harder to assure an adequate service level to users. Offering Quality of Service (QoS), honoring the parameters specified in Service Level Agreements (SLAs) established between service providers and their clients, is a traditional and extensive research area in computer networks. Several schemes for QoS provisioning have been proposed over the last three decades, but their scope has always been limited by factors such as the restricted development of network hardware and software, which generally belong to a single manufacturer. The advent of Software Defined Networking (SDN), along with the maturation of its main materialization, the OpenFlow protocol, decoupled network hardware from software through an architecture that separates a control plane from a data plane. This simplifies the networking scenario, allowing new abstractions to be applied to the hardware composing the data plane through new software executed in the control plane. This dissertation investigates the provision of QoS through the use and extension of the SDN architecture. Based on two proposed modules, SDNMon, which monitors the data plane, and MP-ROUTING, which determines the use of multiple paths for forwarding the data of a flow, we demonstrate that QoS metrics specified in SLAs, such as bandwidth, can be honored. Both modules were implemented and evaluated in a prototype. The evaluation results, covering several aspects of both modules, are presented in this dissertation, showing the accuracy obtained by the monitoring module SDNMon and the QoS gains from using the multiple paths defined by MP-ROUTING when forwarding a data flow through the SDN.
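As an illustration of the kind of measurement a data-plane monitoring module such as SDNMon can build on, the sketch below estimates per-flow throughput from periodically sampled byte counters (the usual OpenFlow counter approach); the class and its interface are assumptions, not the dissertation's actual API:

```python
# Minimal sketch of counter-based bandwidth estimation: sample the byte counter
# of a flow entry periodically and take the difference over elapsed time.
import time

class FlowBandwidthEstimator:
    def __init__(self):
        self._last_bytes = None
        self._last_time = None

    def update(self, byte_count: int) -> float | None:
        """Feed the latest byte counter; return estimated throughput in bit/s."""
        now = time.monotonic()
        rate = None
        if self._last_bytes is not None:
            delta_bytes = byte_count - self._last_bytes
            delta_t = now - self._last_time
            if delta_t > 0:
                rate = (delta_bytes * 8) / delta_t
        self._last_bytes, self._last_time = byte_count, now
        return rate

# Example: two counter samples roughly one second apart.
est = FlowBandwidthEstimator()
est.update(1_000_000)
time.sleep(1.0)
print(est.update(1_250_000))   # roughly 2 Mbit/s
```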
Abstract:
Lately, various programming frameworks have been developed for building web applications. These frameworks focus on improving the user experience through performance gains such as faster render and response times. One of these frameworks is React, which introduces a completely new architectural pattern for managing both the state and the data flow of an application. React also offers support for native application development and makes server-side rendering possible, something that is difficult to accomplish with an application developed with Angular 1.5, which is what the company Dewire uses today. The aim of this thesis was to compare React with an existing Angular project in order to determine whether React could be a potential replacement for Angular. To gain knowledge about the subject, a theoretical study of web-based sources was carried out, while the practical part consisted of rebuilding a web application, based on a view from the Angular project, with React together with the Flux architecture. The implementation process was repeated until the view was complete and the desired data flow, matching the Angular application, was reached. The resulting React application was then compared with the Angular application developed by the company, and the outcome showed that React performed better than Angular in all tests. Due to the timeframe of the project, only the most important parts of the Angular project were implemented in order to carry out the measurements of interest to the company. By recreating more of the functionality, or the entire Angular application, more interesting comparisons could have been made.
Abstract:
Single-page applications have historically been subject to strong market forces that favour fast development and deployment over quality control and changeable code, which are important factors for maintainability. In this report we develop two functionally equivalent applications using AngularJS and React and compare their maintainability as defined by ISO/IEC 9126. AngularJS and React represent two distinct approaches to web development: AngularJS is a general framework providing rich base functionality, while React is a small library specialized in efficient view rendering. The quality comparison was carried out by calculating a Maintainability Index for each application. Version control analysis was used to determine quality indicators during development and during subsequent maintenance, where new functionality was added in two steps. The results show no major differences in maintainability between the initial applications. As more functionality is added, the Maintainability Index decreases faster in the AngularJS application, indicating a steeper increase in complexity compared to the React application. Source code analysis reveals that changes in data flow require significantly larger modifications of the AngularJS application due to its inherent architecture for data flow. We conclude that frameworks are useful when they facilitate development of known requirements but less so when applications and systems grow in size.
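The abstract does not state which Maintainability Index variant was used; the sketch below uses the common three-metric formulation (Oman and Hagemeister) purely as an assumed example:

```python
# Classic Maintainability Index: higher values indicate more maintainable code.
import math

def maintainability_index(halstead_volume: float,
                          cyclomatic_complexity: float,
                          lines_of_code: float) -> float:
    return (171
            - 5.2 * math.log(halstead_volume)
            - 0.23 * cyclomatic_complexity
            - 16.2 * math.log(lines_of_code))

# Example: averages over the modules of one application (made-up numbers).
print(round(maintainability_index(halstead_volume=850,
                                  cyclomatic_complexity=6,
                                  lines_of_code=120), 1))
```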
Abstract:
Reconfigurable platforms are a promising technology that offers an interesting trade-off between flexibility and performance, which many recent embedded system applications demand, especially in fields such as multimedia processing. These applications typically involve multiple ad-hoc tasks for hardware acceleration, usually represented with formalisms such as Data Flow Diagrams (DFDs), Data Flow Graphs (DFGs), Control and Data Flow Graphs (CDFGs) or Petri Nets. However, none of these models captures at the same time the pipeline behavior between tasks (which can therefore coexist in order to minimize the application execution time), their communication patterns, and their data dependencies. This paper shows that knowledge of all this information can be effectively exploited to reduce the resource requirements and improve the timing performance of modern reconfigurable systems, where a set of hardware accelerators is used to support the computation. For this purpose, the paper proposes a novel task representation model, named Temporal Constrained Data Flow Diagram (TCDFD), which includes all this information. It also presents a mapping-scheduling algorithm that takes advantage of the new TCDFD model, aiming to minimize the dynamic reconfiguration overhead while meeting the communication requirements among the tasks. Experimental results show that the presented approach achieves up to 75% resource savings and up to 89% reconfiguration overhead reduction with respect to other state-of-the-art techniques for reconfigurable platforms.
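The abstract does not detail the TCDFD itself; as a purely hypothetical illustration, the sketch below shows the kind of information such a task model has to carry: per-task costs, data dependencies with communication volumes, and whether two tasks may overlap in a pipeline. The field names are illustrative, not the paper's definitions:

```python
# Hypothetical task-graph representation combining timing, area, communication
# volume and pipeline overlap information for reconfigurable accelerators.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    exec_time_us: int          # worst-case execution time of the accelerator
    area_slices: int           # reconfigurable area the accelerator occupies

@dataclass
class Edge:
    producer: str
    consumer: str
    bytes_per_iteration: int   # communication volume (data dependency)
    pipelined: bool            # True if consumer may start before producer ends

@dataclass
class TaskGraph:
    tasks: dict[str, Task] = field(default_factory=dict)
    edges: list[Edge] = field(default_factory=list)

g = TaskGraph()
g.tasks["decode"] = Task("decode", exec_time_us=400, area_slices=1200)
g.tasks["filter"] = Task("filter", exec_time_us=300, area_slices=900)
g.edges.append(Edge("decode", "filter", bytes_per_iteration=64 * 1024, pipelined=True))
```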
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering
Abstract:
The assessment of the wind energy resource for the development of deep offshore wind plants requires the use of every possible source of data and, in many cases, includes data gathered at meteorological stations installed on islands, islets or even oil platforms, all of which are structures that interfere with and change the flow characteristics. This work aims to contribute to the evaluation of such changes in the flow by developing a correction methodology and applying it to the case of Berlenga island, Portugal. The study is performed using computational fluid dynamics (CFD) simulations validated by wind tunnel tests. In order to simulate the incoming offshore flow with the CFD models, a wind profile, unknown a priori, was established using observations from two coastal wind stations, and a power law wind profile was fitted to the existing data (a = 0.165). The results show that the horizontal wind speed at 80 m above sea level is 16% lower than the wind speed at 80 m above the island for the dominant wind direction sector.
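A minimal sketch of the fitted power-law profile with exponent a = 0.165; the reference height and speed in the example are placeholders, not values from the study:

```python
# Power-law wind profile: u(z) = u_ref * (z / z_ref) ** alpha.
def power_law_wind_speed(z: float, u_ref: float, z_ref: float,
                         alpha: float = 0.165) -> float:
    """Wind speed at height z given a reference speed u_ref at height z_ref."""
    return u_ref * (z / z_ref) ** alpha

# Example: extrapolate a (hypothetical) 8 m/s observation at 10 m up to 80 m.
print(round(power_law_wind_speed(80.0, u_ref=8.0, z_ref=10.0), 2))  # ~11.3 m/s
```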
Abstract:
In this work we analyze how patchy distributions of CO2 and brine within sand reservoirs may lead to significant attenuation and velocity dispersion effects, which in turn may have a profound impact on surface seismic data. The ultimate goal of this paper is to contribute to the understanding of these processes within the framework of the seismic monitoring of CO2 sequestration, a key strategy to mitigate global warming. We first carry out a Monte Carlo analysis to study the statistical behavior of the attenuation and velocity dispersion of compressional waves traveling through rocks with properties similar to those of the Utsira Sand, Sleipner field, containing quasi-fractal patchy distributions of CO2 and brine. These results show that the mean patch size and the CO2 saturation play key roles in the observed wave-induced fluid flow effects, which can be remarkably important when CO2 concentrations are low and mean patch sizes are relatively large. To analyze these effects on the corresponding surface seismic data, we perform numerical simulations of wave propagation considering reservoir models and CO2 accumulation patterns similar to those of the CO2 injection site in the Sleipner field. These numerical experiments suggest that wave-induced fluid flow effects may produce changes in the reservoir's seismic response, significantly modifying the main seismic attributes usually employed in the characterization of these environments. Consequently, determining the nature of the fluid distributions, as well as properly modeling the seismic data, are important aspects that should not be ignored in the seismic monitoring of CO2 sequestration.
Abstract:
In October 1998, Hurricane Mitch triggered numerous landslides (mainly debris flows) in Honduras and Nicaragua, resulting in a high death toll and in considerable damage to property. The potential application of relatively simple and affordable spatial prediction models for landslide hazard mapping in developing countries was studied. Our attention was focused on a region in NW Nicaragua, one of the places most severely hit during the Mitch event. A landslide map was obtained at 1:10 000 scale in a Geographic Information System (GIS) environment from the interpretation of aerial photographs and detailed field work. In this map the terrain failure zones were distinguished from the areas within the reach of the mobilized materials. A Digital Elevation Model (DEM) with a pixel size of 20 m × 20 m was also employed in the study area. A comparative analysis was carried out between the terrain failures caused by Hurricane Mitch and a selection of four terrain factors extracted from the DEM which contributed to the terrain instability. Land propensity to failure was determined with the aid of a bivariate analysis and GIS tools in a terrain failure susceptibility map. In order to estimate the areas that could be affected by the path or deposition of the mobilized materials, we considered the fact that under intense rainfall events debris flows tend to travel long distances following the maximum slope and merging with the drainage network. Using the TauDEM extension for ArcGIS, we automatically generated flow lines following the maximum slope in the DEM, starting from the areas prone to failure in the terrain failure susceptibility map. The areas crossed by the flow lines from each terrain failure susceptibility class correspond to the runout susceptibility classes represented in a runout susceptibility map. The study of terrain failure and runout susceptibility enabled us to obtain a spatial prediction for landslides, which could contribute to landslide risk mitigation.
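For illustration, a sketch of the principle behind the generated flow lines: from a failure-prone cell, repeatedly step to the steepest-descending neighbour of the DEM until a pit or the grid edge is reached. This is the generic D8-style idea, not TauDEM's actual algorithm or interface:

```python
# Trace a runout flow line down a DEM by steepest descent (D8-style logic).
import numpy as np

def trace_flow_line(dem: np.ndarray, start: tuple[int, int], max_steps: int = 10_000):
    """Return the list of (row, col) cells visited while descending the DEM."""
    rows, cols = dem.shape
    path = [start]
    r, c = start
    for _ in range(max_steps):
        best, best_drop = None, 0.0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    # drop per unit distance (diagonal neighbours are farther away)
                    drop = (dem[r, c] - dem[nr, nc]) / np.hypot(dr, dc)
                    if drop > best_drop:
                        best, best_drop = (nr, nc), drop
        if best is None:          # pit or flat area: stop
            break
        r, c = best
        path.append(best)
    return path

# Toy example: a tilted 5x5 surface draining towards the lower-right corner.
dem = np.add.outer(np.arange(5, 0, -1), np.arange(5, 0, -1)).astype(float)
print(trace_flow_line(dem, start=(0, 0)))
```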
Abstract:
The aim of this Master's thesis was to describe the workflow of the different functions in the order-delivery process when a product data management system is part of the work environment. The theoretical part of the thesis examined business process re-engineering and process definition, and introduced the key areas of product data management (PDM). The background and strategies of the target company were presented, after which the changes were evaluated against the findings of the theoretical part. To define the current ways of working, people from every phase of the order-delivery process within the production unit were interviewed. Finally, the company's product data management principles were described and the workflow in the different phases of the process was defined. As the new product data management system is deployed, the company must also adopt the product data management mindset. Management of the product structure is now divided among different functions, so that the engineering structure, the production structure and the service structure are the responsibility of different people. The configuration of these different structures during the order-delivery process determines the order in which tasks must be carried out in the different systems. The multinational engineering organization must also be taken into account during the order flow. The product data management system is used together with the familiar design tools and the enterprise resource planning (ERP) system. The workflow diagram defines a company-wide model of how, and in what order, tasks must be performed in the different systems during the order-delivery process. This thesis examined the functions of the order-delivery process that are most essential from the point of view of product definition and engineering data management: sales, sales support, production control, application engineering and documentation. In the future, it is recommended to consider deploying the product data management system in production and purchasing as well. The next development efforts in the order-delivery process should be targeted at the order definition phase at the seller-customer interface, where mistakes are multiplied at every subsequent phase of the process.
Abstract:
A data centre is a centralized repository, either physical or virtual, for the storage, management and dissemination of data and information organized around a particular body, and it is the nerve centre of the present IT revolution. Data centres are expected to serve uninterruptedly round the year, and doing so consumes enormous energy in the present scenario. Tremendous growth in the demand from the IT industry has made it customary to develop newer technologies for the better operation of data centres. Energy conservation activities in data centres mainly concentrate on the air conditioning system, since it is the major mechanical sub-system and consumes a considerable share of the total power consumption of the data centre. The data centre energy metric is best represented by the power utilization efficiency (PUE), defined as the ratio of the total facility power to the IT equipment power. Its value is greater than one, and a large value of PUE indicates that the sub-systems draw more power from the facility and that the performance of the data centre is poor from the standpoint of energy conservation. PUE values of 1.4 to 1.6 are achievable by proper design and management techniques. Optimizing the air conditioning system brings an enormous opportunity to bring down the PUE value. The air conditioning system can be optimized by two approaches, namely thermal management and air flow management. Thermal management systems have now been introduced by some companies, but they are highly sophisticated and costly and have not gained much attention in practice.
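A worked example of the PUE definition quoted above; the wattages below are placeholders for illustration:

```python
# PUE as defined in the abstract: total facility power divided by IT equipment power.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# Example: 480 kW drawn by the whole facility, 300 kW of it by IT equipment.
print(round(pue(480.0, 300.0), 2))   # 1.6, at the upper end of the 1.4-1.6 target
```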
Abstract:
This work identifies the importance of plenum pressure for the performance of the data centre. The methodology currently followed in the industry treats the pressure drop across the tile as a dependent variable, but this work shows that it is the single independent variable responsible for the entire flow dynamics in the data centre, and any design or assessment procedure must consider the pressure difference across the tile as the primary independent variable. This concept is further explained through studies on the effect of dampers on the flow characteristics. The dampers were found to introduce an additional pressure drop, thereby reducing the effective pressure drop across the tile. The effect of a damper is to change the flow in both quantitative and qualitative terms, yet only the quantitative aspect is considered when the damper is used as an aid for capacity control. Results from the present study suggest that the use of dampers should be avoided in data centres and that well-designed tiles giving the required flow rates should be used in the appropriate locations. In the present study the effect of hot air recirculation is also examined under suitable assumptions. It identifies that the pressure drop across the tile is a dominant parameter governing the recirculation. The rack suction pressure of the hardware, along with the pressure drop across the tile, determines the point of recirculation in the cold aisle. The positioning of hardware in the racks plays an important role in controlling the recirculation point. The present study is thus helpful in the design of data centre air flow based on the theory of jets, and the air flow can be modelled both quantitatively and qualitatively based on the results.
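To illustrate why the tile pressure drop works as the primary independent variable, the sketch below uses the standard orifice relation for a perforated tile, Q = Cd * A * sqrt(2 * dp / rho); this relation and the numbers are assumptions for illustration, not the model used in the thesis:

```python
# Volumetric flow through a perforated tile scales with the square root of the
# plenum-to-room pressure drop (standard orifice relation, assumed here).
import math

AIR_DENSITY = 1.2  # kg/m^3, roughly at data-centre supply conditions

def tile_flow_m3s(delta_p_pa: float, open_area_m2: float,
                  discharge_coeff: float = 0.6) -> float:
    """Volumetric flow through a perforated tile for a given pressure drop."""
    return discharge_coeff * open_area_m2 * math.sqrt(2 * delta_p_pa / AIR_DENSITY)

# Example: a 600 mm tile with 25% open area at a 20 Pa plenum-to-room drop.
print(round(tile_flow_m3s(delta_p_pa=20.0, open_area_m2=0.25 * 0.6 * 0.6), 3))
```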
Abstract:
The background error covariance matrix, B, is often used in variational data assimilation for numerical weather prediction as a static and hence poor approximation to the fully dynamic forecast error covariance matrix, Pf. In this paper the concept of an Ensemble Reduced Rank Kalman Filter (EnRRKF) is outlined. In the EnRRKF the forecast error statistics in a subspace defined by an ensemble of states forecast by the dynamic model are found. These statistics are merged in a formal way with the static statistics, which apply in the remainder of the space. The combined statistics may then be used in a variational data assimilation setting. It is hoped that the nonlinear error growth of small-scale weather systems will be accurately captured by the EnRRKF, to produce accurate analyses and ultimately improved forecasts of extreme events.
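A heavily simplified numpy sketch of the idea described above: sample statistics from the forecast ensemble are used inside the subspace spanned by the ensemble perturbations, while the static B is retained in the orthogonal complement (cross terms dropped); this is an assumption-laden illustration, not the paper's exact EnRRKF formulation:

```python
# Blend ensemble-derived covariance (in the ensemble subspace) with the static
# background covariance B (in the orthogonal complement).
import numpy as np

def blended_covariance(ensemble: np.ndarray, B: np.ndarray) -> np.ndarray:
    """ensemble: (n_state, n_members) forecast states; B: (n_state, n_state)."""
    perturbations = ensemble - ensemble.mean(axis=1, keepdims=True)
    n_members = ensemble.shape[1]
    P_ens = perturbations @ perturbations.T / (n_members - 1)

    # Orthonormal basis of the ensemble perturbation subspace and its projector.
    U, s, _ = np.linalg.svd(perturbations, full_matrices=False)
    U = U[:, s > 1e-10 * s.max()]
    S = U @ U.T                          # projector onto the ensemble subspace
    C = np.eye(B.shape[0]) - S           # projector onto the complement

    return S @ P_ens @ S + C @ B @ C

# Tiny example: a 6-variable state, 3 ensemble members, diagonal static B.
rng = np.random.default_rng(0)
ens = rng.standard_normal((6, 3))
print(blended_covariance(ens, B=0.5 * np.eye(6)).shape)   # (6, 6)
```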