60 results for Data-stream balancing
Abstract:
The Febros is a small watercourse in the municipality of Vila Nova de Gaia, about 15 km long, whose drainage basin covers an area of approximately 35.4 km². It rises in Seixezelo and flows into the left bank of the Douro River at Cais do Esteiro, in Avintes. In May 2008, a road accident resulted in the spill of about four tonnes of hydrochloric acid, which quickly reached the waters of the river. Only a day later, the pH had dropped to three and many fish died. The solution adopted to contain the disaster was to introduce thousands of litres of water along the entire watercourse in order to dilute the acid. This did not prevent the destruction of part of an ecosystem that is still recovering today. To assess the impact of such disturbances, whether of anthropogenic or natural origin, knowledge of the transport processes involved is required, such as advection, dispersive mixing and air/water mass transfer. These processes determine the movement and fate of substances that may be discharged into the river. To this end, a hydrogeometric study of the watercourse was carried out, together with a study of the behaviour of a tracer simulating a possible discharge. Rhodamine WT was chosen as the tracer because of its panoply of environmentally favourable characteristics. Field studies with this dye, carried out following a previously planned release sequence, provide one of the best sources of information for verifying and validating hydraulic models used in water-quality and environmental-protection studies. Two release points were chosen on the Febros, one at Casal Drijo and the other at the Parque Biológico de Gaia, each with two monitoring stations downstream. With the ADE model, the longitudinal dispersion coefficients obtained for the Pontão d'Alheira, Pinheiral, Menesas and Giestas stations were, respectively, 0.3622, 0.5468, 1.6832 and 1.7504 m²/s; for the same sequence of stations, the flow velocities obtained in this experimental work were 0.0633, 0.0684, 0.1548 and 0.1645 m/s. With the TS model, the longitudinal dispersion coefficients for the same stations were, respectively, 0.2339, 0.1618, 0.5057 and 1.1320 m²/s, and the corresponding flow velocities were 0.0652, 0.0775, 0.1891 and 0.1676 m/s. The results were fitted by one direct method, the method of moments, and by two indirect methods, the ADE and TS models. The best fit corresponds to the TS model, whose estimates of the longitudinal dispersion coefficient and flow velocity are those that best approximate reality. With the method of moments, the estimated velocity is 0.162 m/s and the longitudinal dispersion coefficient 9.769 m²/s. Understanding the hydrodynamics of the river and its characteristics, together with suitable mathematical models for treating the results, forms an environmental-protection strategy against future impacts.
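For reference, the standard one-dimensional form of the ADE and the method-of-moments estimators usually paired with tracer studies of this kind — textbook relations, which may differ in detail from the exact formulation used in this work:

```latex
% One-dimensional advection-dispersion equation (ADE):
% C = tracer concentration, u = mean flow velocity,
% D_L = longitudinal dispersion coefficient.
\[
  \frac{\partial C}{\partial t} + u\,\frac{\partial C}{\partial x}
  = D_L\,\frac{\partial^2 C}{\partial x^2}
\]
% Method of moments between two monitoring stations a distance
% \Delta x apart (frozen-cloud approximation): \bar{t}_i and
% \sigma_{t,i}^2 are the mean and variance of the concentration-time
% curve recorded at station i.
\[
  u = \frac{\Delta x}{\bar{t}_2 - \bar{t}_1},
  \qquad
  D_L = \frac{u^2}{2}\,
        \frac{\sigma_{t,2}^2 - \sigma_{t,1}^2}{\bar{t}_2 - \bar{t}_1}
\]
```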
Abstract:
Seismic data are difficult to analyze, and classical mathematical tools reveal strong limitations in exposing hidden relationships between earthquakes. In this paper, we study earthquake phenomena from the perspective of complex systems. Global seismic data covering the period from 1962 up to 2011 are analyzed. The events, characterized by their magnitude, geographic location and time of occurrence, are divided into groups, either according to the Flinn-Engdahl (F-E) seismic regions of the Earth or using a rectangular grid based on latitude and longitude coordinates. Two methods of analysis are considered and compared in this study. In the first method, the distributions of magnitudes are approximated by Gutenberg-Richter (G-R) distributions and the fitted parameters are used to reveal the relationships among regions. In the second method, the mutual information is calculated and adopted as a measure of similarity between regions. In both cases, using clustering analysis, visualization maps are generated, providing an intuitive and useful representation of the complex relationships present in seismic data. Such relationships might not be perceived on classical geographic maps. Therefore, the generated charts are a valid alternative to other visualization tools for understanding the global behavior of earthquakes.
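As an illustration of the two methods, here is a minimal sketch of fitting the G-R b-value per region (via Aki's maximum-likelihood estimator) and of a mutual-information similarity between two regions, applied to time-aligned samples (e.g., per-window statistics — an assumed pairing; the paper's exact estimators and binning may differ):

```python
import numpy as np

def gr_b_value(mags, m_c):
    """Aki's maximum-likelihood estimate of the Gutenberg-Richter
    b-value, using magnitudes above the completeness threshold m_c."""
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

def mutual_information(x, y, bins=10):
    """Mutual information (in nats) between two time-aligned sequences
    from two regions, estimated from a joint histogram and used as a
    similarity measure between the regions."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                       # avoid log(0) on empty bins
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())
```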
Abstract:
Nowadays, the job-shop production system is very common among SMEs (small and medium-sized enterprises). These companies work to order, producing a wide variety of models in small quantities. Delivery times are a highly important factor, since customers demand a quality product at the right time. This work aims to create a production-scheduling tool for the sewing section, using real data from the company, whose layout is a job shop with multi-operation machines (multi-purpose-machine job shop). At the end, the main conclusions are gathered and future developments are outlined. The results obtained show that the developed algorithm, based on the Giffler & Thompson algorithm, can quickly and accurately schedule and balance the sewing section. With the tool created, the company optimizes the scheduling of the sewing section and provides important information to production management, enabling better company-wide planning.
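For orientation, a minimal sketch of the classic Giffler & Thompson active-schedule procedure the tool builds on; one machine per operation and an SPT tie-break rule are simplifying assumptions here (the work itself extends the idea to multi-purpose machines):

```python
def giffler_thompson(jobs):
    """Classic Giffler & Thompson active-schedule generation.
    jobs: list of jobs, each a list of (machine, duration) pairs in
    technological order. Returns (job, op, machine, start, end) tuples."""
    n = len(jobs)
    job_ready = [0] * n        # earliest start of each job's next operation
    mach_ready = {}            # earliest free time of each machine
    next_op = [0] * n
    schedule = []
    while any(next_op[j] < len(jobs[j]) for j in range(n)):
        # Candidates: the first unscheduled operation of every job.
        cands = []
        for j in range(n):
            if next_op[j] < len(jobs[j]):
                m, d = jobs[j][next_op[j]]
                est = max(job_ready[j], mach_ready.get(m, 0))
                cands.append((est + d, est, j, m, d))
        ect, _, _, m_star, _ = min(cands)          # earliest completion time
        # Conflict set: candidates on that machine able to start before ect.
        conflict = [c for c in cands if c[3] == m_star and c[1] < ect]
        _, est, j, m, d = min(conflict, key=lambda c: c[4])  # SPT tie-break
        schedule.append((j, next_op[j], m, est, est + d))
        job_ready[j] = mach_ready[m] = est + d
        next_op[j] += 1
    return schedule

# Two jobs on two machines, each operation given as (machine, duration).
print(giffler_thompson([[(0, 3), (1, 2)], [(1, 4), (0, 1)]]))
```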
Abstract:
Consider the problem of disseminating data from an arbitrary source node to all other nodes in a distributed computer system, such as a Wireless Sensor Network (WSN). We assume that wireless broadcast is used and that nodes do not know the topology. We propose new protocols which disseminate data faster and use fewer broadcasts than the simple broadcast protocol.
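For context, a sketch of the simple broadcast (flooding) baseline such protocols are measured against, under assumed synchronous rounds in which every node rebroadcasts a newly received message exactly once:

```python
def flood(adj, source):
    """Simple broadcast baseline: every node that receives the message
    rebroadcasts it once. adj maps a node to the list of nodes within
    its wireless range. Returns (rounds, total broadcasts)."""
    informed = {source}
    frontier = [source]
    rounds = broadcasts = 0
    while frontier:
        rounds += 1
        nxt = []
        for u in frontier:
            broadcasts += 1                 # u transmits exactly once
            for v in adj[u]:
                if v not in informed:
                    informed.add(v)
                    nxt.append(v)
        frontier = nxt
    return rounds, broadcasts

# A 4-node line topology: 0 - 1 - 2 - 3
print(flood({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}, source=0))  # (4, 4)
```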
Abstract:
In previous works we proposed a hybrid wired/wireless PROFIBUS solution in which the interconnection between the heterogeneous media was accomplished through bridge-like devices, with wireless stations being able to move between different wireless cells. Additionally, we also proposed a worst-case timing analysis assuming that stations were stationary. In this paper we advance these previous works by proposing a worst-case timing analysis for the system's message streams that considers the effect of inter-cell mobility.
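As a purely hypothetical illustration of the quantity such an analysis bounds (the symbols below are ours, not the paper's), the worst-case end-to-end latency of a message stream crossing wireless cells can be decomposed as:

```latex
% R   = worst-case end-to-end latency of a message stream,
% Q   = worst-case queuing delay at the source station,
% h   = number of bridge hops; t_{tx,k}, t_{rel,k} = transmission and
%       relaying latency at hop k,
% n_{ho} * t_{ho} = extra delay from at most n_{ho} inter-cell handoffs.
\[
  R \;\le\; Q \;+\; \sum_{k=1}^{h}\bigl(t_{tx,k} + t_{rel,k}\bigr)
        \;+\; n_{ho}\, t_{ho}
\]
```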
Abstract:
Nowadays, due to the incredible growth of the mobile devices market, when implementing client-server applications we must consider the limitations of mobile devices. In this paper we discuss what may be the most reliable and fastest way to exchange information between a server and an Android mobile application. This is an important issue because, with a responsive application, the user experience is more enjoyable. We present a study that tests and evaluates two data transfer protocols, socket and HTTP, and three data serialization formats (XML, JSON and Protocol Buffers), using different environments and mobile devices, to determine which is the most practical and fastest to use.
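A minimal sketch of the kind of comparison performed, using Python's standard library rather than the Android stack of the study; the record and the measured quantities (payload size, encoding time) are illustrative assumptions:

```python
import json
import time
import xml.etree.ElementTree as ET

record = {"id": 42, "sensor": "gps", "lat": 41.178, "lon": -8.608}

def to_xml(d):
    """Encode a flat dict as a simple XML document."""
    root = ET.Element("record")
    for k, v in d.items():
        ET.SubElement(root, k).text = str(v)
    return ET.tostring(root)

t0 = time.perf_counter()
json_bytes = json.dumps(record).encode("utf-8")
t_json = time.perf_counter() - t0

t0 = time.perf_counter()
xml_bytes = to_xml(record)
t_xml = time.perf_counter() - t0

print(f"JSON: {len(json_bytes)} bytes in {t_json:.6f}s")
print(f"XML : {len(xml_bytes)} bytes in {t_xml:.6f}s")
# Protocol Buffers would need a compiled .proto schema, but typically
# produces the smallest payload of the three formats.
```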
Abstract:
Virtual Reality (VR) has grown to become state-of-the-art technology in many business- and consumer-oriented E-Commerce applications. One of the major design challenges of VR environments is the placement of the rendering process. The rendering process converts the abstract description of a scene, as contained in an object database, to an image. This process is usually done at the client side, as in VRML [1], a technology that requires the client's computational power for smooth rendering. The vision of VR is also strongly connected to the issue of Quality of Service (QoS), as the perceived realism depends on an interactive frame rate ranging from 10 to 30 frames per second (fps), real-time feedback mechanisms and realistic image quality. These requirements push traditional home computers, and even highly sophisticated graphical workstations, beyond their limits. Our work therefore introduces an approach for a distributed rendering architecture that gracefully balances the workload between the client and a cluster-based server. We believe that a distributed rendering approach as described in this paper has three major benefits: it reduces the client's workload, it decreases the network traffic and it allows already rendered scenes to be re-used.
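As a purely hypothetical illustration of such a balancing decision (none of these quantities, names or thresholds come from the paper), the placement of the rendering process might be chosen like this:

```python
def place_rendering(scene_cost, client_flops, server_flops, net_mbps,
                    frame_kb, target_fps=10):
    """Hypothetical placement rule: render on the client when it can
    sustain the target frame rate; otherwise render on the cluster
    server, provided the network can ship the frames fast enough."""
    client_fps = client_flops / scene_cost
    if client_fps >= target_fps:
        return "client"
    stream_fps = (net_mbps * 1000 / 8) / frame_kb   # frames/s the link carries
    server_fps = server_flops / scene_cost
    return "server" if min(server_fps, stream_fps) >= target_fps else "degrade"

# A scene too heavy for the client but streamable from the server.
print(place_rendering(scene_cost=1e9, client_flops=5e9, server_flops=1e11,
                      net_mbps=20, frame_kb=120))   # -> "server"
```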
Abstract:
The goal of this paper is to show that the DGPS data Internet service we designed and developed provides campus-wide real-time access to Differential GPS (DGPS) data and, thus, supports precise outdoor navigation. First we describe the developed distributed system in terms of architecture (a three-tier client/server application), services provided (real-time DGPS data transport from remote DGPS sources and campus-wide data dissemination) and transmission modes implemented (raw and frame mode over TCP and UDP). Then we present and discuss the results obtained and, finally, we draw some conclusions.
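A sketch of what the raw transmission mode amounts to — forwarding the DGPS byte stream unframed from a remote source to campus subscribers; the endpoints below are hypothetical and the real service's middle tier is omitted:

```python
import socket

SOURCE = ("dgps-source.example.edu", 2101)            # hypothetical DGPS feed
CLIENTS = [("10.0.0.17", 5000), ("10.0.0.23", 5000)]  # hypothetical subscribers

def relay_raw():
    """Raw mode over UDP (sketch): forward the DGPS byte stream to the
    subscribers exactly as received, without delimiting messages.
    Frame mode would instead forward complete DGPS records."""
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with socket.create_connection(SOURCE) as src:
        while True:
            chunk = src.recv(512)
            if not chunk:                 # source closed the connection
                break
            for addr in CLIENTS:
                udp.sendto(chunk, addr)
```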
Abstract:
Sensor/actuator networks promise to extend automated monitoring and control into industrial processes. Avionics is one of the prominent domains that can gain greatly from dense sensor/actuator deployments. An aircraft with a smart sensing skin would fulfill the vision of affordability and environmental friendliness by reducing fuel consumption. Achieving these properties is possible by providing an approximate representation of the air flow across the body of the aircraft and suppressing the detected aerodynamic drag. To the best of our knowledge, obtaining an accurate representation of the physical entity is one of the most significant challenges that still exist in dense sensor/actuator networks. This paper offers an efficient way to acquire sensor readings from a very large sensor/actuator network located in a small area (a dense network). It presents LIA, a Linear Interpolation Algorithm that provides two important contributions. First, it demonstrates the effectiveness of employing a transformation matrix to mimic the environmental behavior. Second, it renders a smart solution for updating the previously defined matrix through a procedure called the learning phase. Simulation results reveal that the average relative error of the LIA algorithm can be reduced by as much as 60% by exploiting the transformation matrix.
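A rough sketch of the two ingredients described — reconstruction through a transformation matrix and a learning-phase update; the least-squares fit below is our assumption about how such a matrix could be maintained, not LIA's published procedure:

```python
import numpy as np

def estimate_field(T, sampled):
    """Reconstruct all n readings from k sampled ones using the
    transformation matrix T of shape (n, k)."""
    return T @ sampled

def learning_phase(snapshots_k, snapshots_n):
    """Fit T by least squares from m training snapshots in which both
    the k sampled readings (shape (m, k)) and the full n readings
    (shape (m, n)) were collected."""
    X, *_ = np.linalg.lstsq(snapshots_k, snapshots_n, rcond=None)
    return X.T                                   # shape (n, k)
```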
Abstract:
Managing the physical and compute infrastructure of a large data center is an embodiment of a Cyber-Physical System (CPS). The physical parameters of the data center (such as power, temperature, pressure and humidity) are tightly coupled with computations, even more so in upcoming data centers, where the location of workloads can vary substantially due, for example, to workloads being moved in a cloud infrastructure hosted in the data center. In this paper, we describe a data collection and distribution architecture that enables gathering physical parameters of a large data center at a very high temporal and spatial resolution of the sensor measurements. We think this is an important characteristic for enabling more accurate heat-flow models of the data center and, with them, opportunities to optimize energy consumption. Having a high-resolution picture of the data center conditions also enables minimizing local hotspots, performing more accurate predictive maintenance (pending failures in cooling and other infrastructure equipment can be detected more promptly) and more accurate billing. We detail this architecture and define the structure of the underlying messaging system that is used to collect and distribute the data. Finally, we show the results of a preliminary study of a typical data center radio environment.
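To make the idea concrete, a hypothetical message structure of the kind such a messaging system might carry (the field names and layout are assumptions, not the paper's defined format):

```python
from dataclasses import dataclass

@dataclass
class Measurement:
    """One physical reading, tagged with fine-grained time and location
    so that high-resolution heat-flow models can be built from the
    collected stream."""
    sensor_id: str        # e.g. "row3-rack12-inlet-T2" (hypothetical)
    kind: str             # "temperature" | "power" | "pressure" | "humidity"
    value: float          # reading in SI units for the given kind
    timestamp_us: int     # microsecond timestamp (temporal resolution)
    row: int              # spatial coordinates within the machine room
    rack: int
    height_u: int         # position within the rack, in rack units
```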
Abstract:
Consider the problem of designing an algorithm for acquiring sensor readings. Consider specifically the problem of obtaining an approximate representation of sensor readings where (i) sensor readings originate from different sensor nodes, (ii) the number of sensor nodes is very large, (iii) all sensor nodes are deployed in a small area (dense network) and (iv) all sensor nodes communicate over a communication medium where at most one node can transmit at a time (a single broadcast domain). We present an efficient algorithm for this problem, and our novel algorithm has two desired properties: (i) it obtains an interpolation based on all sensor readings and (ii) it is scalable, that is, its time-complexity is independent of the number of sensor nodes. Achieving these two properties is possible thanks to the close interlinking of the information processing algorithm, the communication system and a model of the physical world.
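A sketch of the kind of approximate representation meant here: an interpolation built from a bounded number k of readings, so that the time complexity does not grow with the number of nodes (the selection of those k readings over the single broadcast domain is taken as given, and the inverse-distance weighting is our assumption, not necessarily the paper's scheme):

```python
import numpy as np

def interpolate(positions, readings, grid, eps=1e-9):
    """Inverse-distance-weighted interpolation: build an approximate
    field over `grid` (g x 2 coordinates) from k sensor readings taken
    at `positions` (k x 2)."""
    d = np.linalg.norm(grid[:, None, :] - positions[None, :, :], axis=2)
    w = 1.0 / (d + eps)                  # closer sensors weigh more
    return (w * readings).sum(axis=1) / w.sum(axis=1)

# Three sensors, field estimated at two query points.
pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([20.0, 22.0, 21.0])
grid = np.array([[0.5, 0.5], [0.9, 0.1]])
print(interpolate(pos, vals, grid))
```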
Abstract:
Networked control systems (NCSs) are spatially distributed systems in which the communication between sensors, actuators and controllers occurs through a shared band-limited digital communication network. The use of a shared communication network, in contrast to using several dedicated independent connections, introduces new challenges, which are even more acute in large-scale and dense networked control systems. In this paper we investigate a recently introduced technique for gathering information from a dense sensor network to be used in networked control applications. Efficiently obtaining an approximate interpolation of the sensed data offers a good trade-off between accuracy in the measurement of the input signals and the delay until actuation, both important aspects for the quality of control. We introduce a variation of the state-of-the-art algorithms which we prove performs better because it takes into account the changes over time of the input signal within the process of obtaining an approximate interpolation.
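One simple way to account for the input signal's change over time — our reading of the idea, with the linear extrapolation below an assumption rather than the paper's exact construction — is to project each sensor's reading forward to the actuation instant before interpolating:

```python
def extrapolate(prev, curr, t_prev, t_curr, t_act):
    """Linearly extrapolate a sensor's last two samples (prev at t_prev,
    curr at t_curr) to the actuation instant t_act, so the subsequent
    interpolation reflects how the signal is changing rather than a
    stale sampled value."""
    rate = (curr - prev) / (t_curr - t_prev)
    return curr + rate * (t_act - t_curr)

# A reading rising 0.5 units/s, projected 0.2 s ahead of its sample.
print(extrapolate(prev=20.0, curr=20.5, t_prev=0.0, t_curr=1.0, t_act=1.2))
```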
Abstract:
Cluster scheduling and collision avoidance are crucial issues in large-scale cluster-tree Wireless Sensor Networks (WSNs). This paper presents a methodology that provides a Time Division Cluster Scheduling (TDCS) mechanism based on the cyclic extension of the RCPS/TC (Resource Constrained Project Scheduling with Temporal Constraints) problem for a cluster-tree WSN, assuming bounded communication errors. The objective is to meet all end-to-end deadlines of a predefined set of time-bounded data flows while minimizing the energy consumption of the nodes by setting the TDCS period as long as possible. Since each cluster is active only once during the period, the end-to-end delay of a given flow may span several periods when there are flows in the opposite direction. The scheduling tool enables system designers to efficiently configure all required parameters of IEEE 802.15.4/ZigBee beacon-enabled cluster-tree WSNs at network design time. The performance evaluation of the scheduling tool shows that problems with dozens of nodes can be solved using optimal solvers.
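A toy illustration of why a flow's end-to-end delay can span several TDCS periods when the clusters along its route are activated in an unfavorable order (slot lengths and transmission times are abstracted away in this sketch):

```python
def e2e_delay(offsets, period):
    """offsets: activation offset of each cluster along the route,
    within one cyclic TDCS period. A hop whose cluster wakes up earlier
    in the period than the point already reached must wait for the next
    period, so the total delay can exceed one period."""
    t = 0
    for off in offsets:
        t += (off - t) % period
    return t

# Clusters activated in route order: delay stays within one period.
print(e2e_delay([10, 40], period=100))   # 40
# Opposite activation order: the second hop waits for the next period.
print(e2e_delay([40, 10], period=100))   # 110, i.e. more than one period
```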
Abstract:
Simulation analysis is an important approach to developing and evaluating systems in terms of development time and cost. This paper demonstrates the application of the Time Division Cluster Scheduling (TDCS) tool for the configuration of IEEE 802.15.4/ZigBee beacon-enabled cluster-tree WSNs using simulation analysis, as an illustrative example that confirms the practical applicability of the tool. The simulation study analyses how the number of retransmissions impacts the reliability of data transmission, the energy consumption of the nodes and the end-to-end communication delay, based on a simulation model implemented in the Opnet Modeler. The configuration parameters of the network are obtained directly from the TDCS tool. The simulation results show that the number of retransmissions impacts the reliability, the energy consumption and the end-to-end delay in such a way that improving one may degrade the others.
Abstract:
Cooperating objects (COs) is a recently coined term used to signify the convergence of classical embedded computer systems, wireless sensor networks and robotics and control. We present essential elements of a reference architecture for scalable data processing for the CO paradigm.