914 results for Internet-centric Systems in Hydroinformatics
Abstract:
Information-centric networking (ICN) is a new communication paradigm that aims at increasing security and efficiency of content delivery in communication networks. In recent years, many research efforts in ICN have focused on caching strategies to reduce traffic and increase overall performance by decreasing download times. Since caches need to operate at line speed, they have only a limited size and content can only be stored for a short time. However, if content needs to be available for a longer time, e.g., for delay-tolerant networking or to provide high content availability similar to content delivery networks (CDNs), persistent caching is required. We base our work on the Content-Centric Networking (CCN) architecture and investigate persistent caching by extending the current repository implementation in CCNx. We show by extensive evaluations in a YouTube and webserver traffic scenario that repositories can be efficiently used to increase content availability by significantly increasing cache hit rates.
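A persistent repository behind the line-speed cache can be pictured as a two-tier content store: lookups first hit a small in-memory LRU cache and fall back to the persistent tier, so content evicted from the fast tier can still produce a hit. The Python sketch below is only an illustration of that idea under assumed names; it is not the CCNx repository code.

```python
from collections import OrderedDict

class TwoTierContentStore:
    """Illustrative two-tier store: a small in-memory LRU cache backed by a
    persistent repository (here a plain dict stands in for disk storage)."""

    def __init__(self, cache_capacity=1000):
        self.cache = OrderedDict()          # fast, size-limited tier
        self.cache_capacity = cache_capacity
        self.repository = {}                # persistent tier (unbounded here)
        self.hits = self.misses = 0

    def insert(self, name, data, persist=False):
        # Every object enters the fast cache; selected objects are also
        # written to the repository so they survive cache eviction.
        self.cache[name] = data
        self.cache.move_to_end(name)
        if len(self.cache) > self.cache_capacity:
            self.cache.popitem(last=False)  # evict least recently used
        if persist:
            self.repository[name] = data

    def lookup(self, name):
        if name in self.cache:              # hit in the fast tier
            self.cache.move_to_end(name)
            self.hits += 1
            return self.cache[name]
        if name in self.repository:         # hit in the persistent tier
            self.hits += 1
            data = self.repository[name]
            self.insert(name, data)         # promote back into the fast cache
            return data
        self.misses += 1                    # miss: the request goes upstream
        return None
```

With such a layout the effective hit rate counts both tiers, which is how persistent storage raises content availability beyond what a size-limited line-speed cache alone can offer.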
Abstract:
Information-centric networking (ICN) enables communication in isolated islands, where fixed infrastructure is not available, but also supports seamless communication once the infrastructure is up and running again. In disaster scenarios, when the fixed infrastructure is broken, content discovery algorithms are required to learn what content is locally available. For example, if preferred content is not available, users may also be satisfied with second-best options. In this paper, we describe a new content discovery algorithm and compare it to existing Depth-first and Breadth-first traversal algorithms. Evaluations in mobile scenarios with up to 100 nodes show that it achieves better performance than existing algorithms, i.e., faster discovery times and lower traffic overhead.
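For context, the Breadth-first baseline that the new discovery algorithm is compared against can be sketched as a plain graph traversal: a node queries its neighbourhood level by level until some node reports content matching the requested prefix. The sketch below is a generic illustration with assumed names and data structures, not the paper's implementation.

```python
from collections import deque

def discover_content(start_node, neighbours, local_content, prefix):
    """Breadth-first content discovery sketch.

    neighbours[n]    -> iterable of nodes reachable from n
    local_content[n] -> collection of content names stored at n
    Returns (node, matches, hop count, messages sent)."""
    visited = {start_node}
    queue = deque([(start_node, 0)])            # (node, hop count)
    messages_sent = 0

    while queue:
        node, hops = queue.popleft()
        matches = [c for c in local_content.get(node, ()) if c.startswith(prefix)]
        if matches:
            return node, matches, hops, messages_sent
        for nxt in neighbours.get(node, ()):
            if nxt not in visited:
                visited.add(nxt)
                messages_sent += 1              # one discovery message per new neighbour
                queue.append((nxt, hops + 1))
    return None, [], None, messages_sent        # nothing found in this connected island
```

A Depth-first variant differs only in using a stack instead of the queue; the proposed algorithm aims to beat both on discovery time and on the number of messages sent.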
Abstract:
Two of the main issues in wireless industrial Internet of Things applications are interoperability and network lifetime. In this work we extend a semantic interoperability platform and introduce an application-layer sleepy nodes protocol that can leverage information stored in semantic repositories. We propose an integration platform for managing the sleep states and an application-layer protocol based on the Constrained Application Protocol (CoAP). We evaluate our system on windowing-based task allocation strategies, aiming for lower overall energy consumption and thus longer network lifetime.
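The basic mechanism such a sleepy-nodes integration platform has to provide can be illustrated with a small proxy that buffers requests addressed to a sleeping node and flushes them when the node announces it is awake. The sketch below uses hypothetical names and plain Python callables in place of a real CoAP stack; it is not the authors' implementation.

```python
from collections import defaultdict, deque

class SleepyNodeProxy:
    """Minimal sketch of an application-layer proxy for sleepy IoT nodes:
    requests for a sleeping node are queued and delivered at its next wake-up."""

    def __init__(self):
        self.awake = {}                            # node_id -> bool
        self.pending = defaultdict(deque)          # node_id -> queued requests

    def register(self, node_id, awake=True):
        self.awake[node_id] = awake

    def handle_request(self, node_id, request, send):
        # `send` is a callable that actually transmits to the node.
        if self.awake.get(node_id, False):
            send(node_id, request)                 # node is listening, deliver now
        else:
            self.pending[node_id].append(request)  # buffer until the node wakes up

    def node_woke_up(self, node_id, send):
        self.awake[node_id] = True
        while self.pending[node_id]:               # flush everything buffered
            send(node_id, self.pending[node_id].popleft())

    def node_going_to_sleep(self, node_id):
        self.awake[node_id] = False                # radio off until the next duty cycle
```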
Abstract:
The rise of the Internet of Things (IoT) and its associated technologies has enabled applications in many domains, including forest ecosystem monitoring, disaster and emergency management, home automation, industrial automation, smart-city services, building energy efficiency, intrusion detection, and body-signal monitoring, among many others. The drawback of an IoT network is that, once deployed, it is left unattended: it is subject, among other things, to changing weather conditions and is exposed to natural disasters, software or hardware failures, and malicious third-party attacks, so such networks can be considered failure-prone. The main requirement for the nodes that make up an IoT network is that they must be able to keep operating despite errors in the system itself. The ability of the network to recover from unexpected internal and external failures is what is currently known as the network's "resilience". Therefore, when designing and deploying IoT applications or services, the network is expected to be fault-tolerant, self-configuring, self-adaptive, and self-optimizing with respect to new conditions that may arise at runtime. This leads to the analysis of a fundamental problem in the study of IoT networks: the "connectivity" problem. A network is said to be connected if every pair of nodes in the network can find at least one communication path between them. However, the network can become disconnected for several reasons, such as battery depletion or the destruction of a node. It therefore becomes necessary to manage the network's resilience in order to maintain connectivity among its nodes, so that each IoT node can provide continuous services to other nodes, other networks, or other services and applications. In this context, the main objective of this doctoral thesis is the study of the IoT connectivity problem, and more specifically the development of models for the analysis and management of resilience, put into practice through wireless sensor networks (WSNs), in order to improve the fault tolerance of the nodes that make up the network. This challenge is addressed from two distinct angles: on the one hand, unlike other kinds of conventional device networks, the nodes in an IoT network are prone to losing connectivity because they are deployed in isolated environments or under extreme conditions; on the other hand, the nodes are typically constrained in terms of processing, storage, and battery capacity, among others, which requires that the design of their resilience management be lightweight, distributed, and energy-efficient. Accordingly, this thesis develops self-adaptive techniques that allow an IoT network, from the perspective of topology control, to be resilient to node failures. To this end, techniques based on fuzzy logic and on proportional-integral-derivative (PID) control are used to improve the network's connectivity, taking into account that energy consumption must be preserved as much as possible.
Likewise, the control algorithm is required to be distributed, since centralized approaches are generally not feasible for large-scale deployments. This thesis involves several challenges concerning network connectivity, including: the creation and analysis of mathematical models describing the network, the proposal of a self-adaptive control system that responds to node failures, the optimization of the control-system parameters, validation through an implementation following a software-engineering approach, and finally evaluation in a real application. Addressing these challenges, this work justifies, through mathematical analysis, the relation between the "node degree" (defined as the number of nodes in a node's neighborhood) and network connectivity, and proves the effectiveness of several types of controllers that adjust the transmission power of the network nodes in response to failures, taking energy consumption into account as part of the control objectives. This work also evaluates the approach and compares it with other representative algorithms, showing that it tolerates more random node failures and is more energy-efficient. Additionally, the use of bio-inspired algorithms has enabled the optimization of the control parameters for large dynamic networks. Regarding the implementation in a real system, the proposals of this thesis have been integrated into an OSGi ("Open Services Gateway Initiative") programming model in order to create a self-adaptive middleware that improves resilience management, especially the runtime reconfiguration of software components when a failure occurs. In conclusion, the results of this doctoral thesis contribute to the theoretical research and practical application of resilient topology control in large distributed networks, and the presented designs and algorithms can be viewed as novel trials of these techniques for the coming era of the IoT. The main contributions of this thesis can be summarized as follows: (1) Properties related to network connectivity have been analyzed mathematically, studying, for example, how the probability of network connectivity varies with the communication range of the nodes, and what is the minimum number of nodes that must be added to a disconnected system to reconnect it. (2) Fuzzy-logic control systems have been proposed to reach the desired node degree while maintaining full network connectivity. Different types of fuzzy-logic controllers have been evaluated by simulation, and the results have been compared with other representative algorithms. (3) The two-loop control system, a simpler and more applicable approach, has been investigated in greater depth, and its control parameters have been optimized using heuristic algorithms such as Cross Entropy (CE), Particle Swarm Optimization (PSO), and Differential Evolution (DE).
(4) Most of the designs presented here have been evaluated by simulation; in addition, part of the work has been implemented and validated in a real application combining self-adaptive software techniques, such as those of a service-oriented architecture (SOA). ABSTRACT The advent of the Internet of Things (IoT) enables a tremendous number of applications, such as forest monitoring, disaster management, home automation, factory automation, smart city, etc. However, various kinds of unexpected disturbances may cause node failure in the IoT, for example battery depletion, software/hardware malfunction and malicious attacks. So, the IoT can be considered prone to failure. The ability of the network to recover from unexpected internal and external failures is known as the "resilience" of the network. Resilience usually serves as an important non-functional requirement when designing the IoT, and can further be broken down into "self-*" properties, such as self-adaptation, self-healing, self-configuration, self-optimization, etc. One of the consequences that node failure brings to the IoT is that some nodes may be disconnected from others, such that they are not capable of providing continuous services to other nodes, networks, and applications. In this sense, the main objective of this dissertation focuses on the IoT connectivity problem. A network is regarded as connected if any pair of different nodes can communicate with each other either directly or via a limited number of intermediate nodes. More specifically, this thesis focuses on the development of models for the analysis and management of resilience, implemented through wireless sensor networks (WSNs), which is a challenging task. On the one hand, unlike other conventional network devices, nodes in the IoT are more likely to be disconnected from each other due to their deployment in hostile or isolated environments. On the other hand, nodes are resource-constrained in terms of limited processing capability, storage and battery capacity, which requires that the design of resilience management for the IoT be lightweight, distributed and energy-efficient. In this context, the thesis presents self-adaptive techniques for the IoT, with the aim of making the IoT resilient against node failures from the network topology control point of view. Fuzzy-logic and proportional-integral-derivative (PID) control techniques are leveraged to improve the network connectivity of the IoT in response to node failures, while taking into consideration that energy consumption must be preserved as much as possible. The control algorithm itself is designed to be distributed, because centralized approaches are usually not feasible in large-scale IoT deployments. The thesis involves various aspects concerning network connectivity, including: creation and analysis of mathematical models describing the network, proposing self-adaptive control systems in response to node failures, control-system parameter optimization, implementation using a software engineering approach, and evaluation in a real application. This thesis also justifies the relation between the "node degree" (the number of neighbors of a node) and network connectivity through mathematical analysis, and proves the effectiveness of various types of controllers that can adjust the transmission power of IoT nodes in response to node failures.
The controllers also take the energy consumption into consideration as part of the control goals. The evaluation is performed and comparisons are made with other representative algorithms. The simulation results show that the proposals in this thesis tolerate more random node failures and save more energy than those representative algorithms. Additionally, the simulations demonstrate that the use of bio-inspired algorithms allows the parameters of the controller to be optimized. With respect to the implementation in a real system, the OSGi (Open Services Gateway Initiative) programming model is integrated with the proposals in order to create a self-adaptive middleware, especially for reconfiguring software components at runtime when failures occur. The outcomes of this thesis contribute to the theoretical research and practical application of resilient topology control for large distributed networks. The presented controller designs and optimization algorithms can be viewed as novel trials of control and optimization techniques for the coming era of the IoT. The contributions of this thesis can be summarized as follows: (1) Mathematically, the fault-tolerance probability of a large-scale stochastic network is analyzed. It is studied how the probability of network connectivity depends on the communication range of the nodes, and what the minimum number of neighbors to be added for network re-connection is. (2) A fuzzy-logic control system is proposed, which obtains the desired node degree and in turn maintains network connectivity when the network is subject to node failures. Different types of fuzzy-logic controllers are evaluated by simulations, and the results demonstrate the improvement in fault-tolerance capability compared to some other representative algorithms. (3) The two-loop control system, a simpler but more applicable approach, is further investigated, and its control parameters are optimized using heuristic algorithms such as Cross Entropy (CE), Particle Swarm Optimization (PSO), and Differential Evolution (DE). (4) Most of the designs are evaluated by means of simulations, but part of the proposals are implemented and tested in a real-world application by combining the self-adaptive software techniques and the control algorithms presented in this thesis.
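The control loop described in both versions of the abstract (adjust each node's transmission power so that its observed node degree tracks a desired set-point) can be sketched as a per-node PID controller. The Python sketch below is a generic illustration with assumed names, gains, and power limits, not the thesis implementation; because every node runs it locally on its own neighbourhood count, the scheme stays distributed.

```python
class DegreePIDController:
    """Per-node PID sketch: drive the observed node degree towards a
    target degree by adjusting the node's transmission power (in dBm)."""

    def __init__(self, target_degree, kp=0.8, ki=0.1, kd=0.2,
                 p_min=-10.0, p_max=10.0):
        self.target = target_degree
        self.kp, self.ki, self.kd = kp, ki, kd
        self.p_min, self.p_max = p_min, p_max   # hardware power limits
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, observed_degree, current_power):
        # Error > 0 means too few neighbours: raise power (costing energy);
        # error < 0 means more neighbours than needed: lower power to save energy.
        error = self.target - observed_degree
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error

        adjustment = self.kp * error + self.ki * self.integral + self.kd * derivative
        return min(self.p_max, max(self.p_min, current_power + adjustment))


# One control step on a single node (hypothetical values):
controller = DegreePIDController(target_degree=6)
power = controller.update(observed_degree=3, current_power=0.0)
```

Raising power when the degree is too low restores connectivity after failures, while lowering it when the degree exceeds the target saves energy, which matches the stated control objectives.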
Abstract:
This report summarizes the current state of the art in cooperative vehicle-highway automation systems in Europe and Asia, based on a series of meetings, demonstrations, and site visits, combined with the results of a literature review. This review covers systems that provide drivers with a range of automation capabilities, from driver assistance to fully automated driving, with an emphasis on cooperative systems that involve active exchanges of information between vehicles and the roadside and among separate vehicles. The trends in development and deployment of these systems are examined by country, and the similarities and differences relative to the U.S. situation are noted, leading toward recommendations for future U.S. action. The Literature Review on Recent International Activity in Cooperative Vehicle-Highway Automation Systems is published separately as FHWA-HRT-13-025.
Abstract:
One of the obstacles to improved security of the Internet is ad hoc development of technologies with different design goals and different security goals. This paper proposes reconceptualizing the Internet as a secure distributed system, focusing specifically on the application layer. The notion is to redesign specific functionality, based on principles discovered in research on distributed systems in the decades since the initial development of the Internet. Because of the problems in retrofitting new technology across millions of clients and servers, any options with prospects of success must support backward compatibility. This paper outlines a possible new architecture for internet-based mail which would replace existing protocols by a more secure framework. To maintain backward compatibility, initial implementation could offer a web browser-based front end but the longer-term approach would be to implement the system using appropriate models of replication. (C) 2005 Elsevier Ltd. All rights reserved.
Abstract:
The internet's potential impact on supply chain operations is often approached in the literature in a quite generic way, due to the complex nature of supply chains and the different levels of operations integration. Drawing on existing research, this paper proposes an overall framework of supply chain integration and then attempts to categorise the internet's role in supply chain activities, providing insights from various sectors. The purpose of this paper is to describe and present the alternative ways in which the internet impacts the integration of supply chain operations, by comparing four different sectors: the automotive, computer, food, and grocery sectors. The paper concludes that in the food, grocery, and computer sectors the internet's impact on supply chain operations, particularly forward integration, has been quite limited, while it has been significant in the backward integration of the automotive sector. Copyright © 2007 Inderscience Enterprises Ltd.
Abstract:
Recent paradigms in wireless communication architectures describe environments where nodes exhibit highly dynamic behavior (e.g., User Centric Networks). In such environments, routing is still performed based on the regular packet-switched store-and-forward behavior. Although sufficient to compute at least an adequate path between a source and a destination, such routing behavior cannot adequately sustain the highly nomadic lifestyle that Internet users experience today. This thesis aims to analyse the impact of node mobility on routing scenarios. It also aims at developing forwarding concepts that help message forwarding across graphs where nodes exhibit human mobility patterns, as is the case in most user-centric wireless networks today. The first part of the work involved the analysis of the impact of mobility on routing, and we found that node mobility can significantly affect routing performance, with its impact depending on link length, distance, and the mobility patterns of the nodes. The study of current mobility parameters showed that they only partially capture mobility. A routing protocol's robustness to node mobility depends on the sensitivity of its routing metric to node mobility. As such, mobility-aware routing metrics were devised to increase routing robustness to node mobility. The two categories of routing metrics proposed are time-based and spatial correlation-based. For the validation of the metrics, several mobility models were used, including ones that mimic human mobility patterns. The metrics were implemented in the Network Simulator tool with two widely used multi-hop routing protocols, Optimized Link State Routing (OLSR) and Ad hoc On-Demand Distance Vector (AODV). Using the proposed metrics, we reduced the path re-computation frequency compared to the benchmark metric, which means that more stable nodes were used to route data. The time-based routing metrics generally performed well across the different node mobility scenarios used. We also noted a variation in the performance of the metrics, including the benchmark metric, under different mobility models, due to differences in the rules governing node mobility in each model.
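As an illustration of what a time-based, mobility-aware routing metric can look like, the sketch below scores each neighbour by how long its link has been continuously alive, so that path computation prefers long-lived links. The names and weighting are assumptions for illustration only, not the metrics defined in the thesis.

```python
import time

class LinkStabilityTracker:
    """Time-based link metric sketch: score each neighbour by the age of
    the link, so that routing prefers neighbours that have stayed in range."""

    def __init__(self, max_age=300.0):
        self.first_heard = {}       # neighbour -> timestamp of first HELLO
        self.last_heard = {}        # neighbour -> timestamp of last HELLO
        self.max_age = max_age      # age (s) at which a link counts as fully stable

    def heard_from(self, neighbour, now=None):
        now = time.time() if now is None else now
        self.first_heard.setdefault(neighbour, now)
        self.last_heard[neighbour] = now

    def stability(self, neighbour, now=None, timeout=6.0):
        """Return a score in [0, 1]; 0 if the neighbour has timed out."""
        now = time.time() if now is None else now
        if neighbour not in self.last_heard or now - self.last_heard[neighbour] > timeout:
            return 0.0
        age = now - self.first_heard[neighbour]
        return min(age / self.max_age, 1.0)

    def link_cost(self, neighbour, now=None):
        # Convert stability into an additive routing cost (lower is better),
        # so stable links look "shorter" to OLSR/AODV-style path selection.
        return 1.0 + (1.0 - self.stability(neighbour, now))
```

Feeding such a cost into OLSR or AODV path selection favours stable neighbours, which is the effect behind the reported reduction in path re-computation frequency.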
Abstract:
Agricultural management practices that promote net carbon (C) accumulation in the soil have been considered as an important potential mitigation option to combat global warming. The change in the sugarcane harvesting system, to one which incorporates C into the soil from crop residues, is the focus of this work. The main objective was to assess and discuss the changes in soil organic C stocks caused by the conversion of burnt to unburnt sugarcane harvesting systems in Brazil, when considering the main soils and climates associated with this crop. For this purpose, a dataset was obtained from a literature review of soils under sugarcane in Brazil. Although not necessarily from experimental studies, only paired comparisons were examined, and for each site the dominant soil type, topography and climate were similar. The results show a mean annual C accumulation rate of 1.5 Mg ha-1 year-1 for the surface to 30-cm depth (0.73 and 2.04 Mg ha-1 year-1 for sandy and clay soils, respectively) caused by the conversion from a burnt to an unburnt sugarcane harvesting system. The findings suggest that soil should be included in future studies related to life cycle assessment and C footprint of Brazilian sugarcane ethanol.
Abstract:
Outgassing of carbon dioxide (CO2) from rivers and streams to the atmosphere is a major loss term in the coupled terrestrial-aquatic carbon cycle of major low-gradient river systems (the term "river system" encompasses the rivers and streams of all sizes that compose the drainage network in a river basin). However, the magnitude and controls on this important carbon flux are not well quantified. We measured carbon dioxide flux rates (FCO2), gas transfer velocity (k), and partial pressures (pCO2) in rivers and streams of the Amazon and Mekong river systems in South America and Southeast Asia, respectively. FCO2 and k values were significantly higher in small rivers and streams (channels <100 m wide) than in large rivers (channels >100 m wide). Small rivers and streams also had substantially higher variability in k values than large rivers. Observed FCO2 and k values suggest that previous estimates of basinwide CO2 evasion from tropical rivers and wetlands have been conservative and are likely to be revised upward substantially in the future. Data from the present study combined with data compiled from the literature collectively suggest that the physical control of gas exchange velocities and fluxes in low-gradient river systems makes a transition from the dominance of wind control at the largest spatial scales (in estuaries and river mainstems) toward increasing importance of water current velocity and depth at progressively smaller channel dimensions upstream. These results highlight the importance of incorporating scale-appropriate k values into basinwide models of whole ecosystem carbon balance.
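For readers unfamiliar with how FCO2, k, and pCO2 relate, the standard thin-boundary-layer formulation of air-water gas exchange (a textbook relation, not a result of this study) is:

```latex
% Air-water CO2 flux from the gas transfer velocity and the partial-pressure difference
F_{\mathrm{CO_2}} \;=\; k \, K_0 \left( p\mathrm{CO}_{2,\mathrm{water}} - p\mathrm{CO}_{2,\mathrm{air}} \right)
% k   : gas transfer velocity
% K_0 : temperature-dependent CO2 solubility
% The flux is directed to the atmosphere when the surface water is supersaturated
% (pCO2,water > pCO2,air), which is the outgassing case measured here.
```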
Abstract:
Biofuels are both a promising solution to global warming mitigation and a potential contributor to the problem. Several life cycle assessments of bioethanol have been conducted to address these questions. We performed a synthesis of the available data on Brazilian ethanol production focusing on greenhouse gas (GHG) emissions and carbon (C) sinks in the agricultural and industrial phases. Emissions of carbon dioxide (CO2) from fossil fuels, methane (CH4) and nitrous oxide (N2O) from sources commonly included in C footprints, such as fossil fuel usage, biomass burning, nitrogen fertilizer application, liming and litter decomposition were accounted for. In addition, black carbon (BC) emissions from burning biomass and soil C sequestration were included in the balance. Most of the annual emissions per hectare are in the agricultural phase, both in the burned system (2209 out of a total of 2398 kg Ceq) and in the unburned system (559 out of 748 kg Ceq). Although nitrogen fertilizer emissions are large, 111 kg Ceq ha-1 yr-1, the largest single source of emissions is biomass burning in the manual harvest system, with a large amount of both GHG (196 kg Ceq ha-1 yr-1) and BC (1536 kg Ceq ha-1 yr-1). Besides avoiding emissions from biomass burning, harvesting sugarcane mechanically without burning tends to increase soil C stocks, providing a C sink of 1500 kg C ha-1 yr-1 in the 30 cm layer. The data show a C output:input ratio of 1.4 for ethanol produced under the conventionally burned and manual harvest compared with 6.5 for the mechanized harvest without burning, signifying the importance of conservation agricultural systems in bioethanol feedstock production.
Abstract:
In this work, a criterion considering the topological instability (λ) and the differences in the electronegativity of the constituent elements (Δe) was applied to the Al-La and Al-Ni-La systems in order to predict the best glass-forming compositions. The results were compared with literature data and with our own experimental data for the Al-La-Ni system. The alloy described in the literature as the best glass former in the Al-La system is located near the point of local maximum of the λ·Δe criterion. A good agreement was found between the predictions of the λ·Δe criterion and literature data in the Al-La-Ni system, with the region of the best glass-forming ability (GFA) and largest supercooled liquid region (ΔTx) coinciding with the best compositional region for amorphization indicated by the λ·Δe criterion. Four new glassy compositions were found in the Al-La-Ni system, with the best predicted composition presenting the best glass-forming ability observed so far for this system. Although the λ·Δe criterion needs further refinement to completely describe the glass-forming ability in the Al-La and Al-La-Ni systems, the results demonstrate that this criterion is a good tool for predicting new glass-forming compositions. (C) 2010 Elsevier B. V. All rights reserved.
Abstract:
Rutin, one of the major flavonoids found in an assortment of plants, has been reported to act as a sun protection factor booster with high anti-UVA defense, as well as an antioxidant, antiaging, and anticellulite agent, through improvement of the cutaneous microcirculation. This work aimed to evaluate the in vitro release of rutin, in vertical diffusion cells, from semisolid systems containing urea, isopropanol, and propylene glycol, alone or in association, according to a two-level factorial design with center point. Urea (alone and in association with isopropanol and propylene glycol) and isopropanol (alone and in association with propylene glycol) significantly and negatively influenced rutin release in several parameters: flux (µg/cm2.h), apparent permeability coefficient (cm/h), amount of rutin released (µg/cm2), and release enhancement factor. According to the results, propylene glycol at 5.0% (wt/wt) was statistically favorable for promoting rutin release from this semisolid system, with flux = 105.12 ± 8.59 µg/cm2.h, apparent permeability coefficient = 7.01 ± 0.572 cm/h, amount of rutin released = 648.80 ± 53.01 µg/cm2, and release enhancement factor = 1.21 ± 0.07.
Abstract:
We use the finite element method to simulate the rock alteration and metamorphic process in hydrothermal systems. In particular, we consider fluid-rock interaction problems in pore-fluid saturated porous rocks. Since fluid-rock interaction takes place at the contact interface between the pore fluid and solid minerals, it is governed by the chemical reaction, which, from the geochemical point of view, usually proceeds very slowly at this contact interface. Because the rate of the chemical reaction is slow relative to the velocity of pore-fluid flow in the hydrothermal system considered, there exists a retardation zone in which the conventional static theory in geochemistry does not hold true. Since this issue is often overlooked by some purely numerical modellers, it is emphasized in this paper. The related results from a typical rock alteration and metamorphic problem in a hydrothermal system show not only the detailed rock alteration and metamorphic process, but also the size of the retardation zone in the hydrothermal system. Copyright (C) 2001 John Wiley & Sons, Ltd.
Abstract:
We conduct a theoretical analysis of steady-state heat transfer problems through mid-crustal vertical cracks with upward throughflow in hydrothermal systems. In particular, we derive analytical solutions for both the far field and near field of the system. In order to investigate the contribution of the forced advection to the total temperature of the system, two concepts, namely the critical Peclet number and the critical permeability of the system, have been presented and discussed in this paper. The analytical solution for the far field of the system indicates that if the pore-fluid pressure gradient in the crust is lithostatic, the critical permeability of the system can be used to determine whether or not the contribution of the forced advection to the total temperature of the system is negligible. Otherwise, the critical Peclet number should be used. For a crust of moderate thickness, the critical permeability is of the order of magnitude of 10^-20 m^2, under which heat conduction is the overwhelming mechanism to transfer heat energy, even though the pore-fluid pressure gradient in the crust is lithostatic. Furthermore, the lower bound analytical solution for the near field of the system demonstrates that the permeable vertical cracks in the middle crust can efficiently transfer heat energy from the lower crust to the upper crust of the Earth. Copyright (C) 2002 John Wiley & Sons, Ltd.
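For orientation, the Peclet number referred to here compares advective to conductive heat transport; a commonly used form for a fluid-saturated porous layer (the paper's exact crack-geometry definition may differ) is:

```latex
% Peclet number for heat transport in a fluid-saturated porous medium
\mathrm{Pe} \;=\; \frac{\rho_f \, c_{pf} \, q \, L}{\lambda_m}
% \rho_f    : pore-fluid density
% c_{pf}    : pore-fluid specific heat
% q         : Darcy (pore-fluid) velocity
% L         : characteristic length, e.g. crustal layer thickness
% \lambda_m : thermal conductivity of the fluid-saturated rock
% Pe << 1 means conduction dominates; Pe >> 1 means forced advection dominates,
% which is the sense in which a "critical" Peclet number is used here.
```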