876 results for Internet of Things (IoT)


Relevance:

100.00%

Publisher:

Abstract:

With the continuous development in the fields of sensors, advanced data processing and communications, road-transport-oriented intelligent applications and services have reached significant maturity and complexity. Cooperative ITS services, based on the idea of sharing accurate information among road entities, are currently being tested on a large scale by different initiatives. The field operational test (FOTsis) project contributes to this deployment environment with services that involve a significant number of entities outside the vehicle. This made it necessary to specify an architecture which, based on the ISO ITS station reference architecture for communications, could support the requirements of the services proposed in the project. During the project, internal implementation tests and external interoperability tests have validated the proposed architecture. At the same time, these tests have revealed areas in which the FOTsis architecture could be extended, mainly to take full advantage of all the emerging and foreseeable data sources that may be relevant in the road environment. In this study, the authors outline an approach that, based on the current cooperative ITS architecture and the Smart City and Internet of Things (IoT) architectures, can provide a common convergence platform to maximise the information available for ITS purposes.

Relevance:

100.00%

Publisher:

Abstract:

The solutions to the new challenges that societies face nowadays involve providing smarter everyday systems. To achieve this, technology has to evolve and leverage automatic interactions among physical systems, with less human intervention. Technological paradigms like the Internet of Things (IoT) and Cyber-Physical Systems (CPS) provide reference models, architectures, approaches and tools intended to support cross-domain solutions. CPS-based solutions will thus be applied in different application domains, such as e-Health, Smart Grid and Smart Transportation, to assure the expected response from complex systems that rely on the smooth interaction and cooperation of diverse networked physical systems. Wireless Sensor Networks (WSNs) are a well-known wireless technology that forms part of large CPSs. A WSN aims at monitoring a physical system or object (e.g., the environmental conditions of a cargo container) and relaying data to the targeted processing element. Reliable communication and restrained energy consumption are expected features of a WSN. This paper shows the results obtained in a real WSN deployment, based on SunSPOT nodes, which runs a fuzzy-based control strategy to improve energy consumption while keeping communication reliability and computational resource usage within bounds.
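
The abstract does not give the controller's rule base; as a rough illustration of the kind of fuzzy trade-off described (energy versus communication reliability), the following minimal Python sketch maps assumed link-quality and battery inputs to a transmit duty cycle. All membership functions and rules are assumptions, not the deployed controller.

```python
# Minimal sketch of a Mamdani-style fuzzy rule that trades transmit duty cycle
# against link quality and battery level. The membership functions and rules
# below are illustrative assumptions; the paper does not publish its rule base.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def duty_cycle(link_quality, battery):
    """Both inputs normalised to [0, 1]; returns a duty cycle in [0, 1]."""
    # Input fuzzification (assumed partitions).
    lq_poor, lq_good = tri(link_quality, -0.5, 0.0, 0.6), tri(link_quality, 0.4, 1.0, 1.5)
    bat_low, bat_high = tri(battery, -0.5, 0.0, 0.6), tri(battery, 0.4, 1.0, 1.5)

    # Rule base (assumed): poor link or healthy battery -> transmit more often;
    # good link and low battery -> back off to save energy.
    rules = [
        (max(lq_poor, bat_high), 0.8),   # consequent centroid: high duty cycle
        (min(lq_good, bat_low), 0.2),    # consequent centroid: low duty cycle
    ]
    num = sum(w * c for w, c in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5     # neutral fallback

print(duty_cycle(link_quality=0.3, battery=0.9))  # poor link, full battery -> 0.8
```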

Relevance:

100.00%

Publisher:

Abstract:

The energy consumption of Wireless Sensor Networks (WSNs) is a long-standing problem that has been addressed from different levels and perspectives, since it not only affects the survival of the network itself: the growing use of smart devices and the new Internet of Things paradigm also give WSNs an ever larger influence on the energy footprint. The upward trend in the use of these networks adds a further problem, spectrum saturation. WSNs usually operate in unlicensed bands such as the Industrial, Scientific and Medical (ISM) bands, which are shared with other kinds of networks, such as Wi-Fi or Bluetooth, whose use has grown exponentially in recent years. To address this problem, the Cognitive Radio (CR) paradigm has emerged, a technology that enables opportunistic access to the spectrum. Introducing cognitive capabilities into WSNs not only optimises their spectral efficiency but also has a positive impact on parameters such as quality of service, security and energy consumption. However, this new paradigm also raises challenges related to energy consumption. Specifically, spectrum sensing, collaboration among nodes (which requires additional communication) and changes to the transmission parameters increase consumption with respect to classical WSNs. Since research on energy consumption has already been addressed extensively, it being one of the main limitations of these networks, we assume that new strategies must arise from the new capabilities added by cognitive networks. Furthermore, when designing optimisation strategies for CWSNs, the resource limitations of these networks in terms of node memory, computation and energy consumption must be kept firmly in mind. In this PhD thesis we propose two strategies for reducing energy consumption in CWSNs, built on three fundamental pillars. The first is the cognitive capabilities added to WSNs, which make it possible to adapt the transmission parameters to the available spectrum. The second is collaboration, as an intrinsic characteristic of CWSNs. Finally, the third pillar of this work is game theory as a decision-support algorithm, widely used in WSNs because of its simplicity. As a first contribution of the thesis, a complete analysis of the possibilities introduced by cognitive radio for reducing consumption in WSNs is presented. Based on the conclusions drawn from this analysis, the hypotheses of this thesis, concerning the validity of using cognitive capabilities as a tool for reducing consumption in CWSNs, are formulated. Once the hypotheses are stated, we develop the main contributions of the thesis: the two consumption-reduction strategies based on game theory and CR. The first one uses a non-cooperative game played between pairs of players. In the second strategy, although the game remains non-cooperative, the concept of collaboration is added. For each strategy we present the game model, the formal analysis of equilibria and optima, and the description of the complete strategy, including the interaction between nodes.
In order to test the strategies through simulation and implementation on real devices, we have developed a test framework composed of a cognitive simulator and a testbed of cognitive nodes, developed at the B105 Lab, able to communicate in three ISM bands. This framework is another contribution of the thesis and will enable further research in the CWSN area. Finally, the results obtained from testing the developed strategies are presented and discussed. The first strategy provides energy savings of over 65% compared with a WSN without cognitive capabilities, and of around 25% compared with a cognitive strategy based on periodic spectrum sensing and channel switching according to a fixed noise threshold. The algorithm behaves similarly regardless of the noise level, provided that the noise is spatially uniform. Despite its simplicity, this strategy guarantees optimal behaviour in terms of energy consumption thanks to the use of game theory in the design of the nodes' behaviour. The collaborative strategy improves on the previous one in terms of protection against noise in more complex noise scenarios, where it provides a 50% improvement over the previous strategy.

ABSTRACT Energy consumption in Wireless Sensor Networks (WSNs) is a well-known, long-standing problem that has been addressed from different areas and on many levels. However, this problem should not be approached only from the point of view of network survival. A major portion of communication traffic has migrated to mobile networks and systems. The increased use of smart devices and the introduction of the Internet of Things (IoT) give WSNs a great influence on the carbon footprint. Thus, optimizing the energy consumption of wireless networks could reduce their environmental impact considerably. In recent years, another problem has been added to the equation: spectrum saturation. Wireless Sensor Networks usually operate in unlicensed spectrum bands such as the Industrial, Scientific, and Medical (ISM) bands, shared with other networks (mainly Wi-Fi and Bluetooth). To address the efficient spectrum utilization problem, Cognitive Radio (CR) has emerged as the key technology that enables opportunistic access to the spectrum. Therefore, the introduction of cognitive capabilities to WSNs allows optimizing their spectral occupation. Cognitive Wireless Sensor Networks (CWSNs) not only increase the reliability of communications, but they also have a positive impact on parameters such as Quality of Service (QoS), network security, or energy consumption. These new opportunities introduced by CWSNs unveil a wide field in the energy consumption research area. However, this also implies some challenges. Specifically, the spectrum sensing stage, collaboration among devices (which requires extra communication), and changes in the transmission parameters increase the total energy consumption of the network. When designing CWSN optimization strategies, the fact that WSN nodes are very limited in terms of memory, computational power, or energy consumption has to be considered. Thus, light strategies that require low computing capacity must be found. Since the field of energy conservation in WSNs has been widely explored, we assume that new strategies could emerge from the new opportunities presented by cognitive networks.
In this PhD thesis, we present two strategies for energy consumption reduction in CWSNs supported by three main pillars. The first pillar is that the cognitive capabilities added to the WSN provide the ability to change the transmission parameters according to the spectrum. The second pillar is that the ability to collaborate is a basic characteristic of CWSNs. Finally, the third pillar of this work is game theory as a decision-making algorithm, which has been widely used in WSNs due to the lightness and simplicity that make it suitable for CWSNs. For the development of these strategies, a complete analysis of the possibilities opened up by incorporating cognitive abilities into the network is first carried out. Once this analysis has been performed, we state the hypotheses of this thesis related to the use of cognitive capabilities as a useful tool to reduce energy consumption in CWSNs. We then present the main contributions of this thesis: the two strategies for energy consumption reduction based on game theory and cognitive capabilities. The first one is based on a non-cooperative game played between two players in a simple and selfish way. In the second strategy, the concept of collaboration is introduced: although the game is still non-cooperative, decisions are taken through collaboration. For each strategy, we present the game model, the formal analysis of equilibrium and optimum, and the complete strategy describing the interaction between nodes. In order to test the strategies through simulation and implementation on real devices, we have developed a CWSN framework composed of a CWSN simulator based on Castalia and a testbed of CWSN nodes able to communicate in three different ISM bands. We present and discuss the results derived from the energy optimization strategies. The first strategy brings energy improvement rates of over 65% compared to a WSN without cognitive techniques, and of over 25% compared with sensing strategies that change channel based on a decision threshold. We have also seen that the algorithm behaves similarly even with significant variations in the noise level, as long as the noise is spatially uniform. The collaborative strategy improves on the previous strategy in terms of noise protection when the noise scheme is more complex, where it shows improvement rates of over 50%.
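
As a toy illustration of the pair-wise non-cooperative game described above (the thesis' actual cost model and payoffs are not reproduced here), the following Python sketch builds a "stay on the noisy channel" versus "sense and switch" payoff matrix for a pair of nodes and checks for pure-strategy Nash equilibria; all energy figures are assumptions.

```python
# Toy sketch of a two-player non-cooperative "stay vs. sense-and-switch" game
# for a pair of CWSN nodes. The energy figures are illustrative assumptions.
from itertools import product

E_TX, E_RETX, E_SENSE = 1.0, 1.0, 0.4   # assumed energy units per action
NOISE_LOSS = 0.5                        # assumed retransmission probability on the noisy channel

def cost(my_action, peer_action):
    """Expected energy spent by one node of the pair for a single packet exchange."""
    switched = (my_action == "switch") and (peer_action == "switch")  # both must move
    retx = 0.0 if switched else NOISE_LOSS * E_RETX
    sensing = E_SENSE if my_action == "switch" else 0.0
    return E_TX + retx + sensing

actions = ("stay", "switch")
payoff = {(a, b): (-cost(a, b), -cost(b, a)) for a, b in product(actions, actions)}

def is_nash(a, b):
    """Pure-strategy Nash check: no player gains by deviating unilaterally."""
    ok_a = all(payoff[(a, b)][0] >= payoff[(alt, b)][0] for alt in actions)
    ok_b = all(payoff[(a, b)][1] >= payoff[(a, alt)][1] for alt in actions)
    return ok_a and ok_b

for a, b in product(actions, actions):
    print(a, b, payoff[(a, b)], "Nash" if is_nash(a, b) else "")
```

With these assumed costs the toy game has two pure equilibria, (stay, stay) and (switch, switch), the joint switch being the cheaper one; coordinating the pair on that outcome is the kind of behaviour the strategies aim to design in.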

Relevance:

100.00%

Publisher:

Abstract:

The Internet of Things (IoT) architecture refers to a network of digitally interconnected everyday objects. Thanks to the IoT, not only can we store, analyse and exchange information and data with these objects, but they can also interact with one another autonomously. To do so, everyday objects are equipped with actuators and sensors that make it possible to modify their behaviour and to know their state and properties, respectively. IoT management combines all the functionalities needed to coordinate a system with an Internet of Things architecture. Good system management can reduce costs, improve support for unexpected usage problems, correct faults and allow the system to scale by incorporating new modules and functionalities. In this Final Degree Project, an analysis of the IoT aspects related to the management of devices integrated in an Internet of Things architecture is carried out first. The specification and design of a management platform follow. Finally, a use case is developed to validate some elements of the designed platform. Several tests are run to verify correct device management and the proper functioning of the previously established design by means of, among others, the following operations: listing the connected elements, obtaining and/or modifying those elements (their configuration and their state), and producing reports and checking whether the devices are operational or non-operational. This report thus describes how the management of devices integrated in a system with an Internet of Things architecture has been developed, using both Intel Galileo and Arduino platforms.

ABSTRACT. The Architecture of the Internet of Things (IoT) refers to a network of digitally interconnected everyday objects. With IoT, not only can we store, analyze and exchange information and data with objects, but they can also interact among themselves autonomously. To accomplish that, everyday objects are equipped with actuators and sensors that let us act on their behavior and know their state and properties, respectively. Management of IoT combines all the functionalities needed to coordinate a system with an Architecture of the Internet of Things. A good management system can reduce costs, improve support for unexpected usage problems, correct faults and allow the scalability of the system through the addition of new modules and functionalities. In this Final Degree Project, an analysis of the IoT aspects related to the management of devices integrated into the Architecture of the Internet of Things is carried out first. Then, the specification and design of the management platform are produced. Finally, a use case is developed to validate some elements of the designed platform. Several tests are run to check the correct management of the devices and the proper functioning of the previously established design through, among others, the following set of operations: listing the connected elements, obtaining or modifying these elements (their configuration and their state), and reporting and checking which devices are operational or non-operational.
This report thus explains how the management of devices integrated in a system with an Internet of Things (IoT) architecture was carried out, based on the Intel Galileo and Arduino platforms.
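
As an illustration of the management operations listed above (listing connected elements, reading or modifying their configuration and state, and reporting operational status), here is a hypothetical Python sketch; the DeviceRegistry name and its methods are invented for this example and are not the project's API.

```python
# Hypothetical sketch of the management operations described in the abstract.
# The actual platform targets Intel Galileo and Arduino boards over their own
# transport; this in-memory registry only illustrates the operation set.

class DeviceRegistry:
    def __init__(self):
        self._devices = {}   # device_id -> {"config": {...}, "state": {...}, "operational": bool}

    def register(self, device_id, config=None):
        self._devices[device_id] = {"config": dict(config or {}), "state": {}, "operational": True}

    def list_devices(self):
        return sorted(self._devices)

    def get(self, device_id):
        return self._devices[device_id]

    def set_config(self, device_id, **settings):
        self._devices[device_id]["config"].update(settings)

    def report(self):
        """Summarise which devices are operational and which are not."""
        return {
            "operational": [d for d, i in self._devices.items() if i["operational"]],
            "non_operational": [d for d, i in self._devices.items() if not i["operational"]],
        }

registry = DeviceRegistry()
registry.register("galileo-01", config={"sampling_period_s": 5})
registry.register("arduino-07")
registry.set_config("arduino-07", led="off")
print(registry.list_devices(), registry.report())
```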

Relevance:

100.00%

Publisher:

Abstract:

Ubiquitous computing is extending its application from specific environments to everyday use; the Internet of Things (IoT) is the most prominent example of its application and of its intrinsic complexity compared with classical application development. The main characteristic that differentiates ubiquitous computing from other kinds lies in how context information is used. Classical applications either do not use context information at all or use only a small part of it, integrating it in an ad hoc fashion with an application-specific implementation. The motivation for this particular treatment is to be found in the difficulty of sharing context with other applications. In fact, what counts as context information depends on the type of application: for example, for an image editor the image is the information and its metadata, such as the capture time or the camera settings, are the context, whereas for the file system the image together with the camera settings is the information, and the context is represented by metadata external to the file, such as the modification date or the last access date. This means that context information is hard to share, and the presence of a communication middleware that explicitly supports context simplifies the development of ubiquitous computing applications. At the same time, the use of context must not be mandatory; otherwise, compatibility with applications that do not use it would be lost, turning that middleware into a context middleware. SilboPS, our implementation of a content-based publish/subscribe system inspired by SIENA [11, 9], solves this problem by extending the paradigm with two elements: the Context and the Context Function. The context represents the actual contextual information of the message to be sent, or that required by the subscriber to receive notifications, while the context function is evaluated using the publisher's and the subscriber's contexts. This decouples the context-management logic from that of the context function, thereby increasing the flexibility of communication among different applications. In fact, since an empty context is used by default, classical and context-aware applications can use the same SilboPS, thus resolving the incompatibility between the two categories. In any case, the possible semantic incompatibility remains, since it depends on how each application interprets the data and cannot be resolved by an agnostic third party. The IoT environment brings not only context challenges but also scalability challenges. The number of sensors, the volume of data they produce and the number of applications that might be interested in handling that data are continuously growing. Today's answer to that need is cloud computing, but it requires applications to be able not only to scale but to do so elastically [22]. Unfortunately, there is no distributed-system slicing primitive that supports internal state partitioning [33] together with hot swapping, and current cloud systems such as OpenStack or OpenNebula do not directly offer elastic monitoring.
This implies a two-sided problem: how an application can scale elastically, and how to monitor that application to know when to scale it horizontally. E-SilboPS is the elastic version of SilboPS and fits perfectly as a solution to the monitoring problem thanks to the content-based publish/subscribe paradigm; unlike other solutions [5], it scales efficiently to meet the workload without overprovisioning or underprovisioning resources. It is also based on a newly designed algorithm that shows how to add elasticity to an application with different state constraints: stateless, isolated state with external coordination, and shared state with general coordination. Its evaluation shows that remarkable speedups can be achieved, with the network layer being the main limiting factor: indeed, the calculated efficiency (see Figure 5.8) shows how each configuration behaves compared with the adjacent ones. This reveals the current trend of the whole system, in order to know whether the next configuration will offset its cost with the gain it brings in notification throughput. Special attention must be paid to the evaluation of deployments with equal cost, to see which is the best solution for a given workload. As a final analysis, the overhead introduced by the different configurations has been estimated in order to identify the main factor limiting throughput. This helps to determine the sequential part and the base overhead [26] of an optimal deployment compared with a suboptimal one. Indeed, depending on the type of workload, the estimate can be as low as 10% for a local optimum or as high as 60%: the latter occurs when a configuration oversized for the workload is deployed. This estimate of the Karp-Flatt metric is important for the management system because it indicates in which direction (scaling out or in) the deployment must be changed to improve its performance, instead of simply using a scale-out policy.

ABSTRACT The application of pervasive computing is extending from field-specific to everyday use. The Internet of Things (IoT) is the most prominent example of its application and of its intrinsic complexity compared with classical application development. The main characteristic that differentiates pervasive from other forms of computing lies in the use of contextual information. Some classical applications do not use any contextual information whatsoever. Others, on the other hand, use only part of the contextual information, which is integrated in an ad hoc fashion using an application-specific implementation. This information is handled in a one-off manner because of the difficulty of sharing context across applications. As a matter of fact, the application type determines what the contextual information is. For instance, for an imaging editor, the image is the information and its meta-data, like the time of the shot or the camera settings, are the context, whereas, for a file-system application, the image, including its camera settings, is the information and the meta-data external to the file, like the modification date or the last-accessed timestamp, constitute the context. This means that contextual information is hard to share. A communication middleware that supports context decidedly eases application development in pervasive computing.
However, the use of context should not be mandatory; otherwise, the communication middleware would be reduced to a context middleware and would no longer be compatible with non-context-aware applications. SilboPS, our implementation of content-based publish/subscribe inspired by SIENA [11, 9], solves this problem by adding two new elements to the paradigm: the context and the context function. Context represents the actual contextual information specific to the message to be sent, or that required by the subscriber to receive notifications, whereas the context function is evaluated using the publisher's context and the subscriber's context to decide whether the current message and context are useful for the subscriber. In this manner, context-management logic is decoupled from context-function logic, increasing the flexibility of communication and usage across different applications. Since the default context is empty, context-aware and classical applications can use the same SilboPS, resolving the syntactic mismatch between the two categories. In any case, the possible semantic mismatch is still present because it depends on how each application interprets the data, and it cannot be resolved by an agnostic third party. The IoT environment introduces not only context but also scaling challenges. The number of sensors, the volume of data that they produce and the number of applications that could be interested in harvesting such data are growing all the time. Today's response to this need is cloud computing. However, cloud computing applications need to be able to scale elastically [22]. Unfortunately, there is no distributed-system slicing primitive that supports internal state partitioning [33] together with hot swapping, and current cloud systems like OpenStack or OpenNebula do not provide elastic monitoring out of the box. This means there is a two-sided problem: 1) how to scale an application elastically and 2) how to monitor the application and know when it should scale in or out. E-SilboPS is the elastic version of SilboPS. It is the solution for the monitoring problem thanks to its content-based publish/subscribe nature and, unlike other solutions [5], it scales efficiently so as to meet workload demand without overprovisioning or underprovisioning. Additionally, it is based on a newly designed algorithm that shows how to add elasticity to an application with different state constraints: stateless, isolated stateful with external coordination and shared stateful with general coordination. Its evaluation shows that it is able to achieve remarkable speedups, with the network layer being the main limiting factor: the calculated efficiency (see Figure 5.8) shows how each configuration performs with respect to adjacent configurations. This provides insight into the actual trend of the whole system in order to predict whether the next configuration would offset its cost against the resulting gain in notification throughput. Particular attention has been paid to the evaluation of same-cost deployments in order to find out which one is the best for a given workload demand. Finally, the overhead introduced by the different configurations has been estimated to identify the primary limiting factor for throughput. This helps to determine the intrinsic sequential part and base overhead [26] of an optimal versus a suboptimal deployment.
Depending on the type of workload, this can be as low as 10% in a local optimum or as high as 60% when an overprovisioned configuration is deployed for a given workload demand. This Karp-Flatt metric estimation is important for system management because it indicates the direction (scale in or out) in which the deployment has to be changed in order to improve its performance instead of simply using a scale-out policy.
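
As a minimal sketch of the content-plus-context matching idea attributed to SilboPS and E-SilboPS, the following Python fragment filters a notification first by a classic content predicate and then by a context function over the publisher's and subscriber's contexts, with an empty context accepted by default; the data shapes here are assumptions, not the SilboPS API.

```python
# Minimal sketch of content-plus-context matching. The predicate and
# context-function shapes are illustrative assumptions.

def matches(notification, subscription):
    content, pub_ctx = notification["content"], notification.get("context", {})
    # 1) classic content-based filter
    if not subscription["filter"](content):
        return False
    # 2) context function evaluated over publisher and subscriber contexts;
    #    with the default empty contexts it always accepts, so context-unaware
    #    applications keep working unchanged.
    ctx_fn = subscription.get("context_fn", lambda pub, sub: True)
    return ctx_fn(pub_ctx, subscription.get("context", {}))

notification = {"content": {"type": "temperature", "value": 31.0},
                "context": {"room": "lab-2", "battery": 0.8}}

classic_sub = {"filter": lambda c: c["type"] == "temperature"}
ctx_sub = {"filter": lambda c: c["value"] > 30,
           "context": {"rooms": {"lab-1", "lab-2"}},
           "context_fn": lambda pub, sub: pub.get("room") in sub["rooms"]}

print(matches(notification, classic_sub), matches(notification, ctx_sub))  # True True
```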

Relevance:

100.00%

Publisher:

Abstract:

Sensing technology is a key enabler of the Internet of Things (IoT) and can produce huge volumes of data, contributing to the Big Data paradigm. Modelling sensing information is an important and challenging topic that strongly influences the quality of smart city systems. In this paper, the author discusses the relevant technologies and information modelling in the context of the smart city and, in particular, reports an investigation into how to model sensing and location information in order to support smart city development.
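
The paper does not spell out its information model; as a purely illustrative sketch of the kind of sensing-plus-location record a smart-city platform might store, consider the following, where all field names are assumptions.

```python
# Illustrative sketch of a combined sensing and location record; the fields are
# assumptions for this example, not the model proposed in the paper.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Observation:
    sensor_id: str
    observed_property: str      # e.g. "air_temperature"
    value: float
    unit: str
    latitude: float             # WGS84 location of the sensing device
    longitude: float
    timestamp: datetime

obs = Observation("ts-0042", "air_temperature", 18.3, "degC",
                  41.15, -8.61, datetime.now(timezone.utc))
print(asdict(obs))
```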

Relevance:

100.00%

Publisher:

Abstract:

With the Internet of Things becoming more and more popular, and predictions that there will be more than 50 million devices connected to the Internet in 2020, the number of IoT platforms on the market is growing rapidly. Faced with so many platforms to choose from, the objective of this thesis is to offer some guidance by performing a quantitative comparison between two platforms: SensibleThings and Kaa. These two platforms have different architectures and may therefore suit different scenarios. The comparison includes measurements and evaluation under two designed scenarios as well as a general theoretical contrast. The two scenarios cover message delivery between two endpoints at different rates and multiple endpoints continually pushing log data. The measurement results, together with the theoretical analysis, lead to the following conclusion: the SensibleThings platform is more suitable for simple, small-scale message delivery between endpoints, such as a home environment with a few devices, whereas the Kaa platform is more suitable for large-scale, complex data collection and processing applications, such as meteorology, with huge numbers of sensors and large volumes of data.
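
A hedged sketch of the kind of measurement loop such a comparison relies on: push messages at a fixed rate and record per-message latency. The in-process echo stub below stands in for a SensibleThings or Kaa endpoint and is not either platform's API.

```python
# Generic measurement-loop sketch: pace messages at a given rate and collect
# latency statistics. echo_endpoint is a stand-in for the remote platform endpoint.
import time, statistics

def echo_endpoint(payload):          # stand-in for the remote endpoint
    return payload

def measure(rate_hz, n_messages):
    period = 1.0 / rate_hz
    latencies = []
    for i in range(n_messages):
        t0 = time.perf_counter()
        echo_endpoint(f"msg-{i}")
        latencies.append(time.perf_counter() - t0)
        time.sleep(period)           # pace the sender at the requested rate
    return statistics.mean(latencies), max(latencies)

mean_s, worst_s = measure(rate_hz=50, n_messages=100)
print(f"mean={mean_s*1e6:.1f} us  worst={worst_s*1e6:.1f} us")
```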

Relevance:

100.00%

Publisher:

Abstract:

The wide adoption of the Internet Protocol (IP) as the de facto protocol for most communication networks has created a need for IP-capable data link layer solutions for Machine-to-Machine (M2M) and Internet of Things (IoT) networks. However, the wireless networks used for M2M and IoT applications usually lack the resources commonly associated with modern wireless communication networks. The existing IP-capable data link layer solutions for wireless IoT networks provide the necessary overhead-minimising and frame-optimising features, but are often built to be compatible only with IPv6 and specific radio platforms. The objective of this thesis is to design an IPv4-compatible data link layer for Netcontrol Oy's narrowband half-duplex packet data radio system. Based on extensive literature research, system modelling and solution concept testing, this thesis proposes the use of the tunslip protocol as the basis for the system's data link layer protocol development. In addition to the functionality of tunslip, this thesis discusses the additional network, routing, compression, security and collision avoidance changes required in the radio platform for it to be IP compatible while still maintaining its point-to-multipoint and multi-hop network characteristics. The data link layer design consists of the radio application, a dynamic Maximum Transmission Unit (MTU) optimisation daemon and the tunslip interface. The proposed design uses tunslip to create an IP-capable data link protocol interface. The radio application receives data from tunslip, compresses the packets and uses the IP addressing information for radio network addressing and routing before forwarding the message to the radio network. The dynamic MTU size optimisation daemon adjusts the maximum MTU size of the tunslip interface according to a link quality assessment calculated from the radio network diagnostic data received from the radio application. To determine the usability of tunslip as the basis for the data link layer protocol, the tunslip interface is tested with both IEEE 802.15.4 radios and packet data radios. The test cases measure the radio network usability for User Datagram Protocol (UDP) based applications without applying any header or content compression. The test results for the packet data radios reveal that the typical success rate for packet reception over a single-hop link is above 99%, with a round-trip delay of 0.315 s for 63-byte packets.
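
The thesis' daemon logic is not reproduced here; the following Python sketch only illustrates the idea of a dynamic MTU optimisation daemon that maps a link-quality estimate from radio diagnostics to the MTU configured on the tunslip interface. The thresholds and the set_mtu hook are assumptions.

```python
# Sketch of the dynamic MTU optimisation idea: pick the MTU advertised on the
# tunslip interface from a link-quality estimate. Thresholds are assumptions.

MTU_STEPS = [(0.8, 1500), (0.5, 576), (0.0, 296)]   # (min link quality, MTU in bytes)

def choose_mtu(link_quality):
    """Pick the largest MTU whose quality threshold is met (link_quality in [0, 1])."""
    for threshold, mtu in MTU_STEPS:
        if link_quality >= threshold:
            return mtu
    return MTU_STEPS[-1][1]

def set_mtu(interface, mtu):
    # On Linux this would shell out to: ip link set dev <interface> mtu <mtu>
    print(f"{interface}: MTU -> {mtu}")

current = None
for quality in (0.95, 0.62, 0.31):   # link-quality samples from radio diagnostics
    mtu = choose_mtu(quality)
    if mtu != current:
        set_mtu("sl0", mtu)
        current = mtu
```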

Relevance:

100.00%

Publisher:

Abstract:

Securing e-health applications in the context of the Internet of Things (IoT) is challenging. Indeed, resource scarcity in such environments hinders the implementation of existing standards-based protocols. Among these protocols, MIKEY (Multimedia Internet KEYing) aims at establishing security credentials between two communicating entities. However, the existing MIKEY modes fail to meet IoT specificities. In particular, the pre-shared key mode is energy efficient but suffers from severe scalability issues. On the other hand, asymmetric modes such as the public-key mode are scalable but highly resource consuming. To address this issue, we combine two previously proposed approaches to introduce a new hybrid MIKEY mode. Relying on a cooperative approach, a set of third parties is used to discharge the constrained nodes from heavy computational operations. In this way, the pre-shared key mode is used in the constrained part of the network, while the public-key mode is used in the unconstrained part. Preliminary results show that our proposed mode is energy preserving while its security properties remain intact.
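
As a conceptual sketch of the hybrid idea (pre-shared key mode in the constrained part of the network, public-key mode elsewhere, with third parties absorbing the heavy computation), the following Python fragment selects a mode per node; the HMAC-based derivation is only an illustration and does not follow the MIKEY message formats.

```python
# Conceptual sketch only: constrained nodes stay in the pre-shared key mode and
# delegate the expensive public-key leg to an assisting third party, while
# unconstrained nodes run the public-key mode directly. Not MIKEY wire format.
import hmac, hashlib, os
from typing import Optional

def psk_mode_key(psk: bytes, nonce_i: bytes, nonce_r: bytes) -> bytes:
    """Pre-shared-key leg: cheap symmetric derivation suitable for a constrained node."""
    return hmac.new(psk, b"tek" + nonce_i + nonce_r, hashlib.sha256).digest()

def establish_session(node_is_constrained: bool, psk: Optional[bytes]):
    nonce_i, nonce_r = os.urandom(16), os.urandom(16)
    if node_is_constrained and psk is not None:
        # The assisting third party handles the public-key exchange towards the
        # unconstrained peer; the node itself only does symmetric work.
        return "psk-assisted", psk_mode_key(psk, nonce_i, nonce_r)
    # Unconstrained part of the network: run the public-key mode directly
    # (signatures/encryption omitted in this sketch).
    return "public-key", os.urandom(32)

mode, tek = establish_session(node_is_constrained=True, psk=b"\x01" * 16)
print(mode, tek.hex()[:16], "...")
```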

Relevance:

100.00%

Publisher:

Abstract:

The digital revolution of the 21st century contributed to the emergence of the Internet of Things (IoT). Trillions of embedded devices using the Internet Protocol (IP), also called smart objects, will be an integral part of the Internet. In order to support such an extremely large address space, a new Internet Protocol, called Internet Protocol version 6 (IPv6), is being adopted. IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) has accelerated the integration of WSNs into the Internet. At the same time, the Constrained Application Protocol (CoAP) has made it possible to provide resource-constrained devices with RESTful Web service functionalities. This work builds upon previous experience in street lighting networks, for which a proprietary protocol, devised by the Lighting Living Lab, was implemented and used for several years. The proprietary protocol runs on a broad range of lighting control boards. In order to support heterogeneous applications with more demanding communication requirements and to improve the application development process, it was decided to port the Contiki OS to the four-channel LED driver (4LD) board from Globaltronic. This thesis describes the work done to adapt the Contiki OS to support the Microchip PIC24FJ128GA308 microcontroller and presents an IP-based solution to integrate sensors and actuators in smart lighting applications. Besides detailing the system's architecture and implementation, this thesis presents multiple results showing that the performance of CoAP-based resource retrievals in constrained nodes is adequate for supporting networking services in street lighting networks.
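
The thesis implements the server side on Contiki; as a rough client-side illustration of the kind of CoAP resource retrieval that is evaluated, here is a sketch using the aiocoap Python library, with the node address and resource path invented for the example.

```python
# Hypothetical client-side sketch using the aiocoap library; the IPv6 address
# and the /sensors/lamp1 resource path are invented for illustration and do
# not come from the thesis.
import asyncio
from aiocoap import Context, Message, GET

async def read_lamp_state():
    protocol = await Context.create_client_context()
    # 6LoWPAN nodes are typically addressed over IPv6; this address is made up.
    request = Message(code=GET, uri="coap://[fd00::1]/sensors/lamp1")
    response = await protocol.request(request).response
    print(response.code, response.payload.decode())

asyncio.run(read_lamp_state())
```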

Relevance:

100.00%

Publisher:

Abstract:

In recent years there has been a clear evolution in the world of telecommunications, ranging from new services that need higher speeds and more bandwidth to a web of interactions between people and machines known as the Internet of Things (IoT). Optical communications are the only technology able to keep up with this growth. Currently, the solution that meets day-to-day needs, such as collaborative work, audio and video communications and file sharing, is based on the Gigabit-capable Passive Optical Network (G-PON) and its recent successor, the Next Generation Passive Optical Network Phase 2 (NG-PON2). This technology is based on multiplexing in the wavelength domain and, due to its characteristics and performance, is the most advantageous technology. A major focus of optical communications is Photonic Integrated Circuits (PICs). These can integrate various components into a single device, which simplifies the design of the optical system, reduces space and power consumption, and improves reliability. These characteristics make this type of device useful for several applications, which justifies the investment in developing the technology to a very high level of performance and reliability in terms of its building blocks. With the goal of developing the optical networks of future generations, this work presents the design and implementation of a PIC intended to be a universal transceiver for NG-PON2 applications. The same PIC can be used as an Optical Line Terminal (OLT) or an Optical Network Unit (ONU), and in both cases as transmitter and receiver. First, a study of Passive Optical Networks (PONs) and their standards is made. A theoretical overview then explores the materials used in the development and production of this PIC, the foundries that are available and, focusing on SMART Photonics, the components used in the development of this chip. For the conceptualization of the project, different architectures are designed and part of the laser cavity is simulated using Aspic™. Through an analysis of the advantages and disadvantages of each one, the best is chosen for the implementation. Moreover, the architecture of the transceiver is simulated block by block in VPItransmissionMaker™ and its operating principle is demonstrated. Finally, the PIC implementation is presented.
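
As a generic, illustrative calculation only loosely related to the laser-cavity simulation mentioned above (the thesis' actual cavity parameters are not given here), the free spectral range of a Fabry-Perot cavity follows FSR = c / (2 n_g L); the example values below are assumptions.

```python
# Generic illustration, not taken from the thesis: free spectral range of a
# Fabry-Perot laser cavity, FSR = c / (2 * n_g * L).
c = 299_792_458.0          # speed of light, m/s

def fsr_hz(cavity_length_m, group_index):
    return c / (2.0 * group_index * cavity_length_m)

# Assumed example values: a 400 um InP cavity with a group index of about 3.6
print(f"FSR = {fsr_hz(400e-6, 3.6) / 1e9:.1f} GHz")
```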

Relevance:

100.00%

Publisher:

Abstract:

Recent years have seen an astronomical rise in SQL Injection Attacks (SQLIAs) used to compromise the confidentiality, authentication and integrity of organisations' databases. Intruders are becoming smarter at obfuscating web requests to evade detection, and this, combined with increasing volumes of web traffic from the Internet of Things (IoT), cloud-hosted and on-premise business applications, has made it evident that existing, mostly static signature-based approaches lack the ability to cope with novel signatures. A SQLIA detection and prevention solution can be achieved by exploring an alternative bio-inspired supervised learning approach that takes a labelled dataset of numerical attributes as input for classifying true positives and negatives. We present in this paper a Numerical Encoding to Tame SQLIA (NETSQLIA) that implements a proof of concept for scalable numerical encoding of features into dataset attributes with labelled classes, obtained from deep web-traffic analysis. For the numerical attribute encoding, the model leverages a proxy to intercept and decrypt web traffic. The intercepted web requests are then assembled for front-end SQL parsing and pattern matching by applying a traditional Non-Deterministic Finite Automaton (NFA). This paper presents a technique for extracting numerical attributes of any size, primed as an input dataset to an Artificial Neural Network (ANN) and to statistical Machine Learning (ML) algorithms implemented using a Two-Class Averaged Perceptron (TCAP) and Two-Class Logistic Regression (TCLR), respectively. This methodology then forms the subject of an empirical evaluation of the suitability of this model for the accurate classification of both legitimate web requests and SQLIA payloads.
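
As a toy illustration of the numerical-encoding idea (raw request strings turned into numeric attributes and fed to a linear classifier), the following sketch uses hand-picked features and a plain perceptron in place of NETSQLIA's encoder and the two-class averaged perceptron it evaluates; everything here is an assumption for illustration.

```python
# Toy sketch: encode request strings as numeric features, train a plain
# perceptron, classify new requests. Features and data are illustrative only.
import re

KEYWORDS = ("select", "union", "or", "and", "--", "'", '"', "=", "sleep", "drop")

def encode(request: str):
    """Map a request string to a fixed-length numeric feature vector."""
    lowered = request.lower()
    counts = [lowered.count(k) for k in KEYWORDS]
    return [len(request), len(re.findall(r"\W", request))] + counts

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0]); b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):              # y in {-1 (benign), +1 (SQLIA)}
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

requests = ["id=42", "name=alice", "id=1' OR '1'='1", "q=1 UNION SELECT password FROM users--"]
labels = [-1, -1, +1, +1]
w, b = train_perceptron([encode(r) for r in requests], labels)
predict = lambda r: "SQLIA" if sum(wi * xi for wi, xi in zip(w, encode(r))) + b > 0 else "benign"
print(predict("id=7"), predict("id=0 OR 1=1--"))  # -> benign SQLIA
```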

Relevance:

100.00%

Publisher:

Abstract:

One of the most important challenges of this decade is the Internet of Things (IoT), which pursues the integration of real-world objects into the Internet. One of the key areas of the IoT is Ambient Assisted Living (AAL) systems, which should be able to react to variable and continuous changes while ensuring their acceptance and adoption by users. This means that AAL systems need to work as self-adaptive systems. The autonomy property inherent to software agents makes them a suitable choice for developing self-adaptive systems. However, agents lack the mechanisms to deal with the variability present in the IoT domain with regard to devices and network technologies. To overcome this limitation, we have previously proposed a Software Product Line (SPL) process for the development of self-adaptive agents in the IoT. Here we analyze the challenges posed by the development of agent-based self-adaptive AAL systems. To do so, we focus on the domain and application engineering of the self-adaptation concern of our SPL process. In addition, we provide a validation of our development process for AAL systems.
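
As a loose illustration of resolving the self-adaptation variability at run time (not the SPL tooling used in the paper), the following sketch derives the feature set an AAL agent should load from the devices and network technologies it discovers; the feature model fragment is an assumption.

```python
# Illustrative sketch: pick product features from the devices and network
# technologies discovered at run time. The feature model fragment is assumed.

FEATURE_RULES = {
    "zigbee_driver":  lambda env: "zigbee" in env["radios"],
    "wifi_driver":    lambda env: "wifi" in env["radios"],
    "fall_detection": lambda env: "accelerometer" in env["sensors"],
    "remote_alerts":  lambda env: env["internet_access"],
}

def derive_configuration(environment):
    """Return the set of features the self-adaptive AAL agent should (re)load."""
    return {feature for feature, applies in FEATURE_RULES.items() if applies(environment)}

home = {"radios": {"zigbee"}, "sensors": {"accelerometer", "presence"}, "internet_access": True}
print(derive_configuration(home))
# A change in the environment (e.g. losing connectivity) yields a new variant,
# which is what the self-adaptation concern reacts to.
```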

Relevance:

100.00%

Publisher:

Abstract:

The proliferation of new mobile communication devices, such as smartphones and tablets, has led to an exponential growth in network traffic. The demand for supporting fast-growing consumer data rates urges wireless service providers and researchers to seek a new, efficient radio access technology, the so-called 5G technology, beyond what current 4G LTE can provide. On the other hand, ubiquitous RFID tags, sensors, actuators, mobile phones and so on cut across many areas of modern-day living, offering the ability to measure, infer and understand environmental indicators. The proliferation of these devices gives rise to the term Internet of Things (IoT). For researchers and engineers in the field of wireless communication, exploring new effective techniques to support 5G communication and the IoT has become an urgent task, one that not only leads to fruitful research but also enhances the quality of our everyday life. Massive MIMO, which has shown great potential for improving the achievable rate with a very large number of antennas, has become a popular candidate. However, deploying a large number of antennas at the base station may not be feasible in indoor scenarios. Does there exist a good alternative that can achieve system performance similar to massive MIMO in indoor environments? In this dissertation, we address this question by proposing the time-reversal (TR) technique as a counterpart of massive MIMO in indoor scenarios with a massive multipath effect. It is well known that radio signals experience many multipaths due to reflection from various scatterers, especially in indoor environments. The traditional TR waveform is able to create a focusing effect at the intended receiver with very low transmitter complexity in a severe multipath channel. TR's focusing effect is in essence a spatial-temporal resonance effect that brings all the multipaths to arrive at a particular location at a specific moment. We show that by using time-reversal signal processing with a sufficiently large bandwidth, one can harvest the massive multipaths naturally existing in a rich-scattering environment to form a large number of virtual antennas and achieve the desired massive multipath effect with a single antenna. Further, we explore the optimal bandwidth for the TR system to achieve maximal spectral efficiency. Through evaluating the spectral efficiency, the optimal bandwidth for the TR system is found to be determined by system parameters, e.g., the number of users and the backoff factor, rather than by the waveform type. Moreover, we investigate the tradeoff between complexity and performance by establishing a generalized relationship between system performance and waveform quantization in a practical communication system. It is shown that 4-bit quantized waveforms can be used to achieve a bit error rate similar to that of a TR system with full-precision waveforms. Besides 5G technology, the Internet of Things (IoT) is another term that has recently attracted more and more attention from both academia and industry. In the second part of this dissertation, the heterogeneity issue within the IoT is explored. Given the massive number of devices in the IoT, one significant form of heterogeneity is device heterogeneity, i.e., heterogeneous bandwidths and associated radio-frequency (RF) components.

Traditional middleware techniques result in fragmentation of the whole network, hampering object interoperability and slowing down the development of a unified reference model for the IoT. We propose a novel TR-based heterogeneous system, which can address the bandwidth heterogeneity while maintaining the benefits of TR. The increased complexity of the proposed system lies in the digital processing at the access point (AP), rather than at the devices, and can easily be handled by a more powerful digital signal processor (DSP). Meanwhile, the complexity of the terminal devices stays low and therefore satisfies the low-complexity and scalability requirements of the IoT. Since there is no middleware in the proposed scheme and the additional physical-layer complexity is concentrated on the AP side, the proposed heterogeneous TR system better satisfies the low-complexity and energy-efficiency requirements of the terminal devices (TDs) than the middleware approach does.
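
As a small numerical sketch of the time-reversal focusing effect described above, the following Python fragment builds a random multipath channel, transmits its time-reversed conjugate and shows the received energy concentrating in a single tap; the channel statistics are illustrative assumptions.

```python
# Numerical sketch of time-reversal focusing: with a rich multipath channel h,
# transmitting the time-reversed, conjugated channel concentrates the received
# energy in one tap. Channel statistics are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
L = 64                                            # number of resolvable multipaths
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) * np.exp(-np.arange(L) / 20)

g = np.conj(h[::-1])                              # basic TR waveform
g /= np.linalg.norm(g)                            # unit transmit energy

y = np.convolve(g, h)                             # received signal (noise-free)
peak = np.abs(y).max()
print(f"peak-to-average power ratio: {peak**2 / np.mean(np.abs(y) ** 2):.1f}")
print(f"focusing tap index: {np.abs(y).argmax()} (expected {L - 1})")
```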