943 results for Internet of things, Mqtt, domotica, Raspberry Pi
Abstract:
With the Internet of Things becoming more and more popular, and a prediction that there will be more than 50 million devices connected to the Internet by 2020, the number of IoT platforms on the market is growing rapidly. Faced with so many platforms to choose from, the objective of this thesis is to offer some suggestions for reference by performing a quantitative comparison between two platforms: SensibleThings and Kaa. These two platforms have different architectures and so may suit different scenarios. The comparison includes measurements and evaluation under two designed scenarios, together with a general theoretical contrast. The two scenarios cover message delivery between two endpoints at different rates, and multiple endpoints continually pushing log data. The measurement results, together with the theoretical analysis, lead to the following conclusion: the SensibleThings platform is more suitable for simple, small-scale message delivery between endpoints, such as a home environment with few devices, while the Kaa platform is more suitable for large-scale, complicated data collection and processing applications, such as the meteorology field with huge numbers of sensors and large volumes of data.
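The first scenario, message delivery between two endpoints at different rates, lends itself to a simple harness. The sketch below is a hedged, platform-agnostic illustration of that kind of measurement; plain UDP, the host/port values and the message format are stand-ins, not the SensibleThings or Kaa APIs.

```python
# Rate-controlled round-trip measurement between two endpoints (illustrative).
import socket
import time

def measure_round_trips(host: str, port: int, rate_hz: float, count: int) -> list[float]:
    """Send `count` messages at `rate_hz` and record each round-trip time."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)
    interval = 1.0 / rate_hz
    rtts = []
    for seq in range(count):
        start = time.perf_counter()
        sock.sendto(f"ping {seq}".encode(), (host, port))
        try:
            sock.recvfrom(1024)                      # wait for the echo
            rtts.append(time.perf_counter() - start)
        except socket.timeout:
            rtts.append(float("nan"))                # count as a lost message
        time.sleep(max(0.0, interval - (time.perf_counter() - start)))
    return rtts

# e.g. compare delivery latency at 1, 10 and 100 messages per second:
# for rate in (1, 10, 100):
#     print(rate, measure_round_trips("192.0.2.1", 9000, rate, 100))
```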
Abstract:
Continuous technological evolution is benefiting our lives to a great extent. The evolution of the Internet of Things and the deployment of wireless sensor networks are making it possible to have more connectivity between people and the devices used extensively in our daily lives. Almost every discipline of daily life, including the health sector, transportation and agriculture, is benefiting from these technologies. There is great potential for research and refinement in the health sector, as the current system very often depends on manual evaluations conducted by clinicians. There is no automatic system for patient health monitoring and assessment, which results in incomplete and less reliable health information. The Internet of Things has great potential to benefit health care applications through automated and remote assessment, monitoring and identification of diseases. Acute pain is the main reason people visit hospitals. An automatic pain detection system based on the Internet of Things with wireless devices can make assessment and relief significantly more efficient. The contribution of this research work is to propose a pain assessment method based on physiological parameters. The physiological parameters chosen for this study are heart rate, electrocardiography, breathing rate and galvanic skin response. As a first step, the relation between these physiological parameters and the acute pain experienced by the test subjects is evaluated. The electrocardiography data collected from the test subjects is analyzed to extract inter-beat intervals. This evaluation clearly demonstrates specific patterns and trends in these parameters as a consequence of pain. This parametric behavior is then used to assess and identify pain intensity by implementing machine learning algorithms. Support vector machines are used to classify these parameters as influenced by different pain intensities. The classification results, with good accuracy rates for two and three levels of pain intensity, clearly indicate the presence of pain and the feasibility of this pain assessment method. An improved approach building on this research work could use both physiological parameters and electromyography data of facial muscles for classification.
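A minimal sketch of the described pipeline follows: derive inter-beat intervals (IBIs) from ECG R-peak times, summarize them as features, and classify pain levels with a support vector machine. The feature choice, synthetic data and parameter values are illustrative assumptions, not the thesis's actual feature set or dataset.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def ibi_features(r_peak_times_s: np.ndarray) -> np.ndarray:
    """Heart-rate-variability features from R-peak timestamps (seconds)."""
    ibis = np.diff(r_peak_times_s)                    # inter-beat intervals
    rmssd = np.sqrt(np.mean(np.diff(ibis) ** 2))      # short-term variability
    return np.array([ibis.mean(), ibis.std(), rmssd])

# Synthetic stand-in for per-window recordings: "pain" windows get slightly
# shorter, more variable inter-beat intervals than "no pain" windows.
rng = np.random.default_rng(0)
windows, labels = [], []
for label, mean_ibi, jitter in [(0, 0.85, 0.02), (1, 0.70, 0.05)]:
    for _ in range(30):
        ibis = rng.normal(mean_ibi, jitter, size=60)
        windows.append(ibi_features(np.cumsum(ibis)))  # timestamps from IBIs
        labels.append(label)

X, y = np.array(windows), np.array(labels)
clf = SVC(kernel="rbf")
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```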
Abstract:
With the advent of connected objects, the required bandwidth exceeds the capacity of electrical interconnects and wireless interfaces in access networks, and also in core networks. High-capacity photonic systems located in access networks and using radio-over-fibre technology have been proposed as a solution for fifth-generation wireless networks. To maximize the use of server and network resources, cloud computing and storage services are being deployed. In this way, centralized resources can be dynamically distributed as the end user wishes. Since every exchange requires synchronization between the server and its infrastructure, an optical physical layer enables the cloud to support network virtualization and software-defined networking. Reflective semiconductor optical amplifiers (RSOAs) are a key technology for optical network units (ONUs) in passive optical access networks (PONs). Here we examine the possibility of using an RSOA and radio-over-fibre technology to transport wireless signals together with a digital signal over a PON. Radio over fibre can be realized easily thanks to the wavelength insensitivity of the RSOA. The choice of wavelength for the physical layer is, however, made at layers 2/3 of the OSI model. Interactions between the physical layer and network switching can be handled by adding an SDN controller that includes optical layer managers. Network virtualization could thus benefit from a flexible optical layer through dynamic, adapted network resources. In this thesis, we study a system with an optical physical layer based on an RSOA. It allows us to simultaneously send wireless signals and transport digital signals in on-off keying (OOK) modulation format in a WDM (wavelength-division multiplexing) PON system. The RSOA was characterized to show its ability to handle a high dynamic range of the analogue wireless signal. Then, the RF and IF versions of the fibre system are compared, with their respective advantages and disadvantages. Finally, we experimentally realize a point-to-point WDM link using full-duplex transmission of an analogue WiFi signal together with a downstream OOK signal. By introducing two RF mixers in the uplink, we solved the incompatibility problem with the TDD (time-division duplexing) based wireless system.
Abstract:
Food safety has always been a social issue that draws great public attention. With the rapid development of wireless communication technologies and intelligent devices, more and more Internet of Things (IoT) systems are applied in the food safety tracking field. However, the connection between things and the information system is usually established by pre-storing information about things in RFID tags, which is inapplicable for on-field food safety detection. Therefore, considering that pesticide residue is one of the severest threats to food safety, a new portable, high-sensitivity, low-power, on-field organophosphorus (OP) compound detection system is proposed in this thesis to realize on-field food safety detection. The system is designed around an optical detection method using a customized photo-detection sensor. A Micro Controller Unit (MCU) and a Bluetooth Low Energy (BLE) module are used to quantize and transmit the detection result. An Android application (APP) is also developed for the system to process and display detection results, as well as to control the detection process. In addition, a quartz sample container and a black system box were designed and made for the system demonstration. Several optimizations were made in the wireless communication, circuit layout, Android APP and industrial design to achieve mobility, low power consumption and intelligence.
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the Internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists (a minimal sketch of the idea follows this abstract). NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
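For the progressive analytics part of this abstract, here is an illustrative sketch (not the NOW! implementation) of the core idea: evaluate an aggregate over growing samples so that early, cheap answers converge toward the exact result, with a deterministic sample order giving repeatable semantics.

```python
import numpy as np

def progressive_mean(values: np.ndarray, steps: int = 5, seed: int = 42):
    """Yield (sample_size, estimate) over progressively larger samples,
    using a fixed shuffle so reruns see the same samples (repeatability)."""
    order = np.random.default_rng(seed).permutation(len(values))
    for k in np.linspace(len(values) / steps, len(values), steps, dtype=int):
        yield int(k), values[order[:k]].mean()

data = np.random.default_rng(1).exponential(scale=10.0, size=100_000)
for size, estimate in progressive_mean(data):
    print(f"n={size:>7,}  mean ~ {estimate:.4f}")  # converges to the full mean
```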
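And for the neighborhood-centric model behind NSCALE, the following minimal single-machine sketch shows the programming idea: user code operates on a k-hop neighborhood (ego network) rather than on the state of one vertex. networkx and the toy graph are purely illustrative; NSCALE itself is a distributed system.

```python
import networkx as nx

G = nx.karate_club_graph()                    # stand-in for a large graph

def ego_density(graph: nx.Graph, node: int, radius: int = 2) -> float:
    """User code sees the whole 2-hop subgraph, not just one vertex's state."""
    neighborhood = nx.ego_graph(graph, node, radius=radius)
    return nx.density(neighborhood)

# Only the declaratively specified subgraphs of interest are materialized:
for v in [0, 33]:
    print(v, round(ego_density(G, v), 3))
```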
Abstract:
Only recently, during the past five years, has consumer electronics begun to evolve rapidly. Many products have started to include “smart home” capabilities, enabling communication and interoperability between various smart devices. Even more devices and sensors can be remotely controlled and monitored through cloud services. While smart home systems have become very affordable to the average consumer compared to the early solutions of decades ago, there are still many issues that need to be fixed or improved: energy efficiency, connectivity with other devices and applications, security and privacy concerns, reliability, and response time. This paper focuses on designing Internet of Things (IoT) node and platform architectures that take these issues into account, notes other currently used solutions, and selects technologies in order to provide a better solution. The node architecture aims for energy efficiency and modularity, while the platform architecture’s goals are scalability, portability, maintainability, performance, and modularity. Moreover, the platform architecture attempts to improve user experience by providing higher reliability and lower response time compared to alternative platforms. The architectures were developed iteratively using a development process involving research, planning, design, implementation, testing, and analysis. Additionally, they were documented using Kruchten’s 4+1 view model, which is used to describe the use cases and different views of the architectures. The node architecture consists of energy-efficient hardware (an FC3180 microprocessor and a CC2520 RF transceiver), a modular operating system (Contiki), and a communication protocol (AllJoyn) used to provide better interoperability with other IoT devices and applications. The platform architecture provides reliable, low-response-time control, monitoring, and initial setup capabilities by utilizing web technologies on various devices such as smartphones, tablets, and computers. Furthermore, an optional cloud service is provided for controlling devices and monitoring sensors remotely, utilizing scalable, high-performance technologies in the backend to enable low response time and high reliability.
Abstract:
The wide adoption of the Internet Protocol (IP) as the de facto protocol for most communication networks has established a need for developing IP-capable data link layer protocol solutions for Machine-to-Machine (M2M) and Internet of Things (IoT) networks. However, the wireless networks used for M2M and IoT applications usually lack the resources commonly associated with modern wireless communication networks. The existing IP-capable data link layer solutions for wireless IoT networks provide the necessary overhead-minimising and frame-optimising features, but are often built to be compatible only with IPv6 and specific radio platforms. The objective of this thesis is to design an IPv4-compatible data link layer for Netcontrol Oy's narrowband half-duplex packet data radio system. Based on extensive literature research, system modelling and solution concept testing, this thesis proposes the use of the tunslip protocol as the basis for the system's data link layer protocol development. In addition to the functionality of tunslip, this thesis discusses the additional network, routing, compression, security and collision avoidance changes required in the radio platform for it to be IP compatible while still maintaining its point-to-multipoint and multi-hop network characteristics. The data link layer design consists of the radio application, a dynamic Maximum Transmission Unit (MTU) optimisation daemon and the tunslip interface. The proposed design uses tunslip to create an IP-capable data link protocol interface. The radio application receives data from tunslip, compresses the packets and uses the IP addressing information for radio network addressing and routing before forwarding the message to the radio network. The dynamic MTU size optimisation daemon controls the tunslip interface's maximum MTU size according to a link quality assessment calculated from the radio network diagnostic data received from the radio application. To determine the usability of tunslip as the basis for the data link layer protocol, testing of the tunslip interface was conducted with both IEEE 802.15.4 radios and packet data radios. The test cases measure the radio network's usability for User Datagram Protocol (UDP) based applications without applying any header or content compression. The test results for the packet data radios reveal that the typical success rate for packet reception over a single-hop link is above 99%, with a round-trip delay of 0.315 s for 63 B packets.
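A hedged sketch of the described UDP test case follows: send fixed-size packets over the link and measure the reception success rate and round-trip delay. The 63-byte payload mirrors the abstract; the host, port and the echo server on the far side are assumptions, not part of the thesis text.

```python
import socket
import time

HOST, PORT, PAYLOAD_SIZE, TRIALS = "192.0.2.10", 5000, 63, 100

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(5.0)

successes, rtts = 0, []
payload = b"\x00" * PAYLOAD_SIZE
for _ in range(TRIALS):
    start = time.perf_counter()
    sock.sendto(payload, (HOST, PORT))
    try:
        sock.recvfrom(PAYLOAD_SIZE)
        rtts.append(time.perf_counter() - start)
        successes += 1
    except socket.timeout:
        pass                                   # lost packet

print(f"success rate: {successes / TRIALS:.1%}")
print(f"mean RTT: {sum(rtts) / len(rtts):.3f}s" if rtts else "no replies")
```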
Abstract:
Securing e-health applications in the context of the Internet of Things (IoT) is challenging. Indeed, resource scarcity in such environments hinders the implementation of existing standards-based protocols. Among these protocols, MIKEY (Multimedia Internet KEYing) aims at establishing security credentials between two communicating entities. However, the existing MIKEY modes fail to meet IoT constraints. In particular, the pre-shared key mode is energy efficient, but suffers from severe scalability issues. On the other hand, asymmetric modes such as the public key mode are scalable, but are highly resource consuming. To address this issue, we combine two previously proposed approaches to introduce a new hybrid MIKEY mode. Relying on a cooperative approach, a set of third parties is used to offload the heavy computational operations from the constrained nodes. In this way, the pre-shared key mode is used in the constrained part of the network, while the public key mode is used in the unconstrained part. Preliminary results show that our proposed mode preserves energy while its security properties are kept intact.
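The following is a conceptual sketch of the hybrid idea only, not the MIKEY wire format: the expensive public-key exchange runs between the unconstrained parties, and the constrained node receives the result using only symmetric operations under a pre-shared key (PSK). X25519 stands in for whatever asymmetric primitive the mode actually uses, and the XOR wrap stands in for a real authenticated encryption of the key material.

```python
import hashlib
import hmac
import os

from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Unconstrained part: the assisting third party and the remote server perform
# an asymmetric (Diffie-Hellman) exchange on the constrained node's behalf.
third_party = X25519PrivateKey.generate()
server = X25519PrivateKey.generate()
session_key = third_party.exchange(server.public_key())     # 32 bytes

# Constrained part: node and third party already share a PSK.
psk, nonce = os.urandom(16), os.urandom(8)
keystream = hmac.new(psk, nonce, hashlib.sha256).digest()
wrapped = bytes(a ^ b for a, b in zip(session_key, keystream))

# The node, holding only the PSK, recovers the session key with cheap
# symmetric operations (one HMAC and one XOR), no public-key math required.
recovered = bytes(
    a ^ b for a, b in zip(wrapped, hmac.new(psk, nonce, hashlib.sha256).digest())
)
assert recovered == session_key
```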
Abstract:
The digital revolution of the 21st century contributed to the emergence of the Internet of Things (IoT). Trillions of embedded devices using the Internet Protocol (IP), also called smart objects, will be an integral part of the Internet. In order to support such an extremely large address space, a new Internet Protocol, called Internet Protocol Version 6 (IPv6), is being adopted. IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) has accelerated the integration of wireless sensor networks (WSNs) into the Internet. At the same time, the Constrained Application Protocol (CoAP) has made it possible to provide resource-constrained devices with RESTful Web service functionality. This work builds upon previous experience in street lighting networks, for which a proprietary protocol, devised by the Lighting Living Lab, was implemented and used for several years. The proprietary protocol runs on a broad range of lighting control boards. In order to support heterogeneous applications with more demanding communication requirements and to improve the application development process, it was decided to port the Contiki OS to the four-channel LED driver (4LD) board from Globaltronic. This thesis describes the work done to adapt the Contiki OS to support the Microchip™ PIC24FJ128GA308 microprocessor and presents an IP-based solution to integrate sensors and actuators in smart lighting applications. Besides detailing the system's architecture and implementation, this thesis presents multiple results showing that the performance of CoAP-based resource retrieval in constrained nodes is adequate for supporting networking services in street lighting networks.
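A minimal client-side sketch of the kind of CoAP resource retrieval the evaluation measures, using the aiocoap library. The address and resource path are hypothetical; a Contiki node would expose its sensors as CoAP resources in this RESTful style.

```python
import asyncio

from aiocoap import Context, Message, GET

async def fetch_luminosity():
    ctx = await Context.create_client_context()
    request = Message(code=GET, uri="coap://[fd00::1]/sensors/luminosity")
    response = await ctx.request(request).response   # confirmable exchange
    print(response.code, response.payload.decode())

asyncio.run(fetch_luminosity())
```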
Abstract:
In recent years there has been a clear evolution in the world of telecommunications, ranging from new services that need higher speeds and more bandwidth to interactions between people and machines, known as the Internet of Things (IoT). Optical communications is the only technology able to keep up with this growth. Currently, the solution that meets day-to-day needs, such as collaborative work, audio and video communications and file sharing, is based on the Gigabit-capable Passive Optical Network (G-PON) and its recent successor, the Next Generation Passive Optical Network Phase 2 (NG-PON2). This technology is based on wavelength-division multiplexing and, due to its characteristics and performance, is the most advantageous option. A major focus of optical communications is Photonic Integrated Circuits (PICs). These can combine various components into a single device, which simplifies the design of the optical system, reduces space and power consumption, and improves reliability. These characteristics make this type of device useful for several applications, which justifies the investment in developing the technology to a very high level of performance and reliability in terms of building blocks. With the goal of developing the optical networks of future generations, this work presents the design and implementation of a PIC intended to be a universal transceiver for NG-PON2 applications. The same PIC can be used as an Optical Line Terminal (OLT) or an Optical Network Unit (ONU), and in both cases as transmitter and receiver. Initially, a study is made of Passive Optical Networks (PONs) and their standards. A theoretical overview then explores the materials used in the development and production of this PIC and the available foundries, focusing on SMART Photonics and the components used in the development of this chip. For the conceptualization of the project, different architectures are designed and part of the laser cavity is simulated using Aspic™. Through an analysis of the advantages and disadvantages of each one, the best is chosen for the implementation. Moreover, the architecture of the transceiver is simulated block by block in VPItransmissionMaker™ and its operating principle is demonstrated. Finally, the PIC implementation is presented.
Abstract:
Recent years have seen an astronomical rise in SQL Injection Attacks (SQLIAs) used to compromise the confidentiality, authentication and integrity of organisations' databases. Intruders are becoming smarter at obfuscating web requests to evade detection and, combined with increasing volumes of web traffic from the Internet of Things (IoT) and from cloud-hosted and on-premise business applications, this has made it evident that the existing, mostly static signature approaches lack the ability to cope with novel signatures. A SQLIA detection and prevention solution can be achieved by exploring an alternative bio-inspired supervised learning approach that takes a labelled dataset of numerical attributes as input for classifying true positives and negatives. We present in this paper Numerical Encoding to Tame SQLIA (NETSQLIA), which implements a proof of concept for scalable numerical encoding of features into dataset attributes with labelled classes, obtained from deep web traffic analysis. For the numerical attribute encoding, the model leverages a proxy to intercept and decrypt web traffic. The intercepted web requests are then assembled for front-end SQL parsing and pattern matching by applying a traditional Non-Deterministic Finite Automaton (NFA). This paper presents a technique for extracting numerical attributes of any size, primed as an input dataset to an Artificial Neural Network (ANN) and to statistical Machine Learning (ML) algorithms implemented using a Two-Class Averaged Perceptron (TCAP) and Two-Class Logistic Regression (TCLR) respectively. This methodology then forms the subject of an empirical evaluation of the suitability of this model for the accurate classification of both legitimate web requests and SQLIA payloads.
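A hedged sketch of the pipeline's two stages follows: (1) finite-automaton pattern matching over a request (Python's re engine serves as the NFA-style matcher here), and (2) numerical feature encoding fed to a perceptron and a logistic regression classifier. The patterns, features and tiny dataset are illustrative, not the NETSQLIA feature set; scikit-learn's Perceptron and LogisticRegression stand in for the paper's TCAP and TCLR.

```python
import re

import numpy as np
from sklearn.linear_model import LogisticRegression, Perceptron

SQLI_PATTERNS = re.compile(r"('|--|;|\bunion\b|\bor\b\s+1=1)", re.IGNORECASE)

def encode(request: str) -> list[float]:
    """Encode a web request as numerical attributes."""
    return [
        float(len(request)),                         # request length
        float(len(SQLI_PATTERNS.findall(request))),  # suspicious token count
        float(request.count("'")),                   # quote density
    ]

requests = ["id=42", "name=alice", "id=1' OR 1=1 --", "q=1 UNION SELECT pass"]
labels = [0, 0, 1, 1]                                # 0 = legitimate, 1 = SQLIA

X, y = np.array([encode(r) for r in requests]), np.array(labels)
for model in (Perceptron(max_iter=1000), LogisticRegression()):
    print(type(model).__name__, model.fit(X, y).predict(X))
```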
Abstract:
In recent years the technological world has grown by incorporating billions of small sensing devices that collect and share real-world information. As the number of such devices grows, it becomes increasingly difficult to manage all these new information sources. There is no uniform way to share, process and understand context information. In previous publications we discussed efficient ways to organize context information that are independent of structure and representation. However, our previous solution suffers from semantic sensitivity. In this paper we review semantic methods that can be used to minimize this issue, and propose an unsupervised semantic similarity solution that combines distributional profiles with public web services. Our solution was evaluated against the Miller-Charles dataset, achieving a correlation of 0.6.
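A sketch of the evaluation step: compare the cosine similarity of distributional profiles against human similarity ratings using Pearson correlation (the abstract reports a correlation of 0.6 on Miller-Charles). The profiles and ratings below are toy values; in the paper they come from co-occurrence statistics and public web services.

```python
import numpy as np
from scipy.stats import pearsonr

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy distributional profiles for word pairs (entries: context features).
profiles = {
    ("car", "automobile"): (np.array([3.0, 1.0, 0.5]), np.array([2.8, 1.2, 0.4])),
    ("food", "fruit"):     (np.array([1.0, 1.0, 1.0]), np.array([0.9, 0.4, 1.3])),
    ("noon", "string"):    (np.array([0.1, 2.0, 0.0]), np.array([1.5, 0.0, 2.2])),
}
human_ratings = [3.92, 2.69, 0.08]     # Miller-Charles style gold scores

predicted = [cosine(u, v) for u, v in profiles.values()]
r, _ = pearsonr(predicted, human_ratings)
print(f"Pearson correlation: {r:.2f}")
```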
Abstract:
One of the most important challenges of this decade is the Internet of Things (IoT), which pursues the integration of real-world objects into the Internet. One of the key areas of the IoT is Ambient Assisted Living (AAL) systems, which should be able to react to variable and continuous changes while ensuring their acceptance and adoption by users. This means that AAL systems need to work as self-adaptive systems. The autonomy property inherent to software agents makes them a suitable choice for developing self-adaptive systems. However, agents lack the mechanisms to deal with the variability present in the IoT domain with regard to devices and network technologies. To overcome this limitation we have previously proposed a Software Product Line (SPL) process for the development of self-adaptive agents in the IoT. Here we analyze the challenges posed by the development of self-adaptive AAL systems based on agents. To do so, we focus on the domain and application engineering of the self-adaptation concern of our SPL process. In addition, we provide a validation of our development process for AAL systems.
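A loose sketch of the self-adaptation idea: an agent monitors its context and reconfigures itself when a network technology disappears, choosing among variants that an SPL process would have derived. The class, variant names and priority order are illustrative assumptions, not the authors' feature model.

```python
from dataclasses import dataclass

TRANSPORT_VARIANTS = ["ethernet", "wifi", "zigbee"]  # SPL-derived alternatives

@dataclass
class AALAgent:
    transport: str = "ethernet"

    def perceive(self, available: list[str]) -> None:
        """Monitor the context and reconfigure if the current transport
        disappears, falling back to the highest-priority available variant."""
        if self.transport not in available:
            fallback = next((t for t in TRANSPORT_VARIANTS if t in available), None)
            if fallback:
                self.transport = fallback
                print(f"adapted: switched transport to {self.transport}")

agent = AALAgent()
agent.perceive(["wifi", "zigbee"])   # ethernet lost -> agent switches to wifi
```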
Abstract:
Work carried out at the company ULMA Embedded Solutions.