830 results for Internet-of-Things, Wireless Sensor Network, CoAP
Abstract:
A Wireless Sensor Network (WSN) consists of devices distributed over an area to monitor physical variables such as temperature, pressure, vibration, motion and environmental conditions in places where wired networks would be difficult or impractical to deploy, for example hard-to-reach industrial installations, monitoring and control of onshore or offshore oil wells, and monitoring of large agricultural and livestock areas, among others. To be viable, a WSN must meet important requirements such as low cost, low latency and, above all, low power consumption. In meeting these requirements, however, such networks operate with limited resources and are sometimes deployed in hostile environments, leading to high failure rates, such as segmented routes and message loss, which reduce efficiency and can compromise the entire network. This work presents FTE-LEACH, a fault-tolerant and energy-efficient routing protocol that maintains efficient communication and data dissemination. The protocol was developed on top of the IEEE 802.15.4 standard and is suited to industrial networks with limited energy resources.
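FTE-LEACH itself is not specified in this abstract, but protocols in the LEACH family typically rotate the cluster-head role using a probabilistic threshold per round. The Python sketch below illustrates that classic election step only; the residual-energy weighting is an assumption for illustration, not the published FTE-LEACH rule.

```python
import random

def leach_threshold(p: float, current_round: int) -> float:
    """Classic LEACH election threshold T(n) = p / (1 - p*(r mod 1/p))."""
    epoch_length = round(1.0 / p)
    return p / (1.0 - p * (current_round % epoch_length))

def elect_cluster_heads(nodes, p=0.05, current_round=0):
    """Pick cluster heads for one round.

    `nodes` is a list of dicts with keys 'id', 'was_head_this_epoch',
    'energy' and 'energy_max'. Epoch resets are omitted for brevity, and
    the energy weighting below is an illustrative assumption.
    """
    heads = []
    for node in nodes:
        if node['was_head_this_epoch']:
            continue  # already served as cluster head in this epoch
        threshold = leach_threshold(p, current_round)
        threshold *= node['energy'] / node['energy_max']  # assumed energy weighting
        if random.random() < threshold:
            node['was_head_this_epoch'] = True
            heads.append(node['id'])
    return heads

# Example: 20 nodes, roughly 5% expected cluster heads per round.
nodes = [{'id': i, 'was_head_this_epoch': False,
          'energy': random.uniform(0.5, 1.0), 'energy_max': 1.0} for i in range(20)]
print(elect_cluster_heads(nodes, p=0.05, current_round=3))
```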
Abstract:
This thesis presents the design of an interface for adding inertial sensors to a node of a WSN (Wireless Sensor Network) aimed at landslide monitoring. By analysing the benefits that additional sensors would bring, it proposes a sound design approach; in particular, the idea is to extend the node with a gyroscope and an accelerometer that already have applications in other fields. Used in this way, they can improve monitoring by detecting movements in detail and by recognising false alarms. The suggested approach relies on rapid-prototyping boards that are user-friendly and very affordable, well suited to electronics experimentation and to the development of new devices. Using purpose-built development environments, the communications between the node and the sensor board were simulated, highlighting the benefits obtained. A large part of the project involved programming in C/C++, with particular attention to energy saving.
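The abstract does not give the detection logic, but a common way to combine an accelerometer and a gyroscope for movement detection is to flag an event only when both sensors exceed their thresholds, which helps reject false alarms. The Python sketch below illustrates that idea with made-up threshold values and a generic `read_imu()` stub standing in for the real sensor-board driver.

```python
import math

ACCEL_THRESHOLD_G = 0.15   # assumed deviation from 1 g of static gravity
GYRO_THRESHOLD_DPS = 5.0   # assumed rotation-rate threshold in deg/s

def read_imu():
    """Stub for the sensor-board driver; a real node would read the
    accelerometer (g) and gyroscope (deg/s) over I2C/SPI here."""
    return (0.01, -0.02, 1.00), (0.3, -0.1, 0.2)

def movement_detected(accel_g, gyro_dps):
    """Flag a candidate movement only if both sensors agree,
    which filters out single-sensor noise (false alarms)."""
    accel_mag = math.sqrt(sum(a * a for a in accel_g))
    gyro_mag = math.sqrt(sum(g * g for g in gyro_dps))
    accel_hit = abs(accel_mag - 1.0) > ACCEL_THRESHOLD_G  # deviation from gravity
    gyro_hit = gyro_mag > GYRO_THRESHOLD_DPS
    return accel_hit and gyro_hit

accel, gyro = read_imu()
print("movement" if movement_detected(accel, gyro) else "quiet")
```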
Abstract:
With the development of the Internet of Things, more and more IoT platforms appear, each with a different structure and characteristics. Weighing their advantages and disadvantages, we should choose the platform that suits each scenario. In this project, I compare a cloud-based centralized platform, Microsoft Azure IoT Hub, with a fully distributed platform, SensibleThings. A quantitative performance comparison is made using two scenarios: increasing message-sending rates, and devices located in different places. A general comparison covers security, utilization and storage. I conclude that SensibleThings performs more stably when many messages are pushed to the platform, while Microsoft Azure offers better geographic expansion. In the general comparison, Microsoft Azure IoT Hub has better security, and its requirements on the local device are lower than those of SensibleThings. SensibleThings is open source and free, whereas Microsoft Azure follows a "pay as you go" model with throttling limits that differ between editions. Microsoft Azure is also more user-friendly.
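The first scenario above ramps up the message-sending rate and observes stability. A minimal, platform-agnostic version of such a benchmark is sketched below; the `send` callable is a placeholder for whichever client is under test (for example an Azure IoT Hub or SensibleThings sender) and is an assumption, not either platform's actual API.

```python
import time
import statistics

def benchmark_send_rates(send, rates_per_sec=(1, 10, 50, 100), duration_s=5.0):
    """Ramp the sending rate and record per-message latency at each step.

    `send(payload)` is a user-supplied callable wrapping the platform client;
    it should block until the message is accepted.
    """
    results = {}
    for rate in rates_per_sec:
        interval = 1.0 / rate
        latencies = []
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            start = time.monotonic()
            send(b'{"temperature": 21.5}')
            latencies.append(time.monotonic() - start)
            time.sleep(max(0.0, interval - (time.monotonic() - start)))
        results[rate] = (statistics.mean(latencies), max(latencies))
    return results

# Example with a dummy sender that simulates 2 ms of network time.
if __name__ == "__main__":
    dummy_send = lambda payload: time.sleep(0.002)
    for rate, (mean_lat, max_lat) in benchmark_send_rates(dummy_send).items():
        print(f"{rate:>4} msg/s  mean={mean_lat*1000:.1f} ms  max={max_lat*1000:.1f} ms")
```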
Abstract:
The objective of this paper is to perform a quantitative comparison of Dweet.io and SensibleThings from different aspects. With the fast development of the Internet of Things, IoT platforms face ever bigger challenges. This paper evaluates both systems in four parts. The first part is a general comparison of the input methods and output functions provided by the platforms. The second part is a security comparison, focusing on the protocol types of the packets and the stability of the communication. The third part compares scalability as the transmitted values grow larger. The fourth part compares scalability as the processes are sped up. From these comparisons, I conclude that Dweet.io is easier to use on devices and supports more programming languages; it provides visualization that can be shared, and it is safer and more stable than SensibleThings. SensibleThings, on the other hand, is more open and scales better when handling large values at higher speeds.
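Dweet.io exposes a simple public HTTP interface where a "thing" is addressed by name; the sketch below posts a reading and reads it back with the `requests` library, illustrating the input and output methods compared above. The thing name and payload are made up, and the free public endpoint may rate-limit or change, so treat this as a sketch rather than a definitive client.

```python
import requests

THING = "example-lamp-42"  # made-up thing name for illustration

# Publish a reading ("dweet") for the thing.
resp = requests.post(
    f"https://dweet.io/dweet/for/{THING}",
    json={"temperature": 21.5, "humidity": 40},
    timeout=10,
)
print(resp.json())

# Read the latest dweet back.
latest = requests.get(f"https://dweet.io/get/latest/dweet/for/{THING}", timeout=10)
print(latest.json())
```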
Abstract:
With the proliferation of software systems and the rise of paradigms such as the Internet of Things, Cyber-Physical Systems and Smart Cities, to name a few, the energy consumed by software applications is emerging as a major concern. Hence, it has become vital that software engineers better understand the energy consumed by the code they write. At the software level, work so far has focused on measuring energy consumption at the function and application level. In this paper, we propose a novel approach to measure energy consumption at the feature level, cross-cutting multiple functions, classes and systems. We argue the importance of such measurement and the new insight it provides to non-traditional stakeholders such as service providers. We then demonstrate, using an experiment, how the measurement can be done with a combination of tools, namely our program slicing tool (PORBS) and energy measurement tool (Jolinar).
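PORBS and Jolinar are the authors' tools and their interfaces are not given here; the sketch below only illustrates the general idea of feature-level accounting, attributing energy to a feature by summing per-function measurements over the set of functions that the feature's slice touches. Function names and numbers are invented for illustration.

```python
# Conceptual sketch of feature-level energy attribution (not PORBS/Jolinar).
# A feature's slice is modelled as the set of functions it cuts across,
# and its energy as the sum of per-function measurements.

function_energy_joules = {      # invented per-function measurements
    "parse_request": 0.8,
    "authenticate": 1.3,
    "render_page": 2.1,
    "log_event": 0.4,
}

feature_slices = {              # invented feature -> functions mapping
    "login": {"parse_request", "authenticate", "log_event"},
    "dashboard": {"parse_request", "render_page"},
}

def feature_energy(feature: str) -> float:
    """Sum the energy of every function included in the feature's slice."""
    return sum(function_energy_joules[fn] for fn in feature_slices[feature])

for feature in feature_slices:
    print(f"{feature}: {feature_energy(feature):.1f} J")
```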
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are increasingly used to derive value out of this big data. A large portion of this data is stored and processed in the Cloud due to the several advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, which provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
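NSCALE's programming model and execution engine are not reproduced here. As a rough illustration of the neighborhood-centric idea the abstract describes (operating on multi-hop ego networks rather than single vertices), the sketch below extracts 2-hop neighborhood subgraphs with networkx and runs a toy per-subgraph computation; the graph and the analysis are invented for illustration.

```python
import networkx as nx

# Toy graph standing in for a large social / biological network.
G = nx.karate_club_graph()

def neighborhood_subgraphs(graph, radius=2):
    """Yield (center, subgraph) pairs, one multi-hop ego network per vertex.

    This mimics the neighborhood-centric view: the user program sees a whole
    k-hop subgraph instead of a single vertex's state.
    """
    for v in graph.nodes:
        yield v, nx.ego_graph(graph, v, radius=radius)

def triangle_density(subgraph):
    """Toy per-neighborhood analysis: how clustered is this ego network?"""
    return nx.transitivity(subgraph)

for center, sub in neighborhood_subgraphs(G, radius=2):
    print(f"node {center:2d}: {sub.number_of_nodes():2d} nodes, "
          f"transitivity={triangle_density(sub):.2f}")
```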
Abstract:
Securing e-health applications in the context of the Internet of Things (IoT) is challenging. Indeed, resource scarcity in such environments hinders the implementation of existing standards-based protocols. Among these protocols, MIKEY (Multimedia Internet KEYing) aims at establishing security credentials between two communicating entities. However, the existing MIKEY modes fail to meet IoT specificities. In particular, the pre-shared key mode is energy efficient but suffers from severe scalability issues; on the other hand, asymmetric modes such as the public key mode are scalable but highly resource-consuming. To address this issue, we combine two previously proposed approaches to introduce a new hybrid MIKEY mode. Relying on a cooperative approach, a set of third parties is used to offload the heavy computational operations from the constrained nodes. In this way, the pre-shared key mode is used in the constrained part of the network, while the public key mode is used in the unconstrained part. Preliminary results show that our proposed mode preserves energy while keeping its security properties intact.
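MIKEY's pre-shared-key mode derives traffic keys from a pre-shared secret with a pseudo-random function; the sketch below shows that general pattern using HMAC-SHA-256 from Python's standard library. The label, key sizes and PRF choice are simplifications for illustration, not the exact construction from RFC 3830 or the hybrid mode proposed above.

```python
import hmac
import hashlib
import os

def derive_session_key(pre_shared_key: bytes, rand_i: bytes, rand_r: bytes,
                       label: bytes = b"TEK", length: int = 16) -> bytes:
    """Derive a traffic-encryption key from a pre-shared key and nonces.

    Simplified HMAC-based PRF in the spirit of MIKEY's pre-shared-key mode;
    the real RFC 3830 construction differs in its labels and chaining.
    """
    out = b""
    counter = 0
    while len(out) < length:
        counter += 1
        out += hmac.new(pre_shared_key,
                        label + rand_i + rand_r + bytes([counter]),
                        hashlib.sha256).digest()
    return out[:length]

# Both ends hold the same pre-shared key and exchange fresh random nonces,
# so they derive the same session key without any public-key operation.
psk = os.urandom(32)
rand_initiator, rand_responder = os.urandom(16), os.urandom(16)
tek_initiator = derive_session_key(psk, rand_initiator, rand_responder)
tek_responder = derive_session_key(psk, rand_initiator, rand_responder)
assert tek_initiator == tek_responder
print(tek_initiator.hex())
```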