832 results for Internet of Things
Abstract:
J.L., then a 25-year-old physiotherapist, became densely amnesic following herpes simplex encephalitis. She displayed severe retrograde amnesia, category-specific semantic memory loss, and a profound anterograde amnesia affecting both verbal and visual memory. Her working memory systems were relatively spared, as were most of her cognitive problem-solving abilities, but her social functioning was grossly impaired. She was able to demonstrate several previously learned physiotherapy skills, but was unable to modify her application of these procedures in accordance with patient response. She showed no memory of theoretical or propositional knowledge, and could neither plan treatment nor reason clinically. Three years later, J.L. had profound impairment of anterograde and retrograde declarative memory, with relative sparing of working memory for problem solving and long-term memory of procedural skills. The theoretical and practical implications of her amnesic syndrome are discussed.
Abstract:
This article investigates the continuing influence of the past on contemporary politics in Poland and Ukraine by examining the impact of the vocal 'informed' segment of public opinion on mutual relations between the two countries. The section 'What history?' examines exactly what understanding of history matters so much in Polish-Ukrainian relations. The following sections examine how history influences the present, the contours of public opinion on Polish-Ukrainian relations within each state, and the impact of shared history on the contemporary politics of Polish-Ukrainian relations. Finally, the article suggests a potentially generalisable hypothesis for future research.
Abstract:
For the development of communication systems such as the Internet of Things, integrating communication with power supplies is an attractive way to reduce supply costs. This paper presents a novel method of power/signal dual modulation (PSDM), in which signal transmission is integrated with power conversion. The method uses the intrinsic ripple generated in switch-mode power supplies as a signal carrier, so that cost-effective communication can be realized. The principles of PSDM are discussed, and two basic dual modulation methods (specifically PWM/FSK and PWM/PSK) are derived. The key points of designing a PWM/FSK system, including topology selection, carrier shape, and carrier frequency, are discussed to provide theoretical guidelines. A practical signal modulation-demodulation method is given, and a prototype system provides experimental results that verify the effectiveness of the proposed solution.
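To make the PWM/FSK idea concrete: the converter's switching ripple acts as the carrier, and data bits key the switching frequency between two values. The minimal simulation below sketches this under assumed parameters (sample rate, carrier frequencies, bit period are all invented for illustration and are not values from the paper); the receiver recovers bits by counting ripple zero crossings per bit period.

```python
# Illustrative simulation of PWM/FSK: bits select one of two ripple carrier
# frequencies; demodulation estimates the frequency per bit period from zero
# crossings. All parameter values are assumptions made for this sketch.
import numpy as np

FS = 1_000_000            # simulation sample rate, Hz
F0, F1 = 20_000, 25_000   # assumed switching frequencies for bit 0 / bit 1
BIT_TIME = 2e-3           # seconds per data bit
N = int(FS * BIT_TIME)    # samples per bit

def modulate(bits):
    """Emit a sinusoidal stand-in for the converter ripple, FSK-keyed by bits."""
    t = np.arange(N) / FS
    return np.concatenate(
        [np.sin(2 * np.pi * (F1 if b else F0) * t + 0.1) for b in bits]
    )

def demodulate(signal):
    """Estimate the ripple frequency in each bit period via zero crossings."""
    bits = []
    for i in range(0, len(signal), N):
        chunk = signal[i:i + N]
        crossings = np.count_nonzero(np.diff(np.sign(chunk)))
        freq = crossings / (2 * BIT_TIME)  # two zero crossings per cycle
        bits.append(1 if abs(freq - F1) < abs(freq - F0) else 0)
    return bits

data = [1, 0, 1, 1, 0, 0, 1]
assert demodulate(modulate(data)) == data
```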
Abstract:
Through numerous technological advances in recent years, along with the popularization of computing devices, society is moving towards an "always connected" paradigm. Computer networks are everywhere, and the advent of IPv6 paves the way for the explosion of the Internet of Things, a concept that enables data sharing between computing machines and everyday objects. Vehicular networks are one of the areas placed under the Internet of Things. However, the information generated by a single vehicle is small in volume and, in isolation, does not contribute to improving traffic. This proposal presents the Infostructure, a system intended to reduce the effort and cost of developing context-aware applications with high-level semantics for the Internet of Things scenario, by allowing data to be managed, stored, and combined in order to generate broader context. To this end, we present a reference architecture that shows the major components of the Infostructure. A prototype is then presented and used to validate that our work reaches the desired high semantic level of contextualization, followed by a performance evaluation of the subsystem responsible for managing contextual information over a large amount of data, and a statistical analysis of the results obtained. Finally, we present the conclusions of the work, some open problems such as the lack of guarantees on the integrity of the sensory data arriving at the Infostructure, and future work that considers implementing other modules so that tests can be conducted in real environments.
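As an illustration of the kind of data combination such a system performs, the sketch below aggregates individual vehicle speed reports into an average speed per road segment, turning isolated readings into broader traffic context. Every class, field, and threshold here is invented for illustration; the abstract does not specify the Infostructure's actual API.

```python
# Hypothetical sketch of combining isolated vehicle reports into broader
# context (average speed per road segment). Names are illustrative only;
# the Infostructure's real interfaces are not described in the abstract.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class VehicleReport:
    vehicle_id: str
    road_segment: str
    speed_kmh: float

class ContextStore:
    def __init__(self):
        self._reports = defaultdict(list)  # road segment -> list of speeds

    def store(self, report: VehicleReport) -> None:
        self._reports[report.road_segment].append(report.speed_kmh)

    def combine(self, road_segment: str) -> dict:
        """Derive higher-level context from the isolated reports."""
        speeds = self._reports[road_segment]
        avg = sum(speeds) / len(speeds) if speeds else None
        return {
            "segment": road_segment,
            "vehicles": len(speeds),
            "avg_speed_kmh": avg,
            "congested": avg is not None and avg < 20,  # assumed threshold
        }

store = ContextStore()
store.store(VehicleReport("car-1", "BR-101:km42", 15.0))
store.store(VehicleReport("car-2", "BR-101:km42", 12.0))
print(store.combine("BR-101:km42"))  # reports the segment as congested
```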
Abstract:
Major food adulteration and contamination events occur with alarming regularity and are known to be episodic; the question is not if but when another large-scale food safety/integrity incident will occur. Indeed, the challenges of maintaining food security are now internationally recognised. The ever-increasing scale and complexity of food supply networks can make them significantly more vulnerable to fraud and contamination, and potentially dysfunctional. This can make the task of deciding which analytical methods are more suitable for collecting and analysing (bio)chemical data within complex food supply chains, at targeted points of vulnerability, that much more challenging. It is evident that those working within and associated with the food industry are seeking rapid, user-friendly methods to detect food fraud and contamination, and rapid/high-throughput screening methods for the analysis of food in general. In addition to being robust and reproducible, these methods should be portable, ideally as handheld and/or remote sensor devices that can be taken to, or positioned on/at-line at, points of vulnerability along complex food supply networks, and should require a minimum amount of background training to acquire information-rich data rapidly (ergo point-and-shoot). Here we briefly discuss a range of spectrometry- and spectroscopy-based approaches, many of which are commercially available, as well as other methods currently under development. We offer a future perspective on how this growing portfolio of detection methods, along with developments in computational and information sciences such as predictive computing and the Internet of Things, will together form systems- and technology-based approaches that significantly reduce the areas of vulnerability to food crime within food supply chains; food fraud is a problem of systems, and therefore requires systems-level solutions and thinking.
Abstract:
After years of deliberation, the EU Commission sped up the reform process of a common EU digital policy considerably in 2015 by launching the EU Digital Single Market strategy. In particular, two core initiatives of the strategy were agreed upon: the General Data Protection Regulation and the Network and Information Security (NIS) Directive. A new initiative addressing the role of online platforms was additionally launched. This paper focuses on the platform privacy rationale behind the data protection legislation, primarily based on the proposal for a new EU-wide General Data Protection Regulation. We analyse the legislation's rationale from an Information Systems perspective to understand the role user data plays in creating platforms that we identify as "processing silos". Generative digital infrastructure theories are used to explain the innovative mechanisms thought to govern digitalization and the successful business models it affects. We foresee continued judicial data protection challenges under the now-proposed Regulation as adoption of the "Internet of Things" continues. The findings of this paper illustrate that many of the existing issues can be addressed through legislation from a platform perspective. We conclude by proposing three modifications to the governing rationale, which would improve not only platform privacy for the data subject, but also entrepreneurial efforts to develop intelligent service platforms. The first modification aims to improve service differentiation on platforms by lessening the ability of incumbent global actors to lock the user base into their service/platform. The second posits limiting the current unwanted tracking ability of syndicates by separating authentication and data-store services from any processing entity. Thirdly, we propose changing how security and data protection policies are reviewed, suggesting a third-party auditing procedure.
Abstract:
With the development of the Internet of Things, more and more IoT platforms appear, each with a different structure and characteristics. Weighing their advantages and disadvantages, we should choose the platform that suits each scenario. In this project, I compare a cloud-based centralized platform, Microsoft Azure IoT Hub, with a fully distributed platform, SensibleThings. A quantitative performance comparison is made in two scenarios: increasing message-sending rates, and devices located in different places. A general comparison is made of security, utilization, and storage. I conclude that SensibleThings performs more stably when many messages are pushed to the platform, while Microsoft Azure offers better geographic expansion. In the general comparison, Microsoft Azure IoT Hub has better security, and its requirements on the local device are lower than those of SensibleThings. SensibleThings is open source and free, while Microsoft Azure follows a "pay as you go" model with many throttling limits across its different editions. Microsoft Azure is also more user-friendly.
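For context, the sketch below shows the kind of device-to-cloud message push such a scenario measures, using the Azure IoT Hub device SDK for Python (azure-iot-device). The connection string and payload are placeholders, and the thesis does not give its actual benchmark code; this is only a hedged illustration of the send path being timed.

```python
# Hedged sketch: pushing device-to-cloud telemetry to Azure IoT Hub with the
# azure-iot-device SDK, timing the sends as a crude throughput measurement.
# The connection string below is a placeholder, not a working credential.
import time
from azure.iot.device import IoTHubDeviceClient, Message

CONN_STR = "HostName=<your-hub>.azure-devices.net;DeviceId=<id>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONN_STR)
client.connect()

N = 100
start = time.monotonic()
for i in range(N):
    # Each message is a small JSON telemetry payload.
    client.send_message(Message(f'{{"seq": {i}, "temp": 21.5}}'))
elapsed = time.monotonic() - start
print(f"{N} messages in {elapsed:.2f}s ({N / elapsed:.1f} msg/s)")

client.shutdown()
```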
Abstract:
This project aims to compare two existing Internet of Things (IoT) platforms, SensibleThings (ST) and Global Sensors Networks (GSN), and can serve as further work on platform investigation. Comparing the platforms and learning from each is intended to contribute to the improvement of future platform development. The detailed comparison mainly covers platform features, communication and data-presentation-frequency performance under stress, and node scalability on a single resource-limited device. The study was conducted by developing applications on each platform and measuring performance under the same conditions in a household network environment; all of these aspects have produced results and conclusions. Comparing qualitatively, GSN performs better in terms of swift node development and deployment, data management, node subscription, and its connection-retry mechanism, whereas ST is superior in network package encryption, platform reliability, session-initialization latency, and degree of development freedom. In the quantitative comparison, GSN nodes withstand data-push pressure better, while ST nodes work with lower session latency. In terms of data-presentation frequency, an ST node can reach a higher update frequency than a GSN node. Regarding node scalability on one limited device, ST nodes have lower average latency than GSN nodes when fewer than 15 nodes run on the device, but thanks to GSN's sharing mechanism, its nodes scale better on a single limited device when the nodes perform similar jobs.
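The abstract does not include the measurement code, so as an illustration of the kind of push-pressure and latency stress test described, here is a minimal self-contained harness against a local TCP echo server. Everything in it is an assumption about methodology, not the project's actual harness; SensibleThings and GSN each have their own APIs.

```python
# Assumed illustration of a push-pressure / round-trip-latency stress test,
# run against a local TCP echo server standing in for a platform node.
import socket
import threading
import time

def echo_server(port: int = 9999) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    while data := conn.recv(4096):
        conn.sendall(data)          # echo back for round-trip timing
    conn.close()
    srv.close()

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)                     # give the server time to start

sock = socket.create_connection(("127.0.0.1", 9999))
latencies = []
for _ in range(100):                # push 100 messages, time each round trip
    t0 = time.monotonic()
    sock.sendall(b"sensor-update")
    sock.recv(4096)
    latencies.append(time.monotonic() - t0)
sock.close()

print(f"avg round-trip: {1000 * sum(latencies) / len(latencies):.2f} ms")
```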
Abstract:
The objective of this paper is to perform a quantitative comparison of Dweet.io and SensibleThings from different aspects. With the fast development of the Internet of Things, IoT platforms face bigger challenges. This paper evaluates both systems in four parts. The first part is a general comparison of the input methods and output functions provided by the platforms. The second part is a security comparison, focusing on the protocol types of the packets and the stability of the communication. The third part compares scalability as values grow larger, and the fourth compares scalability as the processes are sped up. From these comparisons, I conclude that Dweet.io is easier to use on devices and supports more programming languages; it provides visualization, and its data can be shared. Dweet.io is also safer and more stable than SensibleThings. SensibleThings, in turn, provides more openness and has better scalability in handling large values at high sending speeds.
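As an illustration of why Dweet.io is considered easy to use on devices: its public HTTP API lets a device publish and read data with single requests. A minimal sketch using the Python requests library follows; the thing name is a placeholder chosen for illustration, and the free public service imposes its own availability and rate limits.

```python
# Minimal sketch of Dweet.io's public HTTP API: publishing a reading
# ("dweet") and reading it back are each one HTTP request.
# "my-test-thing-42" is a placeholder thing name.
import requests

THING = "my-test-thing-42"

# Publish a reading for the thing.
resp = requests.post(
    f"https://dweet.io/dweet/for/{THING}",
    json={"temperature": 21.5, "humidity": 0.43},
    timeout=10,
)
print(resp.json()["this"])  # "succeeded" on success

# Read back the latest dweet for the same thing.
latest = requests.get(
    f"https://dweet.io/get/latest/dweet/for/{THING}", timeout=10
)
print(latest.json()["with"][0]["content"])  # the published payload
```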
Abstract:
With the proliferation of software systems and the rise of paradigms such as the Internet of Things, Cyber-Physical Systems and Smart Cities, to name a few, the energy consumed by software applications is emerging as a major concern. Hence, it has become vital that software engineers have a better understanding of the energy consumed by the code they write. At the software level, work so far has focused on measuring energy consumption at the function and application level. In this paper, we propose a novel approach to measuring energy consumption at the feature level, cross-cutting multiple functions, classes and systems. We argue the importance of such measurement and the new insight it provides to non-traditional stakeholders such as service providers. We then demonstrate, using an experiment, how the measurement can be done with a combination of tools, namely our program slicing tool (PORBS) and energy measurement tool (Jolinar).
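PORBS and Jolinar are the authors' own tools, and their interfaces are not given in the abstract. To illustrate the underlying idea of attributing energy to a piece of code, here is a hedged stand-in that reads Linux's RAPL package-energy counter around the execution of a (hypothetically pre-sliced) program; the slice paths and the comparison against the full program are assumptions, not the paper's actual PORBS+Jolinar pipeline.

```python
# Hedged illustration of feature-level energy attribution: read the Intel
# RAPL package-energy counter (Linux powercap sysfs) before and after a
# program slice runs. A stand-in for Jolinar-style measurement only.
import subprocess

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"  # package-0 energy, microjoules

def read_energy_uj() -> int:
    with open(RAPL) as f:
        return int(f.read())

def energy_of(cmd: list[str]) -> float:
    """Joules consumed while `cmd` runs (whole-package, hence noisy)."""
    before = read_energy_uj()
    subprocess.run(cmd, check=True)
    # Note: counter wrap-around is ignored in this sketch.
    return (read_energy_uj() - before) / 1e6

# Hypothetical usage: compare the full program against a slice containing
# only the feature of interest (slice produced by an ORBS-style tool).
full = energy_of(["./app"])
feature = energy_of(["./app_feature_slice"])
print(f"feature share ≈ {feature:.2f} J of {full:.2f} J")
```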