916 results for sensor-Cloud system
Abstract:
Food safety has always been a social issue that draws great public attention. With the rapid development of wireless communication technologies and intelligent devices, more and more Internet of Things (IoT) systems are applied in the food safety tracking field. However, the connection between things and the information system is usually established by pre-storing information about the things in an RFID tag, which is inapplicable to on-field food safety detection. Therefore, considering that pesticide residue is one of the most severe threats to food safety, a new portable, high-sensitivity, low-power, on-field organophosphorus (OP) compound detection system is proposed in this thesis to realize on-field food safety detection. The system is designed around an optical detection method using a customized photo-detection sensor. A Micro Controller Unit (MCU) and a Bluetooth Low Energy (BLE) module are used to quantize and transmit the detection results. An Android application (APP) is also developed for the system to process and display the detection results as well as to control the detection process. In addition, a quartzose sample container and a black system box were designed and made for the system demonstration. Several optimizations are made in the wireless communication, circuit layout, Android APP and industrial design to achieve mobility, low power consumption and intelligence.
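As a rough illustration of the measurement path described above, the sketch below converts a raw ADC code from the photo-detection sensor into an OP concentration and packs it into a frame for the BLE module. The ADC resolution, calibration coefficients and payload layout are assumptions made for the sketch, not values from the thesis.

```python
import struct

ADC_BITS = 12          # assumed MCU ADC resolution
V_REF = 3.3            # assumed ADC reference voltage (volts)

def adc_to_voltage(raw: int) -> float:
    """Convert a raw ADC code from the photo-detection sensor to volts."""
    return raw * V_REF / ((1 << ADC_BITS) - 1)

def voltage_to_op_ppm(volts: float, slope: float = -2.5, offset: float = 4.0) -> float:
    """Map detector voltage to an OP concentration via a linear calibration
    (hypothetical coefficients; the real curve comes from the optical assay)."""
    return max(0.0, slope * volts + offset)

def ble_payload(sample_id: int, ppm: float) -> bytes:
    """Pack one measurement into a compact frame for the BLE module."""
    return struct.pack("<Hf", sample_id, ppm)  # 2-byte id + 4-byte float

raw_code = 1890  # example ADC reading
ppm = voltage_to_op_ppm(adc_to_voltage(raw_code))
print(f"OP concentration ~= {ppm:.2f} ppm, frame = {ble_payload(1, ppm).hex()}")
```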
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
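The progressive-sampling workflow that NOW! automates can be summarised in a short sketch (an illustration of the idea, not NOW!'s actual API): the same aggregate is evaluated over deterministic, progressively larger samples, and the work done on earlier samples is reused rather than recomputed.

```python
import random

random.seed(42)
data = [random.gauss(100.0, 15.0) for _ in range(1_000_000)]  # synthetic table

def progressive_mean(rows, fractions=(0.01, 0.05, 0.25, 1.0)):
    """Yield (progress, estimate) pairs; each sample extends the previous
    prefix rather than resampling, so earlier work is reused."""
    shuffled = rows[:]          # one shuffle => deterministic, nested samples
    random.shuffle(shuffled)
    total, count = 0.0, 0
    for frac in fractions:
        target = int(len(shuffled) * frac)
        for row in shuffled[count:target]:   # process only the new rows
            total += row
        count = target
        yield frac, total / count

for progress, estimate in progressive_mean(data):
    print(f"{progress:>5.0%} of data -> mean ~= {estimate:.3f}")
```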
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
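A minimal sketch of the neighborhood-centric programming model follows, with networkx standing in for NSCALE's distributed runtime: the user function receives a whole 2-hop subgraph rather than the state of a single vertex. The task and dataset are illustrative only.

```python
import networkx as nx

def local_clustering(subgraph: nx.Graph, ego) -> float:
    """Example neighborhood-level task: clustering coefficient of the ego
    node, computed with the full subgraph available to the user program."""
    return nx.clustering(subgraph, ego)

G = nx.karate_club_graph()  # toy stand-in for a large graph
for ego in list(G.nodes)[:5]:
    # The declaratively specified subgraph of interest: the 2-hop neighborhood.
    sub = nx.ego_graph(G, ego, radius=2)
    print(f"node {ego}: {local_clustering(sub, ego):.3f}")
```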
Abstract:
A sensing device for a touchless, hand gesture, user interface based on an inexpensive passive infrared pyroelectric detector array is presented. The 2 × 2 element sensor responds to changing infrared radiation generated by hand movement over the array. The sensing range is from a few millimetres to tens of centimetres. The low power consumption (< 50 μW) enables the sensor’s use in mobile devices and in low energy applications. Detection rates of 77% have been demonstrated using a prototype system that differentiates the four main hand motion trajectories – up, down, left and right. This device allows greater non-contact control capability without an increase in size, cost or power consumption over existing on/off devices.
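A hedged sketch of how such a 2 × 2 array can discriminate the four trajectories: the pair of elements a hand passes over first responds earlier, so the ordering of element trigger times indicates the motion direction. The timestamps and decision rule below are invented for illustration and are not the paper's algorithm.

```python
# Element layout:   TL TR
#                   BL BR
def classify(t_tl: float, t_tr: float, t_bl: float, t_br: float) -> str:
    """Return 'up', 'down', 'left' or 'right' from element trigger times (s)."""
    top, bottom = (t_tl + t_tr) / 2, (t_bl + t_br) / 2
    left, right = (t_tl + t_bl) / 2, (t_tr + t_br) / 2
    # The axis with the larger trigger-time difference dominates the motion.
    if abs(top - bottom) >= abs(left - right):
        return "down" if top < bottom else "up"
    return "right" if left < right else "left"

print(classify(0.00, 0.01, 0.08, 0.09))   # top row fires first -> 'down'
print(classify(0.05, 0.00, 0.06, 0.01))   # right column first  -> 'left'
```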
Abstract:
In this thesis, tool support is addressed for the combined disciplines of model-based testing and performance testing. Model-based testing (MBT) utilizes abstract behavioral models to automate test generation, thus decreasing the time and cost of test creation. MBT is a functional testing technique, focusing on output, behavior, and functionality. Performance testing, however, is non-functional and is concerned with responsiveness and stability under various load conditions. MBPeT (Model-Based Performance evaluation Tool) is one such tool, which utilizes probabilistic models representing dynamic real-world user behavior patterns to generate synthetic workload against a System Under Test (SUT) and in turn carry out performance analysis based on key performance indicators (KPIs). Developed at Åbo Akademi University, the MBPeT tool currently comprises a downloadable command-line based tool as well as a graphical user interface. The goal of this thesis project is two-fold: 1) to extend the existing MBPeT tool by deploying it as a web-based application, thereby removing the requirement of local installation, and 2) to design a user interface for this web application which will add new user interaction paradigms to the existing feature set of the tool. All phases of the MBPeT process will be realized via this single web deployment location, including probabilistic model creation, test configuration, test session execution against a SUT with real-time monitoring of user-configurable metrics, and final test report generation and display. This web application (MBPeT Dashboard) is implemented with the Java programming language on top of the Vaadin framework for rich internet application development. The Vaadin framework handles the complicated web communication processes and front-end technologies, freeing developers to implement the business logic as well as the user interface in pure Java. A number of experiments are run in a case study environment to validate the functionality of the newly developed Dashboard application as well as the scalability of the implemented solution in handling multiple concurrent users. The results support a successful solution with regard to the functional and performance criteria defined, while improvements and optimizations are suggested to increase both of these factors.
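The kind of probabilistic model MBPeT executes can be sketched as a Markov chain over user actions that is walked to emit a synthetic workload. The states and transition probabilities below are invented for illustration and do not reflect MBPeT's model format.

```python
import random

random.seed(7)
# action -> list of (next_action, probability)
MODEL = {
    "browse": [("browse", 0.5), ("search", 0.3), ("exit", 0.2)],
    "search": [("browse", 0.6), ("buy", 0.2), ("exit", 0.2)],
    "buy":    [("exit", 1.0)],
}

def simulate_session(start: str = "browse"):
    """Walk the model from `start`, yielding the actions a virtual user
    would issue against the SUT."""
    state = start
    while state != "exit":
        yield state
        nexts, probs = zip(*MODEL[state])
        state = random.choices(nexts, weights=probs)[0]

for i in range(3):
    print(f"virtual user {i}:", " -> ".join(simulate_session()))
```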
Abstract:
By considering the spatial character of sensor-based interactive systems, this paper investigates how discussions of seams and seamlessness in ubiquitous computing neglect the complex spatial character that is constructed as a side-effect of deploying sensor technology within a space. Through a study of a torch ('flashlight') based interface, we develop a framework for analysing this spatial character generated by sensor technology. This framework is then used to analyse and compare a range of other systems in which sensor technology is used, in order to develop a design spectrum that contrasts the revealing and hiding of a system's structure to users. Finally, we discuss the implications for interfaces situated in public spaces and consider the benefits of hiding structure from users.
Abstract:
Dissertation presented to the Instituto Politécnico de Castelo Branco in fulfilment of the requirements for the Master's degree in Software Development and Interactive Systems, carried out under the scientific supervision of Doutor Fernando Reinaldo Silva Garcia Ribeiro and Doutor José Carlos Meireles Monteiro Metrôlho, Adjunct Professors in the Unidade Técnico-Científica de Informática of the Escola Superior de Tecnologia, Instituto Politécnico de Castelo Branco.
Abstract:
The air-sea flux of greenhouse gases (e.g. carbon dioxide, CO2) is a critical part of the climate system and a major factor in the biogeochemical development of the oceans. More accurate and higher-resolution calculations of these gas fluxes are required if we are to fully understand and predict our future climate. Satellite Earth observation is able to provide large spatial scale datasets that can be used to study gas fluxes. However, the large storage requirements needed to host such data can restrict its use by the scientific community. Fortunately, the development of cloud computing can provide a solution. Here we describe an open source air-sea CO2 flux processing toolbox called the ‘FluxEngine’, designed for use on a cloud-computing infrastructure. The toolbox allows users to easily generate global and regional air-sea CO2 flux data from model, in situ and Earth observation data, and its air-sea gas flux calculation is user-configurable. Its current installation on the Nephalae cloud allows users to easily exploit more than 8 terabytes of climate-quality Earth observation data for the derivation of gas fluxes. The resultant NetCDF output files contain more than 20 data layers covering the various stages of the flux calculation, along with process-indicator layers to aid interpretation of the data. This paper describes the toolbox design and the verification of the air-sea CO2 flux calculations, demonstrates the use of the tools for studying global and shelf-sea air-sea fluxes, and describes future developments.
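The air-sea flux calculation configurable in FluxEngine follows the general bulk formula F = k·s·(pCO2,water − pCO2,air). The sketch below uses the Wanninkhof (2014) wind-speed parameterization for the gas transfer velocity k as one illustrative choice; treat the constants and unit conventions as assumptions for the sketch rather than the toolbox's fixed behaviour.

```python
def transfer_velocity_cm_per_hr(u10: float, schmidt: float = 660.0) -> float:
    """Gas transfer velocity k (cm/hr) from 10 m wind speed (m/s),
    Wanninkhof (2014): k = 0.251 * U10^2 * (Sc/660)^-0.5."""
    return 0.251 * u10 ** 2 * (schmidt / 660.0) ** -0.5

def co2_flux(u10: float, solubility: float, pco2_water: float, pco2_air: float) -> float:
    """Bulk flux F = k * s * dpCO2 in mmol m-2 day-1 (positive = outgassing).
    Solubility s in mol L-1 atm-1, pCO2 in microatmospheres."""
    k_m_per_day = transfer_velocity_cm_per_hr(u10) * 24.0 / 100.0
    # mol L-1 atm-1 times uatm gives umol L-1, which equals mmol m-3.
    delta_conc = solubility * (pco2_water - pco2_air)
    return k_m_per_day * delta_conc

# Example: moderate wind, slightly undersaturated surface water (net uptake).
print(f"{co2_flux(u10=7.0, solubility=0.035, pco2_water=380.0, pco2_air=400.0):.2f} mmol/m2/day")
```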
Abstract:
Over the past decade Surface Plasmon Resonance (SPR) techniques have been applied to the measurement of numerous analytes. In this article, an SPR biosensor system deployed from an oceanographic vessel was used to measure dissolved domoic acid (DA), a common and harmful phycotoxin produced by certain microalgae species belonging to the genus Pseudo-nitzschia. During the biosensor deployment, concentrations of Pseudo-nitzschia cells were very low over the study area and measured DA concentrations were below detection. However, the in situ operational detection limit of the system was established using calibrated seawater solutions spiked with DA. The system could detect the toxin at concentrations as low as 0.1 ng mL−1 and presented a linear dynamic range from 0.1 ng mL−1 to 2.0 ng mL−1. This sensor showed promise for in situ detection of DA.
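A hypothetical sketch of how such an operational detection limit can be established from spiked standards: fit a line to the responses over the linear range and take the limit of detection as the blank signal plus three standard deviations. All numbers below are invented; the article reports only the resulting 0.1 ng/mL limit and the 0.1-2.0 ng/mL linear range.

```python
import numpy as np

conc = np.array([0.1, 0.5, 1.0, 1.5, 2.0])      # DA standards (ng/mL)
resp = np.array([2.1, 9.8, 19.5, 30.2, 40.1])   # sensor response (arbitrary units)

slope, intercept = np.polyfit(conc, resp, 1)    # linear calibration fit
blank_mean, blank_sd = 0.4, 0.15                # assumed blank statistics

# Limit of detection: concentration whose response equals blank + 3*sd.
lod = (blank_mean + 3 * blank_sd - intercept) / slope
print(f"fit: resp = {slope:.2f}*c + {intercept:.2f}; LOD ~= {lod:.2f} ng/mL")
```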
Abstract:
A smart solar photovoltaic grid system arises from the coherent integration of information and communications technology (ICT) with power systems control engineering via the internet [1]. This thesis designs and demonstrates a smart solar photovoltaic grid system that is self-healing and environmentally and consumer friendly, with the ability to accommodate other renewable sources of energy generation seamlessly, creating a healthy competitive energy industry and optimising the efficiency of energy assets. This thesis also presents the modelling of an efficient dynamic smart solar photovoltaic power grid system, exploring maximum power point tracking efficiency and optimising the smart solar photovoltaic array through modelling and simulation to improve the quality of design of the solar photovoltaic module. Although quite promising results have been published in the literature over the past decade, most have not addressed the research questions posed in this thesis. The Levenberg-Marquardt and sparse-based algorithms have proven to be very effective tools in helping to improve the quality of design of solar photovoltaic modules, minimising the possible relative errors in this thesis. Guided by theoretical and analytical reviews of the literature, this research has chosen the MATLAB/Simulink software toolbox for the modelling and simulation experiments performed on the static smart solar grid system. The auto-correlation coefficient results obtained from the modelling experiments give an accuracy of 99% with negligible mean square error (MSE), root mean square error (RMSE) and standard deviation. This thesis further explores the design and implementation of a robust real-time online solar photovoltaic monitoring system, establishing a comparative study of two solar photovoltaic tracking systems which provide remote access to the harvested energy data. This research made a landmark innovation in designing and implementing a unique approach to online remote-access solar photovoltaic monitoring that provides updated information on the energy produced by the solar photovoltaic module at the site location. In addressing the challenge of online solar photovoltaic monitoring, a Darfon online data logger device has been systematically integrated into the design for a comparative study of the two solar photovoltaic tracking systems examined in this thesis. The site location for the comparative study is the National Kaohsiung University of Applied Sciences, Taiwan, R.O.C. The overall comparative energy output efficiency of the azimuthal-altitude dual-axis system over the 45° stationary solar photovoltaic monitoring system, as observed at the research site, is about 72% based on the total energy produced, the estimated money saved and the amount of CO2 reduction achieved. Similarly, in comparing the total amount of energy produced by the two solar photovoltaic tracking systems, the overall daily generated energy for the month of July shows the effectiveness of the azimuthal-altitude tracking system over the 45° stationary solar photovoltaic system. It was found that the azimuthal-altitude dual-axis tracking system was about 68.43% efficient compared to the 45° stationary solar photovoltaic system. Lastly, the overall comparative hourly energy efficiency of the azimuthal-altitude dual-axis system over the 45° stationary solar photovoltaic energy system was found to be 74.2%.
The results from this research are promising and satisfy the research objectives and questions posed in the thesis. The new algorithms introduced in this research, and the statistical measures applied to the modelling and simulation of the smart static solar photovoltaic grid system, outperformed previous works in the reviewed literature. Based on this new design of the online data logging system for solar photovoltaic monitoring, it is possible for the first time to have online, on-site information on the energy produced available remotely, and to carry out fault identification and rectification, maintenance and recovery as quickly as possible. The results presented in this research on the Internet of Things (IoT) in smart solar grid systems are likely to offer real-life lessons both for the existing body of knowledge and for the future solar photovoltaic energy industry, irrespective of the study site location of the comparative solar photovoltaic tracking systems. While the thesis has contributed to the smart solar photovoltaic grid system, it has also highlighted areas of further research, including the need to investigate further how to improve the choice and design quality of solar photovoltaic modules. Finally, it has made recommendations for further research on the minimisation of the absolute or relative errors in the quality and design of the smart static solar photovoltaic module.
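As a hedged illustration of the Levenberg-Marquardt fitting credited above with improving module model quality, the sketch below fits a simplified single-diode I-V expression to synthetic measurements with SciPy, whose curve_fit defaults to the Levenberg-Marquardt algorithm. The model form and parameter values are assumptions, not the thesis's model.

```python
import numpy as np
from scipy.optimize import curve_fit

def pv_current(v, i_ph, i_0, n):
    """Simplified single-diode PV model (series/shunt resistance neglected)."""
    v_t = 0.0259 * 60  # thermal voltage times an assumed 60 series cells (V)
    return i_ph - i_0 * (np.exp(v / (n * v_t)) - 1.0)

rng = np.random.default_rng(0)
v = np.linspace(0.0, 38.0, 40)
i_meas = pv_current(v, 8.2, 5e-9, 1.2) + rng.normal(0.0, 0.02, v.size)  # noisy I-V data

# curve_fit uses the Levenberg-Marquardt algorithm when no bounds are given.
params, _ = curve_fit(pv_current, v, i_meas, p0=[8.0, 1e-8, 1.3])
print("fitted [I_ph (A), I_0 (A), n]:", params)
```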
Abstract:
This paper deals with the development of an advanced parametric modelling concept for packaging components of a 24 GHz radar sensor IC used in automotive driver assistance systems. For fast and efficient design of packages for system-in-package (SiP) modules, a simplified model for the description of parasitic electromagnetic effects within the package is desirable, as 3-D field computation becomes inefficient due to the high density of conductive elements in the various signal paths of the package. By using lumped-element models to characterize the conductive components, a fast indication of the design's signal quality can be gained, but such models so far do not offer enough flexibility to cover the whole range of geometric arrangements of signal paths in a contemporary package. This work meets the challenge of developing a flexible and fast package modelling concept by defining parametric lumped-element models for all basic signal path components, e.g. bond wires, vias, strip lines, bumps and balls.
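To make the parametric lumped-element idea concrete, the sketch below reduces one basic signal path component, a bond wire, to an inductance computed from its geometry using the standard round-wire approximation L = (μ0/2π)·l·(ln(2l/r) − 0.75). The paper's actual models are richer and tuned against field-solver data; this is only an illustration of the approach.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def bondwire_inductance_nH(length_mm: float, radius_um: float) -> float:
    """Low-frequency self-inductance of a straight round bond wire (nH)."""
    l = length_mm * 1e-3
    r = radius_um * 1e-6
    return MU0 / (2 * math.pi) * l * (math.log(2 * l / r) - 0.75) * 1e9

# A 1 mm wire of 25 um diameter: recovers the familiar ~1 nH/mm rule of thumb.
print(f"{bondwire_inductance_nH(1.0, 12.5):.2f} nH")
```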
Abstract:
In the context of this work we evaluated a multi-sensor, noninvasive prototype platform for shake flask cultivations by monitoring three basic parameters (pH, pO2 and biomass). The focus is on the evaluation of the biomass sensor based on backward light scattering. The application spectrum was expanded to four new organisms in addition to E. coli K12 and S. cerevisiae [1]. It could be shown that the sensor is appropriate for a wide range of standard microorganisms, e.g., L. zeae, K. pastoris, A. niger and CHO-K1. The biomass sensor signal could successfully be correlated and calibrated with well-known measurement methods like OD600, cell dry weight (CDW) and cell concentration. Logarithmic and Bleasdale-Nelder derived functions were adequate for data fitting. Measurements at low cell concentrations proved to be critical in terms of signal-to-noise ratio, but the integration of a custom-made light shade in the shake flask improved these measurements significantly. This sensor-based measurement method has a high potential to initiate a new generation of online bioprocess monitoring. Metabolic studies in particular will benefit from the multi-sensor data acquisition. The sensor is already used in lab-scale experiments for shake flask cultivations.
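A hedged sketch of the calibration step: the backscatter signal is correlated with an offline biomass reference (e.g. OD600) by fitting one common form of the Bleasdale-Nelder function, y = (a + b·x)^(−1/c). The data points below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def bleasdale_nelder(x, a, b, c):
    """One common form of the Bleasdale-Nelder function."""
    return (a + b * x) ** (-1.0 / c)

od600 = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0])          # offline reference
signal = np.array([0.95, 0.80, 0.62, 0.43, 0.27, 0.16])   # scaled sensor output

params, _ = curve_fit(bleasdale_nelder, od600, signal, p0=[1.0, 1.0, 1.0])
a, b, c = params
print(f"fit: y = ({a:.3f} + {b:.3f}*x)^(-1/{c:.3f})")
```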
Abstract:
Accurate force measurement is required in many applications, namely determining the mechanical strength of materials, quality control during production, weighing, and personal safety. Given the widespread need to measure forces, several techniques and instruments have been developed over time for this purpose. Among the various instruments used, force sensors, also known as load cells, stand out for their simplicity, precision and versatility. The most common example is based on resistive strain gauges which, combined with a mechanical structure, form a load cell. Sensors of this type have low sensitivity and a non-zero offset at rest, which makes their signal conditioning complex. This work presents a solution for signal conditioning and data acquisition for load cells that, as far as our investigation shows, is innovative. The device performs signal conditioning, digitization and communication in a single atomic structure. The idea follows the smart-sensor paradigm, in which a single electronic device associated with a load cell executes a set of signal processing and data transmission operations. In particular, it allows the creation of an ad-hoc network using the IIC communication protocol. The system is intended to be integrated into a load platform, developed at the Escola Superior de Tecnologia e Gestão de Bragança, where it will be deployed. Because the platform is designed to read forces along three axes, it contains four load cells with two outputs each, for a total of eight outputs. The existing signal conditioning hardware is analogue and requires a board of considerable size for each output. From a functional point of view, it presents several problems, notably that gain and offset adjustment is done manually, making a circuit with better performance in handling an array of sensors of this type essential.
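A hedged sketch of the smart-sensor arrangement described above: each load cell is paired with a device that conditions, digitizes and exposes its two outputs over the IIC bus, and a master polls the eight outputs of the three-axis platform. The device addresses, register map and scaling are hypothetical, and smbus2 is simply a common Python I2C library on Linux single-board computers.

```python
from smbus2 import SMBus

CELL_ADDRESSES = [0x40, 0x41, 0x42, 0x43]   # hypothetical: one per load cell
RESULT_REG = 0x00                           # hypothetical 24-bit result registers
COUNTS_PER_NEWTON = 1200.0                  # hypothetical calibration factor

def read_force(bus: SMBus, addr: int, channel: int) -> float:
    """Read one of a load cell's two outputs and convert it to newtons."""
    raw = bus.read_i2c_block_data(addr, RESULT_REG + 3 * channel, 3)
    counts = int.from_bytes(bytes(raw), "big", signed=True)
    return counts / COUNTS_PER_NEWTON

with SMBus(1) as bus:                       # I2C bus 1 on e.g. a Raspberry Pi
    forces = [read_force(bus, a, ch) for a in CELL_ADDRESSES for ch in (0, 1)]
    print("eight outputs (N):", [f"{f:.2f}" for f in forces])
```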
Abstract:
Nowadays, the new generation of computers provides the performance needed to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common task of a robot and an essential step in allowing robots to move through these environments. Traditionally, mobile robots used a combination of several sensors from different technologies. Lasers, sonars and contact sensors have typically been used in mobile robotic architectures; however, color cameras are an important sensor because we want robots to use the same information that humans use to sense and move through different environments. Color cameras are cheap and flexible, but a lot of work needs to be done to give robots enough visual understanding of the scenes. Computer vision algorithms are computationally complex problems, but nowadays robots have access to different and powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors like the Microsoft Kinect, which provide 3D colored point clouds at high frame rates, has made computer vision even more relevant to the mobile robotics field. The combination of visual and 3D data allows systems to use both computer vision and 3D processing, and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Being aware of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance applications. In addition, the acquisition of a 3D model of the scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where once the 3D model of a room is obtained the system can add furniture pieces using augmented reality techniques. In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene-mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric appearance. In addition, this thesis proposes two methods for 3D data compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This Self-Organizing Map (SOM) variant has been successfully used for clustering, pattern recognition and topology representation of various kinds of data. Until now, Self-Organizing Maps have primarily been computed offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organising neural models have the ability to provide a good representation of the input space. In particular, the Growing Neural Gas (GNG) is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time-consuming, especially for high-dimensional input data. Since real applications often work under time constraints, it is necessary to adapt the learning process in order to complete it in a predefined time. This thesis proposes a hardware implementation leveraging the computing power of modern GPUs, taking advantage of a new paradigm coined General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method seeks to reduce the 3D information by using plane detection as the basic structure for compressing the data.
This is because our target environments are man-made, and therefore many points belong to planar surfaces. Our proposed method achieves good compression results in those man-made scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms. Finally, we have also demonstrated the benefits of GPU technologies by obtaining a high-performance implementation of a common CAD/CAM technique called Virtual Digitizing.
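A minimal RANSAC-style sketch of the plane-based compression idea follows: points lying on a detected plane can be replaced by the plane's parameters (plus a boundary, omitted here), which is where the compression gain in man-made scenes comes from. This is an illustration on invented data, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_plane(p0, p1, p2):
    """Plane (unit normal n, offset d) through three points: n.x + d = 0."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    return n, -n.dot(p0)

def ransac_plane(points, iters=200, tol=0.01):
    """Return the plane model with the most inliers within distance `tol`."""
    best_model = (np.array([0.0, 0.0, 1.0]), 0.0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        i, j, k = rng.choice(len(points), 3, replace=False)
        n, d = fit_plane(points[i], points[j], points[k])
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (n, d), inliers
    return best_model, best_inliers

# Synthetic scene: a noisy floor plane plus scattered clutter.
floor = np.column_stack([rng.uniform(0, 5, 800), rng.uniform(0, 5, 800),
                         rng.normal(0.0, 0.003, 800)])
clutter = rng.uniform(0, 5, (200, 3))
cloud = np.vstack([floor, clutter])

(n, d), inliers = ransac_plane(cloud)
print(f"plane normal ~= {np.round(n, 2)}, {inliers.sum()}/1000 points compressed")
```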
Abstract:
Part 20: Health and Care Networks
Abstract:
With the proliferation of new mobile devices and applications, the demand for ubiquitous wireless services has increased dramatically in recent years. The explosive growth in wireless traffic requires wireless networks to be scalable, so that they can be efficiently extended to meet communication demands. In a wireless network, the interference power typically grows with the number of devices in the absence of coordination among them. On the other hand, large-scale coordination is always difficult due to the low-bandwidth, high-latency interfaces between access points (APs) in traditional wireless networks. To address this challenge, the cloud radio access network (C-RAN) has been proposed, where a pool of baseband units (BBUs) is connected to distributed remote radio heads (RRHs) via high-bandwidth, low-latency links (i.e., the front-haul) and is responsible for all the baseband processing. But insufficient front-haul link capacity may limit the scale of C-RAN and prevent it from fully utilizing the benefits made possible by centralized baseband processing. As a result, front-haul link capacity becomes a bottleneck in the scalability of C-RAN. In this dissertation, we explore scalable C-RAN in an effort to tackle this challenge. In the first aspect of this dissertation, we investigate the scalability issues in existing wireless networks and propose a novel time-reversal (TR) based scalable wireless network in which the interference power is naturally mitigated by the focusing effects of TR communications, without coordination among APs or terminal devices (TDs). Due to this nice feature, it is shown that the system can easily be extended to serve more TDs. Motivated by the nice properties of TR communications in providing scalable wireless networking solutions, in the second aspect of this dissertation we apply TR-based communications to the C-RAN and discover the TR tunneling effects, which alleviate the traffic load in the front-haul links caused by the increasing number of TDs. We further design waveforming schemes to optimize the downlink and uplink transmissions in the TR-based C-RAN, which are shown to improve downlink and uplink transmission accuracy. Consequently, the traffic load in the front-haul links is further alleviated by reducing the re-transmissions caused by transmission errors. Moreover, inspired by the TR-based C-RAN, we propose a compressive quantization scheme that applies to the uplink of multi-antenna C-RAN, so that more antennas can be utilized within the limited front-haul capacity, providing rich spatial diversity so that massive numbers of TDs can be served more efficiently.
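A toy illustration of the TR focusing effect the dissertation builds on: transmitting the time-reversed, conjugated channel impulse response makes the effective end-to-end channel peak sharply at a single tap, which is what suppresses interference without coordination. The channel taps below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)
L = 16  # number of multipath taps
h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2 * L)  # channel

g = np.conj(h[::-1])           # TR prefilter: time-reversed conjugate of h
g = g / np.linalg.norm(g)      # normalize to unit transmit power

effective = np.convolve(g, h)  # effective channel seen at the receiver
peak = np.max(np.abs(effective))
next_tap = np.sort(np.abs(effective))[-2]
print(f"focusing gain: peak / next-largest tap = {peak / next_tap:.1f}x")
```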