897 results for Sensor Data Visualization
Abstract:
Background: The Unified Huntington's Disease Rating Scale (UHDRS) is the principal means of assessing motor impairment in Huntington disease but is subjective and generally limited to in-clinic assessments. Objective: To evaluate the feasibility and ability of wearable sensors to measure motor impairment in individuals with Huntington disease in the clinic and at home. Methods: Participants with Huntington disease and controls were asked to wear five accelerometer-based sensors attached to the chest and each limb for standardized, in-clinic assessments and for one day at home. A second chest sensor was worn for six additional days at home. Gait measures were compared between controls, participants with Huntington disease, and participants with Huntington disease grouped by UHDRS total motor score using Cohen's d values. Results: Fifteen individuals with Huntington disease and five controls completed the study. Sensor data were successfully captured from 18 of the 20 participants at home. In the clinic, the standard deviation of step time (time between consecutive steps) was increased in Huntington disease (p<0.0001; Cohen's d=2.61) compared to controls. At home with additional observations, significant differences were observed in seven additional gait measures. The gait of individuals with higher total motor scores (50 or more) differed significantly from that of individuals with lower total motor scores (below 50) on multiple measures at home. Conclusions: In this pilot study, the use of wearable sensors in clinic and at home was feasible and demonstrated gait differences between controls, participants with Huntington disease, and participants with Huntington disease grouped by motor impairment.
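To make the headline gait measure concrete: once step events have been detected (the abstract does not describe the detection pipeline, so the input below is a hypothetical list of step timestamps), the standard deviation of step time and the reported Cohen's d effect size can be computed as in this minimal Python sketch:

import numpy as np

def step_time_sd(step_times_s):
    # Step time = time between consecutive steps; the measure reported
    # in the abstract is the sample standard deviation of these intervals.
    intervals = np.diff(np.asarray(step_times_s, dtype=float))
    return float(np.std(intervals, ddof=1))

def cohens_d(a, b):
    # Cohen's d with a pooled standard deviation (two independent groups).
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

# Toy example: an irregular gait yields a larger step-time SD.
rng = np.random.default_rng(0)
regular = np.cumsum(np.full(20, 0.5))                 # evenly spaced steps
irregular = np.cumsum(0.5 + 0.08 * rng.standard_normal(20))
print(step_time_sd(regular), step_time_sd(irregular))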
Abstract:
This paper reflects a research project on the influence of online news media (from print, radio, and televised outlets) on disaster response. Coverage of the October 2010 Indonesian tsunami and earthquake was gathered from 17 sources from October 26 through November 30. These data were analyzed quantitatively with respect to coverage intensity over time and across outlets. Qualitative analyses were also conducted using keywords and a value scale that assessed the degree of positivity or negativity associated with each keyword in the context of accountability. Results yielded insights into the influence of online media on actors' assumption of accountability and on the quality of response. They also provided information on the optimal time window in which advocates and disaster management specialists can best present recommendations to improve policy and raise awareness. Coverage was analyzed for outlets individually, in groups, and as a whole, in order to discern behavior patterns for a better understanding of media interdependency. This project produced analytical insights but is primarily intended as a prototype for more refined and extensive research.
Abstract:
Data visualization is widely used to facilitate the comprehension of information and to find relationships between data. One of the most widely used techniques for visualizing multivariate data (4 or more variables) is the 2D scatterplot. This technique associates each data item with a visual mark in the following way: two variables are mapped to Cartesian coordinates so that the mark can be placed on the Cartesian plane; the other variables are mapped to visual properties of the mark, such as size, color, and shape. As the number of variables to be visualized increases, the number of visual properties associated with the mark increases as well, and so does the complexity of the final visualization. However, increasing the complexity of the visualization does not necessarily yield a better visualization; sometimes it has the opposite effect, producing a visually polluted and confusing result. This problem is called visual-property overload. This work investigates whether it is possible to work around the overload of the visual channel and improve insight into multivariate data through a modification of the 2D scatterplot technique. In this modification, we map the variables of data items to multisensory marks, composed not only of visual properties but also of haptic properties such as vibration, viscosity, and elastic resistance. We believed that this approach could ease the insight process by transposing properties from the visual channel to the haptic channel. The hypothesis was tested through experiments in which we analyzed (a) the accuracy of the answers; (b) response time; and (c) the degree of personal satisfaction with the proposed approach. However, the hypothesis was not validated. The results suggest an equivalence between the investigated visual and haptic properties in all analyzed aspects, though in strictly numeric terms the multisensory visualization achieved better results in response time and personal satisfaction.
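For readers unfamiliar with the baseline technique, the visual-channel mapping the abstract describes is easy to reproduce with any plotting library; the haptic extension has no analogue there, so this minimal Python/matplotlib sketch shows only the visual side, with four synthetic variables as placeholder data:

import numpy as np
import matplotlib.pyplot as plt

# Four variables per data item: two become Cartesian coordinates,
# the remaining two become visual properties of the mark.
rng = np.random.default_rng(0)
v1, v2, v3, v4 = rng.random((4, 60))

fig, ax = plt.subplots()
sc = ax.scatter(v1, v2,                # variables 1-2 -> x, y position
                s=40 + 260 * v3,       # variable 3   -> mark size
                c=v4, cmap="viridis")  # variable 4   -> mark color
ax.set_xlabel("variable 1")
ax.set_ylabel("variable 2")
fig.colorbar(sc, label="variable 4")
plt.show()

Each additional variable consumes another visual property of the mark, which is exactly how the overload the authors describe arises.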
Abstract:
From the 12th until the 17th of July 2016, the research vessel Maria S. Merian entered the Nordvestfjord of Scoresby Sound (East Greenland) as part of research cruise MSM56, "Ecological chemistry in Arctic fjords". A large variety of chemical and biological parameters of fjord and meltwater were measured during this cruise to characterize biogeochemical fluxes in Arctic fjords. The photo documentation described here was a side project. It was started when we were close to the Daugaard-Jensen glacier at the end of the Nordvestfjord and realized that not many people have seen this area before and that photos available to scientists are probably rare. These pictures shall help to document climate and landscape changes in a remote area of East Greenland. Pictures were taken with a Panasonic Lumix G6 equipped with either a 14-42 or a 45-150 lens (zoom factor available in the jpg metadata). Polarizer filters were used on both lenses. The time between taking the pictures and writing down the coordinates was at most one minute but usually shorter. The uncertainty in position is therefore small, as we were steaming slowly most of the time the pictures were taken (i.e., below 5 knots); I assume the uncertainty is in most cases below a 200 m radius around the noted position. At the beginning, I did not check the camera direction with a compass. Hence, the noted direction is an approximation based on the navigation map and the positioning of the ship. The uncertainty was probably around +/- 40°, but initially (pictures 1-17) perhaps even higher, as this documentation was a spontaneous idea and it took some time to get the orientation right. It should be easy, however, to find the location of the mountains and glaciers when at the respective positions, because the mountains have quite characteristic shapes. In a later stage of this documentation, I took pictures from the bridge and used the gyros to approximate the direction the camera was pointed at; here the uncertainty was much lower (i.e., +/- 20° or better). Directions approximated with the help of gyros have degree values in the overview table. The ship data provided in the MSM56 cruise report will contain all kinds of sensor data from the Maria S. Merian sensor setup. These data can also be used to further constrain the positions from which the pictures were taken, because the exact time a photo was shot is noted in the metadata of the .jpg file. The shipboard clock was set to UTC and was 57 minutes and 45 seconds behind the camera clock; for example, 12:57:45 on the camera was 12:00:00 UTC on the ship. All pictures provided here can be used for scientific purposes. In case of usage in presentations etc., please acknowledge RV Maria S. Merian (MSM56) and Lennart T. Bach as author. Please inform me and ask for reprint permission in case you want to use the pictures in scientific publications. I would like to thank all participants and the crew of Maria S. Merian Cruise 56 (MSM56, Ecological chemistry in Arctic fjords).
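The stated clock offset is simple to apply when matching photo timestamps to ship sensor data; a small Python sketch of the conversion, using the example given in the text:

from datetime import datetime, timedelta

# The shipboard clock (UTC) was 57 min 45 s behind the camera clock,
# so UTC = camera time - 57:45.
CAMERA_AHEAD_OF_UTC = timedelta(minutes=57, seconds=45)

def camera_to_utc(camera_time):
    return camera_time - CAMERA_AHEAD_OF_UTC

# 12:57:45 on the camera corresponds to 12:00:00 UTC on the ship.
print(camera_to_utc(datetime(2016, 7, 14, 12, 57, 45)))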
Abstract:
For the investigation of organic carbon fluxes reaching the seafloor, oxygen microprofiles were measured at 145 sites in different sub-regions of the Southern Ocean. At eleven sites, an in situ oxygen microprofiler was deployed for the measurement of oxygen profiles and the calculation of organic carbon fluxes. At four sites, both in situ and ex situ data were determined for high latitudes. Based on this dataset as well as on previously published data, a relationship was established for estimating fluxes derived from ex situ measured O2 profiles. The fluxes of labile organic matter range from 0.5 to 37.1 mgC/m**2/day. The high values determined by in situ measurements were observed in the Polar Front region (water depth of more than 4290 m) and are comparable to organic matter fluxes observed for high-productivity upwelling areas such as off West Africa. The oxygen penetration depth, which reflects the long-term organic matter flux to the sediment, was correlated with assemblages of key diatom species. In the Scotia Sea (~3000 m water depth), oxygen penetration depths of less than 15 cm were observed, indicating high benthic organic carbon fluxes. In contrast, the oxic zone extends down to several decimeters in abyssal sediments of the Weddell Sea and the southeastern South Atlantic. The regional pattern of organic carbon fluxes derived from micro-sensor data suggests that episodic and seasonal sedimentation pulses are important for the carbon supply to the seafloor of the deep Southern Ocean.
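The abstract does not spell out the flux calculation, but diffusive oxygen uptake is conventionally derived from a microprofile via Fick's first law, J = -phi * D_s * dC/dz, evaluated at the sediment-water interface. A generic Python sketch under that assumption, with porosity, diffusivity, and profile values as placeholders rather than values from this study:

import numpy as np

def diffusive_o2_flux(depth_m, o2_umol_l, porosity, d_s_m2_s):
    # Linear fit to the uppermost profile points gives dC/dz at the
    # interface; 1 umol/L = 1 mmol/m**3, so the flux J = -phi*D_s*dC/dz
    # comes out in mmol O2 m**-2 s**-1.
    grad = np.polyfit(depth_m, o2_umol_l, 1)[0]
    return -porosity * d_s_m2_s * grad

z = np.array([0.000, 0.001, 0.002, 0.003])   # m below interface
c = np.array([250.0, 220.0, 195.0, 172.0])   # umol/L, illustrative only
print(diffusive_o2_flux(z, c, porosity=0.8, d_s_m2_s=1.2e-9))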
Abstract:
Data Journalism has become one of the trends taking hold in the media. In just a few years, the development and visibility of this modality have grown considerably, and numerous outlets on the international scene now have dedicated Data Journalism teams and sections. Likewise, there are applications, platforms, websites, and foundations outside news companies whose work can also be framed within this field. The main objective of this contribution is to map the implementation of Data Journalism in Spain, both inside and outside the media. Although the discipline is still in a development phase, it seems appropriate to carry out an exploratory study that offers an overview of its current situation in Spain.
Abstract:
Poor sleep is increasingly being recognised as an important prognostic parameter of health. Patients with suspected sleep disorders are referred to sleep clinics, which guide treatment. However, sleep clinics are not always a viable option due to their high cost, a lack of experienced practitioners, lengthy waiting lists, and an unrepresentative sleeping environment. A home-based, non-contact sleep/wake monitoring system may be used as a guide for treatment, potentially stratifying patients by clinical need or highlighting longitudinal changes in sleep and nocturnal patterns. This paper presents the evaluation of an under-mattress sleep monitoring system for non-contact sleep/wake discrimination. A large dataset of sensor data with concomitant sleep/wake state was collected from both younger and older adults participating in a circadian sleep study. A thorough training/testing/validation procedure was configured, and optimised feature extraction and sleep/wake discrimination algorithms were evaluated both within and across the two cohorts. An accuracy, sensitivity, and specificity of 74.3%, 95.5%, and 53.2% are reported over all subjects using an external validation dataset (71.9%, 87.9%, and 56% and 77.5%, 98%, and 57% are reported for younger and older subjects, respectively). These results compare favourably with similar research; however, this system provides an ambient alternative suitable for long-term continuous sleep monitoring, particularly amongst vulnerable populations.
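The three reported figures follow directly from per-epoch confusion counts; a minimal Python sketch with hypothetical counts (chosen only to echo the reported pattern of high sensitivity and lower specificity, with sleep as the positive class):

def epoch_metrics(tp, tn, fp, fn):
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # fraction of sleep epochs detected
    specificity = tn / (tn + fp)   # fraction of wake epochs detected
    return accuracy, sensitivity, specificity

print(epoch_metrics(tp=900, tn=160, fp=140, fn=40))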
Abstract:
This paper presents a new method for analyzing the manual order-picking process, with which, among other things, order-picking time components can be recorded automatically. The method is based on sensor-supported motion classification, as applied, for example, in sports or medicine. Mobile sensors are used that continuously record measurements such as the acceleration or angular velocity of the order picker. From these data, information about the performed movements, and in particular about the traversed motion states, can be derived. In this paper, the approach is transferred to order picking. To this end, classes of relevant movements are first identified and then processed with machine-learning techniques. Classification follows the principle of supervised learning, achieving average recognition rates of up to 78.94 percent.
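The abstract names neither the features nor the classifier, so the following Python sketch is only a generic stand-in for the described pipeline: window the body-worn sensor streams, compute simple statistics per window, and train a supervised classifier on labelled motion states (synthetic data here; real recognition rates require annotated picker recordings):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(acc, gyro, win=128):
    # Fixed-size windows over synchronized acceleration and angular
    # velocity streams, reduced to simple per-window statistics.
    feats = []
    for i in range(0, len(acc) - win, win):
        a, g = acc[i:i + win], gyro[i:i + win]
        feats.append([a.mean(), a.std(), a.max() - a.min(),
                      g.mean(), g.std(), g.max() - g.min()])
    return np.array(feats)

rng = np.random.default_rng(1)
acc, gyro = rng.standard_normal(12800), rng.standard_normal(12800)
X = window_features(acc, gyro)
y = rng.integers(0, 4, size=len(X))   # 4 hypothetical motion classes
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())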
Abstract:
Thesis (Master's)--University of Washington, 2016-01
Abstract:
Simultaneous Localization and Mapping (SLAM) is a procedure used to determine the location of a mobile vehicle in an unknown environment while constructing a map of that environment at the same time. Mobile platforms that make use of SLAM algorithms have industrial applications in autonomous maintenance, such as the inspection of flaws and defects in oil pipelines and storage tanks. A typical SLAM system consists of four main components, namely, experimental setup (data gathering), vehicle pose estimation, feature extraction, and filtering. Feature extraction is the process of identifying significant features in the unknown environment, such as corners, edges, walls, and interior features. In this work, an original feature extraction algorithm specific to distance measurements obtained from SONAR sensor data is presented. This algorithm combines the SONAR Salient Feature Extraction Algorithm and the Triangulation Hough Based Fusion with point-in-polygon detection. The reconstructed maps obtained through simulations and experimental data with the fusion algorithm are compared to the maps obtained with existing feature extraction algorithms. Based on the results obtained, it is suggested that the proposed algorithm can be employed as an option for data obtained from SONAR sensors in environments where other forms of sensing are not viable. The feature extraction algorithm fusion requires the vehicle pose estimation as an input, which is obtained from a vehicle pose estimation model. For the vehicle pose estimation, the author uses sensor integration to estimate the pose of the mobile vehicle. Different combinations of sensors are studied (e.g., encoder, gyroscope, or encoder and gyroscope), and the different sensor fusion techniques for pose estimation are experimentally studied and compared. The vehicle pose estimation model that produces the least error is used to generate inputs for the feature extraction algorithm fusion. In the experimental studies, two environmental configurations are used: one without interior features and another with two interior features. Numerical and experimental findings are discussed. Finally, the SLAM algorithm is implemented along with the algorithms for feature extraction and vehicle pose estimation. Three different cases are experimentally studied, with the floor of the environment intentionally altered to induce slipping. Results obtained for implementations with and without SLAM are compared and discussed. The present work represents a step towards the realization of autonomous inspection platforms for performing concurrent localization and mapping in harsh environments.
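As an illustration of the pose-estimation building block, a dead-reckoning step that fuses an encoder (distance travelled) with a gyroscope (heading change) can be sketched in a few lines of Python; this is one common encoder+gyro combination, not necessarily the exact model the thesis selects:

import math

def propagate_pose(x, y, theta, d_encoder, d_theta_gyro):
    # Advance along the average heading over the step; the encoder
    # supplies distance, the gyroscope the change in heading.
    theta_mid = theta + d_theta_gyro / 2.0
    return (x + d_encoder * math.cos(theta_mid),
            y + d_encoder * math.sin(theta_mid),
            theta + d_theta_gyro)

# Drive 1 m forward while turning 90 degrees, in ten increments.
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = propagate_pose(*pose, d_encoder=0.1, d_theta_gyro=math.pi / 20)
print(pose)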
Abstract:
Artificial immune systems have previously been applied to the problem of intrusion detection. The aim of this research is to develop an intrusion detection system based on the function of Dendritic Cells (DCs). DCs are antigen presenting cells and key to the activation of the human immune system, behaviour which has been abstracted to form the Dendritic Cell Algorithm (DCA). In algorithmic terms, individual DCs perform multi-sensor data fusion, asynchronously correlating the fused data signals with a secondary data stream. Aggregate output of a population of cells is analysed and forms the basis of an anomaly detection system. In this paper the DCA is applied to the detection of outgoing port scans using TCP SYN packets. Results show that detection can be achieved with the DCA, yet some false positives can be encountered when simultaneously scanning and using other network services. Suggestions are made for using adaptive signals to alleviate this uncovered problem.
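A minimal Python sketch of the fusion step attributed to a single artificial DC: weighted sums of the input signals accumulate evidence for a "mature" (anomalous) versus "semi-mature" (normal) context, which is then assigned to the correlated antigen. Signal names and weights here are illustrative, not the published DCA parameters:

SIGNALS = ("pamp", "danger", "safe")

def fuse(pamp, danger, safe, weights):
    # Weighted-sum fusion of the three input signals.
    return sum(w * s for w, s in zip(weights, (pamp, danger, safe)))

def classify_antigen(samples):
    # Accumulate context evidence over all sampled signal triples.
    mature = sum(fuse(*s, weights=(2.0, 1.0, -2.0)) for s in samples)
    semi   = sum(fuse(*s, weights=(0.0, 0.0,  3.0)) for s in samples)
    return "anomalous" if mature > semi else "normal"

# Toy numbers resembling elevated danger/PAMP signals during a scan.
print(classify_antigen([(0.8, 0.9, 0.1), (0.7, 0.8, 0.2)]))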
Abstract:
Artificial immune systems, more specifically the negative selection algorithm, have previously been applied to intrusion detection. The aim of this research is to develop an intrusion detection system based on a novel concept in immunology, the Danger Theory. Dendritic Cells (DCs) are antigen presenting cells and key to the activation of the human immune system. DCs perform the vital role of combining signals from the host tissue and correlate these signals with proteins known as antigens. In algorithmic terms, individual DCs perform multi-sensor data fusion based on time-windows. The whole population of DCs asynchronously correlates the fused signals with a secondary data stream. The behaviour of human DCs is abstracted to form the DC Algorithm (DCA), which is implemented using an immune inspired framework, libtissue. This system is used to detect context switching for a basic machine learning dataset and to detect outgoing portscans in real-time. Experimental results show a significant difference between an outgoing portscan and normal traffic.
Abstract:
With the ever-growing number of connected sensors (IoT), making sense of sensed data becomes ever more important. Pervasive computing is a key enabler for sustainable solutions; prominent examples are smart energy systems and decision support systems. A key feature of pervasive systems is situation awareness, which allows a system to thoroughly understand its environment. It is based on external interpretation of data and thus relies on expert knowledge. Due to the distinct nature of situations in different domains and applications, the development of situation-aware applications remains a complex process. This thesis is concerned with a general framework for situation awareness that simplifies the development of such applications. It is based on the Situation Theory Ontology, which provides a foundation for situation modelling and allows knowledge reuse. Concepts of Situation Theory are mapped to the Context Space Theory, which is used for situation reasoning; Situation Spaces in the Context Space are automatically generated from the defined knowledge. For the acquisition of sensor data, the IoT standards O-MI/O-DF are integrated into the framework. These allow peer-to-peer data exchange between data publishers and the proposed framework, and thus a platform-independent subscription to sensed data. The framework is then applied to a use case to reduce food waste. The use case validates the applicability of the framework and furthermore serves as a showcase for a pervasive system contributing to sustainability goals. Leading institutions, e.g. the United Nations, stress the need for a more resource-efficient society and acknowledge the capability of ICT systems. The use case scenario is based on a smart neighbourhood in which the system recommends the most efficient use of food items through situation awareness to reduce food waste at the consumption stage.
Abstract:
The dendritic cell algorithm (DCA) is an immune-inspired algorithm, developed for the purpose of anomaly detection. The algorithm performs multi-sensor data fusion and correlation which results in a ‘context aware’ detection system. Previous applications of the DCA have included the detection of potentially malicious port scanning activity, where it has produced high rates of true positives and low rates of false positives. In this work we aim to compare the performance of the DCA and of a self-organizing map (SOM) when applied to the detection of SYN port scans, through experimental analysis. A SOM is an ideal candidate for comparison as it shares similarities with the DCA in terms of the data fusion method employed. It is shown that the results of the two systems are comparable, and both produce false positives for the same processes. This shows that the DCA can produce anomaly detection results to the same standard as an established technique.
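For context, the SOM used as the comparison baseline can be written compactly; the following generic Python sketch (grid size, rates, and data are placeholders, not the paper's setup) shows the two defining steps of best-matching-unit search and neighbourhood update:

import numpy as np

def train_som(data, grid=(8, 8), epochs=100, lr0=0.5, sigma0=2.0):
    rng = np.random.default_rng(0)
    w = rng.random((grid[0], grid[1], data.shape[1]))   # prototype vectors
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                     # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5
        for x in data[rng.permutation(len(data))]:
            d = ((w - x) ** 2).sum(axis=2)
            by, bx = np.unravel_index(d.argmin(), d.shape)  # best matching unit
            h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
            w += lr * h[..., None] * (x - w)            # pull neighbourhood toward sample
    return w

som = train_som(np.random.default_rng(1).random((200, 3)))
print(som.shape)   # (8, 8, 3) map of prototype vectors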
Abstract:
Part 14: Interoperability and Integration