945 results for User-Computer Interface
Abstract:
Various applications for event detection, localization, and monitoring can benefit from the use of wireless sensor networks (WSNs). Wireless sensor networks are generally easy to deploy, have a flexible topology, and can support a diversity of tasks thanks to the large variety of sensors that can be attached to the wireless sensor nodes. To guarantee the efficient operation of such a heterogeneous wireless sensor network during its lifetime, appropriate management is necessary. Typically, there are three management tasks, namely monitoring, (re)configuration, and code updating. On the one hand, status information, such as battery state and node connectivity, of both the wireless sensor network and the sensor nodes has to be monitored. On the other hand, sensor nodes have to be (re)configured, e.g., by setting the sensing interval. Most importantly, new applications have to be deployed and bug fixes applied during the network lifetime. All management tasks have to be performed in a reliable, time- and energy-efficient manner. The ability to disseminate data from one sender to multiple receivers in a reliable, time- and energy-efficient manner is critical for the execution of the management tasks, especially for code updating. Using multicast communication in wireless sensor networks is an efficient way to handle such a traffic pattern. Due to the nature of code updates, a multicast protocol has to support bulky traffic and end-to-end reliability. Further, the limited resources of wireless sensor nodes demand an energy-efficient operation of the multicast protocol. Current data dissemination schemes do not fulfil all of the above requirements. In order to close this gap, we designed the Sensor Node Overlay Multicast (SNOMC) protocol to support reliable, time-efficient, and energy-efficient dissemination of data from one sender node to multiple receivers. In contrast to other multicast transport protocols, which do not support reliability mechanisms, SNOMC supports end-to-end reliability using a NACK-based reliability mechanism. The mechanism is simple and easy to implement and can significantly reduce the number of transmissions. It is complemented by a data acknowledgement after successful reception of all data fragments by the receiver nodes. SNOMC integrates three different caching strategies for efficient handling of necessary retransmissions, namely caching on each intermediate node, caching on branching nodes, or caching only on the sender node. Moreover, an option was included to pro-actively request missing fragments. SNOMC was evaluated both in the OMNeT++ simulator and in our in-house real-world testbed, and compared to a number of common data dissemination protocols, such as Flooding, MPR, TinyCubus, PSFQ, and both UDP and TCP. The results showed that SNOMC outperforms the selected protocols in terms of transmission time, number of transmitted packets, and energy consumption. Moreover, we showed that SNOMC performs well with different underlying MAC protocols, which support different levels of reliability and energy efficiency. Thus, SNOMC can offer a robust, high-performing solution for the efficient distribution of code updates and management information in a wireless sensor network. To address the three management tasks, in this thesis we developed the Management Architecture for Wireless Sensor Networks (MARWIS). MARWIS is specifically designed for the management of heterogeneous wireless sensor networks.
A distinguishing feature of its design is the use of wireless mesh nodes as a backbone, which enables diverse communication platforms and the offloading of functionality from the sensor nodes to the mesh nodes. This hierarchical architecture allows for efficient operation of the management tasks, due to the organisation of the sensor nodes into small sub-networks, each managed by a mesh node. Furthermore, we developed an intuitive graphical user interface, which allows non-expert users to easily perform management tasks in the network. In contrast to other management frameworks, such as Mate, MANNA, and TinyCubus, or code dissemination protocols, such as Impala, Trickle, and Deluge, MARWIS offers an integrated solution for monitoring, configuration, and code updating of sensor nodes. The integration of SNOMC into MARWIS further increases the efficiency of the management tasks. To our knowledge, our approach is the first to offer a combination of a management architecture with an efficient overlay multicast transport protocol. This combination of SNOMC and MARWIS supports reliable, time- and energy-efficient operation of a heterogeneous wireless sensor network.
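As a rough illustration of the NACK-based recovery scheme this abstract describes, the following Python sketch models one sender multicasting fragments to several receivers, with the sender caching all fragments and retransmitting only those reported missing. The fragment count, loss rate, and function names are illustrative assumptions, not part of SNOMC itself.

```python
# Minimal sketch (not the published SNOMC implementation): NACK-based
# end-to-end reliability with sender-side caching of fragments.
import random

FRAGMENTS = 16          # number of code-update fragments to disseminate (assumed)
LOSS_RATE = 0.2         # assumed per-fragment, per-link loss probability

def transmit(fragment_id, loss_rate=LOSS_RATE):
    """Model an unreliable wireless hop: returns True if the fragment arrives."""
    return random.random() > loss_rate

def disseminate(receivers):
    cache = {i: f"fragment-{i}" for i in range(FRAGMENTS)}   # sender-side cache
    delivered = {r: set() for r in receivers}
    tx_count = 0

    # Initial multicast of all fragments.
    for frag in cache:
        tx_count += 1
        for r in receivers:
            if transmit(frag):
                delivered[r].add(frag)

    # NACK rounds: receivers report only missing fragments; the sender
    # retransmits those from its cache until everyone holds all fragments.
    while any(len(delivered[r]) < FRAGMENTS for r in receivers):
        nacks = {frag for r in receivers
                 for frag in set(cache) - delivered[r]}
        for frag in nacks:
            tx_count += 1
            for r in receivers:
                if frag not in delivered[r] and transmit(frag):
                    delivered[r].add(frag)

    # At this point every receiver would send its final data acknowledgement.
    return tx_count

if __name__ == "__main__":
    print("transmissions needed:", disseminate(receivers=["n1", "n2", "n3"]))
```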
Abstract:
Since the emergence of the Internet and social media, privacy concerns and the need for regulation in this area have been a frequent subject on the agenda of numerous stakeholders and policy-makers worldwide. Contributing to this debate, this paper builds on the responses of 553 Internet users to uncover users' current privacy concerns and their attitudes towards legal assurances in this context. Our findings suggest that users have a complex attitude towards these issues. While they express strong concerns about privacy when asked directly, they often have difficulties formulating the exact nature of these concerns. In the Facebook context, Facebook itself is often mentioned as the primary source of threat, closely followed by marketing organizations. Users feel ill-protected by the existing legal framework, especially when using Social Networking Sites. Reasons include common beliefs that the law is unable to address the complexities of the Internet, the local character of laws, and the possibility of disregarding the law, particularly since enforcement is difficult. Overall, positive changes in the legal framework are desirable, with many respondents willing to pay more in taxes to ensure progress in this area.
Abstract:
Limited in motivation and cognitive ability to process the increasing amount of information on their Newsfeed, users apply heuristic processing to form their attitudes. Rather than extensively analysing the content, they increasingly rely on heuristic cues – such as the number of comments and likes as well as the level of relationship with the “poster” – to process the incoming information. In this paper we explore the impact these heuristic cues have on the affective and cognitive attitude of users towards the posts on their Newsfeed. We conduct a survey based on a Facebook application that allows users to evaluate Newsfeed posts in real time. Applying two distinct panel-regression methods, we report robust results indicating a certain relationship primacy effect when users process information: only if the level of relationship with the “poster” is low is the impact of comments and likes on the attitude considered, whereby likes trigger positive evaluations, whereas comments trigger negative ones.
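For readers unfamiliar with the method, a minimal sketch of a fixed-effects panel regression with the interaction the abstract tests (cue counts moderated by relationship strength) might look as follows; the variable names, data file, and clustering choice are assumptions, not the authors' actual specification.

```python
# Sketch of a fixed-effects panel regression with cue/relationship interactions.
# Column names (attitude, likes, comments, low_relationship, user_id) and the
# CSV file are hypothetical placeholders, not the authors' dataset.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("newsfeed_evaluations.csv")   # hypothetical survey export

# Interaction terms test the "relationship primacy" effect: cue counts should
# only matter when the tie to the poster is weak (low_relationship == 1).
model = smf.ols(
    "attitude ~ likes * low_relationship + comments * low_relationship"
    " + C(user_id)",                            # user dummies = fixed effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["user_id"]})

print(model.summary())
```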
Abstract:
In 2010 more than 600 radiocarbon samples were measured with the gas ion source at the MIni CArbon DAting System (MICADAS) at ETH Zurich, and the number of measurements is rising quickly. While most samples contain less than 50 µg C at present, the gas ion source is attractive for larger samples as well, because the time-consuming graphitization is omitted. Additionally, modern samples are now measured down to 5‰ counting statistics in less than 30 min with the recently improved gas ion source. In the versatile gas handling system, a stepping-motor-driven syringe presses a mixture of helium and sample CO2 into the gas ion source, allowing continuous and stable measurements of different kinds of samples. CO2 can be provided to the versatile gas interface in four different ways. As a primary method, CO2 is delivered in glass or quartz ampoules. In this case, the CO2 is released in an automated ampoule cracker with 8 positions for individual samples. Secondly, OX-1 and blank gas in helium can be provided to the syringe by directly connecting gas bottles to the gas interface at the stage of the cracker. Thirdly, solid samples can be combusted in an elemental analyzer or in a thermo-optical OC/EC aerosol analyzer, where the produced CO2 is transferred to the syringe via a zeolite trap for gas concentration. As a fourth method, CO2 is released from carbonates with phosphoric acid in septum-sealed vials and loaded onto the same trap used for the elemental analyzer. All four methods allow complete automation of the measurement, even though minor user input is presently still required. Details on the setup, versatility and applications of the gas handling system are given. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
During the last decade, mobile communications have increasingly become part of people's daily routine. Such usage raises new challenges regarding the management of devices' battery lifetime when using the most popular wireless access technologies, such as IEEE 802.11. This paper investigates the energy/delay trade-off of an end-user-driven power saving approach, compared with the standard IEEE 802.11 power saving algorithms. The assessment was conducted in a real testbed using an Android mobile phone and high-precision energy measurement hardware. The results show clear energy benefits of employing user-driven power saving techniques when compared with other standard approaches.
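A minimal sketch of how energy is typically derived from such a high-precision measurement trace, assuming sampled current and voltage; the trace below is synthetic and the numbers are purely illustrative.

```python
# Sketch: integrate instantaneous power P = V * I over a capture to obtain
# energy in joules. The synthetic trace stands in for real measurement data.
import numpy as np

def energy_joules(time_s, current_a, voltage_v):
    """Trapezoid-rule integration of power over the capture window."""
    power_w = np.asarray(current_a) * np.asarray(voltage_v)
    return np.trapz(power_w, x=np.asarray(time_s))

# Toy trace: 10 s capture at ~1 kHz, phone drawing ~0.3 A at 3.8 V (assumed).
t = np.linspace(0, 10, 10_001)
i = 0.3 + 0.05 * np.sin(2 * np.pi * 0.5 * t)   # periodic Wi-Fi activity
v = np.full_like(t, 3.8)

e = energy_joules(t, i, v)
print(f"energy: {e:.2f} J, average power: {e / t[-1]:.2f} W")
```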
Abstract:
Software developers often ask questions about software systems and software ecosystems that entail exploration and navigation, such as “who uses this component?” and “where is this feature implemented?”. Software visualisation can be a great aid to understanding and exploring the answers to such questions, but visualisations require expertise to implement effectively, and they do not always scale well to large systems. We propose to automatically generate software visualisations based on software models derived from open source software corpora and from an analysis of the properties of typical developer queries and commonly used visualisations. The key challenges we see are (1) understanding how to match queries to suitable visualisations, and (2) scaling visualisations effectively to very large software systems and corpora. In this paper we motivate the idea of automatic software visualisation, we enumerate the challenges and our proposals to address them, and we describe some very initial results in our attempts to develop scalable visualisations of open source software corpora.
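A minimal sketch of challenge (1), matching a developer query to a suitable visualisation while respecting scale; the query categories, visualisation names, and threshold are illustrative assumptions rather than the authors' taxonomy.

```python
# Sketch: route a developer query to a visualisation, falling back to an
# aggregated view when the query would touch too many entities.
from dataclasses import dataclass

@dataclass
class Query:
    kind: str          # e.g. "callers", "feature-location", "dependency"
    scope: int         # rough number of entities the answer may touch

# Candidate visualisations per query kind (hypothetical catalogue).
CATALOG = {
    "callers": ("call graph", "bundled edge view"),
    "feature-location": ("treemap overlay", "system hotspot map"),
    "dependency": ("dependency matrix", "aggregated package matrix"),
}

def match_visualisation(query: Query, detail_limit: int = 500) -> str:
    detailed, aggregated = CATALOG.get(query.kind, ("generic node-link view",) * 2)
    return detailed if query.scope <= detail_limit else aggregated

print(match_visualisation(Query(kind="callers", scope=40)))        # -> call graph
print(match_visualisation(Query(kind="dependency", scope=12_000))) # -> aggregated
```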
Abstract:
Background: Individuals with type 1 diabetes (T1D) have to count the carbohydrates (CHOs) of their meal to estimate the prandial insulin dose needed to compensate for the meal's effect on blood glucose levels. CHO counting is very challenging but also crucial, since an error of 20 grams can substantially impair postprandial control. Method: The GoCARB system is a smartphone application designed to support T1D patients with CHO counting of nonpacked foods. In a typical scenario, the user places a reference card next to the dish and acquires 2 images with his/her smartphone. From these images, the plate is detected and the different food items on the plate are automatically segmented and recognized, while their 3D shape is reconstructed. Finally, the food volumes are calculated and the CHO content is estimated by combining the previous results and using the USDA nutritional database. Results: To evaluate the proposed system, a set of 24 multi-food dishes was used. For each dish, 3 pairs of images were taken, and for each pair, the system was applied 4 times. The mean absolute percentage error in CHO estimation was 10 ± 12%, which led to a mean absolute error of 6 ± 8 CHO grams for normal-sized dishes. Conclusion: The laboratory experiments demonstrated the feasibility of the GoCARB prototype system, since the error was below the initial goal of 20 grams. However, further improvements and evaluation are needed prior to launching a system able to accommodate inter- and intracultural eating habits.
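A minimal sketch of the final estimation step described above, converting segmented food volumes into a CHO count and an error figure; the density and CHO values are placeholders, not entries from the USDA database.

```python
# Sketch: CHO estimation from (food label, volume) pairs produced by the
# segmentation and 3D reconstruction stages. Nutritional values are assumed.
FOOD_DB = {
    #             g/ml   g CHO per 100 g   (illustrative placeholders)
    "rice":      (0.80,  28.0),
    "chicken":   (1.00,   0.0),
    "beans":     (0.85,  16.0),
}

def cho_grams(segments):
    """segments: list of (food_label, estimated_volume_ml) from the reconstruction."""
    total = 0.0
    for label, volume_ml in segments:
        density, cho_per_100g = FOOD_DB[label]
        total += volume_ml * density * cho_per_100g / 100.0
    return total

estimate = cho_grams([("rice", 180.0), ("chicken", 120.0), ("beans", 90.0)])
reference = 55.0                                   # weighed ground truth, hypothetical
abs_err = abs(estimate - reference)
print(f"estimated CHO: {estimate:.1f} g, absolute error: {abs_err:.1f} g "
      f"({100.0 * abs_err / reference:.1f}%)")
```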
Abstract:
This chapter presents an evaluation and initial testing of a meta-application (meta-app) for enhanced communication and improved interaction (e.g., appointment scheduling) between stakeholders (e.g., citizens) in cognitive cities. The underlying theoretical models as well as the paper prototype are presented to ensure the comprehensibility of the user interface. This paper prototype of the meta-app was evaluated through interviews with various experts in different fields (e.g., a strategic consultant, a cofounder of a small and medium-sized enterprise in the field of online marketing, an IT project leader, and an innovation manager). The results and implications of the evaluation show that the idea behind this meta-app has the potential to improve the living standards of citizens and to lead to a next step in the realization and maturity of the meta-app. The meta-app helps citizens manage their time and organize their personal schedules more effectively, giving them more leisure time and supporting a good work-life balance, which in turn enables them to be more efficient and productive during their working time.
Abstract:
Various flavours of a new research field on (socio-)physical or personal analytics have emerged, with the goal of deriving semantically rich insights from people's low-level physical sensing combined with their (online) social interactions. In this paper, we argue for more comprehensive data sources, including environmental (e.g. weather, infrastructure) and application-specific data, to better capture the interactions between users and their context, in addition to those among users. To illustrate our proposed concept of synergistic user <-> context analytics, we first provide some example use cases. Then, we present our ongoing work towards a synergistic analytics platform: a testbed based on mobile crowdsensing and the Internet of Things (IoT), a data model for representing the different sources of data and their connections, and a prediction engine for analyzing the data and producing insights.
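A minimal sketch of what a data model linking user, social, environmental, and application-specific observations could look like; the class and field names are assumptions for illustration, not the platform's actual schema.

```python
# Sketch: unified data model for user <-> context analytics, where readings
# from different sources share one observation type and attach to a user.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Observation:
    source: str            # "wearable", "weather", "app", "social" (assumed categories)
    kind: str               # e.g. "steps", "temperature", "checkin"
    value: float
    timestamp: datetime

@dataclass
class UserContext:
    user_id: str
    observations: List[Observation] = field(default_factory=list)

    def add(self, obs: Observation) -> None:
        self.observations.append(obs)

    def latest(self, kind: str) -> Observation:
        """Most recent reading of a given kind, across all sources."""
        candidates = [o for o in self.observations if o.kind == kind]
        return max(candidates, key=lambda o: o.timestamp)

ctx = UserContext("u42")
ctx.add(Observation("wearable", "steps", 5200, datetime(2016, 5, 1, 9, 0)))
ctx.add(Observation("weather", "temperature", 17.5, datetime(2016, 5, 1, 9, 0)))
print(ctx.latest("steps").value)
```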
Abstract:
The widespread use of wireless-enabled devices and the increasing capabilities of wireless technologies have promoted multimedia content access and sharing among users. However, the quality perceived by users still depends on multiple factors, such as video characteristics, device capabilities, and link quality. While video characteristics include the video's temporal and spatial complexity as well as the coding complexity, one of the most important device characteristics is the battery lifetime. There is a need to assess how these aspects interact and how they impact overall user satisfaction. This paper advances previous work by proposing and validating a flexible framework, named EViTEQ, to be applied in real testbeds to satisfy the requirements of performance assessment. EViTEQ is able to measure network interface energy consumption with high precision, while being completely technology-independent and assessing application-level quality of experience. The results obtained in the testbed show the relevance of combined multi-criteria measurement approaches, leading to a superior evaluation of end-user satisfaction perception.
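A minimal sketch of a combined multi-criteria score of the kind the abstract argues for, trading measured interface energy against an application-level QoE rating; the weighting and sample values are assumptions, not EViTEQ's actual metric.

```python
# Sketch: combine measured energy and a mean-opinion-score (MOS) QoE rating
# into a single, higher-is-better figure. Weights and budget are assumed.
def efficiency_score(energy_j: float, qoe_mos: float,
                     energy_budget_j: float = 50.0,
                     w_qoe: float = 0.6, w_energy: float = 0.4) -> float:
    """Normalised QoE (MOS 1-5) traded off against normalised energy use."""
    qoe_norm = (qoe_mos - 1.0) / 4.0                    # map MOS 1..5 -> 0..1
    energy_norm = max(0.0, 1.0 - energy_j / energy_budget_j)
    return w_qoe * qoe_norm + w_energy * energy_norm

# Two hypothetical video-streaming configurations measured on a testbed:
print(efficiency_score(energy_j=32.0, qoe_mos=4.2))   # good quality, moderate energy
print(efficiency_score(energy_j=18.0, qoe_mos=3.1))   # lower quality, frugal
```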
Abstract:
A computer simulation study describing the electrophoretic separation and migration of methadone enantiomers in the presence of free and immobilized (2-hydroxypropyl)-β-CD is presented. The 1:1 interaction of methadone with the neutral CD was simulated using experimentally determined mobilities and complexation constants for the complexes in a low-pH BGE comprising phosphoric acid and KOH. The use of complex mobilities represents free-solution conditions with the chiral selector as a buffer additive, whereas complex mobilities set to zero provide data that mimic migration and separation with the chiral selector immobilized, that is, CEC conditions in the absence of unspecific interaction between analytes and the chiral stationary phase. Simulation data reveal that separations are quicker, electrophoretic displacement rates are reduced, and sensitivity is enhanced in CEC with on-column detection in comparison to free-solution conditions. Simulation is used to study electrophoretic analyte behavior at the interface between the sample and the CEC column with the chiral selector (analyte stacking) and at the rear end when analytes leave the environment with complexation (analyte destacking). The latter aspect is relevant for off-column analyte detection in CEC and is described here for the first time via the dynamics of migrating analyte zones. Simulation provides insight into means to counteract analyte dilution at the column end via the use of a BGE with higher conductivity. Furthermore, the impact of EOF on analyte migration, separation, and detection is simulated for configurations in which the selector zone is displaced or remains immobilized under buffer flow. In all cases, the data reveal that detection should occur within or immediately after the selector zone.
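The simulation rests on the standard 1:1 complexation mobility model; as a worked illustration, the sketch below evaluates the effective mobility mu_eff = (mu_free + mu_complex·K·[CD]) / (1 + K·[CD]) for a mobile and an immobilized selector (mu_complex = 0). The numeric mobilities, binding constants, and CD concentration are purely illustrative, not the experimentally determined values used in the paper.

```python
# Sketch: effective mobility under 1:1 analyte-selector binding.
def effective_mobility(mu_free, mu_complex, K, cd_conc):
    """mu_eff = (mu_free + mu_complex * K * [CD]) / (1 + K * [CD])  (1:1 binding)."""
    return (mu_free + mu_complex * K * cd_conc) / (1 + K * cd_conc)

mu_free = 20.0e-9        # m^2/(V*s), uncomplexed analyte (illustrative)
cd_conc = 10.0e-3        # mol/L of (2-hydroxypropyl)-beta-CD (illustrative)
K_R, K_S = 200.0, 400.0  # L/mol, enantiomer-specific binding constants (illustrative)

for label, K in (("R", K_R), ("S", K_S)):
    free_sol = effective_mobility(mu_free, 8.0e-9, K, cd_conc)  # mobile complex (free selector)
    cec      = effective_mobility(mu_free, 0.0,    K, cd_conc)  # immobilized selector (CEC)
    print(f"{label}: free-solution mu_eff = {free_sol:.2e}, CEC mu_eff = {cec:.2e}")
```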
Abstract:
High Angular Resolution Diffusion Imaging (HARDI) techniques, including Diffusion Spectrum Imaging (DSI), have been proposed to resolve crossing and other complex fiber architecture in human brain white matter. In these methods, directional information of diffusion is inferred from the peaks in the orientation distribution function (ODF). Extensive studies using histology on macaque brain, cat cerebellum, rat hippocampus and optic tracts, and bovine tongue are qualitatively in agreement with the DSI-derived ODFs and tractography. However, there are only two studies in the literature which validated DSI results using physical phantoms, and neither was performed on a clinical MRI scanner. Also, the limited studies which optimized DSI in a clinical setting did not involve a comparison against physical phantoms. Finally, there is a lack of consensus on the necessary pre- and post-processing steps in DSI, and ground-truth diffusion fiber phantoms are not yet standardized. Therefore, the aims of this dissertation were to design and construct novel diffusion phantoms, employ post-processing techniques in order to systematically validate and optimize DSI-derived fiber ODFs in crossing regions on a clinical 3T MR scanner, and develop user-friendly software for DSI data reconstruction and analysis. Phantoms with fixed crossing-fiber configurations of two fibers crossing at 90° and 45°, respectively, along with a phantom with three fibers crossing at 60°, were constructed using novel hollow plastic capillaries and novel placeholders. T2-weighted MRI results on these phantoms demonstrated high SNR, homogeneous signal, and absence of air bubbles. Also, a technique to deconvolve the response function of an individual peak from the overall ODF was implemented, in addition to other DSI post-processing steps. This technique greatly improved the angular resolution of otherwise unresolvable peaks in a crossing-fiber ODF. The effects of DSI acquisition parameters and SNR on the resultant angular accuracy of DSI on the clinical scanner were studied and quantified using the developed phantoms. With high angular direction sampling and reasonable levels of SNR, quantification of a crossing region in the 90°, 45° and 60° phantoms resulted in successful detection of angular information with mean ± SD of 86.93° ± 2.65°, 44.61° ± 1.6° and 60.03° ± 2.21°, respectively, while simultaneously enhancing the ODFs in regions containing single fibers. To demonstrate the applicability of these validated methodologies, improvements in ODFs and fiber tracking from known crossing-fiber regions in normal human subjects were shown, and an in-house software package in MATLAB, which streamlines the data reconstruction and post-processing for DSI with an easy-to-use graphical user interface, was developed. In conclusion, the phantoms developed in this dissertation offer a means of providing ground truth for validation of reconstruction and tractography algorithms of various diffusion models (including DSI). Also, the deconvolution methodology (when applied as an additional DSI post-processing step) significantly improved the angular accuracy of the ODFs obtained from DSI, and should be applicable to ODFs obtained from other high angular resolution diffusion imaging techniques.
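A minimal sketch of the angular-accuracy quantification reported above, computing the crossing angle between two detected ODF peak directions and its deviation from the phantom's nominal angle; the peak vectors are made-up examples, not data from the dissertation.

```python
# Sketch: crossing angle between two fiber directions, with antipodal symmetry.
import numpy as np

def crossing_angle(peak1, peak2):
    """Angle (degrees) between two fiber directions, ignoring sign."""
    p1 = np.asarray(peak1, dtype=float); p1 /= np.linalg.norm(p1)
    p2 = np.asarray(peak2, dtype=float); p2 /= np.linalg.norm(p2)
    cos_a = abs(np.dot(p1, p2))                     # |.| folds the 180-degree ambiguity
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Detected peaks in a voxel of a nominal 90-degree phantom (illustrative values):
measured = crossing_angle([1.0, 0.05, 0.0], [0.03, 1.0, 0.0])
print(f"measured crossing angle: {measured:.2f} deg, "
      f"error vs 90 deg: {abs(measured - 90):.2f} deg")
```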
Abstract:
This paper presents a new tool for large-area photo-mosaicking (LAPM tool). This tool was developed specifically for underwater mosaicking, and it is aimed at providing end-user scientists with an easy and robust way to construct large photo-mosaics from any set of images. It is notably capable of constructing mosaics with an unlimited number of images on any modern computer (minimum 1.30 GHz, 2 GB RAM). The mosaicking process can rely on both feature matching and navigation data. This is complemented by an intuitive graphical user interface, which gives the user the ability to select feature matches between any pair of overlapping images. Finally, mosaic files are given geographic attributes that permit direct import into ArcGIS. So far, the LAPM tool has been successfully used to construct geo-referenced photo-mosaics with photo and video material from several scientific cruises. The largest photo-mosaic contained more than 5000 images covering a total area of about 105,000 m². This is the first article to present and provide a finished and functional program for constructing large geo-referenced photo-mosaics of the seafloor using feature detection and matching techniques. It also presents concrete examples of photo-mosaics produced with the LAPM tool.
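A minimal sketch of the feature detection and matching step such a mosaicking workflow relies on, here with OpenCV's ORB features and a RANSAC homography; the file names are placeholders and this is not the LAPM tool's implementation.

```python
# Sketch: pairwise feature matching and transform estimation between two
# overlapping seafloor images (hypothetical file names).
import cv2
import numpy as np

img1 = cv2.imread("seafloor_001.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("seafloor_002.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with cross-check keeps only mutual best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Estimate the pairwise transform that places img2 in img1's frame.
src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)

print(f"{int(inliers.sum())} inlier matches out of {len(matches)}")
```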