248 results for "sensor uncertainty"
Abstract:
This paper presents a method of recovering the 6 DoF pose (Cartesian position and angular rotation) of a range sensor mounted on a mobile platform. The method utilises point targets in a local scene and optimises over the error between their absolute positions and their apparent positions as observed by the range sensor. The analysis includes an investigation into the sensitivity and robustness of the method. Practical results, collected using a SICK LRS2100 mounted on a P&H electric mining shovel, present the errors in scan data relative to an independent 3D scan of the scene. A comparison with directly measuring the sensor pose shows the significant accuracy improvements in scene reconstruction obtained with this pose estimation method.
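The target-based idea above can be illustrated with a minimal sketch: given the absolute positions of point targets and their apparent positions in the sensor frame, a least-squares rigid alignment recovers the sensor's rotation and translation. This uses the Kabsch/SVD method as a stand-in for the paper's optimiser; the function name and data are hypothetical.

```python
import numpy as np

def estimate_pose(targets_world, targets_sensor):
    """Least-squares rigid alignment (Kabsch/SVD): find R, t such that
    targets_world ~= targets_sensor @ R.T + t, i.e. the sensor's
    rotation and translation in the world frame."""
    cw = targets_world.mean(axis=0)    # centroid of absolute positions
    cs = targets_sensor.mean(axis=0)   # centroid of observed positions
    H = (targets_sensor - cs).T @ (targets_world - cw)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cw - R @ cs
    return R, t
```

With exact (noise-free) observations the true pose is recovered to machine precision; with noisy targets the same call returns the least-squares pose, which is the quantity the paper's sensitivity analysis studies.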
Abstract:
Modern statistical models and computational methods can now incorporate uncertainty in the parameters used in Quantitative Microbial Risk Assessments (QMRA). Many QMRAs use Monte Carlo methods, but work from fixed estimates for means, variances and other parameters. We illustrate the ease of estimating all parameters contemporaneously with the risk assessment, incorporating all the parameter uncertainty arising from the experiments from which these parameters are estimated. A Bayesian approach is adopted, using Markov chain Monte Carlo (MCMC) Gibbs sampling via the freely available WinBUGS software. The method and its ease of implementation are illustrated by a case study that involves incorporating three disparate datasets into an MCMC framework. The probabilities of infection when the uncertainty associated with parameter estimation is incorporated into a QMRA are shown to be considerably more variable over various dose ranges than the analogous probabilities obtained when constants from the literature are simply ‘plugged in’, as is done in most QMRAs. Neglecting these sources of uncertainty may lead to erroneous decisions for public health and risk management.
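The contrast between plug-in constants and full parameter uncertainty can be sketched in a few lines. This is not the case-study model: it uses an exponential dose-response curve with a lognormal stand-in for an MCMC posterior, and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

dose = 100.0    # ingested organisms (hypothetical)
r_hat = 0.005   # plug-in point estimate of the dose-response parameter

# Plug-in approach: a single probability of infection.
p_plugin = 1.0 - np.exp(-r_hat * dose)

# Uncertainty-aware approach (sketch): push posterior samples of r
# through the same dose-response model instead of a single constant.
r_samples = rng.lognormal(mean=np.log(r_hat), sigma=0.5, size=10_000)
p_samples = 1.0 - np.exp(-r_samples * dose)

# The plug-in value hides the spread that the posterior reveals.
lo, hi = np.percentile(p_samples, [2.5, 97.5])
```

The resulting 95% interval is substantially wider than the single plug-in number, which is the abstract's central point: ignoring parameter uncertainty understates the variability of the risk estimate.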
Abstract:
Generally speaking, psychologists have suggested three traditional views of how people cope with uncertainty: the certainty maximiser, the intuitive statistician-economist and the knowledge seeker (Smithson, 2008). In times of uncertainty, such as the recent global financial crisis, these coping methods often result in innovation in industry. Richards (2003) identifies innovation as different from creativity in that innovation aims to transform and implement rather than simply explore and invent. An examination of the work of iconic fashion designers, through case study and situational analysis, reveals that coping with uncertainty manifests itself in ways that have resulted in innovations in design, marketing methods, production and consumption. In relation to contemporary fashion, where many garments look the same in style, colour, cut and fit (Finn, 2008), the concept of innovation is an important one. This paper explores the role of uncertainty as a driver of innovation in fashion design. A key aspect of seeking knowledge, as a mechanism to cope with this uncertainty, is a return to basics. This is a problem for contemporary fashion designers who are no longer necessarily makers and therefore do not engage with the basic materials and methods of garment construction. In many cases design in fashion has become digital, communicated to an unseen, unknown production team via scanned image and specification alone. The disconnection between the design and the making of garments, as a result of decades of off-shore manufacturing, has limited the opportunity for this return to basics. The authors argue that the role of the fashion designer has become about the final product, and as a result there is a lack of innovation in the process of making: in the form, fit and function of fashion garments. They propose that ‘knowledge seeking’ as a result of uncertainty in the fashion industry, in particular through re-examination of the methods of making, could hold the key to a new era of innovation in fashion design.
Abstract:
This paper addresses the tradeoff between energy consumption and localization performance in a mobile sensor network application. The focus is on augmenting GPS location with more energy-efficient location sensors to bound position estimate uncertainty in order to prolong node lifetime. We use empirical GPS and radio contact data from a large-scale animal tracking deployment to model node mobility, GPS and radio performance. These models are used to explore duty-cycling strategies for maintaining position uncertainty within specified bounds. We then explore the benefits of using short-range radio contact logging alongside GPS as an energy-inexpensive means of lowering uncertainty while the GPS is off, and we propose a versatile contact logging strategy that relies on RSSI ranging and GPS lock back-offs to reduce node energy consumption relative to GPS duty cycling. Results show that our strategy can cut node energy consumption by half while meeting application-specific positioning criteria.
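A toy simulation conveys the duty-cycling idea: worst-case position uncertainty grows between fixes at the node's maximum speed, a GPS fix is taken only when the bound would be exceeded, and a cheap radio contact resets the uncertainty without a fix. All constants are hypothetical, not values from the deployment.

```python
V_MAX = 1.0      # assumed maximum node speed, m/s
GPS_ERR = 5.0    # position error right after a GPS fix, m
BOUND = 50.0     # application-specific uncertainty bound, m
FIX_COST = 1.0   # energy units per GPS fix (hypothetical)

def simulate(duration_s, contact_events=()):
    """Duty-cycle the GPS: sleep while the worst-case uncertainty
    (GPS_ERR + V_MAX * time_since_reset) stays under BOUND; a
    short-range contact with a located peer resets uncertainty
    without spending GPS energy."""
    energy, fixes = 0.0, 0
    last_reset = 0
    contacts = set(contact_events)
    for t in range(duration_s):
        uncertainty = GPS_ERR + V_MAX * (t - last_reset)
        if t in contacts:         # cheap reset via radio contact
            last_reset = t
            continue
        if uncertainty >= BOUND:  # bound reached: must take a fix
            energy += FIX_COST
            fixes += 1
            last_reset = t
    return energy, fixes
```

Running the same horizon with and without periodic contacts shows the mechanism behind the reported savings: each contact that lands before the bound is reached replaces a GPS fix.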
Abstract:
A Wireless Sensor Network (WSN) is a set of sensors that are integrated with a physical environment. These sensors are small in size, and capable of sensing physical phenomena and processing them. They communicate in a multihop manner, due to a short radio range, to form an ad hoc network capable of reporting network activities to a data collection sink. Recent advances in WSNs have led to several new promising applications, including habitat monitoring, military target tracking, natural disaster relief, and health monitoring. The current version of sensor node, such as MICA2, uses a 16-bit, 8 MHz Texas Instruments MSP430 micro-controller with only 10 KB RAM, 128 KB program space, 512 KB external flash memory to store measurement data, and is powered by two AA batteries. Due to these unique specifications and a lack of tamper-resistant hardware, devising security protocols for WSNs is complex. Previous studies show that data transmission consumes much more energy than computation. Data aggregation can greatly help to reduce this consumption by eliminating redundant data. However, aggregators are under the threat of various types of attacks. Among them, node compromise is usually considered one of the most challenging for the security of WSNs. In a node compromise attack, an adversary physically tampers with a node in order to extract its cryptographic secrets. This attack can be very harmful depending on the security architecture of the network. For example, when an aggregator node is compromised, it is easy for the adversary to change the aggregation result and inject false data into the WSN. The contributions of this thesis to the area of secure data aggregation are manifold. We firstly define security for data aggregation in WSNs. In contrast with existing secure data aggregation definitions, the proposed definition covers the unique characteristics that WSNs have. Secondly, we analyze the relationship between security services and adversarial models considered in existing secure data aggregation in order to provide a general framework of required security services. Thirdly, we analyze existing cryptographic-based and reputation-based secure data aggregation schemes. This analysis covers the security services provided by these schemes and their robustness against attacks. Fourthly, we propose a robust reputation-based secure data aggregation scheme for WSNs. This scheme minimizes the use of heavy cryptographic mechanisms. The security advantages provided by this scheme are realized by integrating aggregation functionalities with: (i) a reputation system, (ii) an estimation theory, and (iii) a change detection mechanism. We have shown that this addition helps defend against most of the security attacks discussed in this thesis, including the On-Off attack. Finally, we propose a secure key management scheme in order to distribute essential pairwise and group keys among the sensor nodes. The proposed scheme combines Lamport's reverse hash chain with a conventional forward hash chain to provide both past and future key secrecy. The proposal avoids delivering the whole value of a new group key during a group key update; instead, only half of the value is transmitted from the network manager to the sensor nodes. This way, the compromise of a pairwise key alone does not lead to the compromise of the group key. The new pairwise key in our scheme is determined by Diffie-Hellman based key agreement.
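The reverse hash chain mentioned in the final contribution can be sketched in a few lines. This shows only the reverse-chain authentication idea; the thesis's full scheme also involves a forward chain, split key delivery and Diffie-Hellman agreement, and the seed and chain length here are hypothetical.

```python
import hashlib

def make_chain(seed: bytes, length: int):
    """Build a hash chain K_0, K_1 = H(K_0), ..., K_n = H(K_{n-1}).
    The manager pre-distributes the final value K_n as an anchor and
    then releases keys in *reverse* generation order (K_{n-1}, K_{n-2},
    ...), so earlier keys cannot be derived from later-released ones."""
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    return chain

def verify_next(prev_released: bytes, candidate: bytes) -> bool:
    """A node accepts `candidate` as the next key iff hashing it
    reproduces the previously released (later-in-chain) key."""
    return hashlib.sha256(candidate).digest() == prev_released
```

Because SHA-256 is one-way, an adversary holding a released key cannot compute the keys still to come, which is the "past key secrecy" property the scheme builds on.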
Abstract:
Starting from a local problem with finding an archival clip on YouTube, this paper expands to consider the nature of archives in general. It considers the technological, communicative and philosophical characteristics of archives over three historical periods: 1) Modern ‘essence archives’ – museums and galleries organised around the concept of objectivity and realism; 2) Postmodern mediation archives – broadcast TV systems, which I argue were also ‘essence archives’, albeit a transitional form; and 3) Network or ‘probability archives’ – YouTube and the internet, which are organised around the concept of probability. The paper goes on to argue the case for introducing quantum uncertainty and other aspects of probability theory into the humanities, in order to understand the way knowledge is collected, conserved, curated and communicated in the era of the internet. It is illustrated throughout by reference to the original technological ‘affordance’ – the Olduvai stone chopping tool.
Abstract:
In fault detection and diagnostics, limitations coming from the sensor network architecture are one of the main challenges in evaluating a system's health status. Usually the design of the sensor network architecture is not based solely on diagnostic purposes; other factors like controls, financial constraints, and practical limitations are also involved. As a result, it is quite common to have one sensor (or one set of sensors) monitoring the behaviour of two or more components, which can significantly increase the complexity of diagnostic problems. In this paper a systematic approach is presented to deal with such complexities. It is shown how the problem can be formulated as a Bayesian-network-based diagnostic mechanism with latent variables. The developed approach is also applied to the problem of fault diagnosis in HVAC systems, an application area with considerable modeling and measurement constraints.
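The shared-sensor situation can be sketched as a tiny Bayesian network in which the health states of two components are latent and a single sensor observes both. This is a generic noisy-OR illustration with hypothetical probabilities, not the paper's HVAC model.

```python
from itertools import product

# Prior probability of each component being faulty (hypothetical).
p_fault = {"C1": 0.05, "C2": 0.10}

def p_sensor_abnormal(c1_faulty, c2_faulty):
    """Noisy-OR sensor model: one shared sensor monitors both
    components, so a fault in either can trigger an abnormal reading."""
    leak, s1, s2 = 0.01, 0.9, 0.8
    p_normal = 1 - leak
    if c1_faulty:
        p_normal *= 1 - s1
    if c2_faulty:
        p_normal *= 1 - s2
    return 1 - p_normal

def posterior_c1_given_abnormal():
    """Enumerate the latent component states and apply Bayes' rule
    to get P(C1 faulty | sensor abnormal)."""
    num = den = 0.0
    for c1, c2 in product([True, False], repeat=2):
        prior = (p_fault["C1"] if c1 else 1 - p_fault["C1"]) * \
                (p_fault["C2"] if c2 else 1 - p_fault["C2"])
        joint = prior * p_sensor_abnormal(c1, c2)
        den += joint
        if c1:
            num += joint
    return num / den
```

An abnormal reading raises the fault posterior of both monitored components without identifying which one failed; disambiguating them is exactly the added diagnostic complexity the paper's latent-variable formulation addresses.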
Abstract:
In the context of learning paradigms of identification in the limit, we address the question: why is uncertainty sometimes desirable? We use mind change bounds on the output hypotheses as a measure of uncertainty and interpret ‘desirable’ as reduction in data memorization, also defined in terms of mind change bounds. The resulting model is closely related to iterative learning with bounded mind change complexity, but the dual use of mind change bounds — for hypotheses and for data — is a key distinctive feature of our approach. We show that situations exist where the more mind changes the learner is willing to accept, the less the amount of data it needs to remember in order to converge to the correct hypothesis. We also investigate relationships between our model and learning from good examples, set-driven, monotonic and strong-monotonic learners, as well as class-comprising versus class-preserving learnability.
Abstract:
The use of metal stripes for the guiding of plasmons is a well established technique in the infrared regime and has resulted in the development of a myriad of passive optical components and sensing devices. However, the plasmons suffer from large losses around sharp bends, making the compact design of nanoscale sensors and circuits problematic. A compact alternative is to use evanescent coupling between two sufficiently close stripes, and thus we propose a compact interferometer design based on evanescent coupling. The sensitivity of the design is compared with that achieved using a hand-held sensor based on the Kretschmann-style surface plasmon resonance technique. Modeling of the new interferometric sensor is performed for various structural parameters using the finite-difference time-domain method and COMSOL Multiphysics. The physical mechanisms behind the coupling and propagation of plasmons in this structure are explained in terms of the allowed modes in each section of the device.