387 results for mobile sensors
at Queensland University of Technology - ePrints Archive
Abstract:
ElectricCOW is a network, animal behaviour and agent simulator designed to allow detailed simulation of a model ad-hoc network built from small mote-like devices called flecks. Detailed modelling of radio communications, cattle behaviour, and the sensor and actuator network allows a closed-loop environment in which the network can influence the behaviour of its mobile platforms.
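As a rough illustration of the closed loop described above (all class names, behaviours and parameters below are hypothetical, not taken from ElectricCOW), a simulation step can let a static sensor/actuator node detect nearby animals and feed an actuation decision back into the animals' movement:

```python
import random

# Minimal, hypothetical sketch of a closed-loop network/animal simulation.
# All names, behaviours and numbers are illustrative only, not the ElectricCOW API.

class Cow:
    def __init__(self, x):
        self.x = x

    def step(self, repelled=False, fence=0.0):
        self.x += random.uniform(-1.0, 1.0)   # simple random walk
        if repelled:
            # Actuation (e.g. an assumed audio cue) biases movement away from the fence.
            self.x += 2.0 if self.x >= fence else -2.0

class Fleck:
    """A static sensor/actuator node with a limited sensing range."""
    def __init__(self, x, sense_range=5.0):
        self.x = x
        self.sense_range = sense_range

    def detects(self, cow):
        return abs(cow.x - self.x) < self.sense_range

def simulate(steps=100, fence=0.0):
    cows = [Cow(random.uniform(-30.0, 30.0)) for _ in range(10)]
    fence_fleck = Fleck(fence)
    for _ in range(steps):
        for cow in cows:
            # The network senses the animal and feeds a decision back into
            # its behaviour, closing the loop between network and platform.
            cow.step(repelled=fence_fleck.detects(cow), fence=fence)
    return cows

if __name__ == "__main__":
    print([round(c.x, 1) for c in simulate()])
```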
Exploring the opportunities and challenges of using mobile sensing for gamification and achievements
Abstract:
Gamified services delivered on smartphones, such as Foursquare, can utilise the phone's sensors to capture user context as a means of triggering game elements. This paper identifies and discusses opportunities and challenges that arise when using mobile sensors as input for game elements. We present initial findings from a field study of a gamified mobile application built to support a university orientation event for new students through game achievements. The study showed that, overall, the use of context was well received by participants when compared to game elements that required no context to complete. It was also found that using context could help validate that an activity was completed; however, there were still technical challenges in using the sensors that led to exploits of the game elements, or cheating.
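A minimal sketch of how a sensor reading can validate an achievement (the venue, radius and function names are illustrative assumptions, not the study's implementation): a location-based achievement only unlocks when the phone's GPS fix places the user near the target venue.

```python
import math

# Hypothetical sketch of validating a game achievement with sensor context.
# The achievement, venue coordinates and threshold are illustrative.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def unlock_achievement(user_fix, venue_fix, radius_m=75.0):
    """Award a 'visited the venue' achievement only when the GPS fix is
    within radius_m of the venue, so the context validates completion."""
    return haversine_m(*user_fix, *venue_fix) <= radius_m

if __name__ == "__main__":
    venue = (-27.4772, 153.0285)   # illustrative venue coordinates
    print(unlock_achievement((-27.4775, 153.0287), venue))  # True: ~40 m away
    print(unlock_achievement((-27.4900, 153.0400), venue))  # False: ~1.8 km away
```

The remaining exploit noted in the study follows naturally from such a check: any context the sensors cannot observe reliably (e.g. a spoofed or stale fix) can still be gamed.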
Abstract:
Mobile devices and smartphones have become a significant communication channel in everyday life. The sensing capabilities of mobile devices are expanding rapidly, and the sensors embedded in these devices are cheaper and more powerful than before. Mobile devices have thus become the most suitable candidates for sensing contextual information without the need for extra tools. However, current research explores only a limited number of sensors, and it remains unclear which forms of contextual information extracted from mobile sensors are useful. This research therefore investigates context sensing using current mobile sensors: following experimental methods, sensor data is evaluated and synthesised in order to deduce the value of individual sensors and combinations of sensors for use in context-aware mobile applications. The study aims to develop a context fusion framework that will enhance context-awareness in mobile applications, as well as to explore innovative techniques for context sensing on smartphones.
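A small sketch of the kind of context fusion being investigated (the sensors, thresholds and context labels are assumptions for illustration, not the proposed framework): combining an accelerometer-derived motion state with a Wi-Fi-derived place state yields a higher-level context than either sensor alone.

```python
import statistics

# Illustrative sketch of fusing two mobile sensor streams into a higher-level
# context label. Thresholds and labels are assumptions, not the framework itself.

def motion_state(accel_magnitudes):
    """Crude activity estimate from accelerometer magnitude variance."""
    var = statistics.pvariance(accel_magnitudes)
    if var < 0.05:
        return "still"
    return "walking" if var < 2.0 else "running"

def place_state(wifi_ssids,
                known_home=frozenset({"HomeAP"}),
                known_work=frozenset({"OfficeAP"})):
    """Coarse place estimate from visible Wi-Fi networks."""
    if known_home & set(wifi_ssids):
        return "home"
    if known_work & set(wifi_ssids):
        return "work"
    return "out"

def fuse_context(accel_magnitudes, wifi_ssids):
    """Combine the two low-level states into a single context label."""
    return f"{motion_state(accel_magnitudes)}@{place_state(wifi_ssids)}"

if __name__ == "__main__":
    print(fuse_context([9.80, 9.81, 9.79, 9.80], ["OfficeAP", "Cafe"]))  # still@work
    print(fuse_context([9.1, 10.4, 8.7, 11.2], ["Cafe"]))                # walking@out
```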
Abstract:
Data collection using Autonomous Underwater Vehicles (AUVs) is increasing in importance within the oceanographic research community. Unlike traditional moored or static platforms, mobile sensors require intelligent planning strategies to manoeuvre through the ocean. However, the ability to navigate to high-value locations and collect data of specific scientific merit is worth the planning effort. In this study, we examine the use of ocean model predictions to determine the locations to be visited by an AUV and to aid in planning the trajectory that the vehicle executes during the sampling mission. The objectives are: a) to provide near-real-time, in situ measurements to a large-scale ocean model to increase the skill of future predictions, and b) to utilize ocean model predictions as a component in an end-to-end autonomous prediction and tasking system for aquatic, mobile sensor networks. We present an algorithm designed to generate paths for AUVs to track a dynamically evolving ocean feature using ocean model predictions. This builds on previous work in the area by incorporating the predicted current velocities into the path planning, to assist in solving the 3-D motion planning problem of steering an AUV between two selected locations. We present simulation results for tracking a freshwater plume with our algorithm. Additionally, we present experimental results from field trials that test the skill of the model as well as the incorporation of the model predictions into an AUV trajectory planner. These results indicate a modest, but measurable, improvement in surfacing error when the model predictions are incorporated into the planner.
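The role the predicted currents play can be illustrated with a simplified 2-D sketch (the kinematics, numbers and function names are illustrative, not the paper's algorithm): when evaluating candidate headings, the vehicle's speed through the water is added to the predicted current, and the heading whose resulting ground track best approaches the goal is selected.

```python
import math

# Illustrative sketch of current-aware waypoint steering for an AUV.
# Simplified 2-D kinematics; not the planner described in the paper.

def predicted_current(x, y):
    """Stand-in for an ocean-model current prediction at (x, y), in m/s."""
    return (0.3, -0.1)

def step_towards(pos, goal, speed=1.0, dt=10.0):
    """Choose the heading whose ground track (vehicle velocity plus predicted
    current) best points at the goal, then advance one time step."""
    cx, cy = predicted_current(*pos)
    best = None
    for deg in range(0, 360, 5):
        h = math.radians(deg)
        vx, vy = speed * math.cos(h) + cx, speed * math.sin(h) + cy
        nx, ny = pos[0] + vx * dt, pos[1] + vy * dt
        d = math.hypot(goal[0] - nx, goal[1] - ny)
        if best is None or d < best[0]:
            best = (d, (nx, ny))
    return best[1]

def plan(start, goal, tol=15.0, max_steps=500):
    path, pos = [start], start
    for _ in range(max_steps):
        if math.hypot(goal[0] - pos[0], goal[1] - pos[1]) <= tol:
            break
        pos = step_towards(pos, goal)
        path.append(pos)
    return path

if __name__ == "__main__":
    track = plan((0.0, 0.0), (500.0, 200.0))
    print(len(track), [round(v, 1) for v in track[-1]])
```

A planner that ignored the current term would select headings whose ground track drifts off the intended path; counteracting that drift is the point of folding the model predictions into the planning.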
Abstract:
Harmful Algal Blooms (HABs) have become an important environmental concern along the western coast of the United States. Toxic and noxious blooms adversely impact the economies of coastal communities in the region, pose risks to human health, and cause mortality events that have resulted in the deaths of thousands of fish, marine mammals and seabirds. One goal of field-based research efforts on this topic is the development of predictive models of HABs that would enable rapid response, mitigation and ultimately prevention of these events. In turn, these objectives are predicated on understanding the environmental conditions that stimulate these transient phenomena. An embedded sensor network (Fig. 1), under development in the San Pedro Shelf region off the Southern California coast, is providing tools for acquiring chemical, physical and biological data at high temporal and spatial resolution, helping to document the emergence and persistence of HAB events, supporting the design and testing of predictive models, and providing contextual information for experimental studies designed to reveal the environmental conditions promoting HABs. The sensor platforms contained within this network include pier-based sensor arrays, ocean moorings and HF radar stations, along with mobile sensor nodes in the form of surface and subsurface autonomous vehicles. Freewave™ radio modems facilitate network communication and form a minimally intrusive, wireless communication infrastructure throughout the Southern California coastal region, allowing rapid and cost-effective data transfer. An emerging focus of this project is the incorporation of a predictive ocean model that assimilates near-real-time, in situ data from deployed Autonomous Underwater Vehicles (AUVs) to increase the skill of both nowcasts and forecasts, thus providing insight into bloom initiation as well as the movement of blooms or other oceanic features of interest (e.g., thermoclines, fronts, river discharge, etc.). From these predictions, deployed mobile sensors can be tasked to track a designated feature. This focus has led to the creation of a technology chain in which algorithms for innovative AUV trajectory design are being implemented. Such intelligent mission planning is required to maneuver a vehicle to the precise depths and locations that are the sites of active blooms, or of the physical/chemical features that might be sources of bloom initiation or persistence. The embedded network yields high-resolution temporal and spatial measurements of pertinent environmental parameters and the resulting biology (see Fig. 1). Supplementing this with ocean current information, remotely sensed imagery and meteorological data, we obtain a comprehensive foundation for developing a fundamental understanding of HAB events. This in turn directs labor-intensive and costly sampling efforts and analyses. Additionally, we provide coastal municipalities, managers and state agencies with detailed information to aid their efforts in providing responsible environmental stewardship of their coastal waters.
Abstract:
Mobile sensor platforms such as Autonomous Underwater Vehicles (AUVs) and robotic surface vessels, combined with static moored sensors, compose a diverse sensor network that provides a macroscopic environmental analysis tool for ocean researchers. Working as a cohesive networked unit, the static buoys are always online and provide insight into the times and locations at which a federated, mobile robot team should be deployed to effectively perform large-scale spatiotemporal sampling on demand. Such a system can provide pertinent in situ measurements to marine biologists, who can then advise policy makers on critical environmental issues. This poster presents recent field deployment activity of AUVs demonstrating the effectiveness of our embedded communication network infrastructure throughout southern California coastal waters. We also report on progress towards real-time, web-streamed data from the multiple sampling locations and mobile sensor platforms. The static monitoring sites included in this presentation are the network nodes positioned at Redondo Beach and Marina del Rey. Among the deployed mobile sensors highlighted here are autonomous Slocum gliders. These nodes operate in the open ocean for periods as long as one month. The gliders are connected to the network via a Freewave radio modem network composed of multiple coastal base stations. This increases the efficiency of deployment missions by reducing operational expenses through reduced reliance on satellite phones for communication, as well as increasing the rate and amount of data that can be transferred. Another mobile sensor platform presented in this study is the autonomous robotic boat. These platforms are utilized for harbor and littoral zone studies, and are capable of performing multi-robot coordination while observing known communication constraints. Together, these pieces present an overview of ongoing collaborative work to develop an autonomous, region-wide, coastal environmental observation and monitoring sensor network.
Abstract:
For robots to operate in human environments they must be able to make their own maps, because it is unrealistic to expect a user to enter a map into the robot’s memory; existing floorplans are often incorrect; and human environments tend to change. Traditionally, robots have used sonar, infra-red or laser range finders to perform the mapping task. Digital cameras have become very cheap in recent years and have opened up new possibilities as sensors for robot perception. Any robot that must interact with humans can reasonably be expected to have a camera for tasks such as face recognition, so it makes sense to also use the camera for navigation. Cameras have advantages over other sensors, such as colour information (not available with any other sensor), better immunity to noise (compared to sonar), and not being restricted to operating in a plane (unlike laser range finders). However, there are disadvantages too, the principal one being the effect of perspective. This research investigated ways to use a single colour camera as a range sensor to guide an autonomous robot and allow it to build a map of its environment, a process referred to as Simultaneous Localization and Mapping (SLAM). An experimental system was built using a robot controlled via a wireless network connection. Using the on-board camera as the only sensor, the robot successfully explored and mapped indoor office environments. The quality of the resulting maps is comparable to those reported in the literature for sonar or infra-red sensors. Although the maps are not as accurate as ones created with a laser range finder, the camera-based solution is significantly cheaper and more appropriate for toys and early domestic robots.
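One common way a single calibrated camera can serve as a range sensor, consistent with the perspective issue noted above though not necessarily the thesis' actual method, is to project the image row at which an obstacle meets the floor onto the ground plane. The sketch below assumes illustrative camera parameters.

```python
import math

# Illustrative ground-plane range estimation from a single camera.
# The camera parameters are assumptions; this is one common technique,
# not necessarily the method developed in the thesis.

CAMERA_HEIGHT = 0.30             # metres above the floor
TILT_DOWN = math.radians(15.0)   # camera pitched down from horizontal
FOCAL_PX = 500.0                 # focal length in pixels
CY = 240.0                       # principal point row (480-row image)

def range_from_floor_pixel(row):
    """Estimate horizontal distance to the point where an obstacle meets
    the floor, given the image row of that contact point."""
    # Angle of the pixel ray below the optical axis, then below horizontal.
    ray_down = math.atan2(row - CY, FOCAL_PX) + TILT_DOWN
    if ray_down <= 0:
        return float("inf")   # ray at or above the horizon: no floor intersection
    return CAMERA_HEIGHT / math.tan(ray_down)

if __name__ == "__main__":
    for row in (250, 300, 400):
        print(row, round(range_from_floor_pixel(row), 2), "m")
```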
Abstract:
Web applications such as blogs, wikis, video and photo sharing sites, and social networking systems have been termed ‘Web 2.0’ to highlight an arguably more open, collaborative, personalisable, and therefore more participatory internet experience than was previously possible. Giving rise to a culture of participation, an increasing number of these social applications are now available on mobile phones, where they take advantage of device-specific features such as sensors and location and context awareness. This international volume of book chapters makes a contribution towards exploring and better understanding the opportunities and challenges provided by the tools, interfaces, methods and practices of social and mobile technology that enable participation and engagement. It brings together an international group of academics and practitioners from a diverse range of disciplines, such as computing and engineering, social sciences, digital media and human-computer interaction, to critically examine a range of applications of social and mobile technology, such as social networking, mobile interaction, wikis, Twitter, blogging, virtual worlds, shared displays and urban screens, and their potential to foster community activism, civic engagement and cultural citizenship.
Abstract:
Mobile devices are becoming indispensable personal assistants in people's daily life, as these devices support work, study, play and socializing activities. The multi-modal sensors and rich features of smartphones can capture abundant information about users' life experiences, such as taking photos or videos of what they see and hear, and organizing their tasks and activities using calendars, to-do lists, and notes. Such vast information can become useful in helping users recall episodic memories and reminisce about meaningful experiences. In this paper, we propose to apply an autobiographical memory framework to provide an effective mechanism for structuring mobile life-log data. The proposed model is an attempt towards a more complete personal life-log indexing model, which will support long-term capture, organization, and retrieval. To demonstrate the benefits of the proposed model, we propose design solutions for tools that enable user-driven capture, annotation, and retrieval of autobiographical multimedia chronicles.
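Autobiographical memory models in the literature are commonly layered into lifetime periods, general events and event-specific knowledge; the sketch below is a hypothetical life-log index along those lines (the field names and search helper are assumptions, not the paper's schema).

```python
# Hypothetical life-log index structured along the three layers commonly used
# in autobiographical memory models: lifetime periods, general events, and
# event-specific knowledge. Field names are illustrative only.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class LifeLogItem:
    """Event-specific knowledge: a single captured artefact."""
    timestamp: datetime
    media_path: str
    annotations: list[str] = field(default_factory=list)

@dataclass
class GeneralEvent:
    """A themed episode, e.g. 'conference trip', grouping related items."""
    title: str
    items: list[LifeLogItem] = field(default_factory=list)

@dataclass
class LifetimePeriod:
    """A broad period of life, e.g. 'PhD years', grouping general events."""
    label: str
    events: list[GeneralEvent] = field(default_factory=list)

    def search(self, keyword):
        """Retrieve items whose annotations mention the keyword."""
        return [it for ev in self.events for it in ev.items
                if any(keyword.lower() in a.lower() for a in it.annotations)]

if __name__ == "__main__":
    phd = LifetimePeriod("PhD years", [
        GeneralEvent("Conference trip", [
            LifeLogItem(datetime(2012, 6, 3, 9, 30), "photos/talk.jpg",
                        ["keynote", "Brisbane"]),
        ]),
    ])
    print([i.media_path for i in phd.search("brisbane")])
```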
Abstract:
The ability to measure surface temperature and represent it on a metrically accurate 3D model has proven applications in many areas, such as medical imaging, building energy auditing, and search and rescue. A system is proposed that enables this task to be performed with a handheld sensor and, for the first time, with results able to be visualized and analyzed in real time. A device comprising a thermal-infrared camera and a range sensor is calibrated geometrically and used for data capture. The device is localized using a combination of ICP and video-based pose estimation from the thermal-infrared video footage, which is shown to reduce the occurrence of failure modes. Furthermore, the problem of misregistration, which can introduce severe distortions in assigned surface temperatures, is avoided through the use of a risk-averse neighborhood weighting mechanism. Results demonstrate that the system is more stable and accurate than previous approaches, and can be used to accurately model complex objects and environments for practical tasks.
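One plausible form of such a risk-averse weighting, sketched here under assumptions rather than as the paper's actual mechanism, is to down-weight a temperature sample in proportion to the disagreement within its pixel neighbourhood, so that samples near misregistration-prone edges contribute little to the assigned surface temperature.

```python
import statistics

# Sketch of a risk-averse neighbourhood weighting for assigning a temperature
# sample to a surface point. This formulation is assumed for illustration,
# not necessarily the mechanism used in the paper.

def neighbourhood_weight(centre, neighbours, scale=0.5):
    """Weight falls off quickly as neighbouring temperature readings disagree
    with the centre pixel, penalising samples near misregistration-prone edges."""
    spread = statistics.pstdev(neighbours + [centre])
    return 1.0 / (1.0 + (spread / scale) ** 2)

def fused_temperature(samples):
    """samples: list of (centre_temp, neighbour_temps) from successive frames."""
    weights = [neighbourhood_weight(c, n) for c, n in samples]
    total = sum(weights)
    if total == 0:
        return None
    return sum(w * c for w, (c, _) in zip(weights, samples)) / total

if __name__ == "__main__":
    frames = [
        (21.3, [21.2, 21.4, 21.3]),   # smooth neighbourhood: high weight
        (27.9, [21.5, 34.0, 22.1]),   # near an edge: heavily down-weighted
    ]
    print(round(fused_temperature(frames), 2))
```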
Abstract:
Reliable robotic perception and planning are critical to performing autonomous actions in uncertain, unstructured environments. In field robotic systems, automation is achieved by interpreting exteroceptive sensor information to infer something about the world. This is then mapped to provide a consistent spatial context, so that actions can be planned around the predicted future interaction of the robot and the world. The whole system is as reliable as the weakest link in this chain. In this paper, the term mapping is used broadly to describe the transformation of range-based exteroceptive sensor data (such as LIDAR or stereo vision) to a fixed navigation frame, so that it can be used to form an internal representation of the environment. The coordinate transformation from the sensor frame to the navigation frame is analyzed to produce a spatial error model that captures the dominant geometric and temporal sources of mapping error. This allows the mapping accuracy to be calculated at run time. A generic extrinsic calibration method for exteroceptive range-based sensors is then presented to determine the sensor location and orientation. This allows systematic errors in individual sensors to be minimized, and when multiple sensors are used, it minimizes the systematic contradiction between them to enable reliable multisensor data fusion. The mathematical derivations at the core of this model are not particularly novel or complicated, but the rigorous analysis and application to field robotics seems to be largely absent from the literature to date. The techniques in this paper are simple to implement, and they offer a significant improvement to the accuracy, precision, and integrity of mapped information. Consequently, they should be employed whenever maps are formed from range-based exteroceptive sensor data. © 2009 Wiley Periodicals, Inc.
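The sensor-to-navigation transformation analysed in this error model can be written as a chain of homogeneous transforms; the sketch below shows the basic composition for a single range observation (the frame names, Euler convention and numbers are illustrative).

```python
import numpy as np

# Illustrative composition of homogeneous transforms mapping a range-based
# observation from the sensor frame into a fixed navigation frame.
# Frame names and the example numbers are illustrative only.

def transform(roll, pitch, yaw, t):
    """4x4 homogeneous transform from Z-Y-X Euler angles and a translation."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    R = np.array([[cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
                  [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
                  [-sp,     cp * sr,                cp * cr]])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Extrinsic calibration: sensor frame relative to the vehicle body frame.
T_body_sensor = transform(0.0, np.radians(-5.0), 0.0, [1.2, 0.0, 0.8])
# Vehicle pose: body frame relative to the navigation frame (from localisation).
T_nav_body = transform(0.0, 0.0, np.radians(30.0), [100.0, 50.0, 0.0])

# A LIDAR return expressed in the sensor frame (homogeneous coordinates).
p_sensor = np.array([10.0, 2.0, 0.0, 1.0])

# Map the point into the navigation frame by composing the transforms.
p_nav = T_nav_body @ T_body_sensor @ p_sensor
print(np.round(p_nav[:3], 2))
```

In this chain, errors in the extrinsic transform propagate into every mapped point, which is why the calibration and the accompanying error model matter for the integrity of the resulting map.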
Abstract:
This paper presents a full system demonstration of dynamic sensor-based reconfiguration of a networked robot team. Robots sense obstacles in their environment locally and dynamically adapt their global geometric configuration to conform to an abstract goal shape. We present a novel two-layer planning and control algorithm for team reconfiguration that is decentralised and assumes local (neighbour-to-neighbour) communication only. The approach is designed to be resource-efficient, and we show experiments using a team of nine mobile robots with modest computation, communication, and sensing. The robots use acoustic beacons for localisation and can sense obstacles in their local neighbourhood using IR sensors. Our results demonstrate globally-specified reconfiguration from local information in a real robot network, and highlight limitations of standard mesh networks in implementing decentralised algorithms.
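To give a flavour of reconfiguration from local information only, the sketch below uses a simplified consensus-style update rather than the paper's two-layer algorithm: each robot estimates the formation centre from the neighbours it can currently hear and moves toward its own slot in the goal shape.

```python
import math
import random

# Simplified sketch of decentralised shape reconfiguration using only
# neighbour-to-neighbour information. A consensus-style illustration,
# not the two-layer algorithm described in the paper.

COMM_RANGE = 4.0   # assumed communication range
GAIN = 0.2         # assumed step gain

def neighbours(i, positions):
    """Indices of robots within communication range of robot i (including i)."""
    xi, yi = positions[i]
    return [j for j, (xj, yj) in enumerate(positions)
            if math.hypot(xj - xi, yj - yi) <= COMM_RANGE]

def step(positions, offsets):
    """One decentralised update: each robot estimates the formation centre from
    its local neighbourhood and moves toward its own slot in the goal shape."""
    new = []
    for i, (x, y) in enumerate(positions):
        nbrs = neighbours(i, positions)
        # Local centre estimate: neighbours' positions minus their goal offsets.
        cx = sum(positions[j][0] - offsets[j][0] for j in nbrs) / len(nbrs)
        cy = sum(positions[j][1] - offsets[j][1] for j in nbrs) / len(nbrs)
        tx, ty = cx + offsets[i][0], cy + offsets[i][1]
        new.append((x + GAIN * (tx - x), y + GAIN * (ty - y)))
    return new

if __name__ == "__main__":
    n = 9  # matching the nine-robot team in the paper
    # Goal shape: a circle, expressed as per-robot offsets from a shared centre.
    offsets = [(2.0 * math.cos(2 * math.pi * k / n),
                2.0 * math.sin(2 * math.pi * k / n)) for k in range(n)]
    positions = [(random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0))
                 for _ in range(n)]
    for _ in range(150):
        positions = step(positions, offsets)
    print([(round(x, 2), round(y, 2)) for x, y in positions])
```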
Abstract:
There is growing interest in autonomously collecting or manipulating objects in remote or unknown environments, such as mountains, gullies, bushland, or rough terrain. Conventional methods using manned or remotely controlled aircraft have several limitations. Small Unmanned Aerial Vehicles (UAVs) used in parallel with robotic manipulators could overcome some of these limitations. By enabling the autonomous exploration of naturally hazardous environments, or areas that are biologically, chemically, or radioactively contaminated, it is possible to collect samples and data from such environments without directly exposing personnel to those risks. This paper covers the design, integration, and initial testing of a framework for an outdoor mobile manipulation UAV. The framework is designed to allow further integration and testing of complex control theories, with the capability to operate outdoors in unknown environments. The results obtained act as a reference for the effectiveness of the integrated sensors and low-level control methods used in the preliminary testing, as well as identifying the key technologies needed for the development of an outdoor-capable system.