843 results for Tracking and trailing.
Abstract:
Objectives: The objective of this systematic review was to synthesize the available qualitative evidence on the knowledge, attitudes and beliefs of adult patients, healthcare professionals and carers about oral dosage form modification. Design: A systematic review and synthesis of qualitative studies was undertaken, utilising the thematic synthesis approach. Data sources: The following databases were searched from inception to September 2015: PubMed, Medline (EBSCO), EMBASE, CINAHL, PsycINFO, Web of Science, ProQuest Databases, Scopus, Turning Research Into Practice (TRIP), Cochrane Central Register of Controlled Trials (CENTRAL) and the Cochrane Database of Systematic Reviews (CDSR). Citation tracking and searching of the reference lists of included studies were also undertaken. Grey literature was searched using the OpenGrey database, internet searching and personal knowledge. An updated search was undertaken in June 2016. Review methods: Studies meeting the following criteria were eligible for inclusion: (i) used qualitative data collection and analysis methods; (ii) full text was available in English; and (iii) included adult patients who require oral dosage forms to be modified to meet their needs, or (iv) carers or healthcare professionals of such patients. Two reviewers independently appraised the quality of the included studies using the Critical Appraisal Skills Programme checklist. A thematic synthesis was conducted and analytical themes were generated. Results: Of 5455 records screened, seven studies were eligible for inclusion; three involved healthcare professionals and the remaining four involved patients. Four analytical themes emerged from the thematic synthesis: (i) patient-centred individuality and variability; (ii) communication; (iii) knowledge and uncertainty; and (iv) complexity.
The variability of individual patients’ requirements, poor communication practices and a lack of knowledge about oral dosage form modification, combined with a complex and multi-faceted healthcare environment, complicate decision-making about oral dosage form modification and administration. Conclusions: This systematic review has highlighted the key factors influencing the knowledge, attitudes and beliefs of patients and healthcare professionals about oral dosage form modification. The findings suggest that, in order to optimise oral medicine modification practices, the needs of individual patients should be routinely and systematically assessed, and decision-making should be supported by evidence-based recommendations with multidisciplinary input. Further research is needed to optimise oral dosage form modification practices, and the factors identified in this review should be considered in the development of future interventions.
Abstract:
This project looks at the ways citizens of rural communities in Northeastern Ontario regulate their private property through traditional and contemporary means of surveillance. Through art and objects, the project gives viewers the opportunity to experience surveillance in rural areas in visual and creative ways that encourage interaction and critique. It defines organic surveillance by examining how ruralists in Markstay, Ontario practice surveillance and deterrence as shaped by characteristics of the land, risk and other determining factors such as psychology, resourcefulness, sustainability, technology and private property. The concept of organic surveillance holds that surveillance and deterrence extend far beyond data mining, GPS tracking and social media: as methods of survival they are found everywhere, even in the farthest, most “wild” and forested areas.
Abstract:
The article presents a study of a CEFR B2-level reading subtest that is part of the Slovenian national secondary school leaving examination in English as a foreign language, and compares test-takers’ actual performance (objective difficulty) with test-takers’ and experts’ perceptions of item difficulty (subjective difficulty). The study also analyses the test-takers’ comments on item difficulty obtained from a while-reading questionnaire. The results are discussed in the framework of existing research on (the assessment of) reading comprehension, and are addressed with regard to their implications for item writing, FL teaching and curriculum development.
Abstract:
We report the discovery, tracking, and detection circumstances for 85 trans-Neptunian objects (TNOs) from the first 42 deg2 of the Outer Solar System Origins Survey. This ongoing r-band solar system survey uses the 0.9 deg2 field of view MegaPrime camera on the 3.6 m Canada–France–Hawaii Telescope. Our orbital elements for these TNOs are precise to a fractional semimajor axis uncertainty <0.1%. We achieve this precision in just two oppositions, as compared to the normal three to five oppositions, via a dense observing cadence and innovative astrometric technique. These discoveries are free of ephemeris bias, a first for large trans-Neptunian surveys. We also provide the necessary information to enable models of TNO orbital distributions to be tested against our TNO sample. We confirm the existence of a cold "kernel" of objects within the main cold classical Kuiper Belt and infer the existence of an extension of the "stirred" cold classical Kuiper Belt to at least several au beyond the 2:1 mean motion resonance with Neptune. We find that the population model of Petit et al. remains a plausible representation of the Kuiper Belt. The full survey, to be completed in 2017, will provide an exquisitely characterized sample of important resonant TNO populations, ideal for testing models of giant planet migration during the early history of the solar system.
Abstract:
In visual surveillance, face detection can be an important cue for initializing tracking algorithms. Recent work in psychophysics hints at the importance of the local context of a face, such as head contours and torso, for robust detection. This paper describes a detector that actively utilizes the idea of local context. The promise is to gain robustness beyond the capabilities of traditional face detection, making the approach particularly interesting for surveillance. The performance of the proposed detector in terms of accuracy and speed is evaluated on data sets from PETS 2000 and PETS 2003 and compared to the object-centered approach. Particular attention is paid to the role of available image resolution.
Abstract:
This seminar consists of two very different research reports by PhD students in WAIS. Hypertext Engineering, Fettling or Tinkering (Mark Anderson): Contributors to a public hypertext such as Wikipedia do not necessarily record their maintenance activities, but some specific hypertext features, such as transclusion, could indicate deliberate editing with a mind to the hypertext’s long-term use. The MediaWiki software used to create Wikipedia supports transclusion, a deliberately hypertextual form of content creation that aids long-term consistency. This talk discusses the evidence of the use of transclusion in Wikipedia and its implications for the coherence and stability of Wikipedia. Designing a Public Intervention - Towards a Sociotechnical Approach to Web Governance (Faranak Hardcastle): In this talk I introduce a critical and speculative design for a sociotechnical intervention, called TATE (Transparency and Accountability Tracking Extension), that aims to enhance transparency and accountability in online behavioural tracking and advertising mechanisms and practices.
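As a rough illustration of the kind of evidence involved: transclusion in MediaWiki wikitext uses double-brace markup (e.g. `{{Template name}}`), so occurrences can be surfaced with a simple pattern match. A minimal sketch follows; the study's actual methodology is not described in the abstract, and this naive pattern ignores nested templates and parser functions.

```python
import re

def find_transclusions(wikitext):
    """Return the names of templates/pages transcluded in a wikitext string.

    Transclusion in MediaWiki uses double braces, e.g. {{Infobox person}} or
    {{Main|History}} (text after '|' is a parameter, not part of the name).
    """
    return [m.group(1).strip() for m in re.finditer(r"\{\{([^{}|]+)", wikitext)]

sample = "Intro text {{Main|History}} and {{Citation needed}} here."
print(find_transclusions(sample))  # ['Main', 'Citation needed']
```

Counting such matches across page revisions is one plausible way to quantify deliberate hypertextual editing over time.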
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
A primary goal of context-aware systems is delivering the right information at the right place and right time, enabling users to make effective decisions and improve their quality of life. There are three key requirements for achieving this goal: determining what information is relevant, personalizing it based on the users’ context (location, preferences, behavioral history, etc.), and delivering it in a timely manner without an explicit request. These requirements create a paradigm that we term “Proactive Context-aware Computing”. Most existing context-aware systems fulfill only a subset of these requirements: many focus only on personalizing requested information based on users’ current context, and they are often designed for specific domains. In addition, most existing systems are reactive - users request some information and the system delivers it. Such systems are not proactive, i.e. they cannot anticipate users’ intent and behavior and act without an explicit request. To overcome these limitations, we need a deeper analysis and better understanding of context-aware systems that are generic, universal, proactive and applicable to a wide variety of domains. To support this dissertation, we explore several directions. The most significant sources of information about users today are smartphones: a large amount of users’ context can be acquired through them, and they are an effective means of delivering information to users. In addition, social media such as Facebook, Flickr and Foursquare provide a rich and powerful platform for mining users’ interests, preferences and behavioral history. We employ the ubiquity of smartphones and the wealth of information available from social media to address the challenge of building proactive context-aware systems.
We have implemented and evaluated several approaches, including some as part of the Rover framework, to achieve the paradigm of Proactive Context-aware Computing. Rover is a context-aware research platform that has been evolving for the last six years. Since location is one of the most important contexts for users, we have developed ‘Locus’, an indoor localization, tracking and navigation system for multi-story buildings. Other important dimensions of users’ context include the activities they are engaged in. To this end, we have developed ‘SenseMe’, a system that leverages the smartphone and its multiple sensors to perform multidimensional context and activity recognition for users. As part of the ‘SenseMe’ project, we also conducted an exploratory study of privacy, trust, risks and other concerns of users with smartphone-based personal sensing systems and applications. To determine what information is relevant to users’ situations, we have developed ‘TellMe’, a system that employs a new, flexible and scalable approach based on Natural Language Processing techniques to perform bootstrapped discovery and ranking of relevant information in context-aware systems. To personalize the relevant information, we have also developed an algorithm and system for mining a broad range of users’ preferences from their social network profiles and activities. For recommending new information to users based on their past behavior and context history (such as visited locations, activities and time), we have developed a recommender system and approach for performing multi-dimensional collaborative recommendations using tensor factorization. For timely delivery of personalized and relevant information, it is essential to anticipate and predict users’ behavior.
To this end, we have developed a unified infrastructure within the Rover framework and implemented several novel approaches and algorithms that employ various contextual features and state-of-the-art machine learning techniques to build diverse behavioral models of users. Examples of generated models include classifying users’ semantic places and mobility states, predicting their availability for accepting calls on smartphones, and inferring their device-charging behavior. Finally, to enable proactivity in context-aware systems, we have also developed a planning framework based on Hierarchical Task Network (HTN) planning. Together, these works provide a major push in the direction of proactive context-aware computing.
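The multi-dimensional collaborative recommendation via tensor factorization mentioned above is commonly realized as a rank-R CP (CANDECOMP/PARAFAC) decomposition of a user × item × context tensor, with the reconstructed entries serving as predicted preference scores. A minimal gradient-descent sketch in NumPy follows; it illustrates the general technique, not the dissertation's actual algorithm.

```python
import numpy as np

def cp_step(T, U, V, W, lr=1e-3):
    """One gradient-descent step of a rank-R CP factorization of 3-way tensor T.

    T ~ sum_r U[:,r] o V[:,r] o W[:,r], e.g. users x items x contexts.
    Returns updated factors and the squared reconstruction error before the step.
    """
    # Residual between the observed tensor and the current reconstruction.
    E = T - np.einsum("ir,jr,kr->ijk", U, V, W)
    # Gradient steps on each factor matrix (descent on ||E||^2).
    U = U + lr * np.einsum("ijk,jr,kr->ir", E, V, W)
    V = V + lr * np.einsum("ijk,ir,kr->jr", E, U, W)
    W = W + lr * np.einsum("ijk,ir,jr->kr", E, U, V)
    return U, V, W, float((E ** 2).sum())
```

Iterating `cp_step` drives the reconstruction error down; a missing (user, item, context) cell is then scored from the learned factors.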
Abstract:
Particle filtering has proven to be an effective localization method for wheeled autonomous vehicles. For a given map, sensor model and set of observations, occasions arise where the vehicle could equally likely be in many locations of the map. Because particle filtering algorithms may generate low-confidence pose estimates under these conditions, more robust localization strategies are required to produce reliable pose estimates. This becomes more critical when the state estimate is an integral part of system control. We investigate the use of particle filter estimation techniques on a hovercraft vehicle. The marginally stable dynamics of a hovercraft require reliable state estimates for proper stability and control. We use the Monte Carlo localization method, which implements a particle filter in a recursive state estimation algorithm. An H-infinity controller, designed to accommodate the latency inherent in our state estimation, provides stability and controllability to the hovercraft. To eliminate the low-confidence estimates produced in certain environments, a multirobot system is designed to introduce mobile environment features. By tracking and controlling a secondary robot, we can position the mobile feature throughout the environment to ensure a high-confidence estimate, thus maintaining stability in the system. The hovercraft uses a laser rangefinder to track the secondary robot, observe the environment, and achieve reliable localization and stable motion.
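Monte Carlo localization, as used here, iterates a predict-weight-resample cycle over a set of pose hypotheses. A minimal 1-D sketch follows, assuming a hypothetical range sensor measuring distance to a landmark at the origin; it illustrates the cycle, not the thesis's hovercraft implementation.

```python
import math
import random

def mcl_step(particles, control, measurement, world_size=100.0, noise=1.0):
    """One predict-weight-resample cycle of Monte Carlo localization (1-D).

    particles: list of position hypotheses; control: commanded displacement;
    measurement: observed range to a landmark assumed at the origin.
    """
    # Predict: apply the motion model with additive Gaussian noise.
    moved = [(p + control + random.gauss(0, noise)) % world_size for p in particles]
    # Weight: Gaussian likelihood of the range measurement at each particle.
    weights = [math.exp(-((abs(p) - measurement) ** 2) / (2 * noise ** 2))
               for p in moved]
    total = sum(weights)
    if total == 0.0:
        weights = [1.0] * len(moved)  # degenerate case: fall back to uniform
    else:
        weights = [w / total for w in weights]
    # Resample: draw a new particle set proportional to weight.
    return random.choices(moved, weights=weights, k=len(moved))
```

When many map locations explain the observations equally well, the weights spread across distant clusters, which is exactly the low-confidence condition the abstract describes.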
A Digital Collection Center's Experience: ETD Discovery, Promotion, and Workflows in Digital Commons
Abstract:
This presentation was given at the Digital Commons Southeastern User Group conference at Winthrop University, South Carolina on June 5, 2015. The presentation discusses how the Digital Collections Center (DCC) at Florida International University uses Digital Commons as its tool for ingesting, editing, tracking, and publishing university theses and dissertations. The basic DCC workflow is covered, as well as institutional repository promotion.
Abstract:
Computer vision is much more than a technique to sense and recover environmental information from a UAV. It should play a main role in UAV functionality because of the large amount of information that can be extracted, its possible uses and applications, and its natural connection to human-driven tasks, given that vision is our main interface to understanding the world. Our current research focus lies in developing techniques that allow UAVs to maneuver using visual information as their main input source. This task involves creating techniques that allow a UAV to maneuver towards features of interest whenever a GPS signal is not reliable or sufficient, e.g. when signal dropouts occur (which usually happens in urban areas, when flying through terrestrial urban canyons, or when operating on remote planetary bodies), or when tracking or inspecting visual targets - including moving ones - without knowing their exact UTM coordinates. This paper also investigates visual servoing control techniques that use the velocity and position of suitable image features to compute the references for flight control. The paper aims to give a global view of the main aspects of the research field of computer vision for UAVs, clustered in four main active research lines: visual servoing and control, stereo-based visual navigation, image processing algorithms for detection and tracking, and visual SLAM. Finally, the results of applying these techniques in several applications are presented and discussed, encompassing power line inspection, mobile target tracking, stereo distance estimation, and mapping and positioning.
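The visual servoing idea described above - computing flight-control references from image-feature error - is classically written as v = -λ L⁺ (s - s*), where L is the interaction matrix (image Jacobian) and s* the desired feature vector. A minimal sketch follows; the interaction matrix and gain here are generic assumptions, not the paper's controller.

```python
import numpy as np

def ibvs_velocity(features, desired, L, gain=0.5):
    """Classic image-based visual servoing law: v = -gain * pinv(L) @ (s - s*).

    features, desired: stacked image-feature vectors (s and s*);
    L: interaction matrix mapping camera velocity to feature velocity.
    Returns the commanded camera velocity.
    """
    error = features - desired           # feature-space error s - s*
    return -gain * np.linalg.pinv(L) @ error

# With L = I, the command is simply a proportional pull toward the goal:
v = ibvs_velocity(np.array([1.0, 2.0]), np.zeros(2), np.eye(2), gain=0.5)
print(v)  # [-0.5 -1. ]
```

In a UAV pipeline this velocity would be handed to the flight controller as a reference, closing the loop through the camera.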
Abstract:
Surveillance systems such as object tracking and abandoned object detection systems typically rely on a single modality of colour video for their input. These systems work well in controlled conditions but often fail when low lighting, shadowing, smoke, dust or unstable backgrounds are present, or when the objects of interest are a similar colour to the background. Thermal images are not affected by lighting changes or shadowing, and are not overtly affected by smoke, dust or unstable backgrounds. However, thermal images lack colour information, which makes distinguishing between different people or objects of interest within the same scene difficult.

By using modalities from both the visible and thermal infrared spectra, we are able to obtain more information from a scene and overcome the problems associated with using either modality individually. We evaluate four approaches for fusing visual and thermal images for use in a person tracking system (two early fusion methods, one mid fusion and one late fusion method), in order to determine the most appropriate method for fusing multiple modalities. We also evaluate two of these approaches for use in abandoned object detection, and propose an abandoned object detection routine that utilises multiple modalities. To aid in the tracking and fusion of the modalities, we propose a modified condensation filter that can dynamically change the particle count and features used according to the needs of the system.

We compare tracking and abandoned object detection performance for the proposed fusion schemes and for the visual and thermal domains on their own. Testing is conducted using the OTCBVS database to evaluate object tracking, and data captured in-house to evaluate abandoned object detection. Our results show that significant improvement can be achieved, and that a middle fusion scheme is most effective.
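Of the fusion levels compared above, early fusion is the simplest: co-registered visual and thermal frames are combined at the pixel level before any detection or tracking runs. A minimal sketch follows; the linear weighting here is illustrative, not necessarily the scheme evaluated in the thesis.

```python
import numpy as np

def early_fuse(visual, thermal, alpha=0.5):
    """Pixel-level (early) fusion of co-registered visual and thermal frames.

    Both inputs are float arrays in [0, 1] of identical shape; alpha weights
    the visual modality. Downstream tracking then sees a single fused frame.
    """
    return np.clip(alpha * visual + (1 - alpha) * thermal, 0.0, 1.0)
```

Mid and late fusion instead combine per-modality features or per-modality detections, which is where the modified condensation filter's dynamic feature selection becomes relevant.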
Abstract:
We consider the problem of monitoring and controlling the position of herd animals, viewing the animals as networked agents that have natural mobility but are not strictly controllable. By exploiting knowledge of individual and herd behavior, we would like to apply the vast body of theory in robotics and motion planning to achieve the constrained motion of a herd. In this paper we describe the concept of a virtual fence, which applies a stimulus to an animal as a function of its pose with respect to the fenceline. Multiple fencelines can define a region, and the fences can be static or dynamic. The fence algorithm is implemented by a small position-aware computer device worn by the animal, which we refer to as a Smart Collar. We describe a herd-animal simulator, the Smart Collar hardware, and algorithms for tracking and controlling animals, as well as the results of on-farm experiments with up to ten Smart Collars.
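A virtual fence of this kind reduces to a simple geometric rule: the stimulus level is a function of the animal's distance to the fenceline. A minimal sketch follows; the linear ramp and warning distance are illustrative assumptions, not the Smart Collar's actual stimulus policy.

```python
def fence_stimulus(pos, a, b, warn=5.0):
    """Stimulus level for an animal at `pos` relative to fence segment a-b.

    Returns 0.0 well inside the safe zone, ramping linearly to 1.0 at the
    fenceline, based on point-to-segment distance. Assumes a != b.
    """
    ax, ay = a
    bx, by = b
    px, py = pos
    dx, dy = bx - ax, by - ay
    # Project the animal's position onto the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    dist = ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
    return max(0.0, min(1.0, (warn - dist) / warn))
```

A region is then enforced by evaluating this rule against each fenceline and applying the maximum stimulus, which the collar can recompute as dynamic fences move.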
Abstract:
While close-talking microphones give the best signal quality and produce the highest accuracy from current Automatic Speech Recognition (ASR) systems, speech signals enhanced by a microphone array have been shown to be an effective alternative in noisy environments. The use of microphone arrays, in contrast to close-talking microphones, alleviates the user’s feeling of discomfort and distraction. For this reason, microphone arrays are popular and have been used in a wide range of applications such as teleconferencing, hearing aids, speaker tracking, and as the front end to speech recognition systems. With advances in sensor and sensor-network technology, there is considerable potential for applications that employ ad-hoc networks of microphone-equipped devices collaboratively as a virtual microphone array. By allowing such devices to be distributed throughout the users’ environment, the microphone positions are no longer constrained to traditional fixed geometrical arrangements. This flexibility in data acquisition allows different audio scenes to be captured to give a complete picture of the working environment. In such ad-hoc deployments of microphone sensors, however, the lack of information about the location of devices and active speakers poses technical challenges for array signal processing algorithms, which must be addressed to allow deployment in real-world applications. While not an ad-hoc sensor network, conditions approaching this have in effect been imposed in recent National Institute of Standards and Technology (NIST) ASR evaluations on distant-microphone recordings of meetings. The NIST evaluation data come from multiple sites, each with different and often loosely specified distant-microphone configurations. This research investigates how microphone array methods can be applied to ad-hoc microphone arrays.
A particular focus is on devising methods that are robust to unknown microphone placements, in order to improve the overall speech quality and recognition performance provided by the beamforming algorithms. In ad-hoc situations, microphone positions and likely source locations are not known, and beamforming must be achieved blindly. Two general approaches can be employed to blindly estimate the steering vector for beamforming. The first is direct estimation without regard to the microphone and source locations. The alternative is first to determine the unknown microphone positions through array-calibration methods and then to use the traditional geometrical formulation for the steering vector. Following these two major approaches investigated in this thesis, a novel clustered approach is proposed that clusters the microphones and selects clusters based on their proximity to the speaker. Novel experiments demonstrate that the proposed method of automatically selecting a cluster of microphones (i.e., a subarray), located close both to each other and to the desired speech source, may in fact provide more robust speech enhancement and recognition than the full array.
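The clustered subarray selection described above can be sketched greedily: seed with the microphone nearest the estimated speaker, then admit its close neighbours. This is a hypothetical stand-in for the thesis's clustering method, whose details the abstract does not give; positions would themselves come from array calibration.

```python
import numpy as np

def select_subarray(mic_positions, source_estimate, radius=1.0):
    """Pick the cluster of microphones closest to the estimated speaker.

    Greedy sketch: seed with the mic nearest the source, then admit every mic
    within `radius` of that seed. Returns the selected microphone indices.
    """
    mics = np.asarray(mic_positions, dtype=float)
    src = np.asarray(source_estimate, dtype=float)
    seed = int(np.argmin(np.linalg.norm(mics - src, axis=1)))
    near_seed = np.linalg.norm(mics - mics[seed], axis=1) <= radius
    return np.flatnonzero(near_seed).tolist()
```

The chosen indices would then feed a conventional beamformer, which sees a compact, well-conditioned subarray instead of widely scattered sensors.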
Abstract:
RFID is widely used in today’s commercial and supply-chain industry, owing to the significant advantages it offers and its relatively low production cost. However, this ubiquitous technology has inherent problems in security and privacy, which calls for the development of simple, efficient and cost-effective mechanisms against a variety of security threats. This paper proposes a two-step authentication protocol based on the randomized hash-lock scheme proposed by S. Weis in 2003. By introducing additional measures during the authentication process, the new protocol significantly enhances the security of RFID and protects passive tags from almost all major attacks, including tag cloning, replay, full disclosure, tracking and eavesdropping. Furthermore, no significant changes to the tags are required to implement the protocol, and the low complexity of the randomized hash-lock algorithm is retained.
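The base randomized hash-lock exchange that this protocol builds on works as follows: the tag answers each query with a fresh nonce r and hash(ID ∥ r), so its responses are unlinkable, and the back-end identifies it by brute-force search over its list of known IDs. A minimal sketch follows; SHA-256 stands in for the unspecified hash function, and the paper's additional two-step measures are not shown.

```python
import hashlib
import os

def tag_response(tag_id: bytes):
    """Tag side of the randomized hash-lock: fresh nonce + hash(ID || nonce).

    A new nonce per query means an eavesdropper cannot link two responses
    to the same tag.
    """
    r = os.urandom(16)
    return r, hashlib.sha256(tag_id + r).digest()

def backend_identify(known_ids, r, digest):
    """Back-end side: brute-force the ID list to find the responding tag."""
    for tid in known_ids:
        if hashlib.sha256(tid + r).digest() == digest:
            return tid
    return None
```

The linear search over IDs is what keeps the tag-side complexity low (one hash per query) at the cost of back-end work, which is the trade-off the paper's scheme inherits.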