8 results for Visual Object Identification Task
in Digital Commons at Florida International University
Abstract:
This dissertation explored memory conformity effects both in people who interacted with a confederate and in bystanders to that interaction. Two studies were carried out. Study 1 was conducted in the field. A male confederate approached a group of people at the beach for a brief interaction. About a minute later, a research assistant approached the group and administered a target-absent lineup to each person. Analyses revealed that memory conformity occurred during the lineup task: bystanders were twice as likely to conform as those who had interacted with the confederate. Study 2 was carried out in a laboratory under controlled conditions. Participants were exposed to two events during their time in the laboratory. In one event, participants were shown a brief video with no determinate roles assigned. In the other event, participants were randomly assigned to interact with a confederate (actor condition) or to witness that interaction (bystander condition). Participants were given memory tests on both events to assess the effects of participant role (actor vs. bystander) on memory conformity. Participants answered every question second, after a confederate posing as a participant who disseminated misinformation on critical questions. Analyses revealed no significant differences in memory conformity between actors and bystanders on the video memory task. However, on the interaction memory task, bystanders conformed more than actors on two of four critical questions. Bystanders also conformed more than actors during a lineup identification task. The results of these studies suggest that the role a person plays in an interaction affects how susceptible that person is to information from a co-witness. Theoretical and applied implications are discussed: first, the results are explained in terms of two models of memory; second, recommendations are made for forensic investigators.
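The actor-versus-bystander comparison above amounts to a contingency-table analysis of conformity rates. Below is a minimal sketch in Python of such a test; the counts are purely illustrative assumptions (the dissertation's data are not reproduced here), chosen only to mirror the "twice as likely" pattern.

```python
# Hypothetical 2x2 counts -- rows: role (actor, bystander);
# columns: (conformed, did not conform). Illustrative values only.
from scipy.stats import chi2_contingency

counts = [[10, 40],   # actors: 20% conformed
          [20, 30]]   # bystanders: 40% conformed ("twice as likely")

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```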
Abstract:
To navigate effectively in three-dimensional space, flying insects must approximate distances to nearby objects. Humans can use an array of cues to guide depth perception in the visual world, but some of these cues are unavailable to insects, which are constrained by their rigid eyes and relatively small body size. Flying fruit flies can use motion parallax to gauge the distance of nearby objects, but this cue becomes less effective as objects become more remote. Humans can infer depth across far distances by comparing the angular distance of an object to the horizon. This study tested whether flying fruit flies, like humans, use the relative position of the horizon as a depth cue. Fruit flies in tethered flight were stimulated with a virtual environment that displayed vertical bars of varying elevation relative to a horizon, and their tracking responses were recorded. Tracking responses strengthened markedly as the apparent elevation of the bar against the horizon was reduced, indicating that fruit flies may be able to assess the distance of far-off objects in the natural world by comparing them against a visual horizon.
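The horizon cue has a simple geometric reading: for an eye at height h above flat ground, an object on the ground at distance d appears at an angle θ below the horizon, with tan(θ) = h/d, so a smaller angular gap to the horizon implies a more distant object. The sketch below illustrates this textbook idealization; it is not the study's model, and the eye height is an arbitrary assumption.

```python
# Illustrative flat-ground geometry: tan(theta) = h / d, so
# d = h / tan(theta), where theta is the angle below the horizon.
import math

def distance_from_horizon_angle(eye_height: float, angle_below_horizon_rad: float) -> float:
    """Infer ground distance from the angular gap between an object's base and the horizon."""
    return eye_height / math.tan(angle_below_horizon_rad)

# The smaller the angular gap, the farther the object:
for angle_deg in (10.0, 5.0, 1.0):
    d = distance_from_horizon_angle(1.7, math.radians(angle_deg))
    print(f"{angle_deg:4.1f} degrees below horizon -> {d:7.1f} units away")
```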
Abstract:
The main challenges of multimedia data retrieval lie in the effective mapping between low-level features and high-level concepts, and in individual users' subjective perceptions of multimedia content. The objective of this dissertation is to develop an integrated multimedia indexing and retrieval framework that bridges the gap between semantic concepts and low-level features. To achieve this goal, a set of core techniques has been developed, including image segmentation, content-based image retrieval, object tracking, video indexing, and video event detection. These core techniques are integrated in a systematic way to enable semantic search for images and videos, and can be tailored to solve problems in other multimedia-related domains. In image retrieval, two new methods of bridging the semantic gap are proposed: (1) For general content-based image retrieval, a stochastic mechanism enables the long-term learning of high-level concepts from a set of training data, such as user access frequencies and access patterns of images. (2) In addition to whole-image retrieval, a novel multiple instance learning framework is proposed for object-based image retrieval, which allows a user to search more effectively for images that contain multiple objects of interest. An enhanced image segmentation algorithm is developed to extract object information from images. This segmentation algorithm is further used in video indexing and retrieval, where a robust video shot/scene segmentation method is developed based on low-level visual feature comparison, object tracking, and audio analysis. Based on shot boundaries, a novel data mining framework is proposed to detect events in soccer videos while fully utilizing the multi-modality features and object information obtained through video shot/scene detection. Another contribution of this dissertation is the potential of the above techniques to be tailored and applied to other multimedia applications, demonstrated by their utilization in traffic video surveillance. The enhanced image segmentation algorithm, coupled with an adaptive background learning algorithm, improves the performance of vehicle identification. A sophisticated object tracking algorithm is proposed to track individual vehicles, while the spatial and temporal relationships of vehicle objects are modeled by an abstract semantic model.
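As one concrete illustration of the low-level visual feature comparison underlying shot segmentation, the sketch below flags a shot boundary wherever the grayscale-histogram distance between consecutive frames jumps sharply. This is the generic textbook technique, not the dissertation's exact algorithm; the frame format and threshold are assumptions.

```python
# Histogram-based shot boundary detection (generic sketch).
# Frames are assumed to be 8-bit grayscale numpy arrays; the
# threshold value is an illustrative assumption.
import numpy as np

def normalized_histogram(frame: np.ndarray, bins: int = 64) -> np.ndarray:
    hist, _ = np.histogram(frame, bins=bins, range=(0, 255))
    return hist / hist.sum()  # normalize so frame size does not matter

def shot_boundaries(frames, threshold: float = 0.4) -> list:
    """Return frame indices where the L1 histogram distance exceeds the threshold."""
    cuts = []
    prev = normalized_histogram(frames[0])
    for i, frame in enumerate(frames[1:], start=1):
        cur = normalized_histogram(frame)
        if 0.5 * np.abs(cur - prev).sum() > threshold:  # distance in [0, 1]
            cuts.append(i)
        prev = cur
    return cuts
```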
Abstract:
As users continually request additional functionality, software systems will continue to grow in complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring higher levels of reliability, faulty system modules may increase development and maintenance costs. Identifying them early would therefore support the development of reliable systems through improved scheduling and quality control. As a consequence, research effort to predict which software modules are likely to contain faults has been substantial. Although a wide range of fault prediction models has been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous research studies show that statistical models can provide reasonably accurate predictions of faulty modules using software metrics. However, because context-specific metrics differ from project to project, predicting across projects is difficult. Prediction models built from one project's experience are ineffective at predicting fault-prone modules in other projects. Hence, taking full advantage of the existing work in the software development community has been substantially limited. As a step toward solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
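The cross-project setting can be made concrete with a small sketch: fit a classifier on one project's module metrics and fault labels, then score another project's modules. The metric names and classifier choice below are illustrative assumptions, not the adaptation approach this dissertation proposes.

```python
# Baseline cross-project fault prediction sketch (illustrative only).
# Each DataFrame is assumed to have the metric columns below plus a
# boolean "faulty" label column; the names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

METRICS = ["loc", "cyclomatic_complexity", "fan_in", "fan_out"]

def train_on_project(df: pd.DataFrame):
    """Fit a scaled logistic regression on one project's fault history."""
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    return model.fit(df[METRICS], df["faulty"])

# Cross-project use: fit on project A, rank project B's modules by risk.
# model = train_on_project(project_a)
# fault_prob = model.predict_proba(project_b[METRICS])[:, 1]
```

The well-known weakness of this baseline, that a model trained on project A transfers poorly to project B, is exactly the gap the proposed adaptation approach targets.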
Abstract:
Given the growing number of wrongful convictions involving faulty eyewitness evidence and the strong reliance by jurors on eyewitness testimony, researchers have sought to develop safeguards to decrease erroneous identifications. While decades of eyewitness research have led to numerous recommendations for the collection of eyewitness evidence, less is known about the psychological processes that govern identification responses. The purpose of the current research was to expand the theoretical knowledge of eyewitness identification decisions by exploring two separate memory theories: signal detection theory and dual-process theory. This was accomplished by examining both system and estimator variables in the context of a novel lineup recognition paradigm. Both theories were also examined in conjunction with confidence to determine whether confidence might add significantly to the understanding of eyewitness memory. In two separate experiments, both an encoding- and a retrieval-based manipulation were chosen to examine the application of theory to eyewitness identification decisions. Dual-process estimates were measured through the use of remember-know judgments (Gardiner & Richardson-Klavehn, 2000). In Experiment 1, the effects of divided attention and lineup presentation format (simultaneous vs. sequential) were examined. In Experiment 2, perceptual distance and lineup response deadline were examined. Overall, the results indicated that discrimination and remember judgments (recollection) were generally affected by variations in encoding quality, while response criterion and know judgments (familiarity) were generally affected by variations in retrieval options. Specifically, as encoding quality improved, discrimination ability and judgments of recollection increased; and as the retrieval task became more difficult, there was a shift toward lenient choosing and greater reliance on familiarity. The application of signal detection theory and dual-process theory in the current experiments produced predictable results for both system and estimator variables. These theories were also compared to measures of general confidence, calibration, and diagnosticity. Applying the additional confidence measures in conjunction with signal detection theory and dual-process theory gave a more in-depth explanation than either theory alone. Therefore, the general conclusion is that eyewitness identifications can be understood more completely by applying theory and examining confidence. Future directions and policy implications are discussed.
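The signal detection quantities referenced above have standard equal-variance estimators: sensitivity d′ = z(H) − z(FA) and criterion c = −(z(H) + z(FA))/2, where H and FA are the hit and false-alarm rates. Below is a minimal sketch with illustrative rates; these formulas are textbook signal detection theory, not results from this dissertation.

```python
# Equal-variance signal detection estimates from hit/false-alarm rates.
# The rates below are illustrative assumptions.
from scipy.stats import norm

def dprime_and_criterion(hit_rate: float, fa_rate: float):
    z_h, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_h - z_fa              # discrimination ability
    criterion = -0.5 * (z_h + z_fa)   # response bias; positive = conservative
    return d_prime, criterion

d, c = dprime_and_criterion(hit_rate=0.80, fa_rate=0.20)
print(f"d' = {d:.2f}, c = {c:.2f}")   # d' = 1.68, c = 0.00
```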
Abstract:
More information is now readily available to computer users than at any time in human history; however, much of this information is often inaccessible to people with blindness or low vision, for whom information must be presented non-visually. Currently, screen readers can verbalize on-screen text using text-to-speech (TTS) synthesis; however, much of this vocalization is inadequate for browsing the Internet. An auditory interface that incorporates auditory-spatial orientation was created and tested. For information that can be structured as a two-dimensional table, links can be semantically grouped as cells in a row within an auditory table, which provides a consistent structure for auditory navigation. An auditory display prototype was tested. Sixteen legally blind subjects participated in this research study. Results demonstrated that stereo panning was an effective technique for audio-spatially orienting non-visual navigation in a five-row, six-column HTML table as compared to a centered, stationary synthesized voice. These results were based on measuring the time-to-target (TTT), the amount of time elapsed from the first prompting to the selection of each tabular link. Preliminary analysis of the TTT values recorded during the experiment showed that the populations did not conform to the ANOVA requirements of normality and equality of variances; therefore, the data were transformed using the natural logarithm. The repeated-measures two-factor ANOVA results show that the logarithmically transformed TTTs were significantly affected by the tonal variation method, F(1,15) = 6.194, p = 0.025. Similarly, the logarithmically transformed TTTs were marginally affected by the stereo spatialization method, F(1,15) = 4.240, p = 0.057, and were not significantly affected by the interaction of the two methods, F(1,15) = 1.381, p = 0.258. These results suggest that employing both methods simultaneously may confuse the subject. The significant effect of tonal variation indicates that it actually increased the average TTT; in other words, the presence of preceding tones increased task completion time on average. The marginally significant effect of stereo spatialization decreased the average log(TTT) from 2.405 to 2.264.
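A minimal sketch of the analysis pipeline described above: log-transform the time-to-target values, then run a repeated-measures two-factor ANOVA. The synthetic data, column names, and long-format layout are assumptions for illustration; only the 16-subject, two-factor design comes from the study.

```python
# Log-transform TTT, then repeated-measures two-factor ANOVA.
# Synthetic data stand in for the real measurements.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
rows = []
for subject in range(16):                 # 16 subjects, as in the study
    for tone in ("on", "off"):            # tonal variation factor
        for stereo in ("on", "off"):      # stereo spatialization factor
            ttt = rng.lognormal(mean=2.3, sigma=0.3)  # synthetic TTT (s)
            rows.append((subject, tone, stereo, np.log(ttt)))
df = pd.DataFrame(rows, columns=["subject", "tone", "stereo", "log_ttt"])

print(AnovaRM(df, depvar="log_ttt", subject="subject",
              within=["tone", "stereo"]).fit())
```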
Abstract:
A waveguide microgripper utilizes flexible optical waveguides as gripping arms, which provide the physical means for grasping a micro-object while simultaneously enabling light to be delivered and collected. This unique capability allows extensive optical characterization of the structure being held, such as transmission, reflection, or fluorescence. One of the simplest capabilities of the waveguide microgripper is detecting the presence of a micro-object between the microgripper facets by monitoring the transmitted intensity of light coupled through the facets: the intensity of coupled light is expected to drop when an object obstructs the path of the light. The optical sensing and characterization function of the microgripper is a strong function of the optical power incident on the structure of interest; hence it is important to understand the factors affecting the power distribution across the facet. The microgripper is also capable of detecting fluorescence. This capability is expected to have applications in medical, biomedical, and related fields.
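The presence-detection idea reduces to thresholding the transmitted power against an unobstructed baseline. A minimal sketch follows; the readings, baseline, and drop fraction are illustrative assumptions rather than measured device values.

```python
# Flag a grasped micro-object when transmitted power drops well below
# the unobstructed baseline. All values are illustrative assumptions.
def object_present(readings_mw, baseline_mw: float, drop_fraction: float = 0.5):
    """Return True wherever power falls below drop_fraction * baseline."""
    threshold = drop_fraction * baseline_mw
    return [power < threshold for power in readings_mw]

# Baseline of 1.0 mW; the middle readings simulate an obstructing object:
print(object_present([0.98, 0.95, 0.31, 0.29, 0.97], baseline_mw=1.0))
# -> [False, False, True, True, False]
```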