643 results for visual method
Abstract:
Recent surveys of information technology management professionals show that understanding business domains in terms of business productivity and cost-reduction potential, knowledge of different vertical industry segments and their information requirements, understanding of business processes, and client-facing skills are more critical for Information Systems personnel than ever before. In an attempt to restructure the information systems curriculum accordingly, our view is that information systems students need to develop an appreciation for organizational work systems in order to understand the operation and significance of information systems within such work systems.
Abstract:
In this paper we present a novel algorithm for localization during navigation that performs matching over local image sequences. Instead of calculating the single location most likely to correspond to a current visual scene, the approach finds candidate matching locations within every section (subroute) of all learned routes. Through this approach, we reduce the demands upon the image processing front-end, requiring it only to correctly pick the best matching image from within a short local image sequence, rather than globally. We applied this algorithm to a challenging downhill mountain biking visual dataset where there was significant perceptual or environmental change between repeated traverses of the environment, and compared performance with the feature-based algorithm FAB-MAP. The results demonstrate the potential for localization using visual sequences, even when there are no visual features that can be reliably detected.
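A minimal sketch (Python) of the core idea of matching over local image sequences is given below; the array name and the subroute length are illustrative assumptions, not the authors' implementation. Instead of keeping only the single globally best match, one candidate is retained for every subroute of the learned route.

```python
import numpy as np

def subroute_candidates(diff_scores, subroute_len=10):
    """diff_scores[i]: image difference between the current camera frame and
    learned-route frame i (lower = more similar). Returns one candidate
    (frame index, score) per subroute rather than a single global best.
    diff_scores and subroute_len are illustrative assumptions."""
    diff_scores = np.asarray(diff_scores, dtype=float)
    candidates = []
    for start in range(0, len(diff_scores), subroute_len):
        segment = diff_scores[start:start + subroute_len]
        best = int(np.argmin(segment))              # best match within this subroute only
        candidates.append((start + best, float(segment[best])))
    return candidates
```

Because each subroute only needs its locally best frame, the image-processing front-end never has to pick the correct image from the whole map at once, which is the relaxation the abstract describes.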
Abstract:
In this paper an existing method for indoor Simultaneous Localisation and Mapping (SLAM) is extended to operate in large outdoor environments using an omnidirectional camera as its principal external sensor. The method, RatSLAM, is based upon computational models of the area in the rat brain that maintains the rodent’s idea of its position in the world. The system uses the visual appearance of different locations to build hybrid spatial-topological maps of places it has experienced that facilitate relocalisation and path planning. A large dataset was acquired from a dynamic campus environment and used to verify the system’s ability to construct representations of the world and simultaneously use these representations to maintain localisation.
Abstract:
In this paper, we present a new algorithm for boosting visual template recall performance through a process of visual expectation. Visual expectation dynamically modifies the recognition thresholds of learnt visual templates based on recently matched templates, improving the recall of sequences of familiar places while keeping precision high, without any feedback from a mapping backend. We demonstrate the performance benefits of visual expectation using two 17 kilometer datasets gathered in an outdoor environment at two times separated by three weeks. The visual expectation algorithm provides up to a 100% improvement in recall. We also combine the visual expectation algorithm with the RatSLAM SLAM system and show how the algorithm enables successful mapping.
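The threshold modulation can be pictured with a small sketch (Python); the window size, relaxation factor and function names are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def expectation_thresholds(base_threshold, n_templates, last_match=None,
                           window=5, relaxation=0.2):
    """Return a per-template acceptance threshold. Templates learnt just
    after the most recently matched template are 'expected' next, so their
    thresholds are relaxed (here a match is assumed accepted when its
    difference score falls below the threshold, so a higher threshold is
    more permissive). No feedback from a mapping backend is required."""
    thresholds = np.full(n_templates, base_threshold, dtype=float)
    if last_match is not None:
        stop = min(last_match + 1 + window, n_templates)
        thresholds[last_match + 1:stop] = base_threshold * (1.0 + relaxation)
    return thresholds
```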
Abstract:
For the analysis of material nonlinearity, an effective shear modulus approach based on the strain control method is proposed in this paper using a point collocation method. Hencky's total deformation theory is used to evaluate the effective shear modulus, Young's modulus and Poisson's ratio, which are treated as spatial field variables. These effective properties are obtained by the strain-controlled projection method in an iterative manner. To evaluate the second-order derivatives of the shape functions at the field points, radial basis functions (RBFs) in the local support domain are used. Several numerical examples are presented to demonstrate the efficiency and accuracy of the proposed method, and comparisons have been made with analytical solutions and the finite element method (ABAQUS).
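As a rough illustration of the strain-controlled, iterative evaluation of effective (secant) properties in a Hencky-type deformation theory, the sketch below uses a Ramberg-Osgood curve as a stand-in material law; the constants and the curve itself are assumptions, not the paper's model, and the RBF collocation machinery is omitted.

```python
import numpy as np

# Assumed elastic constants and Ramberg-Osgood parameters (illustrative only).
E, NU = 200e3, 0.3                          # Young's modulus [MPa], Poisson's ratio
SIGMA_Y, ALPHA, N_RO = 250.0, 3.0 / 7.0, 5.0

def secant_properties(eps_eff, tol=1e-10, max_iter=50):
    """Strain-controlled update: given an effective strain (> 0), solve
    eps = s/E + ALPHA*(SIGMA_Y/E)*(s/SIGMA_Y)**N_RO for the effective
    stress s by Newton iteration, then return the secant Young's modulus,
    Poisson's ratio and shear modulus treated as field values."""
    if eps_eff == 0.0:
        return E, NU, E / (2.0 * (1.0 + NU))
    s = E * eps_eff                                      # elastic first guess
    for _ in range(max_iter):
        f = s / E + ALPHA * (SIGMA_Y / E) * (s / SIGMA_Y) ** N_RO - eps_eff
        df = 1.0 / E + ALPHA * N_RO / E * (s / SIGMA_Y) ** (N_RO - 1)
        step = f / df
        s -= step
        if abs(step) < tol:
            break
    E_s = s / eps_eff                                    # secant Young's modulus
    nu_s = 0.5 - (0.5 - NU) * E_s / E                    # secant Poisson's ratio
    G_s = E_s / (2.0 * (1.0 + nu_s))                     # effective shear modulus
    return E_s, nu_s, G_s
```

In a collocation setting, an update of this kind would be evaluated at each field point and iterated until the effective properties stop changing.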
Abstract:
Power relations and small and medium-sized enterprise strategies for capturing value in global production networks: visual effects (VFX) service firms in the Hollywood film industry, Regional Studies. This paper provides insights into the way in which non-lead firms manoeuvre in global value chains in the pursuit of a larger share of revenue, and how power relations affect these manoeuvres. It examines the nature of value capture and power relations in the global supply of visual effects (VFX) services and the range of strategies VFX firms adopt to capture higher value in the global value chain. The analysis is based on a total of thirty-six interviews with informants in the industry in Australia, the United Kingdom and Canada, and a database of VFX credits covering 3323 visual products and 640 VFX firms.
Abstract:
Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capability for displaying and manipulating information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of virtual environment content is carried entirely by developers and play-testers. While considerable research has been conducted into assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry changes of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games rely heavily on the use of highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques. Experiments were conducted on a game engine and other virtual world prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D games and virtual world studios that require a scalable solution to testing their virtual world software and digital content.
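By way of a toy example of quantifying image correctness (deliberately not the model-based or connectionist approaches developed in the thesis), a rendered frame can be scored against a trusted reference image with a simple per-pixel discrepancy:

```python
import numpy as np

def frame_discrepancy(rendered, reference):
    """Mean absolute per-pixel difference between a rendered frame and a
    trusted reference (both H x W x 3 arrays scaled to [0, 1]); 0 means the
    frames agree, larger values flag a potential visual inconsistency.
    Purely a generic illustration, not the thesis's technique."""
    rendered = np.asarray(rendered, dtype=float)
    reference = np.asarray(reference, dtype=float)
    if rendered.shape != reference.shape:
        raise ValueError("frames must have the same resolution")
    return float(np.abs(rendered - reference).mean())
```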
Abstract:
This study aimed to examine the effects on driving performance, usability and subjective workload of performing music selection tasks using a touch screen interface, and to explore whether the provision of visual and/or auditory feedback offers any performance and usability benefits. Thirty participants performed music selection tasks with a touch screen interface while driving. The interface provided four forms of feedback: no feedback, auditory feedback, visual feedback, and a combination of auditory and visual feedback. Performing the music selection tasks significantly increased subjective workload and degraded performance on a range of driving measures, including lane-keeping variation and the number of lane excursions. The provision of any form of feedback on the touch screen interface did not significantly affect driving performance, usability or subjective workload, but was preferred by users over no feedback. Overall, the results suggest that touch screens may not be a suitable input device for navigating scrollable lists while driving.
Abstract:
This paper presents a strategy for delayed research method selection in qualitative interpretivist research. An exemplary case details how explorative interviews were designed and conducted in accordance with a paradigm prior to deciding whether to adopt grounded theory or phenomenology for data analysis. The focus here is to determine the most appropriate research strategy, in this case the methodological framing, to conduct research and represent findings, both of which are detailed. Research addressing current management issues requires both a flexible framework and the capability to consider the research problem from various angles, to derive tangible results for academia with immediate application to business demands. Researchers, and in particular novices, often struggle to decide on a research method suitable to address their research problem. This often applies to interpretative qualitative research, where it is not always immediately clear which method is most appropriate, as the research objectives shift and crystallize over time. This paper uses an exemplary case to reveal how the strategy for delayed research method selection contributes to deciding whether to adopt grounded theory or phenomenology in the initial phase of a PhD research project. In this case, semi-structured interviews were used for data generation, framed in an interpretivist approach and situated in a business context. Research questions for this study were thoroughly defined and carefully framed in accordance with the research paradigm's principles, while at the same time ensuring that the requirements of both potential research methods were met. The grounded theory and phenomenology methods were compared and contrasted to determine their suitability and whether they met the research objectives, based on a pilot study. The strategy proposed in this paper is an alternative to the more 'traditional' approach, which initially selects the methodological formulation, followed by data generation. In conclusion, the suggested strategy for delayed research method selection is intended to help researchers identify and apply the most appropriate method to their research. This strategy is based on explorations of data generation and analysis in order to derive faithful results from the data generated.
Abstract:
The accuracy of marker placement on palpable surface anatomical landmarks is an important consideration in biomechanics. Although marker placement reliability has been studied in some depth, it remains unclear whether or not the markers are accurately positioned over the intended landmark in order to define the static position and orientation of the segment. A novel method using commonly available X-ray imaging was developed to identify the accuracy of markers placed on the shoe surface by palpating landmarks through the shoe. An anterior–posterior and a lateral–medial X-ray were taken of 24 participants with a newly developed marker set applied to both the skin and the shoe. The vector magnitude of both skin- and shoe-mounted markers from the anatomical landmark was calculated, as well as the mean marker offset between skin- and shoe-mounted markers. The accuracy of placing markers on the shoe relative to the skin-mounted markers, accounting for shoe thickness, was less than 5 mm for all markers studied. Further, when using the guidelines provided in this study, the method was deemed reliable (intra-rater ICCs = 0.50–0.92). In conclusion, the method proposed here can reliably assess marker placement accuracy on the shoe surface relative to chosen anatomical landmarks beneath the skin.
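For concreteness, the reported offsets reduce to Euclidean vector magnitudes between digitised coordinates; a minimal sketch (Python, with illustrative names) is:

```python
import numpy as np

def offset_magnitude(marker_xy, landmark_xy):
    """Vector magnitude (same units as the X-ray calibration, e.g. mm)
    between a digitised marker position and the anatomical landmark."""
    return float(np.linalg.norm(np.asarray(marker_xy, float) -
                                np.asarray(landmark_xy, float)))

def mean_marker_offset(skin_markers, shoe_markers):
    """Mean offset between paired skin- and shoe-mounted markers;
    both inputs are N x 2 (or N x 3) coordinate arrays."""
    skin = np.asarray(skin_markers, float)
    shoe = np.asarray(shoe_markers, float)
    return float(np.linalg.norm(skin - shoe, axis=1).mean())
```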
Abstract:
In recent years, the advent of new tools for musculoskeletal simulation has increased the potential for significantly improving the ergonomic design process and the ergonomic assessment of designs. In this paper we investigate the use of one such tool, ‘The AnyBody Modeling System’, applied to a one-parameter yet complex ergonomic design problem. The aim of this paper is to investigate the potential of computer-aided musculoskeletal modelling in the ergonomic design process, in the same way as CAE technology has been applied to engineering design.
Abstract:
Purpose: To compare the accuracy of different methods for calculating human lens power when lens thickness is not available. Methods: Lens power was calculated by four methods. Three methods were used with previously published biometry and refraction data of 184 emmetropic and myopic eyes of 184 subjects (age range [18, 63] years, spherical equivalent range [–12.38, +0.75] D). These three methods consist of the Bennett method, which uses lens thickness, and our modification of the Stenström method and the Bennett-Rabbetts method, both of which do not require knowledge of lens thickness. These methods include c constants, which represent distances from the lens surfaces to the principal planes. Lens powers calculated with these methods were compared with those calculated using phakometry data available for a subgroup of 66 emmetropic eyes (66 subjects). Results: Lens powers obtained from the Bennett method corresponded well with those obtained by phakometry for emmetropic eyes, although individual differences of up to 3.5 D occurred. Lens powers obtained from the modified Stenström and Bennett-Rabbetts methods deviated significantly from those obtained with either the Bennett method or phakometry. Customizing the c constants improved this agreement, but applying these constants to the entire group gave mean lens power differences of 0.71 ± 0.56 D compared with the Bennett method. By further optimizing the c constants, the agreement with the Bennett method was within ±1 D for 95% of the eyes. Conclusion: With appropriate constants, the modified Stenström and Bennett-Rabbetts methods provide a good approximation of the Bennett lens power in emmetropic and myopic eyes.
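A rough sketch of a Bennett-style vergence chain for lens power is given below; the refractive index (1.336) and the c constants (c1 ≈ 0.596, c2 ≈ 0.358) are commonly cited values used here as assumptions, and the code is an illustration rather than the formulas validated in the study.

```python
# Bennett-style vergence sketch for crystalline lens power (illustrative).
N_AQUEOUS = 1.336   # assumed refractive index of aqueous/vitreous

def lens_power_bennett(S_cornea, K, acd, t, axial_len, c1=0.596, c2=0.358):
    """S_cornea: ocular refraction at the corneal vertex (D); K: corneal
    power (D); acd: anterior chamber depth (mm); t: lens thickness (mm);
    axial_len: axial length (mm). Returns an estimate of lens power (D)."""
    v1 = S_cornea + K                                   # vergence leaving the cornea
    d1 = acd + c1 * t                                   # cornea -> first principal plane of lens
    v2 = 1000.0 * N_AQUEOUS / (1000.0 * N_AQUEOUS / v1 - d1)
    d2 = axial_len - acd - t + c2 * t                   # second principal plane -> retina
    v3 = 1000.0 * N_AQUEOUS / d2                        # vergence needed to focus on the retina
    return v3 - v2                                      # lens power closes the vergence chain
```

For example, with K = 43 D, S = 0 D, ACD = 3.5 mm, lens thickness 3.6 mm and axial length 24 mm, this sketch returns roughly 21 D, within the usual range for a crystalline lens.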
Abstract:
Purpose. To devise and validate artist-rendered grading scales for contact lens complications. Methods. Each of eight tissue complications of contact lens wear (listed under 'Results') was painted by a skilled ophthalmic artist (Terry R. Tarrant) in five grades of severity: 0 (normal), 1 (trace), 2 (mild), 3 (moderate) and 4 (severe). A representative slit lamp photograph of a tissue response for each of the eight complications was shown to 404 contact lens practitioners who had never before used clinical grading scales. The practitioners were asked to grade each tissue response to the nearest 0.1 grade unit by interpolation. Results. The standard deviation (± s.d.) of the 404 responses for each tissue complication is tabulated below:
Corneal staining 0.5
Endothelial polymegethism 0.7
Epithelial microcysts 0.5
Endothelial blebs 0.4
Stromal edema –
Conjunctival hyperemia 0.4
Stromal neovascularization 0.4
Papillary conjunctivitis 0.5
The frequency distributions and best-fit normal curves were also plotted. The precision of grading (s.d. × 2) ranged from 0.8 to 1.4, with a mean precision of 1.0. Conclusions. Grading scales afford contact lens practitioners a method of quantifying the severity of adverse tissue responses to contact lens wear. It is noteworthy that the statistically verified precision of grading (1.0 scale unit) concurs precisely with the essential design feature of the grading scales that each grading step of 1.0 corresponds to a clinically significant difference in severity. Thus, as a general rule, a difference or change in grade of > 1.0 can be taken to be both clinically and statistically significant when using these grading scales. Trained observers are likely to achieve even greater grading precision. Supported by Hydron Limited.
Abstract:
Purpose. To investigate how temporal processing is altered in myopia and during myopic progression. Methods. In backward visual masking, a target's visibility is reduced by a mask presented quickly after the target. Thirty emmetropes, 40 low myopes, and 22 high myopes aged 18 to 26 years completed location and resolution masking tasks. The location task examined the ability to detect letters with low contrast and large stimulus size. The resolution task involved identifying a small letter and tested resolution and color discrimination. Target and mask stimuli were presented at nine short interstimulus intervals (12 to 259 ms) and at 1000 ms (long interstimulus interval condition). Results. In comparison with emmetropes, myopes had reduced ability in both locating and identifying briefly presented stimuli but were more affected by backward masking for a low contrast location task than for a resolution task. Performances of low and high myopes, as well as stable and progressing myopes, were similar for both masking tasks. Task performance was not correlated with myopia magnitude. Conclusions. Myopes were more affected than emmetropes by masking stimuli for the location task. This was not affected by magnitude or progression rate of myopia, suggesting that myopes have the propensity for poor performance in locating briefly presented low contrast objects at an early stage of myopia development.