934 results for Geometric Sum
Abstract:
Introduction: An observer looking sideways from a moving vehicle while wearing a neutral density filter over one eye can have a distorted perception of speed, known as the Enright phenomenon. The purpose of this study was to determine how the Enright phenomenon influences driving behaviour. Methods: A geometric model of the Enright phenomenon was developed. Ten young, visually normal participants (mean age = 25.4 years) were tested on a straight section of a closed driving circuit and instructed to look out of the right side of the vehicle and drive at either 40 km/h or 60 km/h under the following binocular viewing conditions: a 0.9 ND filter over the left (leading) eye; a 0.9 ND filter over the right (trailing) eye; 0.9 ND filters over both eyes; and no filters over either eye. The order of filter conditions was randomised and the speed driven was recorded for each condition. Results: Speed judgments did not differ significantly between the two baseline conditions (no filters and both eyes filtered) for either speed tested. For the baseline conditions, when subjects were asked to drive at 60 km/h they matched this speed well (61 ± 10.2 km/h) but drove significantly faster than requested (51.6 ± 9.4 km/h) when asked to drive at 40 km/h. Subjects significantly exceeded baseline speeds, by 8.7 ± 5.0 km/h, when the trailing eye was filtered, and travelled slower than baseline speeds, by 3.7 ± 4.6 km/h, when the leading eye was filtered. Conclusions: This is the first quantitative study demonstrating how the Enright effect can influence perceptions of driving speed. It shows that monocular filtering of an eye can significantly alter driving speed, albeit to a lesser extent than predicted by geometric models of the phenomenon.
Abstract:
Natural convection in a triangular enclosure subject to non-uniform cooling at the inclined surfaces and uniform heating at the base is investigated numerically. Numerical simulations of the unsteady flows over a range of Rayleigh numbers and aspect ratios are carried out using the Finite Volume Method. Since the upper surface is cooled and the bottom surface is heated, the air flow in the enclosure is potentially unstable to Rayleigh-Bénard instability. It is revealed that the transient flow development in the enclosure can be classified into three distinct stages: an early stage, a transitional stage and a steady stage. It is also found that the flow inside the enclosure depends strongly on the governing parameters, the Rayleigh number and the aspect ratio. The asymmetric behaviour of the flow about the geometric centre line is discussed in detail. The heat transfer through the roof and the ceiling, in the form of the Nusselt number, is also reported in this study.
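For reference, the governing Rayleigh number mentioned in this abstract has the standard definition below (the symbols are the conventional ones, not taken from the abstract itself):

```latex
% Standard definition of the Rayleigh number for buoyancy-driven convection.
\[
  \mathrm{Ra} = \frac{g \,\beta \,\Delta T \, L^{3}}{\nu \kappa}
\]
% where $g$ is gravitational acceleration, $\beta$ the thermal expansion
% coefficient, $\Delta T$ the temperature difference between the heated base
% and the cooled inclined surfaces, $L$ a characteristic length of the
% enclosure, $\nu$ the kinematic viscosity and $\kappa$ the thermal
% diffusivity of the fluid.
```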
Abstract:
The automated extraction of roads from aerial imagery can be of value for tasks including mapping, surveillance and change detection. Unfortunately, there are no public databases or standard evaluation protocols for evaluating these techniques. Many techniques are further hindered by a reliance on manual initialisation, making large-scale application impractical. In this paper, we present a public database and evaluation protocol for the evaluation of road extraction algorithms, and propose an improved automatic seed-finding technique, based on a combination of geometric and colour features, to initialise road extraction.
Abstract:
Rats are superior to the most advanced robots when it comes to creating and exploiting spatial representations. A wild rat can have a foraging range of hundreds of meters, possibly kilometers, and yet the rodent can unerringly return to its home after each foraging mission, and return to profitable foraging locations at a later date (Davis et al., 1948). The rat runs through undergrowth and pipes with few distal landmarks, along paths where the visual, textural, and olfactory appearance constantly change (Hardy and Taylor, 1980; Recht, 1988). Despite these challenges the rat builds, maintains, and exploits internal representations of large areas of the real world throughout its two- to three-year lifetime. While algorithms exist that allow robots to build maps, the questions of how to maintain those maps and how to handle change in appearance over time remain open. The robotic approach to map building has been dominated by algorithms that optimise the geometry of the map based on measurements of distances to features. In a robotic approach, measurements of distance to features are taken with range-measuring devices such as laser range finders or ultrasound sensors, and in some cases estimates of depth from visual information. The features are incorporated into the map based on previous readings of other features in view and estimates of self-motion. The algorithms explicitly model the uncertainty in measurements of range and the measurement of self-motion, and use probability theory to find optimal solutions for the geometric configuration of the map features (Dissanayake et al., 2001; Thrun and Leonard, 2008). Some of the results from the application of these algorithms have been impressive, ranging from three-dimensional maps of large urban structures (Thrun and Montemerlo, 2006) to natural environments (Montemerlo et al., 2003).
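The probabilistic map-building idea described above can be illustrated in miniature. The following one-dimensional Kalman-style fusion step is a generic sketch of the approach (all names and numbers are illustrative, not code from the cited works): an uncertain self-motion prediction is fused with an uncertain range measurement to a landmark at a known position, yielding a position estimate with reduced uncertainty.

```python
# Minimal 1-D illustration of probabilistic fusion of self-motion and a
# range measurement (a sketch, not an implementation from the cited works).

def kalman_update(x, var_x, z, var_z):
    """Fuse a predicted state (x, var_x) with a measurement (z, var_z)."""
    k = var_x / (var_x + var_z)   # Kalman gain: weight given to the measurement
    x_new = x + k * (z - x)       # correction toward the measured value
    var_new = (1 - k) * var_x     # uncertainty shrinks after fusing evidence
    return x_new, var_new

# Predict: odometry says the robot is at 10.0, with variance 4.0.
x, var_x = 10.0, 4.0
# Measure: a landmark known to be at 25.0 is ranged at 14.0, so the
# measurement implies a robot position of 11.0, with variance 1.0.
x, var_x = kalman_update(x, var_x, 25.0 - 14.0, 1.0)
```

The fused estimate lands between prediction and measurement, weighted by their variances; full SLAM systems apply the same principle jointly over all landmark and pose variables.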
Abstract:
In topological mapping, perceptual aliasing can cause different places to appear indistinguishable to the robot. When odometry information is severely corrupted or unavailable, topological mapping is difficult because the robot faces the loop-closing problem: determining whether it has visited a particular place before. In this article we propose using neighbourhood information to disambiguate otherwise indistinguishable places. This approach neither depends on a specific choice of sensors nor requires geometric information such as odometry. Local neighbourhood information is extracted from a sequence of observations of visited places. In experiments using either sonar or visual observations from an indoor environment, the benefits of using neighbourhood clues for the disambiguation of otherwise identical vertices are demonstrated. Over 90% of the maps we obtain are isomorphic with the ground truth, and the choice of the robot's sensors has little impact on the results.
Abstract:
At first glance, the gallery seems to be empty. Upon entering, however, 11:59 reveals itself to be masking tape placed lackadaisically in seemingly geometric forms. Enter the gallery at 11:59 and you will witness the light and shadows correlating with the gestural marks that have been made. Exploring ideas of time, space, gesture, value and mark-making, this work can be interpreted as overflowing with confidence and/or impotence. It whispers about site and encounters, over-complication and simplicity, and boldness and hesitancy.
Abstract:
Virtual environments can provide, through digital games and online social interfaces, extremely exciting forms of interactive entertainment. Because of their capacity to display and manipulate information in natural and intuitive ways, such environments have found extensive applications in decision support, education and training in the health and science domains, amongst others. Currently, the burden of validating both the interactive functionality and the visual consistency of virtual environment content is carried entirely by developers and play-testers. While considerable research has been conducted into assisting the design of virtual world content and mechanics, to date only limited contributions have been made regarding the automatic testing of the underpinning graphics software and hardware. The aim of this thesis is to determine whether the correctness of the images generated by a virtual environment can be quantitatively defined, and automatically measured, in order to facilitate the validation of the content. In an attempt to provide an environment-independent definition of visual consistency, a number of classification approaches were developed. First, a novel model-based object description was proposed in order to enable reasoning about the color and geometry change of virtual entities during a play-session. From such an analysis, two view-based connectionist approaches were developed to map from geometry and color spaces to a single, environment-independent, geometric transformation space; we used such a mapping to predict the correct visualization of the scene. Finally, an appearance-based aliasing detector was developed to show how incorrectness, too, can be quantified for debugging purposes. Since computer games rely heavily on the use of highly complex and interactive virtual worlds, they provide an excellent test bed against which to develop, calibrate and validate our techniques.
Experiments were conducted on a game engine and other virtual worlds prototypes to determine the applicability and effectiveness of our algorithms. The results show that quantifying visual correctness in virtual scenes is a feasible enterprise, and that effective automatic bug detection can be performed through the techniques we have developed. We expect these techniques to find application in large 3D games and virtual world studios that require a scalable solution to testing their virtual world software and digital content.
Abstract:
This paper presents an approach to building an observation likelihood function from a set of sparse, noisy training observations taken from known locations by a sensor with no obvious geometric model. The basic approach is to fit an interpolant to the training data, representing the expected observation, and to assume additive sensor noise. This paper takes a Bayesian view of the problem, maintaining a posterior over interpolants rather than simply the maximum-likelihood interpolant, giving a measure of uncertainty in the map at any point. This is done using a Gaussian process framework. To validate the approach experimentally, a model of an environment is built using observations from an omni-directional camera. After a model has been built from the training data, a particle filter is used to localise while traversing this environment.
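A minimal sketch of the Gaussian-process machinery this abstract describes, assuming a squared-exponential kernel and additive Gaussian noise (the kernel choice, noise level and all names here are our illustrative assumptions, not necessarily the paper's):

```python
import numpy as np

# Sketch: Gaussian-process regression over sparse, noisy 1-D observations,
# returning a posterior mean and per-point variance (the "measure of
# uncertainty in the map at any point").

def rbf(a, b, length=1.0):
    """Squared-exponential kernel between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=0.1):
    """Posterior mean and variance of a zero-mean GP at x_query."""
    K = rbf(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf(x_train, x_query)
    Kss = rbf(x_query, x_query)
    alpha = np.linalg.solve(K, y_train)      # weights on training targets
    mean = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.diag(cov)

# Three noisy "observations" from known locations, queried near and far.
x = np.array([0.0, 1.0, 2.0])
y = np.sin(x)
mu, var = gp_posterior(x, y, np.array([0.5, 5.0]))
```

Near the training inputs the posterior variance collapses; far from them it reverts toward the prior, which gives the kind of per-point uncertainty the abstract exploits when weighting particles during localisation.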
Abstract:
A simple phenomenological model for the relationship between structure and composition of the high-Tc cuprates is presented. The model is based on two simple crystal-chemistry principles: unit cell doping and charge balance within unit cells. These principles are inspired by key experimental observations of how the materials accommodate large deviations from stoichiometry. Significant high-temperature superconductor (HTSC) properties can be explained consistently without any additional assumptions, while retaining valuable insight for geometric interpretation. Combining these two chemical principles with a review of Crystal Field Theory (CFT) or Ligand Field Theory (LFT), it becomes clear that the two oxidation states in the conduction planes (typically d8 and d9) belong to the most strongly divergent d-levels as a function of deformation from regular octahedral coordination. This observation offers a link to a range of coupling effects relating vibrations and spin waves through application of Hund's rules. An indication of the model's capacity to predict physical properties for HTSC is provided and will be elaborated in subsequent publications. Simple criteria for the relationship between structure and composition in HTSC systems may guide chemical syntheses within new material systems.
Abstract:
We derive an explicit method of computing the composition step in Cantor’s algorithm for group operations on Jacobians of hyperelliptic curves. Our technique is inspired by the geometric description of the group law and applies to hyperelliptic curves of arbitrary genus. While Cantor’s general composition involves arithmetic in the polynomial ring F_q[x], the algorithm we propose solves a linear system over the base field which can be written down directly from the Mumford coordinates of the group elements. We apply this method to give more efficient formulas for group operations in both affine and projective coordinates for cryptographic systems based on Jacobians of genus 2 hyperelliptic curves in general form.
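As background for the composition step described above, the Mumford coordinates in which the linear system is written down have the following standard form (standard material, not specific to this paper):

```latex
% Mumford representation of a reduced divisor class (standard definition).
A reduced divisor class on the Jacobian of a genus-$g$ hyperelliptic curve
$y^{2} + h(x)\,y = f(x)$ over $\mathbb{F}_q$ is represented by a pair of
polynomials $[u(x), v(x)]$ satisfying
\[
  u \text{ monic}, \qquad \deg v < \deg u \le g, \qquad u \mid v^{2} + h\,v - f .
\]
For genus $2$ this gives $u(x) = x^{2} + u_{1}x + u_{0}$ and
$v(x) = v_{1}x + v_{0}$, so each group element is described by the four
coordinates $(u_{1}, u_{0}, v_{1}, v_{0})$.
```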
Abstract:
Feature extraction and selection are critical processes in developing facial expression recognition (FER) systems. While many algorithms have been proposed for these processes, direct comparisons between texture, geometry and their fusion, as well as between multiple selection algorithms, have not been reported for spontaneous FER. This paper addresses this issue by proposing a unified framework for a comparative study of the widely used texture (LBP, Gabor and SIFT) and geometric (FAP) features, using the Adaboost, mRMR and SVM feature selection algorithms. Our experiments on the Feedtum and NVIE databases demonstrate the benefits of fusing geometric and texture features, where SIFT+FAP shows the best performance, while mRMR outperforms Adaboost and SVM. In terms of computational time, LBP and Gabor perform better than SIFT. The optimal combination of SIFT+FAP+mRMR also exhibits state-of-the-art performance.
Abstract:
Finite element analyses of the human body in seated postures require digital models capable of accurate and precise prediction of the tissue-level response of the body in the seated posture. To achieve such models, the human anatomy must be represented with high fidelity. This information can readily be defined using medical imaging techniques such as Magnetic Resonance Imaging (MRI) or Computed Tomography (CT). Current practices for constructing digital human models from magnetic resonance (MR) images in a lying down (supine) posture have reduced the error in the geometric representation of human anatomy relative to reconstructions based on data from cadaveric studies. Nonetheless, the significant differences between seated and supine postures in segment orientation, soft-tissue deformation and soft-tissue strain create a need for data obtained in postures more similar to the application posture. In this study, we present a novel method for creating digital human models based on seated MR data. An adult male volunteer was scanned in a simulated driving posture using a FONAR 0.6T upright MRI scanner with a T1 scanning protocol. To compensate for unavoidable image distortion near the edges of the study, images of the same anatomical structures were obtained in transverse and sagittal planes. Combinations of transverse and sagittal images were used to reconstruct the major anatomical features from the buttocks through the knees, including bone, muscle and fat tissue perimeters, using Solidworks® software. For each MR image, B-splines were created as contours for the anatomical structures of interest, and LOFT commands were used to interpolate between the generated B-splines. The reconstruction of the pelvis from the MR data was enhanced by the use of a template model generated in previous work from CT images. A non-rigid registration algorithm was used to fit the pelvis template to the MR data.
Additionally, MR image processing was applied to both the left and the right sides of the model because of the intended asymmetric posture of the volunteer during the MR measurements. The presented subject-specific, three-dimensional model of the buttocks and thighs will add value to optimisation cycles in automotive seat development when used in simulating human interaction with automotive seats.
Abstract:
Finite Element Modeling (FEM) has become a vital tool in automotive design and development processes. FEM of the human body is a technique capable of estimating parameters that are difficult to measure experimentally, with the human body segments modeled as complex and dynamic entities. Several studies have been dedicated to attaining close-to-real FEMs of the human body (Pankoke and Siefert 2007; Amann, Huschenbeth et al. 2009; ESI 2010). The aim of this paper is to identify and appraise the state-of-the-art models of the human body that incorporate detailed pelvis and/or lower-extremity models. Six databases and search engines were used to obtain literature, and the search was limited to studies published in English since 2000. The initial search identified 636 pelvis-related papers, 834 buttocks-related papers, 505 thigh-related papers, 927 femur-related papers, 2039 knee-related papers, 655 shank-related papers, 292 tibia-related papers, 110 fibula-related papers, 644 ankle-related papers and 5660 foot-related papers. A refined search returned 100 pelvis-related papers, 45 buttocks-related papers, 65 thigh-related papers, 162 femur-related papers, 195 knee-related papers, 37 shank-related papers, 80 tibia-related papers, 30 fibula-related papers, 102 ankle-related papers and 246 foot-related papers. The refined literature list was further restricted by appraisal against modified LOW appraisal criteria. Studies with unclear methodologies, with a focus on populations with pathology, or with sport-related dynamic motion modeling were excluded.
The final literature list included fifteen models, and each was assessed against the percentile the model represents, the gender the model was based on, the human body segment or segments included in the model, the sample size used to develop the model, the source of the geometric/anthropometric values used to develop the model, the posture the model represents and the finite element solver used for the model. The results of this literature review indicate a bias in the available models towards 50th-percentile male modeling, with a notable concentration on the pelvis, femur and buttocks segments.
Abstract:
Magnetic Resonance Imaging was used to study changes in the crystalline lens and ciliary body with accommodation and aging. Monocular images were obtained in 15 young (19-29 years) and 15 older (60-70 years) emmetropes when viewing at far (6 m) and, in the younger group, at individual near points (14.5 to 20.9 cm). With accommodation, lens thickness increased (mean ± 95% CI: 0.33 ± 0.06 mm) by a magnitude similar to the decrease in anterior chamber depth (0.31 ± 0.07 mm) and equatorial diameter (0.32 ± 0.04 mm), with a decrease in the radius of curvature of the posterior lens surface (0.58 ± 0.30 mm). Anterior lens surface shape could not be determined because of the region overlapping with the iris. Ciliary ring diameter decreased (0.44 ± 0.17 mm) with no decrease in circumlental space and no forward ciliary body movement. With aging, lens thickness increased (mean ± 95% CI: 0.97 ± 0.24 mm) by a magnitude similar to the sum of the decrease in anterior chamber depth (0.45 ± 0.21 mm) and the increase in anterior segment depth (0.52 ± 0.23 mm). Equatorial lens diameter increased (0.28 ± 0.23 mm) with no change in the posterior lens surface radius of curvature. Ciliary ring diameter decreased (0.57 ± 0.41 mm) with reduced circumlental space (0.43 ± 0.15 mm) and no forward ciliary body movement. The accommodative changes support the Helmholtz theory of accommodation, including an increase in posterior lens surface curvature. Certain aspects of the aging changes mimic accommodation.
Abstract:
In addition to his work on physical optics, Thomas Young (1773-1829) made several contributions to geometrical optics, most of which received little recognition in his time or since. We describe and assess some of these contributions: Young’s construction (the basis for much of his geometric work), paraxial refraction equations, oblique astigmatism and field curvature, and gradient-index optics.