891 results for DISTANCE GEOMETRY
Abstract:
A visual telepresence system has been developed at the University of Reading which utilizes eye tracking to adjust the horizontal orientation of the cameras and display system according to the convergence state of the operator's eyes. Slaving the cameras to the operator's direction of gaze enables the object of interest to be centered on the displays. The advantage of this is that the camera field of view may be decreased to maximize the achievable depth resolution. An active camera system requires an active display system if appropriate binocular cues are to be preserved. For some applications, which critically depend upon the veridical perception of the object's location and dimensions, it is imperative that the contribution of binocular cues to these judgements be ascertained, because they are directly influenced by camera and display geometry. Using the active telepresence system, we investigated the contribution of ocular convergence information to judgements of size, distance and shape. Participants performed an open-loop reach and grasp of the virtual object under reduced cue conditions where the orientations of the cameras and the displays were either matched or unmatched. Inappropriate convergence information produced weak perceptual distortions and caused problems in fusing the images.
Abstract:
One of the key issues in e-learning environments is the possibility of creating and evaluating exercises. However, the lack of tools supporting the authoring and automatic checking of exercises for specific topics (e.g., geometry) drastically reduces the advantages of using e-learning environments on a larger scale, as usually happens in Brazil. This paper describes an algorithm, and a tool based on it, designed for the authoring and automatic checking of geometry exercises. The algorithm dynamically compares the distances between the geometric objects of the student's solution and the template's solution, provided by the author of the exercise. Each solution is a geometric construction which is considered a function receiving geometric objects (input) and returning other geometric objects (output). Thus, for a given problem, if we know one function (construction) that solves the problem, we can compare it to any other function to check whether they are equivalent or not. Two functions are equivalent if, and only if, they have the same output when the same input is applied. If the student's solution is equivalent to the template's solution, then we consider the student's solution correct. Our software utility provides both authoring and checking tools that work directly on the Internet, together with learning management systems. These tools are implemented using the dynamic geometry software iGeom, which has been used in a geometry course since 2004 and has a successful track record in the classroom. Empowered with these new features, iGeom simplifies teachers' tasks, detects non-trivial problems in student solutions and helps to increase student motivation by providing feedback in real time. (c) 2008 Elsevier Ltd. All rights reserved.
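The equivalence criterion described above — two constructions are equivalent if and only if they produce the same output whenever they receive the same input — can be sketched in a few lines. This is a minimal illustration, not iGeom's implementation; the midpoint constructions and the `equivalent` helper are hypothetical stand-ins:

```python
import math

def construction_template(a, b):
    # Template construction (hypothetical): the midpoint of segment AB.
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def construction_student(a, b):
    # A student's construction that computes the same point differently.
    return ((b[0] + a[0]) / 2, (b[1] + a[1]) / 2)

def equivalent(f, g, test_inputs, tol=1e-9):
    # Two constructions are treated as equivalent when their outputs
    # are at (numerically) zero distance for every sampled input.
    return all(math.dist(f(a, b), g(a, b)) <= tol for a, b in test_inputs)

inputs = [((0.0, 0.0), (2.0, 4.0)), ((1.0, -1.0), (3.0, 5.0))]
print(equivalent(construction_template, construction_student, inputs))  # True
```

Sampling a handful of inputs only gives probabilistic evidence of equivalence; the paper's algorithm compares distances between the geometric objects of the two solutions as the inputs vary dynamically.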
Abstract:
Graduate Program in Biological Sciences (Zoology) - IBRC
Abstract:
Context. The angular diameter distances toward galaxy clusters can be determined with measurements of the Sunyaev-Zel'dovich effect and X-ray surface brightness combined with the validity of the distance-duality relation, D_L(z)(1 + z)^(-2)/D_A(z) = 1, where D_L(z) and D_A(z) are, respectively, the luminosity and angular diameter distances. This combination enables us to probe galaxy cluster physics or even to test the validity of the distance-duality relation itself. Aims. We explore these possibilities based on two different, but complementary approaches. Firstly, in order to constrain the possible galaxy cluster morphologies, the validity of the distance-duality relation (DD relation) is assumed in the Lambda CDM framework (WMAP7). Secondly, by adopting a cosmological-model-independent test, we directly confront the angular diameters from galaxy clusters with two supernovae Ia (SNe Ia) subsamples (carefully chosen to coincide with the cluster positions). The influence of the different SNe Ia light-curve fitters on the previous analysis is also discussed. Methods. We assumed that eta is a function of the redshift parametrized by two different relations: eta(z) = 1 + eta_0 z, and eta(z) = 1 + eta_0 z/(1 + z), where eta_0 is a constant parameter quantifying the possible departure from the strict validity of the DD relation. In order to determine the probability density function (PDF) of eta_0, we considered the angular diameter distances from galaxy clusters recently studied by two different groups, assuming elliptical and spherical isothermal beta models and a spherical non-isothermal beta model. The strict validity of the DD relation will occur only if the maximum value of the eta_0 PDF is centered on eta_0 = 0. Results. For both approaches we find that the elliptical beta model agrees with the distance-duality relation, whereas the non-isothermal spherical description is, in the best scenario, only marginally compatible. 
We find that the two light-curve fitters (SALT2 and MLCS2K2) present a statistically significant conflict, and a joint analysis involving the different approaches suggests that clusters are endowed with an elliptical geometry, as previously assumed. Conclusions. The statistical analysis presented here provides new evidence that the true geometry of clusters is elliptical. In principle, it is remarkable that a local property such as the geometry of galaxy clusters might be constrained by a global argument like the one provided by the cosmological distance-duality relation.
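The two parametrizations of eta(z) and the distance-duality ratio they perturb can be written down directly. A minimal sketch (the function names are ours, not the authors'):

```python
def eta_linear(z, eta0):
    # First parametrization: eta(z) = 1 + eta_0 * z
    return 1.0 + eta0 * z

def eta_saturating(z, eta0):
    # Second parametrization: eta(z) = 1 + eta_0 * z / (1 + z)
    return 1.0 + eta0 * z / (1.0 + z)

def dd_ratio(d_lum, d_ang, z):
    # Distance-duality ratio D_L / ((1 + z)^2 * D_A): equals 1 when
    # the relation holds strictly (i.e., eta_0 = 0).
    return d_lum / ((1.0 + z) ** 2 * d_ang)
```

Both parametrizations reduce to eta = 1 at z = 0, so eta_0 alone measures the departure from strict duality.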
Abstract:
In this paper we present a variational technique for the reconstruction of 3D cylindrical surfaces. Roughly speaking, by a cylindrical surface we mean a surface that can be parameterized via its projection onto a cylinder in terms of two coordinates, representing the displacement and angle in a cylindrical coordinate system, respectively. The starting point for our method is a set of different views of a cylindrical surface, together with a precomputed disparity map estimation between pairs of images. The proposed variational technique is based on an energy minimization which balances, on the one hand, the regularity of the cylindrical function given by the distance of the surface points to the cylinder axis and, on the other hand, the distance between the projection of the surface points on the images and their expected locations according to the precomputed disparity map estimation between pairs of images. One interesting advantage of this approach is that we regularize the 3D surface by means of a two-dimensional minimization problem. We show some experimental results for large stereo sequences.
Abstract:
In many industries, for example the automotive industry, digital mock-ups are used to verify the design and the function of a product on a virtual prototype. One use case is checking the safety clearances of individual components, the so-called clearance analysis. Engineers determine for specific components whether they maintain a prescribed safety clearance to the surrounding components, both at rest and during a motion. If components fall below the safety clearance, their shape or position must be changed. For this it is important to know precisely the regions of the components that violate the safety clearance.

In this thesis we present a solution for the real-time computation of all regions between two geometric objects that fall below the safety clearance. Each object is given as a set of primitives (e.g., triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety clearance and call this the set of all tolerance-violating primitives. We present a comprehensive solution that can be divided into the following three major topics.

In the first part of this work we examine algorithms that check, for two triangles, whether they are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that specialized tolerance tests perform significantly better than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach proves to be the fastest.

The second part of this work deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure consisting of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is particularly important to account for the required safety clearance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. Furthermore, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the uniform grids used before, which we call Shrubs. Previous approaches to the memory optimization of uniform grids mainly rely on hashing methods, but these do not reduce the memory consumption of the cell contents. In our application, neighboring cells often have similar contents. Our approach is able to losslessly compress the memory footprint of a uniform grid's cell contents, exploiting the redundant cell contents, to one fifth of its previous size, and to decompress it at run time.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Besides pure clearance analysis, we show applications to various path-planning problems.
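One of the strategies mentioned above — recognizing whether primitives can be tolerance-violating without running the full primitive-primitive test — can be illustrated with a bounding-sphere pre-test. This is a hypothetical sketch for intuition, not the thesis's dual-space test:

```python
import math

def may_violate_tolerance(c1, r1, c2, r2, clearance):
    # Bounding-sphere pre-test: if the spheres enclosing two primitives
    # are separated by more than the clearance, the primitives cannot
    # violate the tolerance and the expensive exact test can be skipped.
    # Returns True when the pair still needs the full test.
    return math.dist(c1, c2) - r1 - r2 < clearance
```

Cheap conservative filters like this prune most primitive pairs; only the survivors reach the exact triangle-triangle tolerance test.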
Abstract:
For decades, distance transforms have proven useful for many image processing applications and, more recently, they have started to be used in computer graphics environments. The goal of this paper is to propose a new technique based on distance transforms for detecting mesh elements which are close to the objects' external contour (from a given point of view), and using this information to weight the approximation error that will be tolerated during the mesh simplification process. The obtained results are evaluated in two ways: visually, and using an objective metric that measures the geometrical difference between two polygonal meshes.
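A brute-force version of the idea — compute, for every grid cell, the distance to the nearest contour cell, then derive an error-tolerance weight from it — might look as follows. Both functions are illustrative stand-ins (production distance transforms run in linear time, and the weighting scheme here is a hypothetical choice):

```python
import math

def distance_transform(mask):
    # Naive distance transform of a binary grid: for each cell, the
    # Euclidean distance to the nearest contour (True) cell.
    # Quadratic time; real implementations run in linear time.
    contour = [(i, j) for i, row in enumerate(mask)
               for j, v in enumerate(row) if v]
    h, w = len(mask), len(mask[0])
    return [[min(math.hypot(i - ci, j - cj) for ci, cj in contour)
             for j in range(w)] for i in range(h)]

def error_weight(dist, falloff=2.0):
    # Tolerate less simplification error near the silhouette (dist = 0)
    # and progressively more away from it.
    return 1.0 / (1.0 + dist / falloff)
```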
Abstract:
Most available studies of interconnected matrix porosity of crystalline rocks are based on laboratory investigations; that is, work on samples that have undergone stress relaxation and were affected by drilling and sample preparation. The extrapolation of the results to in situ conditions is therefore associated with considerable uncertainty, and this was the motivation to conduct the ‘in situ Connected Porosity’ experiment at the Grimsel Test Site (Central Swiss Alps). An acrylic resin doped with fluorescent agents was used to impregnate the microporous granitic matrix in situ around an injection borehole, and samples were obtained by overcoring. The 3-D structure of the pore space, represented by microcracks, was studied by U-stage fluorescence microscopy. Petrophysical methods, including the determination of porosity, permeability and P-wave velocity, were also applied. Investigations were conducted both on samples that were impregnated in situ and on non-impregnated samples, so that natural features could be distinguished from artefacts. The investigated deformed granites display complex microcrack populations representing a polyphase deformation at varying conditions. The crack population is dominated by open cleavage cracks in mica and grain boundary cracks. The porosity of non-impregnated samples lies slightly above 1 per cent, which is 2–2.5 times higher than the in situ porosity obtained for impregnated samples. Measurements of seismic velocities (Vp) on spherical rock samples as a function of confining pressure, spatial direction and water saturation for both non-impregnated and impregnated samples provide further constraints on the distinction between natural and induced crack types. The main conclusions are that (1) an interconnected network of microcracks exists in the whole granitic matrix, irrespective of the distance to ductile and brittle shear zones, and (2) conventional laboratory methods overestimate the matrix porosity. 
Calculations of contaminant transport through fractured media often rely on matrix diffusion as a retardation mechanism.
Abstract:
In combined clinical optoacoustic (OA) and ultrasound (US) imaging, epi-mode irradiation and detection integrated into one single probe offers flexible imaging of the human body. The imaging depth in epi-illumination is, however, strongly affected by clutter. As shown in previous phantom experiments, the location of irradiation plays an important role in clutter generation. We investigated the influence of the irradiation geometry on the local image contrast of clinical images, by varying the separation distance between the irradiated area and the acoustic imaging plane of a linear ultrasound transducer in an automated scanning setup. The results for different volunteers show that the image contrast can be enhanced on average by 25% and locally by more than a factor of two, when the irradiated area is slightly separated from the probe. Our findings have an important impact on the design of future optoacoustic probes for clinical application.
Abstract:
The International Standard ISO 140-5 on field measurements of airborne sound insulation of façades establishes that the directivity of the measurement loudspeaker should be such that the variation in the local direct sound pressure level (ΔSPL) on the sample is ΔSPL < 5 dB (or ΔSPL < 10 dB for large façades). This condition is usually not very easy to accomplish nor is it easy to verify whether the loudspeaker produces such a uniform level. Direct sound pressure levels on the ISO standard façade essentially depend on the distance and directivity of the loudspeaker used. This paper presents a comprehensive analysis of the test geometry for measuring sound insulation and explains how the loudspeaker directivity, combined with distance, affects the acoustic level distribution on the façade. The first sections of the paper are focused on analysing the measurement geometry and its influence on the direct acoustic level variations on the façade. The most favourable and least favourable positions to minimise these direct acoustic level differences are found, and the angles covered by the façade in the reference system of the loudspeaker are also determined. Then, the maximum dimensions of the façade that meet the conditions of the ISO 140-5 standard are obtained for the ideal omnidirectional sound source and the piston radiating in an infinite baffle, which is chosen as the typical radiation pattern for loudspeakers. Finally, a complete study of the behaviour of different loudspeaker radiation models (such as those usually utilised in the ISO 140-5 measurements) is performed, comparing their radiation maps on the façade for searching their maximum dimensions and the most appropriate radiation configurations.
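For the ideal omnidirectional source, the direct-level variation across the façade follows from spherical spreading alone. A minimal sketch under the simplifying assumption (ours, not the paper's) that the source sits on the normal through the façade centre, so the nearest point is the centre and the farthest points are the corners:

```python
import math

def direct_spl_variation(width, height, distance):
    # Delta-SPL in dB across a width x height facade for an ideal
    # omnidirectional point source at the given distance on the
    # facade's central normal. Only 1/r spherical spreading is
    # modelled, so the variation is set by the ratio of the farthest
    # corner distance to the nearest (central) distance.
    r_near = distance
    r_far = math.sqrt(distance ** 2 + (width / 2) ** 2 + (height / 2) ** 2)
    return 20.0 * math.log10(r_far / r_near)
```

With a real loudspeaker, its directivity pattern multiplies this geometric term, which is exactly the combined effect the paper analyses.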
Abstract:
The calibration coefficients of two commercial anemometers equipped with different rotors were studied. The rotor cups had the same conical shape, while the size and distance to the rotation axis varied. The analysis was based on the 2-cup positions analytical model, derived using perturbation methods to include second-order effects such as pressure distribution along the rotating cups and friction. The comparison with the experimental data indicates a nonuniform distribution of aerodynamic forces on the rotating cups, with higher forces closer to the rotating axis. The 2-cup analytical model is proven to be accurate enough to study the effect of complex forces on cup anemometer performance.
Abstract:
The present paper describes the preliminary stages of the development of a new, comprehensive model conceived to simulate the evacuation of transport airplanes in certification studies. Two previous steps were devoted to implementing an efficient procedure to define the whole geometry of the cabin, and setting up an algorithm for assigning seats to available exits. Now, to clarify the role of the cabin arrangement in the evacuation process, the paper addresses the influence of several restrictions on the seat-to-exit assignment algorithm, maintaining a purely geometrical approach for consistency. Four situations are considered: first, an assignment method without limitations that searches for the minimum of the total distance run by all passengers along their escape paths; second, a protocol that restricts the number of evacuees through each exit according to updated FAR 25 capacity; third, a procedure which tends toward the best proportional sharing among exits but obliges each passenger to egress through the nearest fore or rear exit; and fourth, a scenario which includes both restrictions. The four assignment strategies are applied to turboprops, narrow-body jets and wide-body jets. Seat-to-exit distance and number of evacuees per exit are the main output variables. The results show the influence of airplane size and the impact of non-symmetries and inappropriate matching between the size and longitudinal location of exits.
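A toy version of a capacity-constrained seat-to-exit assignment can illustrate the purely geometrical approach. This greedy 1-D sketch is our stand-in, not the paper's algorithm; seat and exit positions are coordinates along the cabin axis:

```python
def assign_seats(seat_positions, exit_positions, capacity):
    # Greedy seat-to-exit assignment on a 1-D cabin axis: walk the
    # (seat, exit) pairs in order of increasing distance, honouring a
    # uniform per-exit capacity. Returns {seat index: exit index}.
    pairs = sorted((abs(s - e), si, ei)
                   for si, s in enumerate(seat_positions)
                   for ei, e in enumerate(exit_positions))
    assignment, load = {}, [0] * len(exit_positions)
    for _dist, si, ei in pairs:
        if si not in assignment and load[ei] < capacity:
            assignment[si] = ei
            load[ei] += 1
    return assignment
```

A greedy pass is not guaranteed to minimize the total distance; an exact minimum would require solving an assignment problem, which is closer in spirit to the paper's first, unrestricted scenario.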
Abstract:
This paper deals with pattern recognition of the shape of the boundary of closed figures on the basis of a circular sequence of measurements taken on the boundary at equal intervals of a suitably chosen argument with an arbitrary starting point. A distance measure between two boundaries is defined in such a way that it has zero value when the associated sequences of measurements coincide by shifting the starting point of one of the sequences. Such a distance measure, which is invariant to the starting point of the sequence of measurements, is used in identification or discrimination by the shape of the boundary of a closed figure. The mean shape of a given set of closed figures is defined, and tests of significance of differences in mean shape between populations are proposed.
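The shift-invariant distance can be sketched directly: minimize a difference measure over all rotations of one sequence, so the value is zero exactly when the two sequences coincide up to a shift of the starting point. A minimal illustration using an RMS difference (the paper's exact distance measure may differ):

```python
import math

def cyclic_distance(a, b):
    # Distance between two circular measurement sequences, invariant to
    # the starting point: the minimum RMS difference over all rotations
    # of the second sequence. Zero iff some rotation of b equals a.
    n = len(a)
    return min(
        math.sqrt(sum((a[i] - b[(i + s) % n]) ** 2 for i in range(n)) / n)
        for s in range(n))
```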
Reverse Geometry Hybrid Contact Lens Fitting in a Case of Donor-Host Misalignment after Keratoplasty
Abstract:
Purpose: To report the successful outcome obtained after fitting a new hybrid contact lens in a cornea with an area of donor-host misalignment and significant levels of irregular astigmatism after penetrating keratoplasty (PKP). Materials and methods: A 41-year-old female with bilateral asymmetric keratoconus underwent PKP in her left eye due to the advanced status of the disease. One year after surgery, the patient reported poor visual acuity and quality in this eye. The fitting of different types of rigid gas permeable contact lenses was performed, but with an unsuccessful outcome due to contact lens stability problems and uncomfortable wear. Scheimpflug imaging evaluation revealed that a donor-host misalignment was present at the nasal area. A reverse geometry hybrid contact lens (Clearkone, SynergEyes, Carlsbad) was then fitted. Visual, refractive, and ocular aberrometric outcomes were evaluated during a 1-year period after the fitting. Results: Uncorrected distance visual acuity improved from a prefitting value of 20/200 to a best corrected postfitting value of 20/20. Prefitting manifest refraction was +5.00 sphere and -5.50 cylinder at 75°, with a corrected distance visual acuity of 20/30. Higher order root mean square (RMS) for a 5 mm pupil changed from a prefitting value of 6.83 µm to a postfitting value of 1.57 µm. Contact lens wear was reported as comfortable, with no anterior segment alterations. Conclusion: The SynergEyes Clearkone contact lens seems to be another potentially useful option for visual rehabilitation after PKP, especially in cases of donor-host misalignment.
Abstract:
The Iterative Closest Point algorithm (ICP) is commonly used in engineering applications to solve the rigid registration problem of partially overlapped point sets which are pre-aligned with a coarse estimate of their relative positions. This iterative algorithm is applied in many areas, such as medicine (volumetric reconstruction of tomography data), robotics (reconstructing surfaces or scenes from range sensor information), industrial systems (quality control of manufactured objects) and even biology (studying the structure and folding of proteins). One of the algorithm's main problems is its high computational complexity (quadratic in the number of points with the non-optimized original variant) in a context where high-density point sets, acquired by high-resolution scanners, are processed. Many variants have been proposed in the literature whose goal is performance improvement, either by reducing the number of points or the required iterations, or by reducing the complexity of the most expensive phase: the closest-neighbour search. In spite of decreasing its complexity, some of the variants tend to have a negative impact on the final registration precision or the convergence domain, thus limiting the possible application scenarios. The goal of this work is the improvement of the algorithm's computational cost so that a wider range of computationally demanding problems from among the ones described before can be addressed. For that purpose, an experimental and mathematical convergence analysis and validation of point-to-point distance metrics has been performed, taking into account those distances with lower computational cost than the Euclidean one, which is used as the de facto standard for the algorithm's implementations in the literature. 
In that analysis, the functioning of the algorithm in diverse topological spaces, characterized by different metrics, has been studied to check the convergence, efficacy and cost of the method in order to determine the one which offers the best results. Given that the distance calculation represents a significant part of the whole set of computations performed by the algorithm, any reduction in the cost of that operation can be expected to affect the overall performance of the method significantly and positively. As a result, a performance improvement has been achieved by applying those reduced-cost metrics, whose quality in terms of convergence and error has been analyzed and validated experimentally as comparable to the Euclidean distance, using a heterogeneous set of objects, scenarios and initial situations.
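The pluggable-metric idea is easy to sketch: the ICP matching phase only needs a nearest-neighbour search parameterized by the distance function, so cheaper metrics drop in without touching the rest of the algorithm. A hypothetical brute-force illustration (function names are ours):

```python
def euclidean(p, q):
    # Standard metric: requires a square root per comparison.
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def manhattan(p, q):
    # Cheaper metric (no square root, no products): one candidate for
    # reducing the cost of the matching phase.
    return sum(abs(a - b) for a, b in zip(p, q))

def closest_point(p, cloud, metric):
    # Brute-force nearest-neighbour search with a pluggable metric;
    # this matching step dominates ICP's running time.
    return min(cloud, key=lambda q: metric(p, q))
```

The two metrics do not always agree on the nearest neighbour, which is why the work above validates convergence and final error experimentally rather than assuming equivalence.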