946 results for 3D object manipulation
Abstract:
In this paper we present a study of the computational cost of the GNG3D algorithm for mesh optimization. The algorithm is implemented on the basis of a new neural-network-based method consisting of two distinct phases: an optimization phase and a reconstruction phase. The optimization phase applies an optimization algorithm based on the Growing Neural Gas model, an unsupervised incremental clustering algorithm. The primary goal of this phase is to obtain a simplified set of vertices that best approximates the original 3D object. In the reconstruction phase we use the information provided by the optimization algorithm to reconstruct the faces, thus obtaining the optimized mesh. The computational cost of both phases is calculated and illustrated with examples.
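A minimal sketch of one Growing Neural Gas-style adaptation step of the kind the optimization phase builds on; the data structures, learning rates and edge-ageing parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def gng_step(nodes, errors, edges, ages, sample,
             eps_winner=0.05, eps_neighbor=0.006, max_age=50):
    """One adaptation step of a Growing Neural Gas-style optimizer.

    nodes  : (N, 3) array of node positions (candidate mesh vertices)
    errors : (N,)   accumulated squared error per node
    edges  : set of frozenset({i, j}) connecting node indices
    ages   : dict mapping each edge to its age
    sample : (3,) point sampled from the original mesh surface
    """
    # Find the two nodes closest to the sampled surface point.
    d2 = np.sum((nodes - sample) ** 2, axis=1)
    s1, s2 = (int(i) for i in np.argsort(d2)[:2])

    # Accumulate error on the winner and pull it (and its topological
    # neighbours) toward the sample.
    errors[s1] += d2[s1]
    nodes[s1] += eps_winner * (sample - nodes[s1])
    for e in list(edges):
        if s1 in e:
            ages[e] += 1
            j = next(iter(e - {s1}))
            nodes[j] += eps_neighbor * (sample - nodes[j])

    # Connect (or refresh) the edge between the two winners.
    e12 = frozenset((s1, s2))
    edges.add(e12)
    ages[e12] = 0

    # Drop edges that have grown too old.
    for e in [e for e in edges if ages[e] > max_age]:
        edges.discard(e)
        ages.pop(e)
```

In a full optimizer, steps like this are iterated over samples drawn from the original mesh, with new nodes periodically inserted near the node of largest accumulated error, until the desired number of simplified vertices is reached.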
Abstract:
This paper describes a study and analysis of surface normal-based descriptors for 3D object recognition. Specifically, we evaluate the behaviour of the descriptors in the recognition process using virtual models of objects created with CAD software. Later, we test them in real scenes containing synthetic objects created with a 3D printer from the virtual models. In both cases, the same virtual models are used in the matching process to find similarity. The difference between the two experiments lies in the type of views used in the tests. Our analysis evaluates three aspects: the effectiveness of the 3D descriptors depending on the camera viewpoint and the geometric complexity of the model, the runtime required for the recognition process, and the success rate in recognizing a view of an object among the models stored in the database.
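A hedged sketch of the general idea behind a surface-normal-based descriptor and nearest-neighbour matching against a model database; the binning scheme and function names are illustrative and are not the specific descriptors evaluated in the paper:

```python
import numpy as np

def normal_histogram_descriptor(normals, bins=8):
    """Build a coarse global descriptor from unit surface normals by
    histogramming their azimuth/elevation angles."""
    az = np.arctan2(normals[:, 1], normals[:, 0])           # [-pi, pi]
    el = np.arcsin(np.clip(normals[:, 2], -1.0, 1.0))       # [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(az, el, bins=bins,
                                range=[[-np.pi, np.pi],
                                       [-np.pi / 2, np.pi / 2]])
    hist = hist.ravel().astype(float)
    return hist / (hist.sum() + 1e-12)                       # normalize

def recognize(view_normals, model_db):
    """Return the database model whose descriptor is closest (L2 distance)
    to the descriptor computed from the normals of the observed view."""
    query = normal_histogram_descriptor(view_normals)
    best = min(model_db.items(),
               key=lambda kv: np.linalg.norm(query - kv[1]))
    return best[0]
```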
Abstract:
The organisation of the human neuromuscular-skeletal system allows an extremely wide variety of actions to be performed, often with great dexterity. Adaptations associated with skill acquisition occur at all levels of the neuromuscular-skeletal system although all neural adaptations are inevitably constrained by the organisation of the actuating apparatus (muscles and bones). We quantified the extent to which skill acquisition in an isometric task set is influenced by the mechanical properties of the muscles used to produce the required actions. Initial performance was greatly dependent upon the specific combination of torques required in each variant of the experimental task. Five consecutive days of practice improved the performance to a similar degree across eight actions despite differences in the torques required about the elbow and forearm. The proportional improvement in performance was also similar when the actions were performed at either 20 or 40% of participants' maximum voluntary torque capacity. The skill acquired during practice was successfully extrapolated to variants of the task requiring more torque than that required during practice. We conclude that while the extent to which skill can be acquired in isometric actions is independent of the specific combination of joint torques required for target acquisition, the nature of the kinetic adaptations leading to the performance improvement in isometric actions is influenced by the neural and mechanical properties of the actuating muscles.
Abstract:
Dedicated to the memory of our colleague Vasil Popov January 14, 1942 – May 31, 1990 * Partially supported by ISF-Center of Excellence, and by The Hermann Minkowski Center for Geometry at Tel Aviv University, Israel
Abstract:
A certain type of bacterial inclusion, known as a bacterial microcompartment, was recently identified and imaged through cryo-electron tomography. A 3D object reconstructed from single-axis, limited-angle tilt-series cryo-electron tomography contains missing regions; this is known as the missing wedge problem. Because of these missing regions, analyzing the 3D structures of the reconstructed images is challenging. Existing methods overcome this problem by aligning and averaging several similarly shaped objects. These schemes work well if the objects are symmetric and several objects with nearly identical shapes and sizes are available. Since the bacterial inclusions studied here are not symmetric, are deformed, and show a wide range of shapes and sizes, the existing approaches are not appropriate. This research develops new statistical methods for analyzing geometric properties, such as volume, symmetry, aspect ratio and polyhedral structure, of these bacterial inclusions in the presence of missing data. These methods work with deformed, non-symmetric objects of varied shape and do not require multiple objects to handle the missing wedge problem. The developed methods and contributions include: (a) an improved method for manual image segmentation, (b) a new approach to 'complete' the segmented and reconstructed incomplete 3D images, (c) a polyhedral structural distance model to predict the polyhedral shapes of these microstructures, (d) a new shape descriptor for polyhedral shapes, named the polyhedron profile statistic, and (e) Bayes, linear discriminant analysis and support vector machine classifiers for supervised classification of incomplete polyhedral shapes. Finally, the predicted 3D shapes of these bacterial microstructures belong to the Johnson solids family, and these shapes, along with their other geometric properties, are important for a better understanding of their chemical and biological characteristics.
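A minimal sketch of the supervised classification stage described in (e), assuming each incomplete 3D object has already been reduced to a fixed-length shape-descriptor vector; the placeholder data, labels and classifier settings are assumptions for illustration, not the thesis' actual pipeline:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# X: one shape-descriptor vector per reconstructed inclusion (placeholder data),
# y: the polyhedral class label assigned to each inclusion.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))
y = rng.integers(0, 3, size=120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_tr, y_tr)                      # train on descriptor vectors
    print(name, "accuracy:", clf.score(X_te, y_te))
```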
Abstract:
Two novel studies examining the capacity and characteristics of working memory for object weights, experienced through lifting, were completed. Both studies employed visually identical objects of varying weight and focused on memories linking object locations and weights. Whereas numerous studies have examined the capacity of visual working memory, the capacity of the sensorimotor memory involved in motor control and object manipulation has not yet been explored. In addition to assessing working memory for object weights using an explicit perceptual test, we also assessed memory for weight using an implicit measure based on motor performance. The vertical lifting or load forces (LF) and the horizontal grip forces (GF) applied during lifts, measured from force sensors embedded in the object handles, were used to assess participants' ability to predict object weights. In Experiment 1, participants were presented with sets of 3, 4, 5, 7 or 9 objects. They lifted each object in the set and then repeated this procedure 10 times, with the objects lifted either in a fixed or a random order. Sensorimotor memory was examined by assessing, as a function of object set size, how lifting forces changed across successive lifts of a given object. The results indicated that force scaling for weight improved across the repetitions of lifts and was better for smaller set sizes than for larger set sizes, with the latter effect being clearest when objects were lifted in a random order. In general, however, the observed force scaling was weak. In Experiment 2, working memory was examined in two ways: by determining participants' ability to detect a change in the weight of one of 3 to 6 objects lifted twice, and by simultaneously measuring the fingertip forces applied when lifting the objects. The results showed that, even when presented with 6 objects, participants were extremely accurate in explicitly detecting which object changed weight. In addition, force scaling for object weight, which was generally quite weak, was similar across set sizes. Thus, a capacity limit below 6 was not found for either the explicit or the implicit measure.
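A small illustrative computation, assuming per-lift load-force traces are available as arrays, of how force scaling for weight might be summarised across repeated lifts; the peak-force-rate measure and variable names are assumptions for illustration, not the study's exact analysis:

```python
import numpy as np

def peak_load_force_rate(load_force, dt=0.002):
    """Peak rate of change of the vertical load force (N/s) during a lift,
    a common implicit index of how well the lifter predicted the weight."""
    return np.max(np.gradient(load_force, dt))

def scaling_across_repetitions(lifts_by_repetition, dt=0.002):
    """lifts_by_repetition: list of lists of load-force traces, one inner
    list per repetition block. Returns the mean peak force rate per block,
    which should track the object weights more closely as memory improves."""
    return [np.mean([peak_load_force_rate(tr, dt) for tr in block])
            for block in lifts_by_repetition]
```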
Abstract:
Aims: To develop clinical protocols for acquiring PET images, performing CT-PET registration and defining tumour volumes based on the PET image data for radiotherapy of lung cancer patients, and then to test these protocols with respect to accuracy and reproducibility. Method: A phantom-based quality assurance study of the processes associated with using registered CT and PET scans for tumour volume definition was conducted to: (1) investigate image acquisition and manipulation techniques for registering and contouring CT and PET images in a radiotherapy treatment planning system, and (2) determine technology-based errors in the registration and contouring processes. The outcomes of the phantom image based quality assurance study were used to determine clinical protocols. Protocols were developed for (1) acquiring patient PET image data for incorporation into the 3DCRT process, particularly for ensuring that the patient is positioned in their treatment position; (2) CT-PET image registration techniques; and (3) GTV definition using the PET image data. The developed clinical protocols were tested using retrospective clinical trials to assess the levels of inter-user variability which may be attributed to the use of these protocols. A Siemens Somatom Open Sensation 20-slice CT scanner and a Philips Allegro stand-alone PET scanner were used to acquire the images for this research. The Philips Pinnacle3 treatment planning system was used to perform the image registration and contouring of the CT and PET images. Results: Both the attenuation-corrected and transmission images obtained from standard whole-body PET staging clinical scanning protocols were acquired and imported into the treatment planning system for the phantom-based quality assurance study. Protocols for manipulating the PET images in the treatment planning system, particularly for quantifying uptake in volumes of interest and setting window levels for accurate geometric visualisation, were determined. The automatic registration algorithms were found to have sub-voxel accuracy, with transmission scan-based CT-PET registration more accurate than emission scan-based registration of the phantom images. Respiration-induced image artifacts were not found to influence registration accuracy, while inadequate pre-registration overlap of the CT and PET images was found to result in large registration errors. A threshold value based on a percentage of the maximum uptake within a volume of interest was found to accurately contour the different features of the phantom despite the lower spatial resolution of the PET images. Appropriate selection of the threshold value is dependent on the target-to-background ratio and the presence of respiratory motion. The results from the phantom-based study were used to design, implement and test clinical CT-PET fusion protocols. The patient PET image acquisition protocols enabled patients to be successfully identified and positioned in their radiotherapy treatment position during the acquisition of their whole-body PET staging scan. While automatic registration techniques were found to reduce inter-user variation compared to manual techniques, there was no significant difference in the registration outcomes for transmission or emission scan-based registration of the patient images using the protocol. Tumour volumes contoured on registered patient CT-PET images, using the tested threshold values and viewing windows determined from the phantom study, demonstrated less inter-user variation for the primary tumour volume contours than those contoured using only the patient's planning CT scans. Conclusions: The developed clinical protocols allow a patient's whole-body PET staging scan to be incorporated, manipulated and quantified in the treatment planning process to improve the accuracy of gross tumour volume localisation in 3D conformal radiotherapy for lung cancer. Image registration protocols which account for potential software-based errors, combined with adequate user training, are recommended to increase the accuracy and reproducibility of registration outcomes. A semi-automated adaptive threshold contouring technique incorporating a PET windowing protocol accurately defines the geometric edge of a tumour volume using PET image data from a stand-alone PET scanner, including 4D target volumes.
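A minimal sketch, under assumed array names, of the percentage-of-maximum threshold contouring described above applied to a PET uptake volume within a volume of interest; as noted in the abstract, the appropriate threshold fraction depends on the target-to-background ratio and respiratory motion, so the value below is purely illustrative:

```python
import numpy as np

def threshold_contour(pet_volume, voi_mask, fraction=0.4):
    """Return a boolean mask of voxels inside the volume of interest whose
    uptake exceeds a fixed percentage of the maximum uptake in that VOI.

    pet_volume : 3D array of PET uptake values (e.g. SUV)
    voi_mask   : boolean 3D array marking the volume of interest
    fraction   : threshold as a fraction of the VOI maximum (assumed value)
    """
    voi_max = pet_volume[voi_mask].max()
    return voi_mask & (pet_volume >= fraction * voi_max)
```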
Abstract:
On obstacle-cluttered construction sites, understanding the motion characteristics of objects is important for anticipating collisions and preventing accidents. This study investigates algorithms for object identification applications that can be used by heavy equipment operators to effectively monitor a congested local environment. The proposed framework contains algorithms for three-dimensional spatial modeling and image matching that are based on 3D images scanned by a high-frame-rate range sensor. The preliminary results show that an occupancy grid spatial modeling algorithm can successfully build the most pertinent spatial information, and that an image matching algorithm is best able to identify which objects are present in the scanned scene.
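A minimal sketch of the occupancy-grid spatial-modeling idea, assuming the range sensor delivers a point cloud in the equipment's local frame; the grid resolution, extent and function name are illustrative assumptions, not the study's implementation:

```python
import numpy as np

def build_occupancy_grid(points, cell_size=0.2, extent=20.0):
    """Mark grid cells that contain at least one scanned 3D point.

    points    : (N, 3) array of x, y, z coordinates from the range sensor
    cell_size : edge length of a cubic cell in metres (assumed)
    extent    : half-width of the modeled volume in metres (assumed)
    """
    n = int(np.ceil(2 * extent / cell_size))
    grid = np.zeros((n, n, n), dtype=bool)
    idx = np.floor((points + extent) / cell_size).astype(int)
    inside = np.all((idx >= 0) & (idx < n), axis=1)   # discard out-of-range points
    idx = idx[inside]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid
```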
Abstract:
In this paper, we present an unsupervised graph-cut-based object segmentation method using 3D information provided by Structure from Motion (SFM), called GrabCutSFM. Rather than addressing the segmentation problem with a trained model or human intervention, our approach aims to achieve meaningful segmentation autonomously, with direct application to vision-based robotics. Generally, an object (foreground) and the background have discriminative geometric characteristics in 3D space. By exploiting the 3D information from multiple views, our proposed method can segment potential objects correctly and automatically, in contrast to conventional unsupervised segmentation using only 2D visual cues. Experiments with real video data collected from indoor and outdoor environments verify the proposed approach.
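A hedged sketch of the 2D GrabCut step that such a pipeline builds on, using OpenCV; here a foreground rectangle stands in for the seeds that GrabCutSFM would derive from the 3D Structure-from-Motion points, so the seeding strategy below is an assumption, not the paper's method:

```python
import cv2
import numpy as np

def segment_with_grabcut(image, fg_rect, iterations=5):
    """Run GrabCut initialized from a rectangle believed to contain the object.

    image   : BGR image as a uint8 array
    fg_rect : (x, y, w, h) rectangle around the candidate object; in
              GrabCutSFM this prior would come from 3D SFM points instead.
    """
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    bgd_model = np.zeros((1, 65), dtype=np.float64)
    fgd_model = np.zeros((1, 65), dtype=np.float64)
    cv2.grabCut(image, mask, fg_rect, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_RECT)
    # Pixels marked definite or probable foreground form the object mask.
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))
```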
Abstract:
Utilising cameras as a means to survey the surrounding environment is becoming increasingly popular in a number of different research areas and applications. Central to using camera sensors as input to a vision system is the need to be able to manipulate and process the information captured in these images. One such application is the use of cameras to monitor the quality of airport landing lighting at aerodromes, where a camera is placed inside an aircraft and used to record images of the lighting pattern during the landing phase of a flight. The images are processed to determine a performance metric. This requires the development of custom software for the localisation and identification of luminaires within the image data. However, because of the necessity to keep airport operations functioning as efficiently as possible, it is difficult to collect enough image data to develop, test and validate the software. In this paper, we present a technique to model a virtual landing lighting pattern. A mathematical model is postulated which represents the glide path of the aircraft, including random deviations from the expected path. A morphological method has been developed to localise and track the luminaires under different operating conditions. © 2011 IEEE.
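A minimal sketch of a morphological approach to localising bright luminaires in a frame, of the general kind described above; the intensity threshold, structuring-element size and library choice are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np
from scipy import ndimage

def localise_luminaires(gray_frame, threshold=200, min_pixels=4):
    """Return centroids of bright blobs that plausibly correspond to luminaires.

    gray_frame : 2D uint8 array (grayscale video frame)
    threshold  : intensity above which a pixel is considered 'lit' (assumed)
    min_pixels : minimum blob size kept, to suppress noise (assumed)
    """
    bright = gray_frame > threshold
    # Morphological opening removes isolated noisy pixels; closing fills
    # small gaps inside a luminaire's image.
    cleaned = ndimage.binary_opening(bright, structure=np.ones((3, 3)))
    cleaned = ndimage.binary_closing(cleaned, structure=np.ones((3, 3)))
    labels, n = ndimage.label(cleaned)
    sizes = ndimage.sum(cleaned, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_pixels]
    return ndimage.center_of_mass(cleaned, labels, keep)
```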
Abstract:
Empirical studies concerning face recognition suggest that faces may be stored in memory by a few canonical representations. Cortical area V1 contains double-opponent colour blobs, as well as simple, complex and end-stopped cells, which provide input for a multiscale line/edge representation, keypoints for dynamic feature routing, and saliency maps for Focus-of-Attention.
Abstract:
This paper reports on the creation of an interface for 3D virtual environments, computer-aided design applications or computer games. Standard computer interfaces are bound to 2D surfaces, e.g., computer mice, keyboards, touch pads or touch screens. The Smart Object is intended to provide the user with a 3D interface by using sensors that register movement (an inertial measurement unit), touch (a touch screen) and voice (a microphone). The design and development process, as well as the tests and results, are presented in this paper. The Smart Object was developed over one semester by a team of four third-year engineering students from diverse scientific backgrounds and nationalities.
Abstract:
Lighting design is a task that is normally performed manually, with artists adjusting the parameters of several light sources to obtain the desired result. This task is difficult because it is not intuitive. Several systems already exist that allow users to draw directly on objects in order to position or modify light sources. Unfortunately, these systems have several limitations, such as considering only local illumination or requiring a fixed camera. In both cases, this limits the accuracy or the versatility of these systems. Global illumination is important because it adds greatly to the realism of a scene by capturing all the interreflections of light between surfaces. This implies that light sources can influence surfaces that are not directly exposed to them. In this thesis, we focus on a sub-problem of lighting design: the selection and manipulation of light source intensities. We present two systems that let the user paint intended incident lighting onto objects in a 3D scene in order to modify the illumination of the surface. From these brush strokes, the system automatically finds the light sources that should be modified and changes their intensities to achieve the desired result. The novelty lies in the handling of global illumination, transparent surfaces and participating media, and in the fact that the camera is not fixed. We also present different strategies for selecting and modifying the light sources. The first system uses an environment map as an intermediate representation of the environment around the objects. The second system stores the environment information at each vertex of each object.
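A hedged sketch of the intensity-adjustment sub-problem: given the linear contribution of each light source to the painted surface points, solve for non-negative source intensities that best reproduce the painted target. The transport matrix, names and solver choice are illustrative assumptions, not the systems described in the thesis:

```python
import numpy as np
from scipy.optimize import nnls

def solve_light_intensities(transport, painted_target):
    """Find non-negative light-source intensities matching painted goals.

    transport      : (P, L) matrix; entry (p, l) is the radiance that light l
                     contributes to painted point p at unit intensity,
                     including indirect (global-illumination) transport.
    painted_target : (P,) desired radiance painted by the user at each point.
    """
    intensities, residual = nnls(transport, painted_target)
    return intensities
```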
Abstract:
We developed an object-oriented, cross-platform program to perform three-dimensional (3D) analysis of hip joint morphology using two-dimensional (2D) anteroposterior (AP) pelvic radiographs. Landmarks extracted from 2D AP pelvic radiographs, and optionally an additional lateral pelvic X-ray, were combined with a cone-beam projection model to reconstruct the 3D hip joint. Since individual pelvic orientation can vary considerably, a method for standardizing pelvic orientation was implemented to determine the absolute tilt/rotation. The evaluation of anatomic morphological differences was achieved by reconstructing the projected acetabular rim and the measured hip parameters as if they had been obtained in a standardized neutral orientation. The program has been successfully used to interactively objectify acetabular version in hips with femoro-acetabular impingement or developmental dysplasia. Hip(2)Norm is written in the object-oriented programming language C++ using the cross-platform software Qt (TrollTech, Oslo, Norway) for the graphical user interface (GUI) and is transportable to any platform.
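A minimal sketch of the cone-beam projection geometry such a reconstruction relies on: a 3D landmark is projected from an X-ray point source onto the detector plane. The source position and film-plane coordinate below are illustrative parameters, not Hip(2)Norm's calibration:

```python
import numpy as np

def cone_beam_project(point_3d, source=(0.0, 0.0, 1000.0), film_z=0.0):
    """Project a 3D landmark onto the radiograph plane z = film_z using a
    point X-ray source, the projection model behind AP pelvic radiographs.

    point_3d : (x, y, z) landmark position between source and film
    source   : assumed X-ray focal spot position
    film_z   : z-coordinate of the detector/film plane
    """
    p = np.asarray(point_3d, dtype=float)
    s = np.asarray(source, dtype=float)
    t = (film_z - s[2]) / (p[2] - s[2])      # ray parameter where it hits the film
    return s[:2] + t * (p[:2] - s[:2])        # 2D position on the radiograph
```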