965 results for "object modeling from images"


Relevance: 50.00%

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance: 50.00%

Abstract:

Sex differences in cognition have been investigated extensively. The most consistent sex differences favoring females are observed in object location memory, involving the left hemisphere, whereas the most consistent sex differences favoring males are observed in tasks that require mental rotation, involving the right hemisphere. Here we used a task involving both abilities to examine the impact of mental rotation on object location memory, combining behavioral and event-related potential (ERP) electroencephalography (EEG) measures.

A computer screen displayed a square frame of 4 pairs of images (a teddy bear, a shoe, an umbrella and a lamp) randomly arranged around a central fixation cross. After a 10-second memorization interval, the images disappeared and were replaced by a test frame containing no images but a random pair of locations marked in black. This test frame was randomly displayed either in the original orientation (0° rotation) or rotated by 90° clockwise (CW) or 90° counterclockwise (CCW). Preceding the test frame, an arrow indicating the presence or absence of frame rotation was displayed on the screen. The task of the participants (15 females and 15 males) was to determine whether the two marked locations corresponded to a pair of identical images. Each response was followed by feedback.

Findings showed no significant sex differences in performance in the original orientation. Relative to this condition, rotation of the frame produced an equal decrease in male and female performance, and this decrease was significantly larger when the frame was rotated in the CCW direction. We further assessed the ERPs time-locked to the onset of the arrow indicating the direction of rotation, during four time windows representing the major components C1, P1, N1 and N2. Although no sex differences were observed in performance, brain activity differed according to sex. Enhanced amplitudes were found for CCW compared to CW rotation over right posterior areas for the P1, N1 and N2 components in both men and women. Major sex-related topographical differences were measured in the CW rotation condition as a marked lateralization of amplitude: left-hemisphere amplitude larger than right-hemisphere amplitude during the P1 time range for men, a pattern that extended from P1 to N1 for women. Early distinctions between CCW and CW waveform amplitudes, interacting with sex, were expressed over anterior electrode sites during the C1 time range (0-50 ms post-stimulus).

In conclusion, (i) women did not outperform men in object location memory in this study (no-rotation condition); (ii) mental rotation, in particular the direction of rotation, influences performance on object location memory; (iii) CCW rotation is associated with activity in the right parietal region whereas CW rotation involves the left parietal region; (iv) this last effect is less pronounced in males, which could explain why greater involvement of right parietal areas in men and of bilateral posterior areas in women is generally reported in mental rotation tasks; and (v) the early distinctions between the two directions of rotation over anterior sites could be related to sex differences in their respective involvement of control mechanisms.
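
The ERP measures above come down to averaging waveform amplitude within fixed post-stimulus time windows (only the C1 window, 0-50 ms, is specified in the abstract), separately per condition and electrode grouping. The NumPy sketch below illustrates that windowed-amplitude computation on synthetic epoched data; the sampling rate, the P1/N1/N2 window boundaries, and the channel grouping are assumptions for illustration, not the authors' parameters.

```python
import numpy as np

# Synthetic epoched EEG: (trials, channels, samples) at an assumed 1000 Hz,
# with each epoch running from arrow onset (0 ms) to 500 ms.
rng = np.random.default_rng(0)
fs = 1000
epochs = rng.normal(0.0, 5.0, size=(60, 32, 500))   # amplitudes in microvolts

# Component time windows (ms). Only C1 (0-50 ms) is stated in the abstract;
# the P1/N1/N2 boundaries below are illustrative placeholders.
windows = {"C1": (0, 50), "P1": (80, 130), "N1": (140, 200), "N2": (200, 300)}

def mean_window_amplitude(epochs, fs, start_ms, end_ms):
    """Mean amplitude per channel inside a post-stimulus time window."""
    i0 = int(start_ms * fs / 1000)
    i1 = int(end_ms * fs / 1000)
    return epochs[:, :, i0:i1].mean(axis=(0, 2))     # average over trials and samples

for name, (t0, t1) in windows.items():
    amp = mean_window_amplitude(epochs, fs, t0, t1)
    # Arbitrary channel split standing in for left vs right posterior sites.
    print(f"{name}: channels 0-15 mean {amp[:16].mean():+.2f} uV, "
          f"channels 16-31 mean {amp[16:].mean():+.2f} uV")
```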

Relevance: 50.00%

Abstract:

Completed under a cotutelle (joint supervision) agreement with Université Bordeaux 1 (France).

Relevance: 50.00%

Abstract:

Cerebral gliomas are the most prevalent primary brain tumors and are classified broadly into low and high grades according to the degree of malignancy. High-grade gliomas are highly malignant, carry a poor prognosis, and patients survive less than eighteen months after diagnosis. Low-grade gliomas are slow growing, less malignant, and respond better to therapy. To date, histological grading is used as the standard technique for diagnosis, treatment planning and survival prediction. The main objective of this thesis is to propose novel methods for automatic extraction of low- and high-grade glioma and other brain tissues, grade-detection techniques for glioma using conventional magnetic resonance imaging (MRI) modalities, and 3D modelling of glioma from segmented tumor slices in order to assess tumor growth rate. Two new methods are developed for extracting tumor regions, of which the second, named the Adaptive Gray-level Algebraic set Segmentation Algorithm (AGASA), can also extract white matter and grey matter from T1 FLAIR and T2-weighted images. The methods were validated against manually delineated ground-truth images and showed promising results. The developed methods were compared with the widely used fuzzy c-means clustering technique, and the robustness of the algorithm with respect to noise was checked for different noise levels. Image texture can provide significant information on the (ab)normality of tissue, and this thesis extends this idea to tumor texture grading and detection. Based on thresholds of discriminant first-order and gray-level co-occurrence matrix based second-order statistical features, three feature sets were formulated and a decision system was developed for grade detection of glioma from the conventional T2-weighted MRI modality. Quantitative performance analysis using the ROC curve showed 99.03% accuracy for distinguishing between advanced (aggressive) and early-stage (non-aggressive) malignant glioma. The developed brain texture analysis techniques can improve the physician's ability to detect and analyse pathologies, leading to more reliable diagnosis and treatment of disease. The segmented tumors were also used for volumetric modelling, which can provide an idea of the tumor growth rate and can be used for assessing response to therapy and patient prognosis.
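
The grade-detection step relies on first-order and gray-level co-occurrence matrix (GLCM) based second-order statistics computed from T2-weighted images. Below is a minimal NumPy sketch of GLCM construction and a few common second-order features (contrast, energy, homogeneity); the quantization level, pixel offset and feature choice are illustrative assumptions, not the thesis' exact feature sets or thresholds.

```python
import numpy as np

def glcm(img, levels=16, dx=1, dy=0):
    """Gray-level co-occurrence matrix for a single pixel offset (dx, dy)."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize
    P = np.zeros((levels, levels), dtype=float)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1
    return P / P.sum()                        # normalize to joint probabilities

def glcm_features(P):
    """Second-order statistics commonly used for texture grading."""
    i, j = np.indices(P.shape)
    return {
        "contrast": np.sum(P * (i - j) ** 2),
        "energy": np.sum(P ** 2),
        "homogeneity": np.sum(P / (1.0 + np.abs(i - j))),
    }

# Toy patch standing in for a tumor region of interest in a T2-weighted slice.
rng = np.random.default_rng(1)
patch = rng.integers(0, 256, size=(64, 64))
print(glcm_features(glcm(patch)))
```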

Relevance: 50.00%

Abstract:

This work presents an efficient method for volume rendering of glioma tumors from segmented 2D MRI datasets with user-interactive control, replacing the manual segmentation required by state-of-the-art methods. The most common primary brain tumors are gliomas, which evolve from the cerebral supportive cells. For clinical follow-up, evaluation of the pre-operative tumor volume is essential. Tumor portions were automatically segmented from 2D MR images using morphological filtering techniques. These segmented tumor slices were propagated and modeled with the software package. The 3D modeled tumor consists of the gray-level values of the original image with the exact tumor boundary. Axial slices of FLAIR and T2-weighted images were used for extracting tumors. Volumetric assessment of tumor volume with manual segmentation of its outlines is a time-consuming process and is prone to error; these drawbacks are overcome by the proposed method. We verified the performance of the method on several sets of MRI scans. For verification purposes, the 3D modeling was also performed from the segmented 2D slices with the help of a medical software package called 3D DOCTOR. The results were validated against ground-truth models provided by the radiologist.
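
The pipeline described is essentially per-slice morphological segmentation followed by stacking the masks into a 3D volume whose voxel count gives the tumor volume. The sketch below, built on scipy.ndimage, shows that general flow under assumed parameters (threshold, structuring element, voxel spacing); it is not the paper's exact filter chain or rendering step.

```python
import numpy as np
from scipy import ndimage

def segment_slice(slice_2d, thresh):
    """Rough tumor mask for one axial slice: thresholding plus morphological
    clean-up, keeping the largest connected component (a sketch only)."""
    mask = slice_2d > thresh
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_fill_holes(mask)
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, np.arange(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

# Stack per-slice masks into a 3D volume and estimate the tumor volume.
rng = np.random.default_rng(2)
volume = rng.random((24, 128, 128))              # toy stand-in for a FLAIR volume
masks = np.stack([segment_slice(s, 0.95) for s in volume])
voxel_mm3 = 1.0 * 1.0 * 5.0                      # assumed 1 mm in-plane, 5 mm slices
print("estimated tumor volume:", masks.sum() * voxel_mm3, "mm^3")
```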

Relevance: 50.00%

Abstract:

This paper describes a general, trainable architecture for object detection that has previously been applied to face and people detection, with a new application to car detection in static images. Our technique is a learning-based approach that uses a set of labeled training data from which an implicit model of an object class -- here, cars -- is learned. Instead of pixel representations, which may be noisy and therefore may not provide a compact representation for learning, our training images are transformed from pixel space to that of Haar wavelets, which respond to local, oriented, multiscale intensity differences. These feature vectors are then used to train a support vector machine classifier. The detection of cars in images is an important step in applications such as traffic monitoring, driver assistance systems, and surveillance, among others. We show several examples of car detection on out-of-sample images and present an ROC curve that highlights the performance of our system.
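
A compact sketch of the pipeline described above follows: a 2D Haar wavelet decomposition yields local, oriented, multiscale intensity differences, which are concatenated into feature vectors and used to train a support vector machine. The window size, number of decomposition levels, kernel choice, and the random stand-in training data are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

def haar_level(img):
    """One level of the 2D Haar transform: approximation plus horizontal,
    vertical and diagonal detail (oriented intensity differences)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    approx = (a + b + c + d) / 4
    dh, dv, dd = (a - b + c - d) / 4, (a + b - c - d) / 4, (a - b - c + d) / 4
    return approx, dh, dv, dd

def haar_features(img, levels=3):
    """Concatenate detail coefficients across scales into one feature vector."""
    feats, approx = [], img.astype(float)
    for _ in range(levels):
        approx, dh, dv, dd = haar_level(approx)
        feats += [dh.ravel(), dv.ravel(), dd.ravel()]
    return np.concatenate(feats)

# Toy training set: 64x64 'car' vs 'non-car' windows (random stand-ins).
rng = np.random.default_rng(3)
X = np.stack([haar_features(rng.random((64, 64))) for _ in range(40)])
y = np.array([1] * 20 + [0] * 20)
clf = SVC(kernel="poly", degree=2).fit(X, y)     # quadratic kernel (illustrative choice)
print("training accuracy:", clf.score(X, y))
```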

Relevance: 50.00%

Abstract:

Most psychophysical studies of object recognition have focused on the recognition and representation of individual objects on which subjects had previously been explicitly trained. Correspondingly, modeling studies have often employed a 'grandmother'-type representation in which the objects to be recognized were represented by individual units. However, objects in the natural world are commonly members of a class containing a number of visually similar objects, such as faces, for which physiological studies have provided support for a representation based on a sparse population code, which permits generalization from the learned exemplars to novel objects of that class. In this paper, we present results from psychophysical and modeling studies intended to investigate object recognition in natural ('continuous') object classes. In two experiments, subjects were trained to perform subordinate-level discrimination in a continuous object class - images of computer-rendered cars - created using a 3D morphing system. By comparing the recognition performance of trained and untrained subjects we could estimate the effects of viewpoint-specific training and infer properties of the object-class-specific representation learned as a result of training. We then compared the experimental findings to simulations, building on our recently presented HMAX model of object recognition in cortex, to investigate the computational properties of a population-based object class representation as outlined above. We find experimental evidence, supported by modeling results, that training builds a viewpoint- and class-specific representation that supplements a pre-existing representation with lower shape discriminability but possibly greater viewpoint invariance.
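
The class representation discussed here is a sparse population code: units tuned to learned exemplars respond in a graded way to novel members of the class, supporting generalization. The toy sketch below illustrates that idea with Gaussian (RBF) units over an abstract morph space; it is not the HMAX model itself, and the dimensionality, tuning width and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Shape-space stand-in: each car is a point in a low-dimensional 'morph space'.
prototypes = rng.normal(size=(10, 5))            # learned exemplar cars
novel = rng.normal(size=(3, 5))                  # unseen morphs from the same class

def population_response(x, tuned_units, sigma=1.0):
    """Sparse population code: Gaussian (RBF) units tuned to learned exemplars."""
    d2 = ((tuned_units - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

for car in novel:
    r = population_response(car, prototypes)
    print("max unit response %.3f, units above 0.1: %d" % (r.max(), (r > 0.1).sum()))
```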

Relevance: 50.00%

Abstract:

This paper describes a new method for reconstructing a 3D surface using a small number, e.g. 10, of 2D photographic images. The images are taken at different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed object surface is represented by a set of triangular facets. We empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points automatically cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not undersampled or underrepresented, because surfaces or contours should be sampled or represented more densely where their curvature is high. The more complex the contour's shape, the greater the number of points required, but a correspondingly greater number of points is automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or curvature of the surface, regardless of the size of the surface or the size of the object.
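
The central assumption is that the viewing directions are (approximately) uniformly distributed over the object's viewing sphere. The sketch below generates such a set of directions with a Fibonacci spiral, a common construction chosen here for illustration rather than the authors' own sampling scheme.

```python
import numpy as np

def uniform_viewing_directions(n=10):
    """Approximately uniform unit vectors on the sphere (Fibonacci spiral)."""
    i = np.arange(n)
    z = 1 - 2 * (i + 0.5) / n                    # evenly spaced heights
    phi = i * np.pi * (3 - np.sqrt(5))           # golden-angle azimuth increments
    r = np.sqrt(1 - z ** 2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

dirs = uniform_viewing_directions(10)
# Sanity checks: unit length, and directions covering both hemispheres.
print(np.allclose(np.linalg.norm(dirs, axis=1), 1.0), dirs[:, 2].min(), dirs[:, 2].max())
```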

Relevance: 50.00%

Abstract:

This paper describes a new method for reconstructing 3D surface points and a wireframe on the surface of a freeform object using a small number, e.g. 10, of 2D photographic images. The images are taken at different viewing directions by a perspective camera with full prior knowledge of the camera configurations. The reconstructed surface points are frontier points and the wireframe is a network of contour generators. Both are reconstructed by pairing apparent contours in the 2D images. Unlike previous works, we empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points automatically cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not under-sampled or under-represented, because surfaces or contours should be sampled or represented more densely where their curvature is high. The more complex the contour's shape, the greater the number of points required, but a correspondingly greater number of points is automatically generated by the proposed method. Given that the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or curvature of the surface, regardless of the size of the surface or the size of the object. The unique pattern of the reconstructed points and contours may be used in 3D object recognition and measurement without computationally intensive full surface reconstruction. The results are obtained from both computer-generated and real objects. (C) 2007 Elsevier B.V. All rights reserved.
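
Frontier points are obtained by pairing apparent contours across views: at a frontier point the epipolar line is tangent to the apparent contour, i.e. the contour tangent is parallel to the epipolar line direction. The sketch below tests that tangency condition for a single contour point given a fundamental matrix; the matrix, point, tangent and tolerance are toy values, not the paper's data.

```python
import numpy as np

def epipolar_line(F, x):
    """Epipolar line l = F x in the second image for a point x (made homogeneous)."""
    return F @ np.append(x, 1.0)

def is_frontier_candidate(F, x, tangent, tol_deg=2.0):
    """A contour point is a frontier-point candidate when its image tangent is
    parallel to the epipolar line direction (epipolar tangency)."""
    a, b, _ = epipolar_line(F, x)
    line_dir = np.array([-b, a])                 # direction of the line ax + by + c = 0
    cosang = abs(np.dot(tangent, line_dir)) / (
        np.linalg.norm(tangent) * np.linalg.norm(line_dir))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) < tol_deg

# Toy fundamental matrix and one contour point with its tangent (illustrative only).
F = np.array([[0.0, -1e-4, 0.02], [1e-4, 0.0, -0.03], [-0.02, 0.03, 1.0]])
print(is_frontier_candidate(F, np.array([120.0, 80.0]), np.array([1.0, 0.2])))
```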

Relevance: 50.00%

Abstract:

This paper describes a method for reconstructing 3D frontier points, contour generators and surfaces of anatomical objects or smooth surfaces from a small number, e.g. 10, of conventional 2D X-ray images. The X-ray images are taken at different viewing directions with full prior knowledge of the X-ray source and sensor configurations. Unlike previous works, we empirically demonstrate that if the viewing directions are uniformly distributed around the object's viewing sphere, then the reconstructed 3D points automatically cluster closely on highly curved parts of the surface and are widely spread on smooth or flat parts. The advantage of this property is that the reconstructed points along a surface or a contour generator are not under-sampled or under-represented, because surfaces or contours should be sampled or represented more densely where their curvature is high. The more complex the contour's shape, the greater the number of points required, but a correspondingly greater number of points is automatically generated by the proposed method. Given that the number of viewing directions is fixed and the viewing directions are uniformly distributed, the number and distribution of the reconstructed points depend on the shape or curvature of the surface, regardless of the size of the surface or the size of the object. The technique may be used not only in medicine but also in industrial applications.
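
With the X-ray source and sensor configurations known, each 2D contour point constrains its 3D counterpart to lie on the ray from the source through the corresponding detector pixel. The sketch below backprojects one detector pixel to such a ray under an assumed cone-beam geometry; the source position, detector origin and axes are invented for illustration.

```python
import numpy as np

def backproject_ray(source, detector_origin, u_axis, v_axis, pixel_uv):
    """Ray from the X-ray source through a detector pixel: origin plus unit direction."""
    p3d = detector_origin + pixel_uv[0] * u_axis + pixel_uv[1] * v_axis
    d = p3d - source
    return source, d / np.linalg.norm(d)

# Assumed cone-beam geometry (mm): source behind the object, flat detector in front.
source = np.array([0.0, -600.0, 0.0])
det_origin = np.array([-100.0, 400.0, -100.0])   # detector corner
u_axis = np.array([1.0, 0.0, 0.0])               # 1 mm pixel step along detector rows
v_axis = np.array([0.0, 0.0, 1.0])               # 1 mm pixel step along detector columns

origin, direction = backproject_ray(source, det_origin, u_axis, v_axis, np.array([120.0, 95.0]))
print(origin, direction)   # candidate 3D contour points lie along this ray
```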

Relevance: 50.00%

Abstract:

Airborne Light Detection And Ranging (LIDAR) provides accurate height information for objects on the Earth's surface, which has made LIDAR increasingly popular in terrain and land surveying. In particular, LIDAR data offer vital and significant features for land-cover classification, an important task in many application domains. In this paper, an unsupervised approach based on an improved fuzzy Markov random field (FMRF) model is developed, in which the LIDAR data, co-registered images acquired by optical sensors (aerial color and near-infrared images), and other derived features are fused effectively to improve the ability of the LIDAR system to perform accurate land-cover classification. In the proposed FMRF model-based approach, spatial contextual information is incorporated by modeling the image as a Markov random field (MRF), and fuzzy logic is introduced simultaneously to reduce the errors caused by hard classification. Moreover, a Lagrange multiplier (LM) algorithm is employed to calculate a maximum a posteriori (MAP) estimate for the classification. The experimental results show that fusing the height data and optical images is particularly well suited to land-cover classification. The proposed approach works very well for classification from airborne LIDAR data fused with co-registered optical images, and the average accuracy is improved to 88.9%.
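
The classification idea is to combine per-pixel feature evidence (LIDAR height fused with optical bands) with fuzzy memberships smoothed by a neighbourhood, MRF-style spatial prior. The sketch below shows that general idea with a simple iterative membership update; it is not the paper's FMRF energy or its Lagrange-multiplier MAP estimation, and the class centers and data are toy values.

```python
import numpy as np

def fuzzy_spatial_classify(features, centers, m=2.0, iters=10):
    """Fuzzy memberships from feature distances, iteratively combined with a
    4-neighbourhood spatial prior (a sketch of the general idea only)."""
    d2 = ((features[..., None, :] - centers) ** 2).sum(-1) + 1e-9   # (H, W, K)
    u_feat = d2 ** (-1.0 / (m - 1))
    u_feat /= u_feat.sum(-1, keepdims=True)
    u = u_feat.copy()
    for _ in range(iters):
        nb = np.zeros_like(u)                                       # neighbour memberships
        nb[1:] += u[:-1]; nb[:-1] += u[1:]
        nb[:, 1:] += u[:, :-1]; nb[:, :-1] += u[:, 1:]
        u = u_feat * (nb / 4.0)               # feature evidence x spatial prior
        u /= u.sum(-1, keepdims=True) + 1e-12
    return u.argmax(-1)

# Toy fused per-pixel features: [LIDAR height, red band, near-infrared band].
rng = np.random.default_rng(5)
feat = rng.random((60, 60, 3))
centers = np.array([[0.1, 0.2, 0.2], [0.5, 0.6, 0.4], [0.9, 0.3, 0.8]])
labels = fuzzy_spatial_classify(feat, centers)
print(labels.shape, np.bincount(labels.ravel()))
```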

Relevance: 50.00%

Abstract:

The extended flight of the Airborne Ionospheric Observatory during the Geospace Environment Modeling (GEM) Pilot program on January 16, 1990, allowed continuous all-sky monitoring of the two-dimensional ionospheric footprint of the northward interplanetary magnetic field (IMF) cusp at several wavelengths. Especially important in determining the locus of magnetosheath electron precipitation was the 630.0-nm red line emission. The most striking morphological change in the images was the transient appearance of zonally elongated regions of enhanced 630.0-nm emission, which resembled “rays” emanating from the centroid of the precipitation. The appearance of these rays was strongly correlated with the Y component of the IMF: when the magnitude of By was large compared to Bz, the rays appeared; otherwise, the distribution was relatively unstructured. Late in the flight the field of view of the imager included the field of view of flow measurements from the European Incoherent Scatter (EISCAT) radar. The rays visible in 630.0-nm emission aligned exactly with the position of strong flow jets observed by EISCAT. We attribute this correspondence to the requirement of quasi-neutrality; namely, the soft electrons have their largest precipitating fluxes where the bulk of the ions precipitate. The ions, in regions of strong convective flow, are spread out farther along the flow path than in regions of weaker flow. The occurrence and direction of these flow bursts are controlled by the IMF in a manner consistent with newly opened flux tubes; i.e., when |By| > |Bz|, tension in the reconnected field lines produces east-west flow regions downstream of the ionospheric projection of the X line. We interpret the optical rays (flow bursts), which typically last between 5 and 15 min, as evidence of periods of enhanced dayside (or lobe) reconnection when |By| > |Bz|. The length of the reconnection pulse is difficult to determine, however, since strong zonal flows would be expected to persist until the tension force in the field line has decayed, even if the duration of the enhanced reconnection was relatively short.

Relevance: 50.00%

Abstract:

A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy needed from the DTM. For modeling floods in urban areas, a high-resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In more remote areas, it is possible to model flooding on a larger scale using a lower-resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high- and low-resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful if they are intersected with the DTM, since water level observations (WLOs) at the flood boundary can then be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high-resolution spatial data, and involves the assimilation of WLOs from a real sequence of high-resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.
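
One step described above is intersecting a SAR-derived flood extent with the DTM to obtain water level observations (WLOs) along the flood boundary. The sketch below samples DTM heights at the boundary pixels of a binary flood mask; real processing would additionally filter unreliable shoreline segments, and the terrain, mask and 1 m grid spacing here are invented for illustration.

```python
import numpy as np

def waterline_observations(flood_mask, dtm):
    """WLOs as DTM heights sampled at the boundary of a flood extent (a sketch)."""
    interior = flood_mask.copy()
    interior[1:-1, 1:-1] = (flood_mask[1:-1, 1:-1] & flood_mask[:-2, 1:-1] &
                            flood_mask[2:, 1:-1] & flood_mask[1:-1, :-2] &
                            flood_mask[1:-1, 2:])
    boundary = flood_mask & ~interior            # flooded pixels with a dry neighbour
    rows, cols = np.nonzero(boundary)
    return np.column_stack([rows, cols, dtm[rows, cols]])

# Toy 1 m DTM and a flood covering everything below 2.0 m.
y, x = np.mgrid[0:100, 0:100]
dtm = 0.05 * x + 0.2 * np.sin(y / 10.0)          # gently sloping terrain (m)
flood = dtm < 2.0
wlos = waterline_observations(flood, dtm)
print(len(wlos), "WLOs, mean level %.2f m" % wlos[:, 2].mean())
```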

Relevance: 50.00%

Abstract:

The issue of how children learn the meanings of words is fundamental to developmental psychology. Recent attempts to develop or evolve efficient communication protocols among interacting robots or virtual agents have also brought that issue to a central place in more applied research fields, such as computational linguistics and neural networks. An attractive approach to learning an object-word mapping is so-called cross-situational learning. This learning scenario is based on the intuitive notion that a learner can determine the meaning of a word by finding something in common across all observed uses of that word. Here we show how the deterministic Neural Modeling Fields (NMF) categorization mechanism can be used by the learner as an efficient algorithm to infer the correct object-word mapping. To achieve this we first reduce the original online learning problem to a batch learning problem in which the inputs to the NMF mechanism are all possible object-word associations that could be inferred from the cross-situational learning scenario. Since many of those associations are incorrect, they are treated as clutter or noise and discarded automatically by a clutter-detector model included in our NMF implementation. With these two key ingredients - batch learning and clutter detection - the NMF mechanism was able to infer the correct object-word mapping perfectly. (C) 2009 Elsevier Ltd. All rights reserved.
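
The cross-situational principle is that a word's referent is whatever object is present across all observed uses of that word, with the remaining candidate associations acting as clutter. The sketch below shows a batch version of that rule by simple co-occurrence counting; it illustrates the learning scenario only, is not the Neural Modeling Fields mechanism itself, and the episodes are toy data.

```python
from collections import defaultdict

# Each learning episode pairs the objects in view with the words heard (toy data).
episodes = [
    ({"ball", "dog"}, {"ball", "dog"}),
    ({"ball", "cup"}, {"ball", "cup"}),
    ({"dog", "cup"}, {"dog", "cup"}),
]

# Batch step: enumerate every object-word association observable across episodes.
cooc = defaultdict(int)
word_uses = defaultdict(int)
for objects, words in episodes:
    for w in words:
        word_uses[w] += 1
        for o in objects:
            cooc[(o, w)] += 1

# Cross-situational rule: a word maps to the object present in *all* of its uses;
# associations that fail this test play the role of clutter and are discarded.
mapping = {w: o for (o, w), c in cooc.items() if c == word_uses[w]}
print(mapping)   # each word ends up paired with its own object
```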