990 results for Virtual Reconstruction
Abstract:
National Key Technology R&D Program of China [2008BAK50B05]; Chinese Academy of Sciences [KZCX-YW-Q06, KZCX2-YW-Q03-06]
Abstract:
Semisupervised dimensionality reduction has attracted much attention because it not only uses labeled and unlabeled data simultaneously but also handles out-of-sample data well. This paper proposes an effective approach to semisupervised dimensionality reduction through label propagation and label regression. Unlike previous efforts, the new approach propagates label information from labeled to unlabeled data with a well-designed random-walk mechanism, in which outliers are effectively detected and the resulting virtual labels of the unlabeled data are encoded in a weighted regression model. These virtual labels are then regressed with a linear model to calculate the projection matrix for dimensionality reduction. In this way, when the manifold or clustering assumption of the data holds, the labels of the labeled data are correctly propagated to the unlabeled data, so the proposed approach uses labeled and unlabeled data more effectively than previous work. Experiments on several databases demonstrate the advantage of the new approach.
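The propagate-then-regress idea above can be sketched as follows (a minimal illustration with hypothetical parameter choices, not the authors' exact algorithm): labels are spread over a k-NN graph by an iterated random walk, and the resulting virtual labels are regressed linearly to obtain a projection matrix.

```python
# Sketch only: generic random-walk label propagation plus linear label
# regression; the paper's outlier detection and weighting are omitted.
import numpy as np

def propagate_and_regress(X, y, labeled_idx, k=5, alpha=0.9, iters=50):
    """Random-walk label propagation on a k-NN graph, then linear
    regression of the virtual labels to get a projection matrix."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]     # k nearest neighbours (skip self)
        W[i, nbrs] = np.exp(-d2[i, nbrs])
    W = np.maximum(W, W.T)                    # symmetrise the graph
    P = W / W.sum(1, keepdims=True)           # row-stochastic transition matrix
    classes = np.unique(y[labeled_idx])
    F0 = np.zeros((n, len(classes)))          # one-hot seeds for labeled points
    for i in labeled_idx:
        F0[i, np.searchsorted(classes, y[i])] = 1.0
    F = F0.copy()
    for _ in range(iters):                    # random-walk propagation,
        F = alpha * P @ F + (1 - alpha) * F0  # softly clamped to the seeds
    A, *_ = np.linalg.lstsq(X, F, rcond=None)  # virtual-label regression
    return X @ A, F                           # embedding, virtual labels
```

With well-separated clusters and a few seeds per cluster, the virtual labels agree with the true cluster membership, which is the situation where the manifold/clustering assumption mentioned above holds.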
Abstract:
This paper addresses the problem of incomplete data in circular cone-beam computed tomography, a problem frequently encountered in medical imaging and in some industrial imaging systems. It is crucial, for example, when the high-density region of an object can be penetrated by X-rays only over a limited angular range. Because the projection data are then available only in that angular range, this incomplete-data problem reduces to the limited-angle problem, an ill-posed inverse problem. This paper reports a modified total variation minimisation method to mitigate the data insufficiency in tomographic imaging. The proposed method is shown to be robust and efficient in the reconstruction task by proving the convergence of the alternating minimisation scheme, and the results demonstrate that the new reconstruction method performs reasonably well. (C) 2010 Elsevier B.V. All rights reserved.
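The alternating scheme can be sketched roughly as follows (a hedged toy version, not the authors' algorithm: a generic measurement matrix stands in for the cone-beam projector, and the TV term is smoothed so it is differentiable):

```python
# Toy alternating minimisation: gradient steps on the data-fidelity term
# ||Ax - b||^2 alternated with descent steps on a smoothed TV term.
import numpy as np

def tv_grad(img, eps=1e-8):
    """Gradient of a smoothed isotropic total-variation term."""
    gx = np.diff(img, axis=1, append=img[:, -1:])   # forward differences
    gy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    div_x = np.diff(gx / mag, axis=1, prepend=0.0)  # backward divergence
    div_y = np.diff(gy / mag, axis=0, prepend=0.0)
    return -(div_x + div_y)

def reconstruct(A, b, shape, iters=100, step=1e-2, tv_step=1e-4, tv_iters=5):
    """Alternate a gradient step on ||Ax - b||^2 with TV descent steps."""
    x = np.zeros(int(np.prod(shape)))
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - b))          # data-fidelity step
        img = x.reshape(shape)
        for _ in range(tv_iters):                   # TV minimisation steps
            img = img - tv_step * tv_grad(img)
        x = img.ravel()
    return x.reshape(shape)
```

Even with an underdetermined measurement matrix (the limited-angle situation), the alternation drives the data residual down while the TV steps discourage spurious oscillations.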
Abstract:
The structural evolution of a single-layer latex film during annealing was studied via grazing-incidence ultra-small-angle X-ray scattering (GIUSAXS) and atomic force microscopy (AFM). The latex particles were composed of a low-T_g (-54 degrees C) core (n-butyl acrylate, 30 wt %) and a high-T_g (41 degrees C) shell (t-butyl acrylate, 70 wt %) and had an overall diameter of about 500 nm. GIUSAXS data indicate that the q_y scan at q_z = 0.27 nm^-1 (out-of-plane scan) contains information about both the structure factor and the form factor. GIUSAXS data on latex films annealed at temperatures ranging from room temperature to 140 degrees C indicate that the structure of the latex thin film beneath the surface changed significantly, and the evolution of the out-of-plane scan reveals the surface reconstruction of the film. Furthermore, we followed the time-dependent structural evolution of a latex film annealed at a relatively low temperature (60 degrees C), where restructuring within the film can be tracked that cannot be detected by AFM, which probes only the surface morphology.
Abstract:
C37 unsaturated alkenones were analyzed in a core retrieved from the middle Okinawa Trough. The calculated U37K' displays a trend generally parallel to the oxygen isotopic compositions of two planktonic foraminiferal species, Neogloboquadrina dutertrei and Globigerinoides sacculifer, suggesting that in this region SST has varied in phase with global ice-volume change since the last glacial-interglacial cycle. The U37K'-derived SST ranged from ca. 24.0 to 27.5 degrees C, with the highest value, 27.5 degrees C, occurring in marine isotope stage 5 and the lowest, ~24.0 degrees C, in marine isotope stage 2. This trend is consistent with continental records from the East Asian monsoon domain and marine records from the Equatorial Pacific. The deglacial increase of the U37K'-derived SST is ~2.4 degrees C from the Last Glacial Maximum to the Holocene. (c) 2007 Elsevier B.V. All rights reserved.
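The abstract does not state which calibration was used to convert the alkenone index to SST; as an illustration only, the widely used linear core-top calibration of Muller et al. (1998), U37K' = 0.033 * SST + 0.044, inverts as follows:

```python
# Illustrative calibration (Muller et al., 1998 core-top regression);
# the study above may have used a different calibration.

def uk37_to_sst(uk37):
    """Convert a U37K' index value to SST in degrees C."""
    return (uk37 - 0.044) / 0.033

def sst_to_uk37(sst):
    """Inverse: expected U37K' for a given SST in degrees C."""
    return 0.033 * sst + 0.044
```

Under this calibration, the reported SST range of ~24.0 to 27.5 degrees C corresponds to U37K' values of roughly 0.84 to 0.95.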
Abstract:
Empirical Orthogonal Function (EOF) analysis is used in this study to generate the main eigenvector fields of historical temperature for the China Seas (here referring to Chinese marine territories) and adjacent waters from 1930 to 2002 (510 143 profiles). A temperature profile is reconstructed from several subsurface in situ temperature observations, and the thermocline is estimated with the model. The results show that: 1) For the study area, the first four principal components explain 95% of the overall variance, and the vertical distribution of temperature is most stable when the in situ temperature observations near the surface are used. 2) Model verification against observed CTD data from the East China Sea (ECS), the South China Sea (SCS) and the area around Taiwan Island shows that the reconstructed profiles correlate highly with the observed ones (confidence level > 95%) and, in particular, describe the characteristics of the thermocline well. The average errors between the reconstructed and observed profiles in these three areas are 0.69 degrees C, 0.52 degrees C and 1.18 degrees C, respectively, and the model RMS error is less than or close to the climatological error. The statistical model can therefore be used to estimate the vertical structure of the temperature profile. 3) Comparing the thermocline characteristics between the reconstructed and observed profiles, the results in the ECS show average absolute errors of 1.5 m, 1.4 m and 0.17 degrees C/m, and average relative errors of 24.7%, 8.9% and 22.6%, for the upper thermocline boundary, the lower thermocline boundary and the gradient, respectively. Although the relative errors are appreciable, the absolute errors are small. In the SCS, the average absolute errors are 4.1 m, 27.7 m and 0.007 degrees C/m, and the average relative errors are 16.1%, 16.8% and 9.5%, respectively; all average relative errors are < 20%.
Although the average absolute error of the lower thermocline boundary is considerable, compared with the average depth of that boundary (165 m) the average relative error is small (16.8%). The model can therefore be used to estimate the thermocline well.
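The EOF reconstruction idea can be sketched as follows (synthetic data and hypothetical function names; the operational model of the paper differs in detail): leading vertical EOFs are derived from a historical profile matrix, and their amplitudes are fitted to a handful of subsurface observations by least squares to estimate the full profile.

```python
# Sketch of EOF-based profile reconstruction from sparse observations.
import numpy as np

def fit_eofs(profiles, n_modes=4):
    """profiles: (n_profiles, n_depths). Return the mean profile and the
    leading vertical EOFs (rows of vt from the SVD)."""
    mean = profiles.mean(0)
    _, _, vt = np.linalg.svd(profiles - mean, full_matrices=False)
    return mean, vt[:n_modes]

def reconstruct_profile(mean, eofs, obs_idx, obs_values):
    """Fit EOF amplitudes to a few observed depths, return a full profile."""
    G = eofs[:, obs_idx].T                        # (n_obs, n_modes) design
    amps, *_ = np.linalg.lstsq(G, obs_values - mean[obs_idx], rcond=None)
    return mean + amps @ eofs
```

When the historical variability is dominated by a few vertical modes, a few observations suffice to pin down the amplitudes, which is why the thermocline structure can be recovered from sparse subsurface data.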
Abstract:
Using virtual reality technology to simulate the operating environment and the operations of a lunar robot on the lunar surface is an effective way to improve the safety factor and efficiency of the robot's work. In a virtual lunar environment obtained by 3D reconstruction, if the usual simulation method based purely on a kinematic (or dynamic) model is used to simulate the robot's operations and motion, contact deviations easily arise as the robot interacts with the terrain. Moreover, as the simulation advances in time, these contact deviations accumulate and grow, seriously degrading the accuracy and effectiveness of the simulation tests. To eliminate the wheel-terrain interaction error in lunar robot simulation, a solution based on kinematic optimization is proposed after an analysis of the error sources. Finally, the effectiveness of the proposed method is verified on an actual virtual reality simulation system.
Abstract:
We address the computational role that the construction of a complete surface representation may play in the recovery of 3-D structure from motion. We present a model that combines a feature-based structure-from-motion algorithm with smooth surface interpolation. This model can represent multiple surfaces in a given viewing direction, incorporates surface constraints from object boundaries, and groups image features using their 2-D image motion. Computer simulations relate the model's behavior to perceptual observations. In a companion paper, we discuss further perceptual experiments regarding the role of surface reconstruction in the human recovery of 3-D structure from motion.
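The smooth surface interpolation stage can be illustrated with a simple scattered-data interpolant (a sketch only; the paper does not specify this particular method): sparse feature depths recovered from motion are fitted with Gaussian radial basis functions and evaluated as a smooth surface.

```python
# Illustrative smooth interpolation of sparse feature depths; the
# kernel, bandwidth and regulariser here are hypothetical choices.
import numpy as np

def rbf_surface(xy, depth, sigma=0.3, reg=1e-8):
    """Fit depth(x, y) through scattered points with Gaussian RBFs and
    return a callable that evaluates the smooth surface."""
    d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * sigma ** 2))          # kernel matrix
    w = np.linalg.solve(Phi + reg * np.eye(len(xy)), depth)
    def evaluate(q):                              # q: (m, 2) query points
        d2q = ((q[:, None, :] - xy[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2q / (2 * sigma ** 2)) @ w
    return evaluate
```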
Abstract:
In this note, I propose two extensions to the Java virtual machine (or VM) to allow dynamic languages such as Dylan, Scheme and Smalltalk to be efficiently implemented on the VM. These extensions do not affect the performance of pure Java programs on the machine. The first extension allows for efficient encoding of dynamic data; the second allows for efficient encoding of language-specific computational elements.
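The first extension concerns compact tagged representations of dynamic data. The general idea can be illustrated (in Python rather than JVM bytecode, with a hypothetical one-bit tag scheme) as: the low bit of a machine word distinguishes an immediate small integer from a heap reference, so small integers need no boxing.

```python
# Hypothetical one-bit tagging scheme, for illustration only; real VM
# implementations choose tag layouts to suit their pointer alignment.
INT_TAG = 1

def encode_int(n):
    return (n << 1) | INT_TAG    # immediate small integer, low bit 1

def encode_ref(heap_index):
    return heap_index << 1       # heap reference, low bit 0

def is_int(word):
    return word & 1 == INT_TAG

def decode_int(word):
    return word >> 1             # arithmetic shift restores the value
```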
Abstract:
Recovering a volumetric model of a person, car, or other object of interest from a single snapshot would be useful for many computer graphics applications. 3D model estimation in general is hard and currently requires active sensors, multiple views, or integration over time. For a known object class, however, 3D shape can be successfully inferred from a single snapshot. We present a method for generating a "virtual visual hull": an estimate of the 3D shape of an object from a known class, given a single silhouette observed from an unknown viewpoint. For a given class, a large database of multi-view silhouette examples from calibrated, though possibly varied, camera rigs is collected. To infer the virtual visual hull of a novel single-view input silhouette, we search the database for 3D shapes that are most consistent with the observed contour. The input is matched to component single views of the multi-view training examples. A set of viewpoint-aligned virtual views is generated from the visual hulls corresponding to these examples. The 3D shape estimate for the input is then found by interpolating between the contours of these aligned views. When the underlying shape is ambiguous given a single-view silhouette, we produce multiple visual hull hypotheses; if a sequence of input images is available, a dynamic programming approach is applied to find the maximum-likelihood path through the feasible hypotheses over time. We show results of our algorithm on real and synthetic images of people.
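The dynamic-programming step over time can be sketched as a standard Viterbi recursion (hypothetical likelihood inputs; the paper's exact scoring is not specified here): each frame contributes a per-hypothesis observation score, and transitions between consecutive hypotheses are scored for temporal consistency.

```python
# Viterbi-style maximum-likelihood path over per-frame hull hypotheses.
import numpy as np

def best_hypothesis_path(obs_ll, trans_ll):
    """obs_ll: (T, K) per-frame log-likelihoods; trans_ll: (K, K)
    transition log-likelihoods. Returns the best hypothesis index per frame."""
    T, K = obs_ll.shape
    score = obs_ll[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + trans_ll      # (K_prev, K_next) candidates
        back[t] = cand.argmax(0)              # best predecessor per state
        score = cand.max(0) + obs_ll[t]
    path = [int(score.argmax())]              # backtrack from the best end state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```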
Abstract:
Three-dimensional models which contain both geometry and texture have numerous applications such as urban planning, physical simulation, and virtual environments. A major focus of computer vision (and recently graphics) research is the automatic recovery of three-dimensional models from two-dimensional images. After many years of research this goal is yet to be achieved. Most practical modeling systems require substantial human input and, unlike automatic systems, are not scalable. This thesis presents a novel method for automatically recovering dense surface patches using large sets (thousands) of calibrated images taken from arbitrary positions within the scene. Physical instruments, such as Global Positioning System (GPS) receivers, inertial sensors, and inclinometers, are used to estimate the position and orientation of each image. Essentially, the problem is to find corresponding points in each of the images. Once a correspondence has been established, calculating its three-dimensional position is simply a matter of geometry. Long-baseline images improve the accuracy, while short-baseline images and the large number of images greatly simplify the correspondence problem. The initial stage of the algorithm is completely local and scales linearly with the number of images. Subsequent stages are global in nature, exploit geometric constraints, and scale quadratically with the complexity of the underlying scene. We describe techniques for: 1) detecting and localizing surface patches; 2) refining camera calibration estimates and rejecting false-positive surfels; and 3) grouping surface patches into surfaces and growing the surface along a two-dimensional manifold. We also discuss a method for producing high-quality, textured three-dimensional models from these surfaces.
Some of the most important characteristics of this approach are that it: 1) uses and refines noisy calibration estimates; 2) compensates for large variations in illumination; 3) tolerates significant soft occlusion (e.g. tree branches); and 4) associates, at a fundamental level, an estimated normal (i.e. no frontal-planar assumption) and texture with each surface patch.
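The "matter of geometry" step above is classical two-view triangulation; a minimal sketch using the linear DLT method (hypothetical camera matrices and image points) is:

```python
# Linear (DLT) triangulation of one correspondence from two calibrated views.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """P1, P2: 3x4 projection matrices; x1, x2: corresponding 2-D points.
    Each view contributes two homogeneous linear constraints on X."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]           # dehomogenise
```

With more than two views, the same construction simply stacks two rows per view, which is one reason large image sets improve accuracy.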
Abstract:
The problem of automatic face recognition is to visually identify a person in an input image. This task is performed by matching the input face against the faces of known people in a database of faces. Most existing work in face recognition has limited the scope of the problem, however, by dealing primarily with frontal views, neutral expressions, and fixed lighting conditions. To help generalize existing face recognition systems, we look at the problem of recognizing faces under a range of viewpoints. In particular, we consider two cases of this problem: (i) many example views are available of each person, and (ii) only one view is available per person, perhaps a driver's license or passport photograph. Ideally, we would like to address these two cases using a simple view-based approach, where a person is represented in the database by using a number of views on the viewing sphere. While the view-based approach is consistent with case (i), for case (ii) we need to augment the single real view of each person with synthetic views from other viewpoints, views we call 'virtual views'. Virtual views are generated using prior knowledge of face rotation, knowledge that is 'learned' from images of prototype faces. This prior knowledge is used to effectively rotate in depth the single real view available of each person. In this thesis, I present the view-based face recognizer, techniques for synthesizing virtual views, and experimental results using real and virtual views in the recognizer.