975 results for 3d point cloud


Relevance: 30.00%

Publisher:

Abstract:

Micellar solutions of polystyrene-block-polybutadiene and polystyrene-block-polyisoprene in propane are found to exhibit significantly lower cloud pressures than the corresponding hypothetical nonmicellar solutions. Such a cloud-pressure reduction indicates the extent to which micelle formation enhances the apparent diblock solubility in near-critical and hence compressible propane. Concentration-dependent pressure-temperature points beyond which no micelles can be formed, referred to as the micellization end points, are found to depend on the block type, size, and ratio. The cloud-pressure reduction and the micellization end point measured for styrene-diene diblocks in propane should be characteristic of all amphiphilic diblock copolymer solutions that form micelles in compressible solvents.

Relevance: 30.00%

Publisher:

Abstract:

[EN] In this paper we present a variational technique for the reconstruction of 3D cylindrical surfaces. Roughly speaking, by a cylindrical surface we mean a surface that can be parameterized using the projection on a cylinder in terms of two coordinates, representing respectively the displacement along and the angle around the cylinder axis in a cylindrical coordinate system. The starting point for our method is a set of different views of a cylindrical surface, together with precomputed disparity map estimates between pairs of images. The proposed variational technique is based on an energy minimization that balances, on the one hand, the regularity of the cylindrical function given by the distance of the surface points to the cylinder axis and, on the other hand, the distance between the projections of the surface points on the images and the locations expected from the precomputed disparity maps between pairs of images. One interesting advantage of this approach is that we regularize the 3D surface by means of a two-dimensional minimization problem. We show some experimental results for large stereo sequences.
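
To fix ideas, the balance described above can be written schematically as an energy over the cylindrical parameterization; the notation below is illustrative and not taken from the paper:

    E(r) = \int_{\Omega} \left\| \nabla r(\theta, z) \right\|^{2} \, d\theta \, dz
         + \lambda \sum_{(i,j)} \int_{\Omega}
           \left\| \pi_j\!\left(X_r(\theta, z)\right)
                 - d_{i \to j}\!\left(\pi_i\!\left(X_r(\theta, z)\right)\right) \right\|^{2} d\theta \, dz

Here r(θ, z) is the distance of the surface point to the cylinder axis, X_r(θ, z) the corresponding 3D point, π_i the projection into image i, d_{i→j} the precomputed disparity map from image i to image j, and λ a weight balancing the regularity term against the disparity-driven data term.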

Relevance: 30.00%

Publisher:

Abstract:

[EN] In this paper we present a vascular tree model made of synthetic materials that allows us to obtain images for 3D reconstruction. We used PVC tubes of several diameters and lengths, which let us evaluate the accuracy of our 3D reconstruction. To calibrate the camera we used a corner detector, and we used optical flow techniques to track the points forward and backward through the images. We describe two general techniques to extract a sequence of corresponding points from multiple views of an object; the resulting sequence of points is later used to reconstruct a set of 3D points representing the object surfaces in the scene. We carried out the 3D reconstruction by choosing pairs of images at random and computing the projection error; after several repetitions, the best 3D location for each point was retained.
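
A minimal sketch of the forward-and-backward tracking idea ("going and going back") in Python/OpenCV is given below; the parameters, threshold and function structure are assumptions for illustration, not the authors' implementation:

    import cv2
    import numpy as np

    def track_forward_backward(img_prev, img_next, max_fb_error=1.0):
        # Detect corners in the first image (parameter values are illustrative).
        pts0 = cv2.goodFeaturesToTrack(img_prev, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
        # Forward track: previous image -> next image.
        pts1, st1, _ = cv2.calcOpticalFlowPyrLK(img_prev, img_next, pts0, None)
        # Backward track: next image -> previous image ("going back").
        pts0_back, st2, _ = cv2.calcOpticalFlowPyrLK(img_next, img_prev, pts1, None)
        # Keep only points whose backward track returns close to the start.
        fb_err = np.linalg.norm(pts0 - pts0_back, axis=2).ravel()
        good = (st1.ravel() == 1) & (st2.ravel() == 1) & (fb_err < max_fb_error)
        return pts0[good], pts1[good]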

Relevance: 30.00%

Publisher:

Abstract:

[EN] In this paper we present a vascular tree model made of synthetic materials that allows us to obtain images for 3D reconstruction. To create this model we used PVC tubes of several diameters and lengths, which let us evaluate the accuracy of our 3D reconstruction. The 3D reconstruction is carried out from a series of images of the model, after calibrating the camera; to calibrate it we used a corner detector. We also used optical flow techniques to track the points forward and backward through the images. Once we have the set of images in which a point has been located, we perform the 3D reconstruction by choosing pairs of images at random and computing the projection error; after several repetitions, the best 3D location for the point is retained.
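
The random selection of image pairs followed by a projection-error check could look roughly like the following Python/OpenCV sketch; the projection matrices, trial count and error measure are assumptions, not the authors' code:

    import cv2
    import numpy as np

    def best_triangulation(P, x, n_trials=50, rng=None):
        # P: list of 3x4 camera projection matrices (one per calibrated view).
        # x: list of 2D observations of the same tracked point (one per view).
        rng = np.random.default_rng() if rng is None else rng
        best_X, best_err = None, np.inf
        for _ in range(n_trials):
            # Pick a pair of views at random and triangulate the point.
            i, j = rng.choice(len(P), size=2, replace=False)
            Xh = cv2.triangulatePoints(P[i], P[j],
                                       np.asarray(x[i], float).reshape(2, 1),
                                       np.asarray(x[j], float).reshape(2, 1))
            X = (Xh[:3] / Xh[3]).ravel()
            # Mean reprojection error over all views in which the point was seen.
            err = 0.0
            for Pk, xk in zip(P, x):
                proj = Pk @ np.append(X, 1.0)
                err += np.linalg.norm(proj[:2] / proj[2] - np.asarray(xk, float))
            err /= len(P)
            if err < best_err:
                best_X, best_err = X, err
        return best_X, best_err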

Relevance: 30.00%

Publisher:

Abstract:

Summary: Background/Aim: The description of the functional division of the liver is based on Claude Couinaud's scheme. The boundary between the right and left hemiliver appears to be easy to localize via the position of the middle hepatic vein. According to the prevailing view, this boundary is not crossed by the triad of portal vein, hepatic artery and bile duct. The aim was to investigate whether the position of this vessel-poor zone between the portal branches of adjacent segments deviates from the position of the boundary plane defined by the middle hepatic vein. Methods: In 73 patients, three-phase spiral CT examinations were performed as part of routine preoperative diagnostics. Three-dimensional reconstructions were generated from these data and evaluated. Results: The present study showed that the middle sector boundary occupies different positions depending on which vessel system is used to define it, with a median difference in position of 14.2°. On the ventral liver surface, the boundary plane defined by the middle hepatic vein therefore lies to the right, lateral to the vessel-poor zone between the portal branches. Conclusion: The difference between the boundary planes can be demonstrated in three-dimensional reconstructions and is applicable to the assignment of lesions to segments. These reconstructions facilitate interdisciplinary communication and allow simplified and possibly more precise planning of operations.
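
Purely as an illustration of the kind of angular comparison reported above (a median offset of 14.2° between the two boundary planes), the angle between two planes can be computed from their normal vectors; this sketch is not part of the study:

    import numpy as np

    def plane_angle_deg(n1, n2):
        # n1, n2: normal vectors of the two boundary planes (any scale).
        n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
        c = abs(n1 @ n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))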

Relevance: 30.00%

Publisher:

Abstract:

In this thesis a mathematical model is derived that describes charge and energy transport in semiconductor devices such as transistors, and numerical simulations of these physical processes are performed. To accomplish this, methods of theoretical physics, functional analysis, numerical mathematics and computer programming are applied. After an introduction to the state of the art of semiconductor device simulation and a brief historical review, attention shifts to the construction of the model that serves as the basis for the subsequent derivations. The starting point is a fundamental equation of the theory of dilute gases, from which the model equations are derived and specified by means of a series expansion method; this multi-stage derivation is largely taken from a published paper and is not the focus of the thesis. Next, the mathematical setting is specified and the model assumptions are made precise using methods of functional analysis. Since the equations are coupled, the problem is nonstandard, whereas the theory of scalar elliptic equations is by now well established. Subsequently, the equations are discretized with a special finite-element method; this particular approach is needed to make the numerical results suitable for practical application. Through a series of transformations, the discrete model yields a system of algebraic equations amenable to numerical solution. These equations are solved approximately with purpose-built computer programs based on new, specialized iteration procedures that were developed and thoroughly tested within this research work; because of their importance and novelty, they are explained and demonstrated in detail and compared with a standard method adapted to the present context. A further contribution is the computation of solutions in three-dimensional domains, which are still rare; special attention is paid to the practical applicability of the 3D simulation tools, and the programs are designed to have a justifiable computational cost. Simulation results for several contemporary semiconductor devices are shown and discussed in detail. Finally, an outlook on future development and enhancement of the models and algorithms is given.
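
As a generic illustration of the iterative solution of a discretized nonlinear algebraic system of the kind discussed above (not the specialized iteration procedures developed in the thesis), a damped Newton iteration in Python might look as follows:

    import numpy as np

    def damped_newton(F, J, u0, tol=1e-10, max_iter=50):
        # F: residual of the discretized system, J: its Jacobian, u0: initial guess.
        u = np.asarray(u0, dtype=float)
        for _ in range(max_iter):
            r = F(u)
            if np.linalg.norm(r) < tol:
                break
            du = np.linalg.solve(J(u), -r)
            # Simple damping: halve the step until the residual norm decreases.
            t = 1.0
            while t > 1e-4 and np.linalg.norm(F(u + t * du)) >= np.linalg.norm(r):
                t *= 0.5
            u = u + t * du
        return u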

Relevance: 30.00%

Publisher:

Abstract:

This PhD thesis discusses the impact of Cloud Computing infrastructures on Digital Forensics in their twofold role as a target of investigations and as a helping hand to investigators. The Cloud offers cheap and almost limitless computing power and storage space, which can be leveraged to commit new or old crimes and to host the related traces. Conversely, the Cloud can help forensic examiners find clues better and earlier than traditional analysis applications can, thanks to its dramatically improved evidence-processing capabilities. In both cases, a new arsenal of software tools needs to be made available. The development of this novel weaponry, and its technical and legal implications from the point of view of the repeatability of technical assessments, is discussed throughout the following pages and constitutes the unprecedented contribution of this work.

Relevance: 30.00%

Publisher:

Abstract:

In the present thesis we address the problem of detecting and localizing a small spherical target with characteristic electrical properties inside a cylindrical volume, representing the female breast, with microwave imaging (MWI). One of the main contributions of this project is to extend the existing linear inversion algorithm from planar-slice to volume reconstruction; results obtained under the same conditions and experimental setup are reported for the two approaches. A preliminary comparison and performance analysis of the reconstruction algorithms is performed via numerical simulations in a software-created environment: a single dipole antenna illuminates the virtual breast phantom from different positions and, for each position, the corresponding scattered field value is recorded. The collected data are then exploited to reconstruct the investigation domain, along with the scatterer position, in the form of an image called a pseudospectrum. During this process the tumor is modeled as a dielectric sphere of small radius and, for electromagnetic scattering purposes, is treated as a point-like source. To improve the performance of the reconstruction technique, we repeat the acquisition at a number of frequencies in a given range: the pseudospectra reconstructed from single-frequency data are incoherently combined with the MUltiple SIgnal Classification (MUSIC) method, which returns an overall enhanced image. We exploit this multi-frequency approach to test the performance of the 3D linear inversion reconstruction algorithm while varying the source position inside the phantom and the height of the antenna plane. Analysis results and reconstructed images are reported. Finally, we perform 3D reconstruction from experimental data gathered with the acquisition system in the microwave laboratory at DIFA, University of Bologna, for a recently developed breast-phantom prototype; the obtained pseudospectrum and a performance analysis for the real model are reported.
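
A schematic numpy sketch of the incoherent multi-frequency MUSIC combination mentioned above is shown below; the data model (a multistatic matrix per frequency and a matrix of steering vectors evaluated on the imaging grid) and the normalization are simplifying assumptions, not the thesis setup:

    import numpy as np

    def music_pseudospectrum(K, steering, n_sources=1):
        # K: data matrix at one frequency; steering: (n_antennas, n_grid) matrix of
        # test vectors (e.g. background Green's functions) on the imaging grid.
        U, _, _ = np.linalg.svd(K)
        U_noise = U[:, n_sources:]                 # noise subspace
        proj = U_noise.conj().T @ steering
        return 1.0 / np.sum(np.abs(proj) ** 2, axis=0)

    def multifrequency_pseudospectrum(K_list, steering_list, n_sources=1):
        # Incoherent combination: sum the normalized single-frequency pseudospectra.
        total = 0.0
        for K, A in zip(K_list, steering_list):
            p = music_pseudospectrum(K, A, n_sources)
            total = total + p / p.max()
        return total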

Relevance: 30.00%

Publisher:

Abstract:

This work covers the synthesis of second-generation, ethylene glycol dendrons covalently linked to a surface anchor that contains two, three, or four catechol groups, the molecular assembly in aqueous buffer on titanium oxide surfaces, and the evaluation of the resistance of the monomolecular adlayers against nonspecific protein adsorption in contact with full blood serum. The results were compared to those of a linear poly(ethylene glycol) (PEG) analogue with the same molecular weight. The adsorption kinetics as well as resulting surface coverages were monitored by ex situ spectroscopic ellipsometry (VASE), in situ optical waveguide lightmode spectroscopy (OWLS), and quartz crystal microbalance with dissipation (QCM-D) investigations. The expected compositions of the macromolecular films were verified by X-ray photoelectron spectroscopy (XPS). The results of the adsorption study, performed in a high ionic strength ("cloud-point") buffer at room temperature, demonstrate that the adsorption kinetics increase with increasing number of catechol binding moieties and exceed the values found for the linear PEG analogue. This is attributed to the comparatively smaller and more confined molecular volume of the dendritic macromolecules in solution, the improved presentation of the catechol anchor, and/or their much lower cloud-point in the chosen buffer (close to room temperature). Interestingly, in terms of mechanistic aspects of "nonfouling" surface properties, the dendron films were found to be much stiffer and considerably less hydrated in comparison to the linear PEG brush surface, closer in their physicochemical properties to oligo(ethylene glycol) alkanethiol self-assembled monolayers than to conventional brush surfaces. Despite these differences, both types of polymer architectures at saturation coverage proved to be highly resistant toward protein adsorption. Although associated with higher synthesis costs, dendritic macromolecules are considered to be an attractive alternative to linear polymers for surface (bio)functionalization in view of their spontaneous formation of ultrathin, confluent, and nonfouling monolayers at room temperature and their outstanding ability to present functional ligands (coupled to the termini of the dendritic structure) at high surface densities.

Relevance: 30.00%

Publisher:

Abstract:

Constructing a 3D surface model from sparse-point data is a nontrivial task. Here, we report an accurate and robust approach for reconstructing a surface model of the proximal femur from sparse-point data and a dense-point distribution model (DPDM). The problem is formulated as a three-stage optimal estimation process. The first stage, affine registration, iteratively estimates a scale and a rigid transformation between the mean surface model of the DPDM and the sparse input points. The estimation results of the first stage are used to establish point correspondences for the second stage, statistical instantiation, which stably instantiates a surface model from the DPDM using a statistical approach. This surface model is then fed to the third stage, kernel-based deformation, which further refines the surface model. Outliers are handled by consistently employing the least trimmed squares (LTS) approach with a roughly estimated outlier rate in all three stages. If an optimal value of the outlier rate is preferred, we propose a hypothesis testing procedure to estimate it automatically. We present our validations using four experiments: (1) a leave-one-out experiment, (2) an experiment on evaluating the present approach for handling pathology, (3) an experiment on evaluating the present approach for handling outliers, and (4) an experiment on reconstructing surface models of seven dry cadaver femurs using clinically relevant data without noise and with noise added. Our validation results demonstrate the robust performance of the present approach in handling outliers, pathology, and noise. An average 95-percentile error of 1.7-2.3 mm was found when the present approach was used to reconstruct surface models of the cadaver femurs from sparse-point data with noise added.
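
The least trimmed squares idea used throughout the three stages can be sketched as follows: at each iteration the transform is re-estimated from the fraction of point pairs with the smallest residuals. The Umeyama closed form, iteration count and variable names below are assumptions for illustration, not the paper's implementation:

    import numpy as np

    def _umeyama(X, Y):
        # Closed-form similarity transform (scale s, rotation R, translation t)
        # mapping the paired points X onto Y (Umeyama, 1991).
        mx, my = X.mean(0), Y.mean(0)
        Xc, Yc = X - mx, Y - my
        U, D, Vt = np.linalg.svd(Yc.T @ Xc / len(X))
        S = np.eye(3)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            S[2, 2] = -1.0
        R = U @ S @ Vt
        s = np.trace(np.diag(D) @ S) / Xc.var(0).sum()
        return s, R, my - s * R @ mx

    def trimmed_similarity_fit(src, dst, outlier_rate=0.1, n_iter=20):
        # Least trimmed squares: refit on the (1 - outlier_rate) fraction of point
        # pairs with the smallest residuals at each iteration.
        n_keep = max(3, int(round((1.0 - outlier_rate) * len(src))))
        keep = np.arange(len(src))
        for _ in range(n_iter):
            s, R, t = _umeyama(src[keep], dst[keep])
            res = np.linalg.norm(s * src @ R.T + t - dst, axis=1)
            keep = np.argsort(res)[:n_keep]
        return s, R, t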

Relevance: 30.00%

Publisher:

Abstract:

We consider the problem of approximating the 3D scan of a real object through an affine combination of examples. Common approaches depend either on the explicit estimation of point-to-point correspondences or on 2-dimensional projections of the target mesh; both present drawbacks. We follow an approach similar to [IF03] by representing the target via an implicit function, whose values at the vertices of the approximation are used to define a robust cost function. The problem is approached in two steps, by approximating first a coarse implicit representation of the whole target, and then finer, local ones; the local approximations are then merged together with a Poisson-based method. We report the results of applying our method on a subset of 3D scans from the Face Recognition Grand Challenge v.1.0.
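
A rough Python sketch of fitting an affine combination against an implicit representation of the target is shown below; it assumes the examples are in dense vertex correspondence, and the robust loss and solver choice are illustrative assumptions, not those of the paper:

    import numpy as np
    from scipy.optimize import minimize

    def fit_affine_combination(example_vertices, implicit_fn):
        # example_vertices: K arrays of shape (n, 3), assumed to be in dense vertex
        # correspondence; implicit_fn maps an (n, 3) array to n signed values of the
        # target's implicit function (zero on the target surface).
        examples = np.stack(example_vertices)                 # (K, n, 3)
        K = len(example_vertices)

        def cost(alpha):
            blended = np.tensordot(alpha, examples, axes=1)   # (n, 3)
            vals = implicit_fn(blended)
            return np.sum(np.sqrt(1.0 + vals ** 2) - 1.0)     # smooth robust penalty

        alpha0 = np.full(K, 1.0 / K)
        res = minimize(cost, alpha0, method="SLSQP",
                       constraints={"type": "eq", "fun": lambda a: a.sum() - 1.0})
        return res.x                                          # affine weights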

Relevance: 30.00%

Publisher:

Abstract:

Two methods for registering laser scans of human heads and transforming them to a new, semantically consistent topology defined by a user-provided template mesh are described. Both algorithms are stated within the Iterative Closest Point framework. The first method is based on finding landmark correspondences by iteratively registering the vicinity of a landmark with a re-weighted error function. Thin-plate spline interpolation is then used to deform the template mesh, and finally the scan is resampled in the topology of the deformed template. The second algorithm employs a morphable shape model, which can be computed from a database of laser scans using the first algorithm, and directly optimizes the pose and shape of the morphable model. The use of the algorithm with PCA mixture models, where the shape is split into regions each described by an individual subspace, is also addressed. Mixture models require either blending or regularization strategies, both of which are described in detail. For both algorithms, strategies for filling in missing geometry in incomplete laser scans are described. While an interpolation-based approach can be used to fill in small or smooth regions, the model-driven algorithm is capable of fitting a plausible complete head mesh to arbitrarily small pieces of geometry, which is known as "shape completion". The importance of regularization in the case of extreme shape completion is shown.
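
For reference, the core ICP loop that both methods build on can be sketched in a few lines of Python; this is the generic rigid variant only (closest-point correspondences plus a closed-form Kabsch update), not the landmark re-weighting or morphable-model fitting described above:

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_rigid(template, scan, n_iter=30):
        # template, scan: (n, 3) and (m, 3) point arrays.
        tree = cKDTree(scan)
        R, t = np.eye(3), np.zeros(3)
        for _ in range(n_iter):
            moved = template @ R.T + t
            _, idx = tree.query(moved)             # closest-point correspondences
            target = scan[idx]
            # Closed-form rotation/translation update (Kabsch).
            mp, mq = moved.mean(0), target.mean(0)
            H = (moved - mp).T @ (target - mq)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R_step = Vt.T @ D @ U.T
            t_step = mq - R_step @ mp
            # Compose the incremental update with the current transform.
            R, t = R_step @ R, R_step @ t + t_step
        return R, t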

Relevance: 30.00%

Publisher:

Abstract:

A growing number of articles in popular magazines and journals is bringing the direct fabrication of parts and figures increasingly into the awareness of a broad public. Unfortunately, these articles rarely convey a reasonably complete picture of how, and in which areas of life, these techniques will change our everyday lives. This is partly because most articles are very technically oriented and rely only on isolated examples. This contribution starts from human needs, as structured for example in Maslow's hierarchy of needs, and thereby underlines that 3D Printing (or Additive Manufacturing or Rapid Prototyping) has already reached all areas of life and is in the process of revolutionizing many of them.

Relevance: 30.00%

Publisher:

Abstract:

INTRODUCTION Native MR angiography (N-MRA) is considered an imaging alternative to contrast-enhanced MR angiography (CE-MRA) for patients with renal insufficiency. Lower intraluminal contrast in N-MRA often leads to failure of the segmentation process in commercial algorithms. This study introduces an in-house 3D model-based segmentation approach used to compare both sequences by automatic 3D lumen segmentation, allowing differences in aortic lumen diameter and in length between the two acquisition techniques to be evaluated at every possible location. METHODS AND MATERIALS Sixteen healthy volunteers underwent 1.5-T MR angiography (MRA). For each volunteer, two different MR sequences were performed, CE-MRA (gradient-echo Turbo FLASH sequence) and N-MRA (respiratory- and cardiac-gated, T2-weighted 3D SSFP). Datasets were segmented using a 3D model-based ellipse-fitting approach with a single seed point placed manually above the celiac trunk. The segmented volumes were manually cropped from the left subclavian artery to the celiac trunk to avoid errors due to side branches. Diameters, volumes and centerline lengths were computed for intraindividual comparison. For statistical analysis the Wilcoxon signed-rank test was used. RESULTS The average centerline length obtained from N-MRA was 239.0±23.4 mm compared to 238.6±23.5 mm for CE-MRA, without significant difference (P=0.877). The average maximum diameter obtained from N-MRA was 25.7±3.3 mm compared to 24.1±3.2 mm for CE-MRA (P<0.001). In agreement with the difference in diameters, volumes obtained from N-MRA (100.1±35.4 cm³) were consistently and significantly larger than for CE-MRA (89.2±30.0 cm³) (P<0.001). CONCLUSIONS 3D morphometry shows highly similar centerline lengths for N-MRA and CE-MRA, but systematically higher diameters and volumes for N-MRA.
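
The paired, non-parametric comparison described above corresponds to scipy's Wilcoxon signed-rank test; the sketch below uses made-up placeholder values, not the study's measurements:

    import numpy as np
    from scipy.stats import wilcoxon

    # Made-up placeholder diameters (mm) for paired N-MRA / CE-MRA measurements;
    # these are NOT the study's data, only a usage illustration.
    d_nmra = np.array([25.1, 26.3, 24.8, 27.0, 25.5])
    d_cemra = np.array([23.9, 24.7, 23.5, 25.2, 24.0])

    stat, p_value = wilcoxon(d_nmra, d_cemra)      # paired, non-parametric test
    print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4f}")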