984 results for 3D Point Clouds


Relevance:

30.00%

Publisher:

Abstract:

This paper tackles the optimization of applications in multi-provider hybrid cloud scenarios from an economic point of view. In these scenarios, most existing solutions offer automatic allocation of resources across different cloud providers based on their current prices. Our approach instead introduces a novel solution based on a divide-and-conquer strategy: this paper describes a methodology for creating cost-aware cloud applications that can be broken down into the three most important components of cloud infrastructures: computation, network, and storage. A real videoconference system has been modified in order to evaluate this idea in both theoretical and empirical experiments. This system has become a widely used tool for e-learning and collaboration in several national and European projects.
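As a rough illustration of the divide-and-conquer pricing idea, the sketch below allocates each of the three components independently to whichever provider currently prices it lowest. Provider names, prices, and demand figures are invented for illustration; the paper's actual cost model is not reproduced here.

```python
# Hypothetical per-unit prices; providers and numbers are illustrative only.
PRICES = {
    'provider_a': {'computation': 0.10, 'network': 0.02, 'storage': 0.05},
    'provider_b': {'computation': 0.08, 'network': 0.04, 'storage': 0.06},
}

def cheapest_allocation(demand):
    """Price computation, network and storage separately and place each
    component with its cheapest provider, instead of placing the whole
    application with a single provider."""
    plan, total = {}, 0.0
    for component, units in demand.items():
        provider = min(PRICES, key=lambda p: PRICES[p][component])
        cost = PRICES[provider][component] * units
        plan[component] = (provider, cost)
        total += cost
    return plan, total

plan, total = cheapest_allocation({'computation': 100, 'network': 500, 'storage': 200})
print(plan, total)
```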

Relevance:

30.00%

Publisher:

Abstract:

We study quantum phase transitions in ultracold bosonic gases trapped in optical lattices. The physics of these systems is captured by a Bose-Hubbard-type model which, for a disorder-free system with short-range atomic interactions and tunneling only between nearest-neighbor sites, predicts the superfluid-Mott insulator (SF-MI) quantum phase transition as the depth of the optical-lattice potential is varied. In a first study, we examine how the phase diagram of this transition changes when going from a square to a hexagonal lattice. In a second, we investigate how disorder modifies the transition. For the hexagonal lattice, we present the phase diagram of the SF-MI transition and an estimate for the critical point of the first Mott lobe. These results were obtained with the quantum Monte Carlo algorithm known as the Worm algorithm. We compare our results with those from a mean-field approximation and with those for a square optical lattice. When disorder is introduced into the system, a new phase emerges in the ground-state phase diagram between the superfluid and Mott-insulator phases. This new phase is known as the Bose glass (BG), and the SF-BG quantum phase transition occurring in this system has generated much controversy since the first studies in the late 1980s. Despite progress toward a complete understanding of this transition, the basic characterization of its critical properties is still debated. Our study was motivated by the publication of experimental and numerical results for three-dimensional systems [Yu et al., Nature 489, 379 (2012); Yu et al., PRB 86, 134421 (2012)] that violate the scaling law $\phi = \nu z$, where $\phi$ is the critical-temperature exponent, $z$ is the dynamical critical exponent, and $\nu$ is the correlation-length exponent. We address this controversy numerically through a finite-size scaling analysis using the quantum and classical versions of the Worm algorithm. Our results demonstrate that previous work on the dependence of the superfluid-to-normal-liquid transition temperature on the chemical potential (or magnetic field, in spin systems), $T_c \propto (\mu - \mu_c)^\phi$, mistook a transient behavior on the approach to the genuine critical region for the asymptotic one. When the model parameters are modified so as to enlarge the quantum critical region, simulations of both the classical and quantum models reveal that the scaling law $\phi = \nu z$ [with $\phi = 2.7(2)$, $z = 3$, and $\nu = 0.88(5)$] holds. We also estimate the critical exponent of the order parameter, finding $\beta = 1.5(2)$.
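As a quick arithmetic check (not part of the abstract itself), simple error propagation on the quoted exponents confirms that they satisfy $\phi = \nu z$ within uncertainties:

```python
# Exponents as quoted above: phi = 2.7(2), z = 3 (exact), nu = 0.88(5)
nu, dnu = 0.88, 0.05
z = 3
phi, dphi = 2.7, 0.2

pred, dpred = nu * z, dnu * z          # nu*z = 2.64 +/- 0.15
gap = abs(phi - pred)                  # 0.06
combined = (dphi**2 + dpred**2) ** 0.5 # ~0.25
print(f'phi - nu*z = {gap:.2f} vs combined uncertainty {combined:.2f}')
```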

Relevance:

30.00%

Publisher:

Abstract:

A cold atomic cloud is a versatile object because it offers many handles to control and tune its properties, facilitating studies of its behavior under various conditions of sample temperature, size and density, composition, dimensionality, and coherence time. The range of possible experiments is constrained by the specifications of the atomic species used. This thesis presents work done on an experiment for laser cooling of strontium atoms, focusing on its stability, so that it can provide cold and ultracold samples for the study of collective effects in light scattering. Numerous changes were made to the initial apparatus. The vacuum system was improved and now reaches a lower ultra-high vacuum, thanks to the pre-baking of its parts and the addition of a titanium-sublimation stage. The quadrupole trap was improved through the design and construction of a new pair of coils. The stability of the blue, green, and red laser systems was improved and losses of laser light were reduced, resulting in a robust apparatus. Another important point is the development of homemade devices that reduce costs and serve to monitor different parts of a cold-atom experiment. With these homemade devices, we demonstrated dramatic linewidth narrowing by injection locking of a low-cost 461 nm diode laser and its application to our strontium experiment. In the end, this improved experimental apparatus made possible the study of a new scattering effect: mirror-assisted coherent backscattering (mCBS).

Relevance:

30.00%

Publisher:

Abstract:

We present a detailed numerical study of the effects of adding quenched impurities to a three-dimensional system that, in the pure case, undergoes a strong first-order phase transition (specifically, the ferromagnetic/paramagnetic transition of the site-diluted four-state Potts model). The transition remains first order in the presence of a small amount of quenched disorder but becomes second order as more impurities are added. A tricritical point, which we study by means of finite-size scaling, separates the first-order and second-order parts of the critical line. These results were made possible by a new definition of the disorder average that avoids the diverging-variance probability distributions arising in the standard methodology. We also make use of a recently proposed microcanonical Monte Carlo method in which entropy, instead of free energy, is the basic quantity.
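For orientation, the sketch below shows the standard quenched-average structure (measure on each disorder sample, then average over samples), with the sample median alongside as one variance-robust alternative. It is only a schematic with a stand-in observable; the paper's actual redefinition of the disorder average is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
L, p, n_samples = 16, 0.7, 100   # lattice size, site occupation, realizations

def observable(mask):
    # Stand-in for a full Monte Carlo estimate (e.g. of the magnetization)
    # on one disorder realization; here simply the occupied-site fraction.
    return mask.mean()

samples = [observable(rng.random((L, L, L)) < p) for _ in range(n_samples)]
print('quenched mean :', np.mean(samples))    # standard disorder average
print('sample median :', np.median(samples))  # robust to fat-tailed distributions
```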

Relevance:

30.00%

Publisher:

Abstract:

Paper submitted to the 43rd International Symposium on Robotics (ISR), Taipei, Taiwan, August 29-31, 2012.

Relevance:

30.00%

Publisher:

Abstract:

Aims. We study the optical and near-infrared colour excesses produced by circumstellar emission in a sample of Be/X-ray binaries. Our main goals are exploring whether previously published relations, valid for isolated Be stars, are applicable to Be/X-ray binaries and computing the distance to these systems after correcting for the effects of the circumstellar contamination. Methods. Simultaneous UBVRI photometry and spectra in the 3500−7000 Å spectral range were obtained for 11 optical counterparts to Be/X-ray binaries in the LMC, 5 in the SMC and 12 in the Milky Way. As a measure of the amount of circumstellar emission we used the Hα equivalent width corrected for photospheric absorption. Results. We find a linear relationship between the strength of the Hα emission line and the component of E(B − V) originating from the circumstellar disk. This relationship is valid for stars with emission lines weaker than EW ≈ −15 Å. Beyond this point, the circumstellar contribution to E(B − V) saturates at a value ≈0.17 mag. A similar relationship is found for the (V − I) near-infrared colour excess, albeit with a steeper slope and saturation level. The circumstellar excess in (B − V) is found to be about five times higher for Be/X-ray binaries than for isolated Be stars with the same equivalent width EW(Hα), implying significant differences in the physical properties of their circumstellar envelopes. The distance to Be/X-ray binaries (with non-shell Be star companions) can only be correctly estimated by taking into account the excess emission in the V band produced by free-free and free-bound transitions in the circumstellar envelope. We provide a simple method to determine the distances that includes this effect.
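A minimal sketch of the piecewise behaviour described above: linear growth of the circumstellar excess with emission strength, saturating at ≈0.17 mag beyond EW ≈ −15 Å. The slope is an assumption chosen only so that the linear branch meets the quoted saturation level; the fitted coefficients are in the paper, not here.

```python
def circumstellar_ebv(ew_halpha, saturation=0.17, ew_sat=-15.0):
    """Circumstellar E(B-V) in magnitudes from the Halpha equivalent
    width in Angstrom (negative values = emission).

    Assumes a linear branch through the origin that reaches the quoted
    saturation value at ew_sat; this slope is illustrative only.
    """
    if ew_halpha <= ew_sat:
        return saturation                                  # saturated regime
    return max(0.0, saturation * ew_halpha / ew_sat)       # linear regime

print(circumstellar_ebv(-5.0))   # weak emission  -> partial excess (~0.06 mag)
print(circumstellar_ebv(-30.0))  # strong emission -> saturated at 0.17 mag
```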

Relevance:

30.00%

Publisher:

Abstract:

Several recent works deal with 3D data in mobile robotics problems, e.g. mapping or egomotion estimation. The data come from sensors such as stereo vision systems, time-of-flight cameras, or 3D lasers, which provide huge amounts of unorganized 3D data. In this paper, we describe an efficient method for building complete 3D models using a Growing Neural Gas (GNG) network. The GNG is applied to the raw 3D data and reduces both the underlying error and the number of points while preserving the topology of the 3D data. The GNG output is then used in a 3D feature extraction method. We have performed a thorough study in which we quantitatively show that using the GNG improves the 3D feature extraction method. We also show that our method can be applied to any kind of 3D data. The 3D features obtained are used as input to an Iterative Closest Point (ICP)-like method that computes the 6DoF motion performed by a mobile robot. A comparison with standard ICP shows that using the GNG improves the results. Final 3D maps built from the estimated egomotion are also shown.
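For readers unfamiliar with the registration step, here is a minimal point-to-point ICP loop of the kind the GNG features would feed into. It is a generic textbook sketch (nearest-neighbour correspondences plus an SVD/Kabsch rigid fit), not the authors' ICP variant.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, n_iter=30):
    """Classic point-to-point ICP on (N, 3) arrays; returns the
    accumulated rotation and translation aligning src onto dst."""
    tree = cKDTree(dst)
    T_R, T_t = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(n_iter):
        _, idx = tree.query(cur)        # nearest-neighbour correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
        T_R, T_t = R @ T_R, R @ T_t + t
    return T_R, T_t                     # dst ~ src @ T_R.T + T_t
```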

Relevance:

30.00%

Publisher:

Abstract:

Feature vectors can be anything from simple surface normals to more complex feature descriptors. Feature extraction is important for solving various computer vision problems, e.g. registration, object recognition, and scene understanding. Most of these techniques cannot be computed online due to their complexity and the context in which they are applied, so computing these features in real time for many points in the scene is infeasible. In this work, a hardware-based implementation of 3D feature extraction and 3D object recognition is proposed to accelerate these methods and therefore the entire pipeline of RGB-D based computer vision systems where such features are typically used. Using the GPU as a general-purpose processor can achieve considerable speed-ups compared with a CPU implementation. In this work, advantageous results are obtained by using the GPU to accelerate the computation of a 3D descriptor based on 3D semi-local surface patches of partial views, which allows the descriptor to be computed at several points of a scene in real time. The benefits of the accelerated descriptor are demonstrated in object recognition tasks. Source code will be made publicly available as a contribution to the open-source Point Cloud Library.
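As a flavour of why per-point descriptor computation parallelizes well, the sketch below estimates normals for many local patches at once with a batched PCA. The vectorized form is what maps naturally onto a GPU (e.g. by swapping numpy for cupy); it is a generic illustration, not the paper's semi-local surface-patch descriptor.

```python
import numpy as np

def batch_normals(neighborhoods):
    """Estimate one surface normal per local patch via PCA.

    neighborhoods: (N, k, 3) array with k neighbours per query point.
    All N patches are processed at once; the same vectorized code can
    run on a GPU by replacing numpy with cupy.
    """
    centered = neighborhoods - neighborhoods.mean(axis=1, keepdims=True)
    cov = np.einsum('nki,nkj->nij', centered, centered) / neighborhoods.shape[1]
    eigvals, eigvecs = np.linalg.eigh(cov)   # batched, ascending eigenvalues
    return eigvecs[:, :, 0]                  # direction of least variance

normals = batch_normals(np.random.rand(1000, 30, 3))  # 1000 unit normals
```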

Relevance:

30.00%

Publisher:

Abstract:

3D sensors provide valuable information for mobile robotic tasks like scene classification or object recognition, but they often produce noisy data that makes it impossible to apply classical keypoint detection and feature extraction techniques. Noise removal and downsampling have therefore become essential steps in 3D data processing. In this work, we propose a 3D filtering and downsampling technique based on a Growing Neural Gas (GNG) network. The GNG method is able to deal with outliers present in the input data and represents 3D spaces by an induced Delaunay triangulation of the input space. Experiments show how state-of-the-art keypoint detectors improve their performance when the GNG output representation is used as input data. Descriptors extracted at the improved keypoints achieve better matching in robotics applications such as 3D scene registration.
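A compact version of the GNG algorithm referenced above (Fritzke's original formulation, with deletion of isolated nodes omitted for brevity). Hyperparameter values are common illustrative defaults, not the settings used in this work.

```python
import numpy as np

def gng(data, max_nodes=100, eps_b=0.05, eps_n=0.005, age_max=50,
        lam=100, alpha=0.5, d=0.995, n_steps=10000, seed=0):
    """Minimal Growing Neural Gas: returns node positions that
    sub-sample `data` (N, 3) while preserving its topology."""
    rng = np.random.default_rng(seed)
    W = data[rng.choice(len(data), 2, replace=False)].astype(float)
    E = np.zeros(2)                        # accumulated error per node
    edges = {(0, 1): 0}                    # edge -> age
    for step in range(1, n_steps + 1):
        x = data[rng.integers(len(data))]
        dist = np.linalg.norm(W - x, axis=1)
        s1, s2 = np.argsort(dist)[:2]      # winner and runner-up
        E[s1] += dist[s1] ** 2
        W[s1] += eps_b * (x - W[s1])       # move winner toward the sample
        for (a, b) in list(edges):         # age edges, drag neighbours along
            if s1 in (a, b):
                edges[(a, b)] += 1
                other = b if a == s1 else a
                W[other] += eps_n * (x - W[other])
        edges[tuple(sorted((int(s1), int(s2))))] = 0   # refresh/create edge
        edges = {e: age for e, age in edges.items() if age <= age_max}
        if step % lam == 0 and len(W) < max_nodes:
            q = int(np.argmax(E))          # split the highest-error node
            nbrs = [b if a == q else a for (a, b) in edges if q in (a, b)]
            if nbrs:
                f = max(nbrs, key=lambda n: E[n])
                r = len(W)
                W = np.vstack([W, (W[q] + W[f]) / 2])
                E = np.append(E, E[q] * alpha)
                E[q] *= alpha
                E[f] *= alpha
                edges.pop(tuple(sorted((q, f))), None)
                edges[(q, r)] = 0
                edges[(f, r)] = 0
        E *= d                             # decay all accumulated errors
    return W
```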

Relevance:

30.00%

Publisher:

Abstract:

Objective: To evaluate the efficacy of vision therapy in two cases of intermittent exotropia (IX(T)), complementing the clinical examination with 3-D video-oculography (VOG) to register the ocular deviation and to demonstrate the potential applicability of this technology for this purpose. Methods: We report the binocular alignment changes occurring after vision therapy in a 36-year-old woman with an IX(T) of 25 prism diopters (Δ) at far and 18 Δ at near, and a 10-year-old child with 8 Δ of IX(T) in primary position associated with 6 Δ of left-eye hypotropia. Both patients presented good corrected visual acuity in both eyes. Instability of the ocular deviation was evident in the VOG analysis, which also revealed the presence of vertical and torsional components. Binocular vision therapy was prescribed and performed, including different types of vergence, accommodation, and diplopia-awareness training. Results: After therapy, excellent fusional vergence ranges and a "to-the-nose" near point of convergence were obtained. The 3-D VOG examination (SensoMotoric Instruments, Teltow, Germany) confirmed the compensation of the deviation with a high level of stability of the binocular alignment. Significant improvement was observed after therapy in the vertical and torsional components, which became more stable. Patients were very satisfied with the outcome of vision therapy. Conclusion: 3-D VOG is a useful technique for providing an objective record of the compensation of the ocular deviation and the stability of the binocular alignment achieved after vision therapy in cases of IX(T), with a detailed analysis of the vertical and torsional improvements.

Relevance:

30.00%

Publisher:

Abstract:

Objective: To evaluate the efficacy of vision therapy exercises in 3 cases of intermittent exotropia (XT(i)), complementing the clinical examination with 3-D video-oculography (VOG), and to demonstrate the potential applicability of this technology for this purpose. Methods: We describe the changes occurring after vision therapy exercises in a 36-year-old woman with an XT(i) of -25 prism diopters (pd) at distance and 18 pd at near; a 10-year-old boy with 8 pd of XT(i) in primary position, associated with +6 pd of left hypotropia; and a 63-year-old man with an XT(i) of 6 pd in primary position associated with +7 pd of right hypertropia. All patients presented good corrected visual acuity in both eyes. The instability of the ocular deviation was evidenced by 3-D VOG analysis, which also revealed the presence of vertical and torsional components. Vision therapy exercises were performed, including different types of vergence, accommodation, and diplopia-awareness exercises. Results: After vision therapy, excellent fusional vergence ranges and a "to-the-nose" near point of convergence were obtained. The 3-D VOG examination (SensoMotoric Instruments, Teltow, Germany) confirmed the compensation of the deviation with stable ocular alignment. Significant improvement was observed after therapy in the vertical and torsional components, which became more stable. Patients were very satisfied with the results obtained. Conclusion: 3-D VOG is a useful technique for providing an objective record of the compensation and stability of the ocular deviation after vision therapy exercises in cases of XT(i), offering a detailed analysis of the improvement in the vertical and torsional components.

Relevance:

30.00%

Publisher:

Abstract:

Today, the professional skills demanded of university students are constantly increasing in our society. In our opinion, the content offered in official degrees needs to be complemented in parallel with other subjects that enrich students' overall professional knowledge; that is why, in recent years, universities have offered a great variety of complementary courses. One of the most socially demanded technical requirements in the architecture, design, and engineering fields is proficiency with 3D drawing software, which has become indispensable in these sectors. This specific training has thus become essential beyond traditional two-dimensional design, because it offers possibilities of spatial development that go beyond conventional orthographic projections (plans, sections, or elevations), allowing the selected items to be modelled and rotated from multiple angles and perspectives. This paper therefore analyzes the teaching methodology of a complementary course for construction-industry technicians interested in computer-aided design, using modelling (SketchUp Make) and rendering (Kerkythea) programs. The course is developed from the technician's point of view, teaching the software and its application to professional practice, moving from a general to a specific view through practical examples. The proposed methodology is based on the development of real examples in different professional settings such as rehabilitation, new construction, business-opening projects, and architectural design. This multidisciplinary contribution sharpens students' critical judgement in different areas, encouraging new learning strategies and the independent development of three-dimensional solutions. The practical implementation of new situations, some suggested by the students themselves, ensures active participation, saves time during the design process, and increases effectiveness when generating elements that can be represented, moved, or virtually tested. In conclusion, this teaching-learning methodology improves the skills and competencies of students in facing the growing professional demands of society. After finishing the course, technicians had not only improved their expertise in drawing but also enhanced their capacity for spatial vision, both essential qualities in these sectors that they can apply to their professional development with great success.

Relevance:

30.00%

Publisher:

Abstract:

In recent years, the use of graphics processing units (GPUs) in general-purpose applications has grown steadily, moving beyond the purpose for which they were created: computer graphics rendering. This growth is partly due to the evolution of these devices, which has endowed them with great computational power and extended their use from personal computers to large clusters. Together with the proliferation of low-cost RGB-D sensors, this has increased the number of vision applications that use this technology to solve existing problems, as well as the development of new applications. These improvements concern not only the hardware but also the software, with the emergence of new development tools that ease the programming of GPU devices. This new paradigm was coined General-Purpose computing on Graphics Processing Units (GPGPU). GPU devices are classified into different families according to their hardware characteristics, and each new family incorporates technological improvements that yield better performance than its predecessors. Nevertheless, to obtain optimal performance from a GPU device it must be configured correctly before use, and this configuration is determined by the values assigned to a series of device parameters. Many implementations that currently use GPU devices for dense 3D point cloud registration could therefore improve their performance with an optimal configuration of these parameters for the device in question. Given the lack of a detailed study of how GPU parameters affect the final performance of an implementation, such a study was considered highly worthwhile. It was conducted not only with different GPU parameter configurations but also with different GPU device architectures, and its aim is to provide a decision tool that helps developers when implementing applications for GPU devices. One of the research fields in which these technologies proliferate most is robotics: traditionally, especially in mobile robotics, combinations of sensors of different kinds and high economic cost, such as lasers, sonar, or contact sensors, were used to obtain data about the environment, and these data were then used in computer vision applications with a very high computational cost. Both the economic cost of the sensors and the computational cost have been notably reduced thanks to these new technologies. Among the most widely used computer vision applications is point cloud registration. This process is, in general, the transformation of different point clouds into a common, known coordinate system; the data may come from photographs, different sensors, etc. It is used in fields such as computer vision, medical imaging, object recognition, and the analysis of satellite images and data, and it makes it possible to compare or integrate data obtained from different measurements. This work reviews the state of the art of 3D registration methods. It also presents an in-depth study of the most widely used 3D registration method, Iterative Closest Point (ICP), and one of its best-known variants, Expectation-Maximization ICP (EMICP). This study covers both their sequential implementations and their parallel implementations on GPU devices, focusing on how different GPU parameter configurations affect their performance. As a consequence of this study, a proposal is also presented to improve the use of GPU device memory, allowing work with larger point clouds and reducing the memory limitation imposed by the device. The behavior of the 3D registration methods used in this work depends heavily on the initialization of the problem, namely the correct choice of the transformation matrix with which the algorithm starts. Because this aspect is very important in this type of algorithm, determining whether the solution is reached sooner, later, or not at all, this work also presents a study of the transformation space, with the aim of characterizing it and easing the choice of the initial transformation to be used in these algorithms.
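To make the parameter-sensitivity point concrete, the sketch below times one and the same trivial CUDA kernel under several thread-block sizes using CuPy. The kernel and the sizes tried are placeholders; the thesis studies real registration kernels (ICP/EMICP) and a wider set of device parameters.

```python
import cupy as cp

# Simple SAXPY kernel; what is measured is how the launch configuration
# (threads per block) changes the runtime on a given device.
saxpy = cp.RawKernel(r'''
extern "C" __global__
void saxpy(const float a, const float* x, const float* y,
           float* out, const int n) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n) out[i] = a * x[i] + y[i];
}
''', 'saxpy')

n = 1 << 24
x = cp.random.rand(n, dtype=cp.float32)
y = cp.random.rand(n, dtype=cp.float32)
out = cp.empty_like(x)

for block in (32, 64, 128, 256, 512, 1024):
    grid = (n + block - 1) // block
    args = (cp.float32(2.0), x, y, out, cp.int32(n))
    saxpy((grid,), (block,), args)          # warm-up / JIT compile
    start, end = cp.cuda.Event(), cp.cuda.Event()
    start.record()
    saxpy((grid,), (block,), args)
    end.record()
    end.synchronize()
    print(f'block={block:4d}: {cp.cuda.get_elapsed_time(start, end):.3f} ms')
```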

Relevance:

30.00%

Publisher:

Abstract:

Despite the insight gained from 2-D particle models, and given that the dynamics of crustal faults occur in 3-D space, the question remains: how do 3-D fault gouge dynamics differ from those in 2-D? Traditionally, 2-D modeling has been preferred over 3-D simulations because of the computational cost of solving 3-D problems. However, modern high-performance computing architectures, combined with a parallel implementation of the Lattice Solid Model (LSM), provide the opportunity to explore 3-D fault micro-mechanics and to advance understanding of the effective constitutive relations of fault gouge layers. In this paper, macroscopic friction values from 2-D and 3-D LSM simulations, performed on an SGI Altix 3700 super-cluster, are compared. Two rectangular elastic blocks of bonded particles, with a rough fault plane and separated by a region of randomly sized non-bonded gouge particles, are sheared in opposite directions by normally loaded driving plates. The results demonstrate that the gouge particles in the 3-D models undergo significant out-of-plane motion during shear. The 3-D models also exhibit a higher mean macroscopic friction than the 2-D models for varying values of interparticle friction. 2-D LSM gouge models have previously been shown to exhibit accelerating energy release in simulated earthquake cycles, supporting the Critical Point hypothesis. The 3-D models are shown to also display accelerating energy release, and good fits of power-law time-to-failure functions to the cumulative energy release are obtained.
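Time-to-failure fits of the kind mentioned in the last sentence commonly take the accelerating-release form $E(t) = A - B\,(t_f - t)^m$; a minimal fitting sketch on synthetic data (the functional form and all numbers below are illustrative, not the paper's data) could look like this:

```python
import numpy as np
from scipy.optimize import curve_fit

def ttf(t, A, B, tf, m):
    """Power-law time-to-failure form E(t) = A - B*(tf - t)**m; a common
    choice for accelerating energy release, assumed here for illustration."""
    return A - B * np.maximum(tf - t, 1e-9) ** m   # guard against tf < t

# Synthetic cumulative energy release, for illustration only.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 9.5, 200)
E = 100.0 - 5.0 * (10.0 - t) ** 0.3 + rng.normal(0, 0.05, t.size)

popt, _ = curve_fit(ttf, t, E, p0=(100.0, 5.0, 10.5, 0.3), maxfev=20000)
print(f'fitted failure time tf = {popt[2]:.2f}, exponent m = {popt[3]:.2f}')
```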

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the creation of 3D statistical shape models of the knee bones and their use to embed information into a segmentation system for knee MRIs. We propose utilising the strong spatial relationship between the cartilages and the bones in the knee by embedding this information into the created models; it can then be used to automate the initialisation of cartilage segmentation algorithms. The approach used to automatically generate the 3D statistical shape models of the bones is based on the point distribution model optimisation framework of Davies. Our implementation of this scheme uses a parameterized surface extraction algorithm as the basis of the optimisation scheme that automatically creates the 3D statistical shape models. The approach is illustrated by generating 3D statistical shape models of the patella, tibia, and femur from a segmented knee database. The use of these models to embed spatial relationship information, and thus aid the automation of cartilage segmentation algorithms, is then illustrated.
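Once point correspondences have been established by the optimisation framework, building the statistical shape model itself reduces to a PCA over the stacked landmark coordinates. Here is a generic sketch of that final step (not the Davies optimisation, which is the hard part the paper automates):

```python
import numpy as np

def build_pdm(shapes, n_modes=5):
    """Point distribution model from corresponded landmark sets.

    shapes: (n_samples, n_points, 3) array of aligned, corresponded
    surface points. Returns the mean shape, the first principal modes
    of variation, and the variance captured by each mode.
    """
    X = shapes.reshape(len(shapes), -1)            # flatten to shape vectors
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                           # principal directions
    var = S[:n_modes] ** 2 / (len(shapes) - 1)     # per-mode variance
    return mean, modes, var

def synthesize(mean, modes, var, b):
    """New shape instance from mode weights b (in standard deviations)."""
    return (mean + (b * np.sqrt(var)) @ modes).reshape(-1, 3)
```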