976 results for 3D point cloud file as 3Ddxf
Abstract:
SOA (Service Oriented Architecture), workflow, the Semantic Web, and Grid computing are key enabling information technologies in the development of increasingly sophisticated e-Science infrastructures and application platforms. While the emergence of Cloud computing as a new computing paradigm has provided new directions and opportunities for e-Science infrastructure development, it also presents some challenges. Scientific research increasingly finds it difficult to handle “big data” using traditional data processing techniques. Such challenges demonstrate the need for a comprehensive analysis of using the above-mentioned informatics techniques to develop appropriate e-Science infrastructures and platforms in the context of Cloud computing. This survey paper describes recent research advances in applying informatics techniques to facilitate scientific research, particularly from the Cloud computing perspective. Our particular contributions include identifying the associated research challenges and opportunities, presenting lessons learned, and describing our future vision for applying Cloud computing to e-Science. We believe our research findings can help indicate the future trend of e-Science and can inform funding and research directions on how to more appropriately employ computing technologies in scientific research. We point out open research issues in the hope of sparking new development and innovation in the e-Science field.
Abstract:
We present a novel method for retrieving high-resolution, three-dimensional (3-D) nonprecipitating cloud fields in both overcast and broken-cloud situations. The method uses scanning cloud radar and multiwavelength zenith radiances to obtain gridded 3-D liquid water content (LWC) and effective radius (r_e) and 2-D column-mean droplet number concentration (N_d). Using an adaptation of the ensemble Kalman filter, radiances constrain the optical properties of the clouds through a forward model that employs full 3-D radiative transfer, while also providing full error statistics given the uncertainty in the observations. To evaluate the new method, we first perform retrievals using synthetic measurements from a challenging cumulus cloud field produced by a large-eddy simulation snapshot. Uncertainty due to measurement error in overhead clouds is estimated at 20% in LWC and 6% in r_e, but the true error can be greater due to uncertainties in the assumed droplet size distribution and radiative transfer. Over the entire domain, LWC and r_e are retrieved with average errors of 0.05-0.08 g m⁻³ and ~2 μm, respectively, depending on the number of radiance channels used. The method is then evaluated using real data from the Atmospheric Radiation Measurement program Mobile Facility at the Azores. Two case studies are considered, one stratocumulus and one cumulus. Where available, the liquid water path retrieved directly above the observation site was found to be in good agreement with independent values obtained from microwave radiometer measurements, with an error of 20 g m⁻².
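The ensemble-Kalman-filter step at the heart of this kind of retrieval can be illustrated with a generic perturbed-observation analysis update. This is a minimal NumPy sketch: the function `enkf_update`, the toy linear forward model `H`, and all parameter values are illustrative assumptions, not the authors' implementation (whose forward model is full 3-D radiative transfer).

```python
import numpy as np

def enkf_update(X, y_obs, obs_op, R, seed=0):
    """One perturbed-observation ensemble Kalman filter analysis step.

    X      : (n_state, n_ens) ensemble of state vectors (e.g. gridded LWC)
    y_obs  : (n_obs,) observed radiances
    obs_op : forward model mapping one state vector to predicted radiances
    R      : (n_obs, n_obs) observation-error covariance
    """
    n_ens = X.shape[1]
    Y = np.column_stack([obs_op(X[:, i]) for i in range(n_ens)])
    Xp = X - X.mean(axis=1, keepdims=True)        # state anomalies
    Yp = Y - Y.mean(axis=1, keepdims=True)        # predicted-obs anomalies
    Pxy = Xp @ Yp.T / (n_ens - 1)                 # state/obs cross-covariance
    Pyy = Yp @ Yp.T / (n_ens - 1) + R             # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
    rng = np.random.default_rng(seed)             # each member sees noisy obs
    Y_pert = y_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(y_obs)), R, size=n_ens).T
    return X + K @ (Y_pert - Y)

# toy check with a linear forward model: the analysis ensemble mean
# should move toward the observation
H = np.array([[1.0, 0.5]])
rng = np.random.default_rng(1)
X0 = rng.normal([2.0, 1.0], 0.5, size=(40, 2)).T      # (2, 40) prior ensemble
Xa = enkf_update(X0, np.array([5.0]), lambda x: H @ x, np.array([[0.01]]))
```

With a small observation-error covariance the analysis mean's predicted radiance lands close to the observed value; the same update applies unchanged when `obs_op` is a nonlinear radiative-transfer model, which is what makes the ensemble formulation attractive here.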
Abstract:
Periocular recognition has recently become an active topic in biometrics. Typically it uses 2D image data of the periocular region. This paper is the first description of combining 3D shape structure with 2D texture. A simple and effective technique using iterative closest point (ICP) was applied for 3D periocular region matching. It proved its strength for relatively unconstrained eye region capture, and does not require any training. Local binary patterns (LBP) were applied for 2D image based periocular matching. The two modalities were combined at the score-level. This approach was evaluated using the Bosphorus 3D face database, which contains large variations in facial expressions, head poses and occlusions. The rank-1 accuracy achieved from the 3D data (80%) was better than that for 2D (58%), and the best accuracy (83%) was achieved by fusing the two types of data. This suggests that significant improvements to periocular recognition systems could be achieved using the 3D structure information that is now available from small and inexpensive sensors.
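Score-level fusion of a 3D matcher (ICP, which yields distance-like scores) and a 2D matcher (LBP, similarity scores) is commonly done by normalizing each matcher's scores and taking a weighted sum. The sketch below assumes min-max normalization and a fixed weight; the paper does not state its exact fusion rule, so the names and weight are hypothetical.

```python
import numpy as np

def minmax_norm(scores):
    """Map raw matcher scores onto [0, 1]."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def fuse(scores_3d, scores_2d, w3d=0.6):
    """Weighted-sum score-level fusion of two matchers.

    Both inputs are assumed to be similarities (higher = better);
    distance-like scores (e.g. ICP residuals) should be negated first.
    """
    return w3d * minmax_norm(scores_3d) + (1.0 - w3d) * minmax_norm(scores_2d)

# toy gallery of 4 candidates: the matchers partly disagree,
# fusion favours the candidate both rank highly
icp_similarity = [-0.9, -0.2, -0.5, -0.8]   # negated ICP distances
lbp_similarity = [0.30, 0.55, 0.55, 0.20]
fused = fuse(icp_similarity, lbp_similarity)
best = int(np.argmax(fused))                # rank-1 candidate after fusion
```

Weighting the 3D score more heavily mirrors the abstract's finding that the 3D modality alone (80% rank-1) outperforms 2D alone (58%), with fusion best overall.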
Abstract:
The Large Magellanic Cloud (LMC) has a rich star cluster system spanning a wide range of ages and masses. One striking feature of the LMC cluster system is the existence of an age gap between 3 and 10 Gyr. But this feature is not clearly seen among field stars. Three LMC fields containing relatively poor and sparse clusters whose integrated colours are consistent with those of intermediate-age simple stellar populations have been imaged in BVI with the Optical Imager (SOI) at the Southern Telescope for Astrophysical Research (SOAR). A total of six clusters, five of them with estimated initial masses M < 10⁴ M☉, were studied in these fields. Photometry was performed and colour-magnitude diagrams (CMDs) were built using standard point spread function fitting methods. The faintest stars measured reach V ≈ 23. The CMD was cleaned from field contamination by making use of the three-dimensional colour and magnitude space available in order to select stars in excess relative to the field. A statistical CMD comparison method was developed for this purpose. The subtraction method has proven to be successful, yielding cleaned CMDs consistent with a simple stellar population. The intermediate-age candidates were found to be the oldest in our sample, with ages between 1 and 2 Gyr. The remaining clusters found in the SOAR/SOI have ages ranging from 100 to 200 Myr. Our analysis has conclusively shown that none of the relatively low-mass clusters studied by us belongs to the LMC age gap.
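The statistical field-star subtraction described above, removing from the cluster CMD the number of stars expected from an offset field region, cell by cell in colour-magnitude space, can be sketched as a simple binned Monte Carlo removal. This is a hypothetical minimal 2-D (one colour, one magnitude) version, not the authors' method, which exploits the full three-dimensional colour/magnitude space.

```python
import numpy as np

def decontaminate(cluster_cmd, field_cmd, area_ratio, bins=20, rng=None):
    """Statistical field-star removal from a cluster colour-magnitude diagram.

    cluster_cmd, field_cmd : (N, 2) arrays of (colour, magnitude)
    area_ratio             : cluster area / field area, scaling field counts
    Returns a boolean mask over cluster_cmd selecting probable members.
    """
    rng = np.random.default_rng(rng)
    pts = np.vstack([cluster_cmd, field_cmd])
    c_edges = np.linspace(pts[:, 0].min(), pts[:, 0].max(), bins + 1)
    m_edges = np.linspace(pts[:, 1].min(), pts[:, 1].max(), bins + 1)
    field_counts, _, _ = np.histogram2d(field_cmd[:, 0], field_cmd[:, 1],
                                        bins=[c_edges, m_edges])
    ci = np.clip(np.digitize(cluster_cmd[:, 0], c_edges) - 1, 0, bins - 1)
    mi = np.clip(np.digitize(cluster_cmd[:, 1], m_edges) - 1, 0, bins - 1)
    keep = np.ones(len(cluster_cmd), dtype=bool)
    for bc in range(bins):
        for bm in range(bins):
            # expected field contribution in this CMD cell
            n_remove = int(round(field_counts[bc, bm] * area_ratio))
            idx = np.flatnonzero((ci == bc) & (mi == bm) & keep)
            if n_remove > 0 and len(idx) > 0:
                drop = rng.choice(idx, size=min(n_remove, len(idx)),
                                  replace=False)
                keep[drop] = False
    return keep
```

Stars are drawn at random within each cell, so what survives is the excess over the expected field population, which is the membership criterion the abstract describes.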
Abstract:
Sinkholes and sudden terrain collapses occur frequently in the Fazenda Belém oil field (Potiguar Basin, Ceará State, Brazil), associated with an unconsolidated sedimentary cap covering the Jandaíra karst. This research was carried out to understand the mechanisms that generate these collapses. The main tool used was Ground Penetrating Radar (GPR). The work developed along two lines: one concerns methodological improvements in GPR data processing, the other the geological study of the Jandaíra karst. The second line was strongly supported both by the analysis of outcropping karst structures (in other regions of the Potiguar Basin) and by the interpretation of radargrams of the subsurface karst at Fazenda Belém. A suitable GPR processing flow was designed and tested, adapted from a typical seismic processing flow. The changes were introduced to account for important differences between GPR and reflection seismics, in particular: poor coupling between source and ground, the mixed phase of the wavelet, low signal-to-noise ratio, single-channel acquisition, and the strong influence of wave-propagation effects, notably dispersion. High-frequency components of the GPR pulse suffer more pronounced attenuation than low-frequency components, resulting in resolution losses in the radargrams. At Fazenda Belém, a suitable GPR processing flow is all the more necessary because of both the very high level of aerial events and the complexity of the imaged subsurface karst structures. The key point of the processing flow is an improved correction of attenuation effects on the GPR pulse, based on their influence on the amplitude and phase spectra of GPR signals.
In dielectric media with low to moderate losses, the propagated signal changes significantly only in its amplitude spectrum; that is, the phase spectrum of the propagated signal remains practically unaltered over the usual travel-time ranges. Based on this fact, it is shown with real data that the judicious application of the well-known tools of time gain and spectral balancing can efficiently correct attenuation effects. The proposed approach can be applied in heterogeneous media and does not require precise knowledge of the attenuation parameters of the media. As an additional benefit, judicious spectral balancing promotes a partial deconvolution of the data without changing its phase; in other words, spectral balancing acts in a way similar to a zero-phase deconvolution. In GPR data, the resolution increase obtained with spectral balancing is greater than that obtained with spike or predictive deconvolution. The evolution of the Jandaíra karst in the Potiguar Basin is associated with at least three episodes of subaerial exposure of the carbonate platform, during the Turonian, Santonian, and Campanian. In the Fazenda Belém region, during the mid-Miocene, the Jandaíra karst was covered by continental siliciclastic sediments. These sediments partially filled the void space associated with the dissolution structures and fractures. The development of the karst in this region was therefore attenuated in comparison with other places in the Potiguar Basin where the karst is exposed.
At Fazenda Belém, the generation of sinkholes and terrain collapses is controlled mainly by: (i) the presence of an unconsolidated sedimentary cap thick enough to cover the karst completely, but with a sediment volume smaller than the space available in the dissolution structures of the karst; (ii) the existence of important SW-NE and NW-SE structural alignments, which locally increase hydraulic connectivity and channel the groundwater, thus facilitating carbonate dissolution; and (iii) the existence of a hydraulic barrier to groundwater flow, associated with the Açu-4 Unit. The terrain-collapse mechanism at Fazenda Belém evolves as follows. Meteoric water infiltrates through the unconsolidated sedimentary cap and remobilizes it into the void space associated with the dissolution structures of the Jandaíra Formation. Remobilization starts at the base of the sedimentary cap, where abrasion increases as the flow changes from a laminar to a turbulent regime on reaching the open karst structures. The remobilized sediments progressively fill the karst voids from bottom to top, so the void space migrates continuously upwards, ultimately reaching the surface and causing the observed sudden terrain collapses. This phenomenon is particularly active during the rainy season, when the water table, normally located in the karst, may rise temporarily into the unconsolidated sedimentary cap.
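The core processing idea above, an exponential time gain to undo amplitude decay followed by spectral balancing that flattens the amplitude spectrum while leaving the phase untouched, can be sketched on a synthetic single trace. All constants below are illustrative assumptions, not the thesis's actual processing flow.

```python
import numpy as np

def time_gain(trace, dt, alpha):
    """Exponential gain compensating an amplitude decay exp(-alpha * t)."""
    t = np.arange(len(trace)) * dt
    return trace * np.exp(alpha * t)

def spectral_balance(trace, eps=1e-3):
    """Flatten the amplitude spectrum while leaving the phase untouched,
    so the operation acts like a zero-phase deconvolution."""
    spec = np.fft.rfft(trace)
    amp = np.abs(spec)
    balanced = amp.max() * spec / (amp + eps * amp.max())  # whiten amplitudes
    return np.fft.irfft(balanced, n=len(trace))

# synthetic trace: two reflections, simulated attenuation, then correction
dt, alpha = 1e-9, 5e6            # 1 ns sampling; decay constant (assumed)
trace = np.zeros(512)
trace[100], trace[400] = 1.0, 1.0
attenuated = time_gain(trace, dt, -alpha)   # simulate exp(-alpha*t) decay
gained = time_gain(attenuated, dt, alpha)   # time gain restores amplitudes
whitened = spectral_balance(gained)         # zero-phase spectral balancing
```

Because the balancing divides the spectrum by (a regularized copy of) its own amplitude, the phase spectrum is untouched, which is exactly the property the thesis exploits in media where attenuation mainly distorts amplitudes.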
Abstract:
In this communication, we report results of three-dimensional hydrodynamic computations using equations of state with a critical end point, as suggested by lattice QCD. Among the results are an increase of the multiplicity in the mid-rapidity region and a larger elliptic-flow parameter v₂. We also discuss the effects of initial-condition fluctuations and of continuous emission.
Abstract:
Simulations of overshooting, tropical deep convection using a Cloud Resolving Model with bulk microphysics are presented in order to examine the effect on the water content of the TTL (Tropical Tropopause Layer) and lower stratosphere. This case study is a subproject of the HIBISCUS (Impact of tropical convection on the upper troposphere and lower stratosphere at global scale) campaign, which took place in Bauru, Brazil (22° S, 49° W), from the end of January to early March 2004. Comparisons between 2-D and 3-D simulations suggest that the use of 3-D dynamics is vital in order to capture the mixing between the overshoot and the stratospheric air, which caused evaporation of ice and resulted in an overall moistening of the lower stratosphere. In contrast, a dehydrating effect was predicted by the 2-D simulation due to the extra time, allowed by the lack of mixing, for the ice transported to the region to precipitate out of the overshoot air. Three different strengths of convection are simulated in 3-D by applying successively lower heating rates (used to initiate the convection) in the boundary layer. Moistening is produced in all cases, indicating that convective vigour is not a factor in whether moistening or dehydration is produced by clouds that penetrate the tropopause, since the weakest case only just did so. An estimate of the moistening effect of these clouds on an air parcel traversing a convective region is made based on the domain-mean simulated moistening and the frequency of convective events observed by the IPMet (Instituto de Pesquisas Meteorológicas, Universidade Estadual Paulista) radar (S-band type at 2.8 GHz) to have the same 10 dBZ echo top height as those simulated. These suggest a fairly significant mean moistening of 0.26, 0.13 and 0.05 ppmv in the strongest, medium and weakest cases, respectively, for heights between 16 and 17 km.
Since the cold point and WMO (World Meteorological Organization) tropopause in this region lie at ∼15.9 km, this is likely to represent direct stratospheric moistening. Much more moistening is predicted for the 15-16 km height range, with increases of 0.85-2.8 ppmv. However, this air would need to be lofted through the tropopause by the Brewer-Dobson circulation in order to have a stratospheric effect. Whether this is likely is uncertain and, in addition, the dehydration of air as it passes through the cold trap and the number of times that trajectories sample convective regions need to be taken into account to gauge the overall stratospheric effect. Nevertheless, the results suggest a potentially significant role for convection in determining the stratospheric water content. Sensitivity tests exploring the impact of increased aerosol numbers in the boundary layer suggest that a corresponding rise in cloud droplet numbers at cloud base would increase the number concentrations of the ice crystals transported to the TTL, which had the effect of reducing the fall speeds of the ice and causing a ∼13% rise in the mean vapour increase in both the 15-16 and 16-17 km height ranges when compared to the control case. Increases in the total water were much larger, being 34% and 132% higher for the same height ranges, but it is unclear whether the extra ice will be able to evaporate before precipitating from the region. These results suggest a possible impact of natural and anthropogenic aerosols on how convective clouds affect stratospheric moisture levels.
Abstract:
The ray connecting two points in a laterally varying, piecewise homogeneous anisotropic medium is computed using 3D continuation techniques. Combined with algorithms for solving the initial-value problem, the method can be extended to the computation of qS1 and qS2 events. The algorithm is as efficient and robust as implementations of continuation techniques for isotropic media. Routines based on this algorithm have several applications of interest. First, in the modelling and inversion of elastic parameters in the presence of anisotropy. Second, the Newton-Raphson iterations produce wavefront attributes such as the slowness vector and the Hessian matrix of the traveltime, quantities that allow the determination of geometrical spreading and second-order approximations of the traveltime. These attributes make it possible to compute amplitudes along the ray and to investigate the effects of anisotropy on the CRS stack in simple velocity models.
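A two-point ray computation of the kind described, Newton-Raphson iterations on the traveltime whose second derivative (the traveltime Hessian) falls out of the iteration, can be illustrated in the much simpler isotropic case of a single flat interface, where Fermat's principle reduces to a 1-D Newton solve. This is an illustrative sketch only, not the anisotropic continuation algorithm of the abstract.

```python
import math

def two_point_ray(src, rec, z_int, v1, v2, tol=1e-10, max_iter=50):
    """Fermat two-point ray across one flat interface at depth z_int.

    src = (x, z) above the interface, rec = (x, z) below it.
    Newton iteration on dT/dx = 0, where T(x) is the traveltime through a
    candidate crossing point (x, z_int); the second derivative used by
    Newton is the 1-D Hessian of the traveltime.
    """
    xs, zs = src
    xr, zr = rec
    x = 0.5 * (xs + xr)                       # initial guess: midpoint
    for _ in range(max_iter):
        d1 = math.hypot(x - xs, z_int - zs)
        d2 = math.hypot(xr - x, zr - z_int)
        g = (x - xs) / (v1 * d1) - (xr - x) / (v2 * d2)          # dT/dx
        h = (z_int - zs) ** 2 / (v1 * d1 ** 3) \
            + (zr - z_int) ** 2 / (v2 * d2 ** 3)                 # d2T/dx2 > 0
        step = g / h
        x -= step
        if abs(step) < tol:
            break
    d1 = math.hypot(x - xs, z_int - zs)
    d2 = math.hypot(xr - x, zr - z_int)
    return x, d1 / v1 + d2 / v2

x_cross, t_min = two_point_ray((0.0, 0.0), (1000.0, 800.0),
                               z_int=400.0, v1=1500.0, v2=3000.0)
```

At the converged crossing point the stationarity condition dT/dx = 0 is exactly Snell's law, sin θ₁ / v1 = sin θ₂ / v2, which makes the result easy to verify.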
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The representation of real objects in virtual environments has applications in many areas, such as cartography, mixed reality, and reverse engineering. These objects can be generated in two ways: manually, with CAD (Computer Aided Design) tools, or automatically, by means of surface-reconstruction techniques. The simpler the 3D model, the easier it is to process and store. However, reconstruction methods can generate very detailed virtual elements, which can cause problems when processing the resulting mesh, because it has many edges and polygons that must be checked at visualization time. In this context, simplification algorithms can be applied to eliminate polygons from the resulting mesh, without changing its topology, generating a lighter mesh with fewer irrelevant details. This project comprised the study, implementation, and comparative testing of simplification algorithms applied to meshes generated by a reconstruction pipeline based on point clouds. The work adds a simplification step as a complement to the pipeline developed by (ONO et al., 2012), which performs reconstruction from point clouds obtained with a Microsoft Kinect, followed by the Poisson algorithm.
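A deliberately simple example of mesh simplification is grid-based vertex clustering: snap vertices to a uniform grid, merge each cell's vertices into their centroid, and drop collapsed triangles. Note that, unlike the topology-preserving algorithms the project targets, vertex clustering can change topology; the sketch below (all names hypothetical) is only meant to make the polygon-elimination idea concrete.

```python
import numpy as np

def simplify_vertex_clustering(vertices, faces, cell_size):
    """Simplify a triangle mesh by snapping vertices to a uniform grid.

    vertices : (V, 3) float array;  faces : (F, 3) int index array.
    Vertices in the same grid cell collapse to their centroid, and
    triangles that lose a corner (two indices now equal) are dropped.
    """
    cells = np.floor(vertices / cell_size).astype(np.int64)
    uniq, inverse = np.unique(cells, axis=0, return_inverse=True)
    inverse = inverse.ravel()      # guard against NumPy-version shape quirks
    n_new = uniq.shape[0]
    counts = np.bincount(inverse, minlength=n_new).astype(float)
    new_vertices = np.empty((n_new, 3))
    for k in range(3):                         # centroid per cluster
        new_vertices[:, k] = np.bincount(
            inverse, weights=vertices[:, k], minlength=n_new) / counts
    new_faces = inverse[faces]
    ok = ((new_faces[:, 0] != new_faces[:, 1]) &
          (new_faces[:, 1] != new_faces[:, 2]) &
          (new_faces[:, 0] != new_faces[:, 2]))
    return new_vertices, new_faces[ok]

# a unit square (two triangles): a coarse grid collapses it entirely,
# a fine grid leaves it untouched
v = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
f = np.array([[0, 1, 2], [1, 3, 2]])
nv_coarse, nf_coarse = simplify_vertex_clustering(v, f, cell_size=3.0)
nv_fine, nf_fine = simplify_vertex_clustering(v, f, cell_size=0.6)
```

The `cell_size` parameter directly trades detail for polygon count, which is the same trade-off the comparative tests in the project evaluate.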
Abstract:
Pós-graduação em Bases Gerais da Cirurgia - FMB
Abstract:
Micellar solutions of polystyrene-block-polybutadiene and polystyrene-block-polyisoprene in propane are found to exhibit significantly lower cloud pressures than the corresponding hypothetical nonmicellar solutions. Such a cloud-pressure reduction indicates the extent to which micelle formation enhances the apparent diblock solubility in near-critical and hence compressible propane. Concentration-dependent pressure-temperature points beyond which no micelles can be formed, referred to as the micellization end points, are found to depend on the block type, size, and ratio. The cloud-pressure reduction and the micellization end point measured for styrene-diene diblocks in propane should be characteristic of all amphiphilic diblock copolymer solutions that form micelles in compressible solvents.
Abstract:
[EN] In this paper we present a variational technique for the reconstruction of 3D cylindrical surfaces. Roughly speaking, by a cylindrical surface we mean a surface that can be parameterized using the projection on a cylinder in terms of two coordinates, representing the displacement and angle in a cylindrical coordinate system, respectively. The starting point for our method is a set of different views of a cylindrical surface, together with a precomputed disparity-map estimation between pairs of images. The proposed variational technique is based on an energy minimization that balances, on the one hand, the regularity of the cylindrical function given by the distance of the surface points to the cylinder axis and, on the other hand, the distance between the projection of the surface points on the images and the location expected from the precomputed disparity-map estimation. One interesting advantage of this approach is that we regularize the 3D surface by means of a two-dimensional minimization problem. We show experimental results for large stereo sequences.
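The energy balance described, a data term pulling the radius function toward the observations against a regularity term penalizing its gradient over the two-dimensional (height, angle) domain, can be sketched as plain gradient descent on a discretized quadratic energy. The stereo/disparity data term is replaced here by noisy radius observations, so everything below is an illustrative assumption rather than the paper's method.

```python
import numpy as np

def smooth_cylindrical_radius(r_obs, lam=1.0, n_iter=500, step=0.05):
    """Gradient descent on  E(r) = ||r - r_obs||^2 + lam * ||grad r||^2
    over a (height, angle) grid, periodic along the angle axis.

    r_obs : (H, A) noisy estimates of the radius function r(z, theta)
    """
    r = r_obs.copy()
    for _ in range(n_iter):
        # discrete Laplacian: periodic in theta, Neumann along z
        lap = np.roll(r, 1, axis=1) + np.roll(r, -1, axis=1) - 2.0 * r
        lap += np.vstack([r[1:2] - r[0:1],
                          r[2:] - 2.0 * r[1:-1] + r[:-2],
                          r[-2:-1] - r[-1:]])
        grad_E = 2.0 * (r - r_obs) - 2.0 * lam * lap
        r -= step * grad_E
    return r

rng = np.random.default_rng(0)
r_noisy = 1.0 + 0.1 * rng.standard_normal((16, 32))   # unit cylinder + noise
r_smooth = smooth_cylindrical_radius(r_noisy)
```

The key property the abstract highlights survives in the sketch: the 3D surface is regularized entirely through a two-dimensional minimization over the cylindrical parameter domain.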
Abstract:
[EN] In this paper, we present a vascular tree model made with synthetic materials that allows us to obtain images for 3D reconstruction. We have used PVC tubes of several diameters and lengths, which let us evaluate the accuracy of our 3D reconstruction. To calibrate the camera we have used a corner detector. We have also used optical flow techniques to track the points forwards and backwards through the image sequence. We describe two general techniques to extract a sequence of corresponding points from multiple views of an object. The resulting sequence of points is used later to reconstruct a set of 3D points representing the object surfaces in the scene. We have performed the 3D reconstruction by choosing a pair of images at random and computing the projection error. After several repetitions, we have found the best 3D location for the point.
Abstract:
[EN] In this paper, we present a vascular tree model made with synthetic materials that allows us to obtain images for 3D reconstruction. To create this model, we have used PVC tubes of several diameters and lengths, which let us evaluate the accuracy of our 3D reconstruction. We have performed the 3D reconstruction from a series of images of our model, after calibrating the camera. To calibrate it we have used a corner detector. We have also used optical flow techniques to track the points forwards and backwards through the image sequence. Once we have the set of images in which a point has been located, we perform the 3D reconstruction by choosing a pair of images at random and computing the projection error. After several repetitions, we have found the best 3D location for the point.
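Reconstructing a 3D point from a chosen pair of views and scoring it by projection error can be sketched with standard linear (DLT) triangulation. The camera matrices and point below are toy values, not the paper's calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : (3, 4) camera projection matrices
    x1, x2 : (2,) pixel coordinates of the matched point in each view
    """
    A = np.stack([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                       # null vector = homogeneous 3D point
    return X[:3] / X[3]

def project(P, X):
    """Project a 3D point with camera P, returning pixel coordinates."""
    p = P @ np.append(X, 1.0)
    return p[:2] / p[2]

def reprojection_error(P, X, x):
    """Pixel distance between the projection of X and the observation x."""
    return float(np.linalg.norm(project(P, X) - x))

# toy setup: two shifted pinhole cameras observing a known 3D point
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1, x2 = project(P1, X_true), project(P2, X_true)
X_hat = triangulate(P1, P2, x1, x2)
```

Repeating this over many random image pairs and keeping the candidate with the lowest reprojection error mirrors the selection strategy the abstract describes.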