998 results for Dense method
Abstract:
13th International Conference on Autonomous Robot Systems (Robotica), 2013, Lisboa
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
A relaxation method is employed to study a rotating dense Bose-Einstein condensate beyond the Thomas-Fermi approximation. We use a slave-boson model to describe the strongly interacting condensate and derive a generalized nonlinear Schrödinger equation with a kinetic term for the rotating condensate. In comparison with previous calculations based on the Thomas-Fermi approximation, significant improvements are found in regions where the condensate in the trap potential is not smooth. The critical angular velocity for vortex formation is higher than the Thomas-Fermi prediction.
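For intuition only, here is a minimal imaginary-time relaxation sketch for a 1D Gross-Pitaevskii-type nonlinear Schrödinger equation; it is not the paper's slave-boson model or its rotating setting, and the grid, trap, and interaction strength are assumed values.

```python
# Minimal sketch: imaginary-time (relaxation) propagation of a 1D
# Gross-Pitaevskii-type equation via the split-step Fourier method.
# Harmonic trap and interaction strength g are illustrative assumptions.
import numpy as np

N, L = 512, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
V = 0.5 * x**2                       # harmonic trap potential
g = 50.0                             # nonlinear interaction strength
dt = 1e-3                            # imaginary-time step

psi = np.exp(-x**2).astype(complex)  # initial guess
for _ in range(20000):
    # half kinetic step, full potential + nonlinear step, half kinetic step
    psi = np.fft.ifft(np.exp(-0.25 * dt * k**2) * np.fft.fft(psi))
    psi *= np.exp(-dt * (V + g * np.abs(psi)**2))
    psi = np.fft.ifft(np.exp(-0.25 * dt * k**2) * np.fft.fft(psi))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # renormalize each step

density = np.abs(psi)**2             # relaxed ground-state density
print(f"peak density: {density.max():.4f}")
```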
Abstract:
Free-space optical interconnects (FSOIs), made up of dense arrays of vertical-cavity surface-emitting lasers, photodetectors, and microlenses, can be used for implementing high-speed, high-density communication links, and hence for replacing inferior electrical interconnects. A major concern in the design of FSOIs is minimizing the optical channel cross talk arising from laser beam diffraction. In this article we introduce modifications to the mode expansion method of Tanaka et al. [IEEE Trans. Microwave Theory Tech. MTT-20, 749 (1972)] to make it an efficient tool for the modelling and design of FSOIs in the presence of diffraction. We demonstrate that our modified mode expansion method has accuracy similar to the exact solution of the Huygens-Kirchhoff diffraction integral in cases of both weak and strong beam clipping, and that it is much more accurate than the existing approximations. The strength of the method is twofold: first, it is applicable in the region of pronounced diffraction (strong beam clipping), where all other approximations fail; second, unlike the exact-solution method, it can be used efficiently for modelling diffraction on multiple apertures. These features make the mode expansion method useful for the design and optimization of free-space architectures containing multiple optical elements, including optical interconnects and optical clock distribution systems. (C) 2003 Optical Society of America.
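As a rough, generic illustration of the mode-expansion idea (not the authors' modified method), the sketch below projects a hard-clipped 1D Gaussian beam onto Hermite-Gauss modes; the waist, aperture half-width, and mode count are assumptions.

```python
# Sketch of the mode-expansion idea in 1D: project a hard-clipped Gaussian
# beam onto Hermite-Gauss modes; each mode then propagates analytically
# (via its Gouy phase), so multi-aperture systems reduce to re-projections.
# Waist w0, aperture half-width a, and mode count M are assumed values.
import numpy as np
from math import factorial
from numpy.polynomial.hermite import hermval

w0, a, M = 1.0, 1.2, 30
x = np.linspace(-5.0, 5.0, 4096)
dx = x[1] - x[0]

def hg(n, x, w):
    """Normalized 1D Hermite-Gauss mode of order n with waist w."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    norm = (2.0 / np.pi) ** 0.25 / np.sqrt(w * 2.0**n * factorial(n))
    return norm * hermval(np.sqrt(2.0) * x / w, c) * np.exp(-(x / w) ** 2)

clipped = hg(0, x, w0) * (np.abs(x) <= a)   # fundamental mode after aperture

# expansion coefficients c_n = <u_n | clipped field>
coeffs = np.array([np.sum(hg(n, x, w0) * clipped) * dx for n in range(M)])
print(f"power captured by {M} modes: {np.sum(coeffs**2):.4f}")
```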
Abstract:
Proceedings of the International Conference on Computer Vision Theory and Applications, 361-365, 2013, Barcelona, Spain
Abstract:
In this paper, we propose a new paradigm to carry out the registration task with a dense deformation field derived from the optical flow model and the active contour method. The proposed framework merges different tasks such as segmentation, regularization, incorporation of prior knowledge and registration into a single framework. The active contour model is at the core of our framework, even if it is used in a different way than in the standard approaches. Indeed, active contours are a well-known technique for image segmentation. This technique consists in finding the curve which minimizes an energy functional designed to be minimal when the curve has reached the object contours. That way, we get accurate and smooth segmentation results. So far, the active contour model has been used to segment objects lying in images from boundary-based, region-based or shape-based information. Our registration technique will profit from all these families of active contours to determine a dense deformation field defined on the whole image. A well-suited application of our model is atlas registration in medical imaging, which consists in automatically delineating anatomical structures. We present results on 2D synthetic images to show the performance of our non-rigid deformation field based on a natural registration term. We also present registration results on real 3D medical data with a large space-occupying tumor substantially deforming surrounding structures, which constitutes a highly challenging problem.
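The following demons-style sketch illustrates only the optical-flow ingredient of such a framework, i.e. computing a dense deformation field by gradient descent on an intensity-matching term with Gaussian smoothing as the regularizer; the active-contour coupling is not reproduced here, and the step size and smoothing values are assumptions.

```python
# Demons-style sketch: estimate a dense displacement field u so that
# moving(x + u) matches fixed(x), with Gaussian smoothing of the field
# acting as a diffusion-like regularizer. Parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def dense_register(fixed, moving, iters=200, step=0.5, sigma=2.0):
    """Return a displacement field of shape (2, H, W)."""
    H, W = fixed.shape
    u = np.zeros((2, H, W))
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    for _ in range(iters):
        warped = map_coordinates(moving, [yy + u[0], xx + u[1]], order=1)
        diff = warped - fixed                 # pointwise intensity mismatch
        gy, gx = np.gradient(warped)          # gradient of the warped image
        u[0] -= step * diff * gy              # steepest-descent update
        u[1] -= step * diff * gx
        u[0] = gaussian_filter(u[0], sigma)   # regularize the field
        u[1] = gaussian_filter(u[1], sigma)
    return u
```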
Abstract:
Atlas registration is a recognized paradigm for the automatic segmentation of normal MR brain images. Unfortunately, atlas-based segmentation has been of limited use in the presence of large space-occupying lesions. In fact, brain deformations induced by such lesions add to normal anatomical variability, and they may dramatically shift and deform anatomically or functionally important brain structures. In this work, we focus on the problem of inter-subject registration of MR images with large tumors inducing a significant shift of surrounding anatomical structures. First, a brief survey of the existing methods proposed to deal with this problem is presented. This introduces the discussion of the requirements and desirable properties that we consider necessary for a registration method in this context: a dense and smooth deformation field and a model of lesion growth, different deformability for some structures, more prior knowledge, and voxel-based features with a similarity measure robust to intensity differences. In the second part of this work, we propose a new approach that overcomes some of the main limitations of the existing techniques while complying with most of the requirements above. Our algorithm combines the mathematical framework for computing a variational flow proposed by Hermosillo et al. [G. Hermosillo, C. Chefd'Hotel, O. Faugeras, A variational approach to multi-modal image matching, Tech. Rep., INRIA (February 2001)] with the radial lesion growth pattern presented by Bach Cuadra et al. [M. Bach Cuadra, C. Pollo, A. Bardera, O. Cuisenaire, J.-G. Villemure, J.-Ph. Thiran, Atlas-based segmentation of pathological MR brain images using a model of lesion growth, IEEE Trans. Med. Imag. 23 (10) (2004) 1301-1314]. Results on patients with a meningioma are visually assessed and compared to those obtained with the most similar method from the state of the art.
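For illustration, a hedged sketch of a radial lesion-growth displacement field of the general kind such methods combine with a variational registration flow; the seed position, radius, and decay profile are assumptions, not the published model.

```python
# Sketch of a radial lesion-growth displacement field: voxels are pushed
# outward from a lesion seed, with full displacement at the lesion boundary
# decaying with distance. Seed, radius, and falloff are assumed values.
import numpy as np

def radial_growth_field(shape, seed, radius, falloff=30.0):
    """Displacement field (ndim, *shape) modelling outward lesion growth."""
    grids = np.mgrid[tuple(slice(0, s) for s in shape)].astype(float)
    vec = grids - np.asarray(seed, dtype=float).reshape(-1, *([1] * len(shape)))
    r = np.sqrt((vec**2).sum(axis=0)) + 1e-9        # distance to the seed
    # full push at the boundary, exponential decay outside the lesion
    mag = radius * np.exp(-np.maximum(r - radius, 0.0) / falloff)
    return vec / r * mag

# usage: field = radial_growth_field((64, 64, 64), seed=(32, 32, 32), radius=8)
```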
Abstract:
Much effort is being expended by various state, federal, and private organizations on the protection and preservation of concrete bridge floors. The generally recognized culprit is the chloride ion from deicing salt, which reaches the reinforcing steel and, along with water and oxygen, causes corrosion. The corrosion process exerts pressure that eventually causes cracks and spalls in the bridge floor. The reinforcing has been treated and coated, various types of "waterproof" membranes have been placed on the deck surface, decks have been surfaced with dense and modified concretes, decks have been electrically protected, and attempts to internally seal the concrete have been made. As yet, no one method has been proven and accepted by the various government agencies as being the "best" when considering initial cost, application effort, length and effectiveness of protection, etc.
Abstract:
The use of deicing salts in this part of the country is a necessity to remove ice from our bridges. The use of these salts has always been a problem, since the chloride ions penetrate the concrete, reach the steel, and cause corrosion, which eventually causes deterioration of both the steel and the concrete. One method used to try to prevent this was to apply a waterproof membrane to the concrete after it was placed. This method did help, but was not cost-effective, as the longevity of the membrane system was relatively short. For this reason, this research project was initiated. After the original deck was placed, a second layer of concrete about 1 1/2" thick was placed on top. Biennial evaluation of the decks included testing for delaminations and steel corrosion. Cores were also obtained for chloride analysis. Testing and observations showed the two-layer bridge deck to be effective in preventing corrosion. Since the time this project was initiated, epoxy-coated steel has been introduced and is a cost-effective way to protect the steel from corrosion.
Abstract:
A predominance of small, dense low-density lipoprotein (LDL) is a major component of an atherogenic lipoprotein phenotype, and a common, but modifiable, source of increased risk for coronary heart disease in the free-living population. While much of the atherogenicity of small, dense LDL is known to arise from its structural properties, the extent to which an increase in the number of small, dense LDL particles (hyper-apoprotein B) contributes to this risk of coronary heart disease is currently unknown. This study reports a method for the recruitment of free-living individuals with an atherogenic lipoprotein phenotype for a fish-oil intervention trial, and critically evaluates the relationship between LDL particle number and the predominance of small, dense LDL. In this group, volunteers were selected through local general practices on the basis of a moderately raised plasma triacylglycerol (triglyceride) level (>1.5 mmol/l) and a low concentration of high-density-lipoprotein cholesterol (<1.1 mmol/l). The screening of LDL subclasses revealed a predominance of small, dense LDL (LDL subclass pattern B) in 62% of the cohort. As expected, subjects with LDL subclass pattern B were characterized by higher plasma triacylglycerol and lower high-density lipoprotein cholesterol (<1.1 mmol/l) levels and, less predictably, by lower LDL cholesterol and apoprotein B levels (P<0.05; LDL subclass A compared with subclass B). While hyper-apoprotein B was detected in only five subjects, the relative percentage of small, dense LDL-III in subjects with subclass B showed an inverse relationship with LDL apoprotein B (r=-0.57; P<0.001), identifying a subset of individuals with plasma triacylglycerol above 2.5 mmol/l and a low concentration of LDL almost exclusively in a small and dense form. These findings indicate that a predominance of small, dense LDL and hyper-apoprotein B do not always co-exist in free-living groups. Moreover, if coronary risk increases with increasing LDL particle number, these results imply that the risk arising from a predominance of small, dense LDL may actually be reduced in certain cases when plasma triacylglycerol exceeds 2.5 mmol/l.
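The stated recruitment criteria reduce to a simple screen; the sketch below expresses them directly, with illustrative record fields.

```python
# Minimal sketch of the recruitment screen described above: volunteers
# qualify with moderately raised plasma triacylglycerol (> 1.5 mmol/l)
# and low HDL cholesterol (< 1.1 mmol/l). Record fields are illustrative.
def eligible(tag_mmol_l: float, hdl_mmol_l: float) -> bool:
    """Inclusion rule: TAG > 1.5 mmol/l and HDL-C < 1.1 mmol/l."""
    return tag_mmol_l > 1.5 and hdl_mmol_l < 1.1

volunteers = [
    {"id": 1, "tag": 2.1, "hdl": 0.9},   # meets both criteria
    {"id": 2, "tag": 1.2, "hdl": 1.0},   # triacylglycerol too low
]
recruited = [v["id"] for v in volunteers if eligible(v["tag"], v["hdl"])]
print(recruited)                          # -> [1]
```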
Abstract:
Anthropogenic emissions of heat and exhaust gases play an important role in the atmospheric boundary layer, altering air quality, greenhouse gas concentrations and the transport of heat and moisture at various scales. This is particularly evident in urban areas where emission sources are integrated in the highly heterogeneous urban canopy layer and directly linked to human activities which exhibit significant temporal variability. It is common practice to use eddy covariance observations to estimate turbulent surface fluxes of latent heat, sensible heat and carbon dioxide, which can be attributed to a local scale source area. This study provides a method to assess the influence of micro-scale anthropogenic emissions on heat, moisture and carbon dioxide exchange in a highly urbanized environment for two sites in central London, UK. A new algorithm for the Identification of Micro-scale Anthropogenic Sources (IMAS) is presented, with two aims. Firstly, IMAS filters out the influence of micro-scale emissions and allows for the analysis of the turbulent fluxes representative of the local scale source area. Secondly, it is used to give a first order estimate of anthropogenic heat flux and carbon dioxide flux representative of the building scale. The algorithm is evaluated using directional and temporal analysis. The algorithm is then used at a second site which was not incorporated in its development. The spatial and temporal local scale patterns, as well as micro-scale fluxes, appear physically reasonable and can be incorporated in the analysis of long-term eddy covariance measurements at the sites in central London. In addition to the new IMAS-technique, further steps in quality control and quality assurance used for the flux processing are presented. The methods and results have implications for urban flux measurements in dense urbanized settings with significant sources of heat and greenhouse gases.
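A hedged sketch of the kind of filtering such an algorithm performs (not the published IMAS criteria): flag flux intervals whose wind direction points at a known micro-scale source sector and whose CO2 flux spikes relative to a robust baseline.

```python
# Sketch: flag flux intervals attributable to a micro-scale source via a
# wind-direction sector and a robust flux-spike test. The sector bounds
# and threshold k are illustrative assumptions.
import numpy as np

def flag_microscale(wind_dir_deg, co2_flux, sector=(40.0, 70.0), k=3.0):
    """Boolean mask of intervals attributed to a micro-scale source."""
    wd = np.asarray(wind_dir_deg, dtype=float)
    f = np.asarray(co2_flux, dtype=float)
    in_sector = (wd >= sector[0]) & (wd <= sector[1])
    med = np.median(f)
    mad = np.median(np.abs(f - med)) + 1e-12        # robust spread estimate
    spike = (f - med) / (1.4826 * mad) > k          # robust z-score
    return in_sector & spike

# usage: local-scale analysis keeps the complement of the flagged intervals
# local_flux = co2_flux[~flag_microscale(wind_dir, co2_flux)]
```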
Abstract:
This study describes a simple technique that improves a recently developed 3D sub-diffraction imaging method based on three-photon absorption in commercially available quantum dots. The method combines imaging of biological samples via tri-exciton generation in quantum dots with deconvolution and spectral multiplexing, resulting in a novel approach for multi-color imaging of even thick biological samples at 1.4- to 1.9-fold better spatial resolution. This approach is realized on a conventional confocal microscope equipped with standard continuous-wave lasers. We demonstrate the potential of multi-color tri-exciton imaging of quantum dots combined with deconvolution on viral vesicles in lentivirally transduced cells, as well as on intermediate filaments in three-dimensional clusters of mouse-derived neural stem cells (neurospheres) and dense microtubule arrays in myotubes formed by stacks of differentiated C2C12 myoblasts.
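For illustration, a minimal Richardson-Lucy deconvolution sketch with an assumed Gaussian point-spread function, standing in for the deconvolution step described above; real pipelines use measured PSFs.

```python
# Richardson-Lucy deconvolution sketch with a Gaussian PSF. The PSF width
# and iteration count are illustrative assumptions; because the Gaussian
# is symmetric, the same filter serves as PSF and flipped PSF.
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(image, sigma=1.5, iters=30):
    """Deconvolve `image` assuming a symmetric Gaussian PSF of width sigma."""
    img = np.asarray(image, dtype=float)
    est = np.full_like(img, img.mean())              # flat initial estimate
    for _ in range(iters):
        blurred = gaussian_filter(est, sigma)        # forward model
        ratio = img / (blurred + 1e-12)              # data / model
        est *= gaussian_filter(ratio, sigma)         # multiplicative update
    return est
```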
Abstract:
The presence of lingual papillae and nerve endings in the middle region of the tongue mucosa of the collared peccary (Tayassu tajacu) was studied using scanning electron microscopy and light microscopy, based upon the silver impregnation method. The middle region of the tongue mucosa revealed numerous filiform and fungiform papillae. The thick epithelial layer showed epithelial cells and a dense connective tissue layer containing nerve fibre bundles and capillaries. The sensory nerve endings, intensely stained by silver impregnation, were usually non-encapsulated and extended into the connective tissue of the filiform and fungiform papillae, very close to the epithelial cells. In some regions, the sensory nerve fibres formed a dense and complex network of fine fibrils. The presence of these nerve fibrils may characterize the mechanisms of transmission of sensory impulses in the tongue mucosa.
Abstract:
The immersed boundary method is a versatile tool for the investigation of flow-structure interaction. In a large number of applications, the immersed boundaries or structures are very stiff, and strong tangential forces on these interfaces induce a well-known, severe time-step restriction for explicit discretizations. This excessive stability constraint can be removed with fully implicit or suitable semi-implicit schemes, but at a seemingly prohibitive computational cost. While economical alternatives have been proposed recently for some special cases, there is a practical need for a computationally efficient approach that can be applied more broadly. In this context, we revisit a robust semi-implicit discretization introduced by Peskin in the late 1970s which has received renewed attention recently. This discretization, in which the spreading and interpolation operators are lagged, leads to a linear system of equations for the interface configuration at the future time when the interfacial force is linear. However, this linear system is large and dense, and thus it is challenging to streamline its solution. Moreover, while the same linear system or one of similar structure could potentially be used in Newton-type iterations, nonlinear and highly stiff immersed structures pose additional challenges to iterative methods. In this work, we address these problems and propose cost-effective computational strategies for solving Peskin's lagged-operators type of discretization. We do this by first constructing a sufficiently accurate approximation to the system's matrix, and we obtain a rigorous estimate for this approximation. This matrix is expeditiously computed by using a combination of pre-calculated values and interpolation. The availability of a matrix allows for more efficient matrix-vector products and facilitates the design of effective iterative schemes. We propose efficient iterative approaches to deal with both linear and nonlinear interfacial forces and simple or complex immersed structures with tethered or untethered points. One of these iterative approaches employs a splitting in which we first solve a linear problem for the interfacial force and then use a nonlinear iteration to find the interface configuration corresponding to this force. We demonstrate that the proposed approach is several orders of magnitude more efficient than the standard explicit method. In addition to considering the standard elliptical drop test case, we show both the robustness and efficacy of the proposed methodology with a 2D model of a heart valve. (C) 2009 Elsevier Inc. All rights reserved.
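For a linear interfacial force F(X) = -KX, the lagged-operator step reduces to a linear solve for the future interface configuration; the sketch below shows only that structure, with stand-in dense matrices rather than the paper's efficient approximation and iterative schemes.

```python
# Structural sketch of Peskin's lagged-operator semi-implicit step for a
# linear force F(X) = -K X: spreading S (and its transpose, interpolation)
# are frozen at the current interface, giving a linear system for X_new.
# J, S, and K are stand-in matrices assembled elsewhere.
import numpy as np

def lagged_semi_implicit_step(X, J, S, K, dt):
    """One step: solve (I + dt * S^T J S K) X_new = X.

    X : (m,)   interface configuration
    J : (n, n) discrete fluid solve (maps body force to velocity)
    S : (n, m) spreading matrix at the current interface (lagged)
    K : (m, m) stiffness of the linear force F(X) = -K X
    """
    A = np.eye(X.size) + dt * (S.T @ J @ S @ K)
    return np.linalg.solve(A, X)   # dense solve; iterative methods in practice
```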
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)