994 results for virtual topology, decomposition, hex meshing algorithms
Abstract:
Background and Purpose: Several different methods of teaching laparoscopic skills have been advocated, with virtual reality surgical simulation (VRSS) being the most popular. However, there is not yet a consensus on its effectiveness in improving surgical performance. The purpose of this study was to determine whether practicing surgical skills in a virtual reality simulator results in improved surgical performance. Materials and Methods: Fifteen medical students recruited for the study were divided into three groups. Group I (control) did not receive any VRSS training. For 10 weeks, group II trained basic laparoscopic skills (camera handling, cutting, peg transfer, and clipping) in a VRSS laparoscopic skills simulator. Group III practiced the same skills and, in addition, performed a simulated cholecystectomy. All students then performed a cholecystectomy in a swine model. Their performance was reviewed by two experienced surgeons. The following parameters were evaluated: gallbladder pedicle dissection time, clipping time, time for cutting the pedicle, gallbladder removal time, total procedure time, and blood loss. Results: With practice, each individual improved in most of the evaluated parameters. However, there were no statistical differences in any of the evaluated parameters between those who did and did not undergo VRSS training. Conclusion: VRSS training is assumed to be an effective tool for learning and practicing laparoscopic skills. In this study, we could not demonstrate that VRSS training resulted in improved surgical performance. It may be useful, however, in familiarizing surgeons with laparoscopic surgery. More effective methods of teaching laparoscopic skills should be evaluated to help improve surgical performance.
Abstract:
We study the star/galaxy classification efficiency of 13 different decision tree algorithms applied to photometric objects in the Sloan Digital Sky Survey Data Release Seven (SDSS-DR7). Each algorithm is defined by a set of parameters which, when varied, produce different final classification trees. We extensively explore the parameter space of each algorithm, using the set of 884,126 SDSS objects with spectroscopic data as the training set. The efficiency of star-galaxy separation is measured using the completeness function. We find that the Functional Tree algorithm (FT) yields the best results as measured by the mean completeness in two magnitude intervals: 14 <= r <= 21 (85.2%) and r >= 19 (82.1%). We compare the performance of the tree generated with the optimal FT configuration to the classifications provided by the SDSS parametric classifier, 2DPHOT, and Ball et al. We find that our FT classifier is comparable to or better in completeness over the full magnitude range 15 <= r <= 21, with much lower contamination than all but the Ball et al. classifier. At the faintest magnitudes (r > 19), our classifier is the only one that maintains high completeness (> 80%) while simultaneously achieving low contamination (≈2.5%). We also examine the SDSS parametric classifier (psfMag - modelMag) to see if the dividing line between stars and galaxies can be adjusted to improve the classifier. We find that currently stars in close pairs are often misclassified as galaxies, and suggest a new cut to improve the classifier. Finally, we apply our FT classifier to separate stars from galaxies in the full set of 69,545,326 SDSS photometric objects in the magnitude range 14 <= r <= 21.
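The completeness and contamination figures quoted above can be made concrete with a short sketch. The functions below follow the usual definitions (fraction of true members of a class that are recovered, and fraction of the selected sample that belongs elsewhere); the 1 = galaxy, 0 = star label encoding is an assumption for illustration, not from the paper.

```python
def completeness(true_labels, pred_labels, cls=1):
    """Fraction of objects truly of class `cls` that the classifier recovers."""
    total = sum(1 for t in true_labels if t == cls)
    found = sum(1 for t, p in zip(true_labels, pred_labels)
                if t == cls and p == cls)
    return found / total if total else 0.0

def contamination(true_labels, pred_labels, cls=1):
    """Fraction of objects *classified* as `cls` that actually belong elsewhere."""
    selected = sum(1 for p in pred_labels if p == cls)
    wrong = sum(1 for t, p in zip(true_labels, pred_labels)
                if p == cls and t != cls)
    return wrong / selected if selected else 0.0
```

High completeness with low contamination, as reported for the FT classifier at faint magnitudes, means both numbers are favorable simultaneously rather than traded off.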
Abstract:
We propose and analyze two different Bayesian online algorithms for learning in discrete Hidden Markov Models and compare their performance with the already known Baldi-Chauvin Algorithm. Using the Kullback-Leibler divergence as a measure of generalization we draw learning curves in simplified situations for these algorithms and compare their performances.
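For discrete distributions like the emission probabilities of a discrete Hidden Markov Model, the Kullback-Leibler divergence used above as a generalization measure can be sketched as follows (a minimal illustration, not the authors' implementation):

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) for two discrete distributions given as equal-length
    sequences of probabilities. Terms with p_i = 0 contribute nothing."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Note the asymmetry: D_KL(p || q) is generally not equal to D_KL(q || p), so learning curves depend on which distribution is taken as the reference.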
Abstract:
The effects of fluctuating initial conditions are studied in the context of relativistic heavy ion collisions, where a rapidly evolving system is formed. Two-particle correlation analysis is applied to events generated with the NEXSPHERIO hydrodynamic code, starting with fluctuating, nonsmooth initial conditions (IC). The results show that the nonsmoothness in the IC survives the hydrodynamic evolution and can be seen as topological features of the angular correlation function of the particles emerging from the evolving system. A long-range correlation is observed in the longitudinal direction, and in the azimuthal direction a double-peak structure is observed opposite the trigger particle. This analysis provides clear evidence that these are signatures of the combined effect of tubular structures present in the IC and the ensuing collective dynamics of the hot and dense medium.
Abstract:
The thermodynamic properties of the magnetic semiconductors GaMnAs and GaCrAs are studied under biaxial strain. The calculations are based on the projector augmented wave method combined with the generalized quasichemical approach to treat the disorder and composition effects. Considering the influence of biaxial strain, we find a tendency toward the suppression of binodal decomposition, mainly for GaMnAs under compressive strain. For a substrate with a lattice constant 5% smaller than that of GaAs, the solubility limit of GaMnAs increases up to 40%. Thus, strain can be a useful tool for tailoring magnetic semiconductors toward or away from the formation of embedded nanoclusters. (C) 2010 American Institute of Physics. [doi:10.1063/1.3448025]
Abstract:
The decomposition of peroxynitrite to nitrite and dioxygen at neutral pH follows complex kinetics, compared to its isomerization to nitrate at low pH. Decomposition may involve radicals or proceed by way of the classical peracid decomposition mechanism. Peroxynitrite (ONOOH/ONOO⁻) decomposition has been proposed to involve formation of peroxynitrate (O2NOOH/O2NOO⁻) at neutral pH (D. Gupta, B. Harish, R. Kissner and W. H. Koppenol, Dalton Trans., 2009, DOI: 10.1039/b905535e, see accompanying paper in this issue). Peroxynitrate is unstable and decomposes to nitrite and dioxygen. This study aimed to investigate whether O2NOO⁻ formed upon ONOOH/ONOO⁻ decomposition generates singlet molecular oxygen [O2(¹Δg)]. As unequivocally revealed by the measurement of monomol light emission in the near-infrared region at 1270 nm and by chemical trapping experiments, the decomposition of ONOO⁻ or O2NOOH at neutral to alkaline pH generates O2(¹Δg) at yields of ca. 1% and 2-10%, respectively. Characteristic light emission, corresponding to O2(¹Δg) monomolecular decay, was observed for ONOO⁻ and for O2NOOH prepared by reaction of H2O2 with NO2BF4 and of H2O2 with NO2⁻ in HClO4. The generation of O2(¹Δg) from ONOO⁻ increased in a concentration-dependent manner in the range 0.1-2.5 mM and was dependent on pH, giving a sigmoid profile with an apparent pKa around pD 8.1 (pH 7.7). Taken together, our results clearly identify the generation of O2(¹Δg) from peroxynitrate [O2NOO⁻ → NO2⁻ + O2(¹Δg)] generated from peroxynitrite and also from the reactions of H2O2 with either NO2BF4 or NO2⁻ in acidic media.
Abstract:
Due to the worldwide increase in demand for biofuels, the area cultivated with sugarcane is expected to increase. For environmental and economic reasons, an increasing proportion of the areas is being harvested without burning, leaving the residues on the soil surface. This periodical input of residues affects soil physical, chemical and biological properties, as well as plant growth and nutrition. Modeling can be a useful tool in the study of the complex interactions between the climate, residue quality, and the biological factors controlling plant growth and residue decomposition. The approach taken in this work was to parameterize the CENTURY model for the sugarcane crop, to simulate the temporal dynamics of aboveground phytomass and litter decomposition, and to validate the model with field experiment data. When studying aboveground growth, burned and unburned harvest systems were compared, as well as the effect of mineral fertilizer and organic residue applications. The simulations were performed with data from experiments of different durations, from 12 months to 60 years, in Goiana, Timbaúba and Pradópolis, Brazil; Harwood, Mackay and Tully, Australia; and Mount Edgecombe, South Africa. The differentiation of two litter pools with different decomposition rates was found to be a relevant factor in the simulations. Originally, the model had an essentially unlimited layer of mulch directly available for decomposition, 5,000 g m⁻². Through a parameter optimization process, the thickness of the mulch layer closest to the soil, and therefore more vulnerable to decomposition, was set at 110 g m⁻². By limiting the layer of mulch available for decomposition at any given time, the sugarcane residue decomposition simulations were brought close to the measured values (R² = 0.93), contributing to making the CENTURY model a tool for the study of sugarcane litter decomposition patterns.
The CENTURY model also accurately simulated aboveground stalk carbon values (R² = 0.76), considering burned and unburned harvest systems, plots with and without nitrogen fertilizer and organic amendment applications, under different climate and soil conditions.
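A minimal sketch of the two ideas highlighted above — two litter pools decomposing at different rates, and only a 110 g m⁻² surface layer of mulch being available for decomposition at any time — might look like the following. This is not the CENTURY model; the pool names, rate constants, and monthly time step are hypothetical illustrations.

```python
# Hedged sketch: two litter pools ("metabolic" fast, "structural" slow)
# decay by first-order kinetics, but only the fraction of the total mulch
# that lies in the exposed layer (capped at `available_cap`) decomposes.

def decompose(metabolic, structural, k_met=0.05, k_str=0.01,
              available_cap=110.0, steps=12):
    """Return pool masses (g m^-2) after `steps` monthly time steps."""
    for _ in range(steps):
        total = metabolic + structural
        if total <= 0:
            break
        # Only the layer of mulch closest to the soil is vulnerable.
        exposed = min(total, available_cap)
        frac = exposed / total
        metabolic -= k_met * frac * metabolic
        structural -= k_str * frac * structural
    return metabolic, structural
```

With a large residue load, the cap makes the effective decay much slower than the nominal rate constants would suggest, which is the behavioral change the parameter optimization introduced.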
Abstract:
This paper presents a framework for building medical training applications using virtual reality, together with a tool that helps instantiate the framework's classes. The main purpose is to make it easier to build virtual reality applications in the medical training area, targeting systems that simulate biopsy exams and providing deformation, collision detection, and stereoscopy functionalities. The instantiation of the classes allows quick implementation of tools for such a purpose, thus reducing errors and offering low cost due to the use of open source tools. Using the instantiation tool, the process of building applications is fast and easy; computer programmers can obtain an initial application and adapt it to their needs. The tool allows the user to include, delete, and edit parameters in the chosen functionalities, as well as to store these parameters for future use. In order to verify the efficiency of the framework, some case studies are presented.
Abstract:
Voltage and current waveforms of a distribution or transmission power system are not pure sinusoids. There are distortions in these waveforms that can be represented as a combination of the fundamental frequency, harmonics, and high-frequency transients. This paper presents a novel approach to identifying harmonics in distorted power system waveforms. The proposed method is based on Genetic Algorithms, an optimization technique inspired by genetics and natural evolution. GOOAL, an intelligent algorithm specially designed for optimization problems, was successfully implemented and tested. Two kinds of chromosome representations are used: binary and real. The results show that the proposed method is more precise than the traditional Fourier Transform, especially when the real representation of the chromosomes is considered.
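A minimal real-coded genetic algorithm in the spirit described — though not GOOAL itself — can estimate the amplitude and phase of a single harmonic in a synthetic distorted waveform. The population size, selection scheme, mutation scale, and test signal below are illustrative assumptions.

```python
import math
import random

random.seed(0)

F0, FS, N = 60.0, 3840.0, 64          # fundamental (Hz), sampling rate, samples
t = [n / FS for n in range(N)]         # exactly one fundamental period
# Synthetic distorted waveform: fundamental plus a 3rd harmonic (A=0.3, phi=0.5)
signal = [math.sin(2 * math.pi * F0 * x)
          + 0.3 * math.sin(2 * math.pi * 3 * F0 * x + 0.5) for x in t]

def error(chrom):
    """Sum of squared residuals for chromosome (amplitude, phase)."""
    a, phi = chrom
    model = [math.sin(2 * math.pi * F0 * x)
             + a * math.sin(2 * math.pi * 3 * F0 * x + phi) for x in t]
    return sum((s - m) ** 2 for s, m in zip(signal, model))

# Real-coded chromosomes: [amplitude, phase]
pop = [[random.uniform(0.0, 1.0), random.uniform(-math.pi, math.pi)]
       for _ in range(40)]
for _ in range(80):
    pop.sort(key=error)
    elite = pop[:10]                   # elitist selection
    children = []
    while len(children) < 30:
        p1, p2 = random.sample(elite, 2)
        # Arithmetic crossover plus small Gaussian mutation
        children.append([(a + b) / 2 + random.gauss(0.0, 0.02)
                         for a, b in zip(p1, p2)])
    pop = elite + children
best = min(pop, key=error)             # ~[0.3, 0.5] for the signal above
```

A binary representation would encode each parameter as a bit string instead; the abstract's finding that the real representation is more precise matches the usual experience with continuous parameter spaces.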
Abstract:
This technical note develops information filter and array algorithms for a linear minimum mean square error estimator of discrete-time Markovian jump linear systems. A numerical example for a two-mode Markovian jump linear system is provided to show the advantage of using array algorithms to filter this class of systems.
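The information-filter idea can be illustrated in the scalar case: instead of the covariance, one propagates the information matrix Y = P⁻¹ and the information vector y = P⁻¹·x̂, for which the measurement update is simply additive. The jump-linear machinery of the note is omitted; this is only a sketch with illustrative values.

```python
def information_update(Y, y, z, H, R):
    """Fuse a scalar measurement z = H*x + v, v ~ N(0, R), in information form.

    Y is the prior information (1/variance), y the prior information state.
    """
    Y_new = Y + H * H / R          # Y' = Y + H^T R^-1 H
    y_new = y + H * z / R          # y' = y + H^T R^-1 z
    return Y_new, y_new

# Recover the usual quantities when needed:
#   x_hat = y / Y,   P = 1 / Y
```

The additive structure is what makes the information form attractive for fusing many measurements, while array (square-root) algorithms improve its numerical robustness.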
Abstract:
The continuous growth of peer-to-peer networks has made them responsible for a considerable portion of current Internet traffic. For this reason, improvements in the usage of P2P network resources are of central importance. One effective approach to this issue is the deployment of locality algorithms, which allow the system to optimize its peer selection policy for different network situations and thus maximize performance. To date, several locality algorithms have been proposed for use in P2P networks. However, they usually adopt heterogeneous criteria for measuring the proximity between peers, which hinders a coherent comparison between the different solutions. In this paper, we develop a thorough review of popular locality algorithms based on three main characteristics: the adopted network architecture, the distance metric, and the resulting peer selection algorithm. As a result of this study, we propose a novel and generic taxonomy for locality algorithms in peer-to-peer networks, aiming to enable a better and more coherent evaluation of any individual locality algorithm.
Abstract:
Compliant mechanisms can achieve a specified motion as a mechanism without relying on the use of joints and pins. They have broad application in precision mechanical devices and Micro-Electro-Mechanical Systems (MEMS), but may lose accuracy and produce undesirable displacements when subjected to temperature changes. These undesirable effects can be reduced by using sensors in combination with control techniques and/or by applying special design techniques at the design stage, a process generally termed "design for precision". This paper describes a design for precision method based on a topology optimization method (TOM) for compliant mechanisms that includes thermal compensation features. The optimization problem emphasizes actuator accuracy and is formulated to yield optimal compliant mechanism configurations that maximize the desired output displacement when a force is applied, while minimizing undesirable thermal effects. To demonstrate the effectiveness of the method, two-dimensional compliant mechanisms are designed considering thermal compensation, and their performance is compared with compliant mechanism designs that do not consider thermal compensation. (C) 2010 Elsevier B.V. All rights reserved.
Abstract:
By means of continuous topology optimization, this paper discusses the influence of material gradation and layout on the overall stiffness behavior of functionally graded structures. The formulation incorporates symmetry and pattern repetition constraints, including material gradation effects at both global and local levels. For instance, constraints associated with pattern repetition are applied by considering material gradation either on the global structure or locally over the specific pattern. By means of pattern repetition, we recover previous results in the literature obtained using homogenization and optimization of cellular materials.
Abstract:
Sensors and actuators based on piezoelectric plates have shown increasing demand in the field of smart structures, including actuators for cooling and fluid-pumping applications and transducers for novel energy-harvesting devices. This project develops a topology optimization formulation for the dynamic design of piezoelectric laminated plates aimed at piezoelectric sensor, actuator and energy-harvesting applications. It distributes piezoelectric material over a metallic plate in order to achieve a desired dynamic behavior with specified resonance frequencies and modes and an enhanced electromechanical coupling factor (EMCC). The finite element model employs a piezoelectric plate based on the MITC formulation, which is reliable, efficient and avoids the shear-locking problem. The topology optimization formulation is based on the PEMAP-P model combined with the RAMP model, where the design variables are pseudo-densities that describe the amount of piezoelectric material in each finite element and its polarization sign. The design problem aims at simultaneously designing an eigenshape, i.e., maximizing and minimizing vibration amplitudes at certain points of the structure in a given eigenmode, while tuning the eigenvalue to a desired value and maximizing its EMCC, so that the energy conversion is maximized for that mode. The optimization problem is solved using sequential linear programming. Through this formulation, a design with enhanced energy conversion in the low-frequency spectrum is obtained by minimizing a set of first eigenvalues, enhancing their corresponding eigenshapes, and maximizing their EMCCs, which can be considered an approach to the design of energy-harvesting devices. The implementation of the topology optimization algorithm and some results are presented to illustrate the method.
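The RAMP interpolation mentioned above maps each element's pseudo-density to a material property; a one-line sketch, with an illustrative penalization factor q, is:

```python
def ramp(rho, prop0, q=4.0):
    """RAMP interpolation: property value for pseudo-density rho in [0, 1].

    Unlike a linear interpolation, RAMP penalizes intermediate densities,
    pushing optimized designs toward discrete void/solid (0/1) layouts
    while keeping a nonzero slope at rho = 0.
    """
    return prop0 * rho / (1.0 + q * (1.0 - rho))
```

At rho = 0 and rho = 1 the interpolation recovers void and full material exactly; in between, the property is below the linear value, which is what discourages gray, intermediate-density regions in the optimized layout.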