56 results for Statistical Simulation
Abstract:
Background and Purpose: Several different methods of teaching laparoscopic skills have been advocated, with virtual reality surgical simulation (VRSS) being the most popular. There is not yet consensus, however, on its effectiveness in improving surgical performance. The purpose of this study was to determine whether practicing surgical skills in a virtual reality simulator results in improved surgical performance. Materials and Methods: Fifteen medical students recruited for the study were divided into three groups. Group I (control) did not receive any VRSS training. For 10 weeks, group II trained basic laparoscopic skills (camera handling, cutting, peg transfer, and clipping) in a VRSS laparoscopic skills simulator. Group III practiced the same skills and, in addition, performed a simulated cholecystectomy. All students then performed a cholecystectomy in a swine model. Their performance was reviewed by two experienced surgeons. The following parameters were evaluated: gallbladder pedicle dissection time, clipping time, time for cutting the pedicle, gallbladder removal time, total procedure time, and blood loss. Results: With practice, there was improvement in most of the evaluated parameters for each individual. There were, however, no statistical differences in any of the evaluated parameters between those who did and did not undergo VRSS training. Conclusion: VRSS training is assumed to be an effective tool for learning and practicing laparoscopic skills. In this study, we could not demonstrate that VRSS training resulted in improved surgical performance. It may be useful, however, in familiarizing surgeons with laparoscopic surgery. More effective methods of teaching laparoscopic skills should be evaluated to help improve surgical performance.
Abstract:
We show that the one-loop effective action at finite temperature for a scalar field with quartic interaction has the same renormalized expression as at zero temperature if written in terms of a certain classical field φ_c, and if we trade free propagators at zero temperature for their finite-temperature counterparts. The result follows if we write the partition function as an integral over field eigenstates (boundary fields) of the density matrix element in the functional Schrödinger field representation, and perform a semiclassical expansion in two steps: first, we integrate around the saddle point for fixed boundary fields, which is the classical field φ_c, a functional of the boundary fields; then, we perform a saddle-point integration over the boundary fields, whose correlations characterize the thermal properties of the system. This procedure provides a dimensionally reduced effective theory for the thermal system. We calculate the two-point correlation as an example.
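As a reading aid for the construction outlined above, the following schematic equations (standard textbook conventions, not taken verbatim from the paper) show the partition function written as an integral of the diagonal density-matrix element over the boundary fields, with the first saddle point taken at fixed boundary field φ_b:

\[
  Z(\beta) \;=\; \operatorname{Tr} e^{-\beta H}
           \;=\; \int \mathcal{D}\varphi_b \,
                 \langle \varphi_b |\, e^{-\beta H} \,| \varphi_b \rangle ,
  \qquad
  \langle \varphi_b |\, e^{-\beta H} \,| \varphi_b \rangle
           \;=\; \int_{\phi(0,\mathbf{x}) = \phi(\beta,\mathbf{x}) = \varphi_b(\mathbf{x})}
                 \mathcal{D}\phi \; e^{-S_E[\phi]}
           \;\simeq\; e^{-S_E[\phi_c[\varphi_b]]} \times (\text{Gaussian fluctuations}).
\]

The remaining integral over φ_b is then itself evaluated by a saddle-point expansion; its correlations encode the thermal properties and yield the dimensionally reduced effective theory mentioned in the abstract.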
Abstract:
Spectral changes of Na₂ in liquid helium were studied using the sequential Monte Carlo-quantum mechanics method. Configurations composed of Na₂ surrounded by explicit helium atoms, sampled from the Monte Carlo simulation, were submitted to time-dependent density-functional theory calculations of the electronic absorption spectrum using different functionals. Attention is given to both the line shift and the line broadening. The Perdew, Burke, and Ernzerhof functional (PBE1PBE, also known as PBE0), with the PBE1PBE/6-311++G(2d,2p) basis set, gives a spectral shift, relative to the gas phase, of 500 cm⁻¹ for the allowed X ¹Σg⁺ → B ¹Πu transition, in very good agreement with the experimental value (700 cm⁻¹). For comparison, cluster calculations were also performed, and the first X ¹Σg⁺ → A ¹Σu⁺ transition was also considered.
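As an illustration of the sequential Monte Carlo-quantum mechanics workflow summarized above, the sketch below averages condensed-phase transition energies over statistically uncorrelated sampled configurations and reports the shift relative to the gas phase; compute_excitation_energy is a hypothetical placeholder for the external TD-DFT calculation, not a call to any specific package.

import numpy as np

def compute_excitation_energy(configuration):
    # Hypothetical placeholder: submit the Na2 + explicit-helium configuration
    # to an external TD-DFT code and return the transition energy in cm^-1.
    raise NotImplementedError

def average_spectral_shift(configurations, gas_phase_energy_cm1):
    # Average the transition energies over the sampled configurations and
    # report the shift relative to the gas-phase value, with a standard error.
    energies = np.array([compute_excitation_energy(c) for c in configurations])
    shift = energies.mean() - gas_phase_energy_cm1
    return shift, energies.std(ddof=1) / np.sqrt(len(energies))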
Abstract:
We consider a simple Maier-Saupe statistical model with the inclusion of disorder degrees of freedom to mimic the phase diagram of a mixture of rodlike and disklike molecules. A quenched distribution of shapes leads to a phase diagram with two uniaxial phases and a biaxial nematic structure. A thermalized distribution, however, which is more adequate for liquid mixtures, precludes the stability of this biaxial phase. We then use a two-temperature formalism, and assume a separation of relaxation times, to show that a partial degree of annealing is already sufficient to stabilize a biaxial nematic structure.
Abstract:
We study the electronic transport properties of a dual-gated bilayer graphene nanodevice via first-principles calculations. We investigate the electric current as a function of gate length and temperature. We show that, under the action of an external electric field, a nonzero current is exhibited even for gate lengths up to 100 Å. The results can be explained by the presence of a tunneling regime due to the remnant states in the gap. We also discuss the conditions for reaching the charge neutrality point in a system free of defects and extrinsic carrier doping.
Abstract:
Thanks to recent advances in molecular biology, allied to an ever-increasing amount of experimental data, the functional state of thousands of genes can now be extracted simultaneously by using methods such as cDNA microarrays and RNA-Seq. Particularly important related investigations are the modeling and identification of gene regulatory networks from expression data sets. Such knowledge is fundamental for many applications, such as disease treatment, therapeutic intervention strategies and drug design, as well as for planning new high-throughput experiments. Methods have been developed for gene network modeling and identification from expression profiles. However, an important open problem regards how to validate such approaches and their results. This work presents an objective approach for the validation of gene network modeling and identification which comprises three main aspects: (1) Artificial Gene Network (AGN) model generation through theoretical models of complex networks, which is used to simulate temporal expression data; (2) a computational method for gene network identification from the simulated data, founded on a feature selection approach in which a target gene is fixed and the expression profiles of all other genes are observed in order to identify a relevant subset of predictors; and (3) validation of the identified AGN-based network through comparison with the original network. The proposed framework allows several types of AGNs to be generated and used to simulate temporal expression data. The results of the network identification method can then be compared to the original network in order to estimate its properties and accuracy. Some of the most important theoretical models of complex networks have been assessed: the uniformly random Erdos-Renyi (ER), the small-world Watts-Strogatz (WS), the scale-free Barabasi-Albert (BA), and geographical networks (GG). The experimental results indicate that the inference method was sensitive to variations in the average degree k, its network recovery rate decreasing as k increases. The signal size was important for the accuracy of the network identification, with very good results obtained even from small expression profiles. However, the adopted inference method was not able to distinguish different structures of interaction among genes, presenting similar behavior when applied to different network topologies. In summary, the proposed framework, though simple, was adequate for the validation of the inferred networks by identifying some properties of the evaluated method, and it can be extended to other inference methods.
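To make the validation step concrete, the sketch below (a minimal illustration, not the authors' code) generates artificial gene networks with networkx and scores how well a set of inferred edges recovers the original ones; the inference applied to the simulated expression data is left as a hypothetical placeholder.

import networkx as nx

def score_recovery(original, inferred):
    # Treat edge recovery as a binary classification problem over gene pairs.
    true_edges = {frozenset(e) for e in original.edges()}
    pred_edges = {frozenset(e) for e in inferred.edges()}
    tp = len(true_edges & pred_edges)
    precision = tp / len(pred_edges) if pred_edges else 0.0
    recall = tp / len(true_edges) if true_edges else 0.0
    return precision, recall

# Artificial gene networks built from theoretical models of complex networks.
n = 100
agn_er = nx.erdos_renyi_graph(n, p=0.04)         # uniformly random (ER)
agn_ws = nx.watts_strogatz_graph(n, k=4, p=0.1)  # small-world (WS)
agn_ba = nx.barabasi_albert_graph(n, m=2)        # scale-free (BA)

# 'infer_network' and 'simulate_expression' stand in for the feature-selection
# identification step and the temporal expression simulator (both hypothetical):
# inferred = infer_network(simulate_expression(agn_er))
# print(score_recovery(agn_er, inferred))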
Abstract:
Efficient automatic protein classification is of central importance in genomic annotation. As an independent way to check the reliability of the classification, we propose a statistical approach to test whether two sets of protein domain sequences coming from two families of the Pfam database are significantly different. We model protein sequences as realizations of Variable Length Markov Chains (VLMCs) and we use the context trees as a signature of each protein family. Our approach is based on a Kolmogorov-Smirnov-type goodness-of-fit test proposed by Balding et al. [Limit theorems for sequences of random trees (2008), DOI: 10.1007/s11749-008-0092-z]. The test statistic is a supremum, over the space of trees, of a function of the two samples; its computation grows, in principle, exponentially fast with the maximal number of nodes of the potential trees. We show how to transform this problem into a max-flow problem over a related graph, which can be solved using a Ford-Fulkerson algorithm in time polynomial in that number. We apply the test to 10 randomly chosen protein domain families from the seed of the Pfam-A database (high-quality, manually curated families). The test shows that the distributions of context trees coming from different families are significantly different. We emphasize that this is a novel mathematical approach to validate the automatic clustering of sequences in any context. We also study the performance of the test via simulations on Galton-Watson-related processes.
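The reduction described above ends in a standard maximum-flow computation; the toy sketch below illustrates only that final step, on an arbitrary made-up capacitated graph rather than one derived from context trees, using networkx's Edmonds-Karp routine (a polynomial-time realization of the Ford-Fulkerson method).

import networkx as nx
from networkx.algorithms.flow import edmonds_karp

# Toy capacitated graph standing in for the graph built from the two samples.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3.0)
G.add_edge("s", "b", capacity=2.0)
G.add_edge("a", "t", capacity=2.0)
G.add_edge("a", "b", capacity=1.0)
G.add_edge("b", "t", capacity=3.0)

flow_value, flow_dict = nx.maximum_flow(G, "s", "t", flow_func=edmonds_karp)
print(flow_value)  # 5.0 for this toy graph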
Abstract:
Structural and dynamical properties of liquid trimethylphosphine (TMP), (CH₃)₃P, as a function of temperature are investigated by molecular dynamics (MD) simulations. The force field used in the MD simulations, which has been proposed from molecular mechanics and quantum chemistry calculations, is able to reproduce the experimental density of liquid TMP at room temperature. The equilibrium structure is investigated by the usual radial distribution function, g(r), and also in reciprocal space by the static structure factor, S(k). On the basis of center-of-mass distances, liquid TMP behaves like a simple liquid of almost spherical particles, but orientational correlation due to dipole-dipole interactions is revealed at short-range distances. Single-particle and collective dynamics are investigated by several time correlation functions. At high temperatures, diffusion and reorientation occur on the same time range as relaxation of the liquid structure. Decoupling of these dynamic properties starts below ca. 220 K, when the rattling dynamics of a given TMP molecule due to the cage effect of neighbouring molecules becomes important. (C) 2011 American Institute of Physics. [doi: 10.1063/1.3624408]
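For readers less familiar with the quantities mentioned above, the sketch below computes a center-of-mass radial distribution function g(r) from a single cubic periodic snapshot with numpy; it is a generic textbook estimator, not the analysis code used in the paper.

import numpy as np

def radial_distribution(positions, box_length, n_bins=200):
    # positions: (N, 3) array of center-of-mass coordinates in a cubic box.
    n = len(positions)
    rho = n / box_length**3
    r_max = box_length / 2.0
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box_length * np.round(d / box_length)   # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < r_max], bins=n_bins, range=(0.0, r_max))[0]
    edges = np.linspace(0.0, r_max, n_bins + 1)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal_pairs = rho * shell_vol * n / 2.0          # pair count expected for an ideal gas
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal_pairs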
Abstract:
Due to the worldwide increase in demand for biofuels, the area cultivated with sugarcane is expected to increase. For environmental and economic reasons, an increasing proportion of these areas is being harvested without burning, leaving the residues on the soil surface. This periodic input of residues affects soil physical, chemical and biological properties, as well as plant growth and nutrition. Modeling can be a useful tool in the study of the complex interactions between the climate, residue quality, and the biological factors controlling plant growth and residue decomposition. The approach taken in this work was to parameterize the CENTURY model for the sugarcane crop, to simulate the temporal dynamics of aboveground phytomass and litter decomposition, and to validate the model through field experiment data. When studying aboveground growth, burned and unburned harvest systems were compared, as well as the effect of mineral fertilizer and organic residue applications. The simulations were performed with data from experiments with different durations, from 12 months to 60 years, in Goiana, Timbauba and Pradopolis, Brazil; Harwood, Mackay and Tully, Australia; and Mount Edgecombe, South Africa. The differentiation of two pools in the litter, with different decomposition rates, was found to be a relevant factor in the simulations. Originally, the model had a practically unlimited layer of mulch directly available for decomposition, 5,000 g m⁻². Through a parameter optimization process, the thickness of the mulch layer closer to the soil, more vulnerable to decomposition, was set at 110 g m⁻². By changing the layer of mulch available for decomposition at any given time, the sugarcane residue decomposition simulations were close to the measured values (R² = 0.93), contributing to making the CENTURY model a tool for the study of sugarcane litter decomposition patterns. The CENTURY model also accurately simulated aboveground carbon stalk values (R² = 0.76), considering burned and unburned harvest systems, plots with and without nitrogen fertilizer and organic amendment applications, in different climates and soil conditions.
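A minimal sketch of the mulch-layer idea described above: first-order decay is applied only to the fraction of the surface litter lying in the layer available for decomposition, capped here at 110 g m⁻² as in the optimized parameterization; the two decay rates and the weekly time step are invented placeholders, not CENTURY parameters.

# Conceptual sketch: two-pool surface-litter decomposition with a cap on the
# mulch layer directly available to decomposers (rates and time step made up).
def decompose(mulch, k_fast=0.04, k_slow=0.005, available_cap=110.0, weeks=52):
    # mulch: dict with 'fast' and 'slow' pools in g m^-2.
    history = []
    for _ in range(weeks):
        total = mulch["fast"] + mulch["slow"]
        # Only the layer closest to the soil, up to 110 g m^-2, is exposed.
        exposed_fraction = min(available_cap, total) / total if total > 0 else 0.0
        mulch["fast"] -= k_fast * exposed_fraction * mulch["fast"]
        mulch["slow"] -= k_slow * exposed_fraction * mulch["slow"]
        history.append(mulch["fast"] + mulch["slow"])
    return history

remaining = decompose({"fast": 600.0, "slow": 800.0})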
Abstract:
Currently there is a trend toward expansion of the area cropped with sugarcane (Saccharum officinarum L.), driven by an increase in the world demand for biofuels due to economic, environmental, and geopolitical issues. Although sugarcane is traditionally harvested by burning dried leaves and tops, the unburned, mechanized harvest has been progressively adopted. Process-based models are useful in understanding the effects of plant litter on soil C dynamics. The objective of this work was to use the CENTURY model to evaluate the effect of sugarcane residue management on the temporal dynamics of soil C. The approach taken was to parameterize the CENTURY model for the sugarcane crop, to simulate the temporal dynamics of soil C, validating the model through field experiment data, and finally to make long-term predictions regarding soil C. The main focus of this work was the comparison of soil C stocks between the burned and unburned litter management systems, but the effects of mineral fertilizer and organic residue applications were also evaluated. The simulations were performed with data from experiments with different durations, from 1 to 60 yr, in Goiana and Timbauba, Pernambuco, and Pradopolis, Sao Paulo, all in Brazil; and Mount Edgecombe, Kwazulu-Natal, South Africa. It was possible to simulate the temporal dynamics of soil C (R² = 0.89). The predictions made with the model revealed that, in the long term, there is a trend toward higher soil C stocks under the unburned management. This increase is conditioned by factors such as climate, soil texture, time of adoption of the unburned system, and N fertilizer management.
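As a small aside on the goodness-of-fit figures quoted in these two abstracts, the sketch below computes the coefficient of determination (R²) between measured and simulated values; the numbers shown are illustrative placeholders, not data from the experiments.

import numpy as np

def r_squared(measured, simulated):
    # Coefficient of determination between measured and simulated values.
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    ss_res = np.sum((measured - simulated) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Placeholder soil-C stocks (e.g., Mg C ha^-1), not actual experimental data.
print(r_squared([20.1, 23.4, 25.0, 28.2], [19.5, 24.0, 25.6, 27.8]))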
Abstract:
This article intends to contribute to the reflection on Educational Statistics as a source for research on the History of Education. The main concern was to reveal how the Educational Statistics covering the period from 1871 to 1931 were produced by the central government. Official reports from the General Statistics Directory and the statistical yearbooks released by that department were analyzed, and in this analysis the recommendations and definitions guiding the production of the statistics were sought. By problematizing the documentary issues surrounding Educational Statistics and their usual interpretations, the intention was to reduce the ignorance about the origin of school numbers, which are occasionally used in current research without proper critical examination.
Abstract:
This study investigated the energy system contributions of rowers in three different conditions: rowing on an ergometer without and with the slide, and rowing in the water. For this purpose, eight rowers performed 2,000 m race simulations in each of the situations defined above. The fractions of the aerobic (W_AER), anaerobic alactic (W_PCR) and anaerobic lactic (W_[La-]) systems were calculated based on the oxygen uptake, the fast component of excess post-exercise oxygen uptake and changes in net blood lactate, respectively. In the water, the metabolic work was significantly higher [851 (82) kJ] than on both the ergometer [674 (60) kJ] and the ergometer with slide [663 (65) kJ] (P ≤ 0.05). The time in the water [515 (11) s] was higher (P < 0.001) than on the ergometers with [398 (10) s] and without the slide [402 (15) s], resulting in no difference when relative energy expenditure was considered: in the water [99 (9) kJ min⁻¹], ergometer without the slide [99.6 (9) kJ min⁻¹] and ergometer with the slide [100.2 (9.6) kJ min⁻¹]. The respective contributions of the W_AER, W_PCR and W_[La-] systems were: water = 87 (2), 7 (2) and 6 (2)%; ergometer = 84 (2), 7 (2) and 9 (2)%; and ergometer with the slide = 84 (2), 7 (2) and 9 (1)%. V̇O₂, HR and lactate were not different among conditions. These results seem to indicate that the ergometer braking system simulates the conditions of a bigger and faster boat and not a single scull. Probably, a 2,500 m test should be used to properly simulate an on-water single-scull race.
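A minimal sketch of the partitioning described above, under conversion constants commonly used in the exercise-physiology literature (an energy equivalent of about 20.9 kJ per litre of O₂ and a lactate-O₂ equivalent of about 3 mL O₂ per kg body mass per mmol L⁻¹); the input values are hypothetical and the constants may differ from those adopted by the authors.

# Energy-system contributions from exercise oxygen uptake, the fast component
# of EPOC, and the net blood-lactate change. Constants are common literature
# values, not necessarily those used in the study.
KJ_PER_L_O2 = 20.9          # energy equivalent of 1 L of O2
ML_O2_PER_KG_PER_MM = 3.0   # O2 equivalent of 1 mmol/L net lactate accumulation

def energy_fractions(o2_exercise_l, epoc_fast_l, delta_lactate_mm, body_mass_kg):
    w_aer = o2_exercise_l * KJ_PER_L_O2                # aerobic (W_AER)
    w_pcr = epoc_fast_l * KJ_PER_L_O2                  # anaerobic alactic (W_PCR)
    w_la = (delta_lactate_mm * ML_O2_PER_KG_PER_MM     # anaerobic lactic (W_[La-])
            * body_mass_kg / 1000.0) * KJ_PER_L_O2
    total = w_aer + w_pcr + w_la
    return {"W_AER": w_aer / total, "W_PCR": w_pcr / total, "W_[La-]": w_la / total}

# Hypothetical rower: 35 L O2 during the race, 2.5 L fast EPOC, 10 mmol/L
# net lactate, 80 kg body mass.
print(energy_fractions(35.0, 2.5, 10.0, 80.0))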
Abstract:
This study presents the results of a mature landfill leachate treated by a homogeneous catalytic ozonation process with Fe²⁺ and Fe³⁺ ions at acidic pH. Quality assessments were performed using Taguchi's method (L8 design). Strong synergism was observed statistically between molecular ozone and ferric ions, pointing to their catalytic effect on •OH generation. Achieving better organic matter removal rates requires an ozone flow of 5 L h⁻¹ (590 mg h⁻¹ of O₃) and a ferric ion concentration of 5 mg L⁻¹.
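To illustrate the kind of screening design mentioned above, the sketch below lays out the standard L8(2^7) orthogonal array (in one common column ordering) and computes main effects from a vector of responses; the response values are invented placeholders and the factor-to-column assignment of the actual study is not reproduced here.

import numpy as np

# Standard L8(2^7) orthogonal array, two levels coded 0/1.
L8 = np.array([
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 1, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 1, 0, 1],
    [1, 0, 1, 1, 0, 1, 0],
    [1, 1, 0, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 0, 1],
])

# Invented responses (e.g., % organic matter removal) for the eight runs.
y = np.array([41.0, 55.0, 48.0, 62.0, 58.0, 70.0, 52.0, 66.0])

# Main effect of each column: mean response at level 1 minus mean at level 0.
effects = np.array([y[L8[:, j] == 1].mean() - y[L8[:, j] == 0].mean()
                    for j in range(L8.shape[1])])
print(effects)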
Abstract:
The purpose of this paper is to propose a multiobjective optimization approach for solving the manufacturing cell formation problem, explicitly considering the performance of the manufacturing system. Cells are formed so as to simultaneously minimize three conflicting objectives, namely, the level of work-in-process, the number of intercell moves and the total machinery investment. A genetic algorithm performs a search in the design space in order to approximate the Pareto-optimal set. The values of the objectives for each candidate solution in a population are assigned by running a discrete-event simulation, in which the model is automatically generated according to the number of machines and their distribution among cells implied by a particular solution. The potential of this approach is evaluated via its application to an illustrative example and a case from the relevant literature. The obtained results are analyzed, and it is concluded that this approach is capable of generating a set of alternative manufacturing cell configurations considering the optimization of multiple performance measures, greatly improving the decision-making process involved in planning and designing cellular systems. (C) 2010 Elsevier Ltd. All rights reserved.
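The selection step of such a multiobjective search can be illustrated with a simple non-dominated filter over the three minimized objectives; the sketch below is generic (it is neither the genetic algorithm nor the simulation model of the paper), and the candidate evaluations are invented.

def dominates(a, b):
    # True if solution a dominates b when all three objectives are minimized.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    # Keep only non-dominated (work-in-process, intercell moves, investment) vectors.
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

# Invented (WIP level, intercell moves, machinery investment) evaluations
# returned by the discrete-event simulation for four candidate configurations.
candidates = [(120.0, 35, 1.8e6), (150.0, 20, 1.6e6), (110.0, 40, 2.1e6), (160.0, 38, 2.2e6)]
print(pareto_front(candidates))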
Abstract:
The literature presents a large number of different simulations of gas-solid flows in risers applying two-fluid modeling. In spite of that, the related quantitative accuracy issue remains mostly untouched. This state of affairs seems to be mainly a consequence of modeling shortcomings, notably the lack of realistic closures. In this article, predictions from a two-fluid model are compared to other published two-fluid model predictions applying the same closures, and to experimental data. A particular matter of concern is whether or not the predictions are generated inside the statistical steady-state regime that characterizes riser flows. The present simulation was performed inside the statistical steady-state regime, and time-averaged results are presented for time-averaging intervals of 5, 10, 15 and 20 s within that regime. The independence of the averaged results with respect to the time-averaging interval is addressed, and the results averaged over the intervals of 10 and 20 s are compared to both experiment and other two-fluid predictions. It is concluded that the two-fluid model used is still very crude and cannot provide quantitatively accurate results, at least for the particular case considered. (C) 2009 Elsevier Inc. All rights reserved.
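A minimal sketch of the averaging check described above: once a signal is inside its statistical steady state, time averages over windows of increasing length (here 5, 10, 15 and 20 s, as in the abstract) should agree to within the statistical uncertainty; the synthetic signal below is a placeholder for an actual simulation time series.

import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                                  # s, placeholder time step
t = np.arange(0.0, 40.0, dt)
# Synthetic statistically steady signal: constant mean plus random fluctuations.
signal = 5.0 + 0.5 * rng.standard_normal(t.size)

t_start = 20.0                             # assume steady state is reached by 20 s
for window in (5.0, 10.0, 15.0, 20.0):
    mask = (t >= t_start) & (t < t_start + window)
    print(f"{window:4.0f} s window: mean = {signal[mask].mean():.3f}")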