983 results for Methods: numerical
Abstract:
A number of authors concerned with the analysis of rock jointing have used the idea that the joint areal or diametral distribution can be linked to the trace length distribution through a theorem attributed to Crofton. This brief paper demonstrates that Crofton's theorem need not be used to link moments of the trace length distribution captured by scan line or areal mapping to the moments of the diametral distribution of joints represented as disks, and that it is in fact incorrect to do so. The valid relationships for areal or scan line mapping between all the moments of the trace length distribution and those of the joint size distribution for joints modeled as disks are recalled and compared with those that would follow were Crofton's theorem assumed to apply. For areal mapping the relationship happens to be correct; for scan line mapping it is not.
Abstract:
In this paper we propose a novel, fast and linearly scalable method for solving master equations arising in the context of gas-phase reactive systems, based on an existing stiff ordinary differential equation integrator. The required solution of a linear system involving the Jacobian matrix is achieved using GMRES iteration preconditioned with the diffusion approximation to the master equation. In this way we avoid the cubic scaling of traditional master equation solution methods and maintain the low-temperature robustness of numerical integration. The method is tested using a master equation modelling the formation of propargyl from the reaction of singlet methylene with acetylene, proceeding through long-lived isomerizing intermediates. (C) 2003 American Institute of Physics.
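The core idea of this abstract, replacing a direct O(n³) solve of the implicit-step system (I − hJ)x = b with GMRES preconditioned by a cheap banded operator, can be sketched as follows. The tridiagonal stand-in for the diffusion approximation and all sizes are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import gmres, splu, LinearOperator

n = 200
rng = np.random.default_rng(0)

# Tridiagonal "diffusion-like" part of a hypothetical Jacobian,
# plus weak dense coupling standing in for the reactive terms.
D = diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)], [-1, 0, 1])
J = D.toarray() + 1e-3 * rng.standard_normal((n, n))

h = 0.1                        # implicit step size
A = np.eye(n) - h * J          # matrix the stiff integrator must invert
b = rng.standard_normal(n)

# Preconditioner: exact (banded, cheap) factorisation of the
# diffusion-only operator, applied through a LinearOperator.
P = splu((identity(n) - h * D).tocsc())
M = LinearOperator((n, n), matvec=P.solve)

x, info = gmres(A, b, M=M)     # Krylov solve, no dense factorisation of A
```

Because the preconditioner captures the stiff diffusive part, GMRES converges in few iterations at matrix-vector-product cost per iteration.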
Abstract:
High-resolution numerical model simulations have been used to study the local and mesoscale thermal circulations in an Alpine lake basin. The lake (87 km²) is situated in the Southern Alps, New Zealand, in a glacially excavated rock basin surrounded by mountain ranges that reach 3000 m in height. The mesoscale model used (RAMS) is a three-dimensional non-hydrostatic model with a level 2.5 turbulence closure scheme. The model demonstrates that thermal forcing at local (within the basin) and regional (coast-to-basin inflow) scales drives the observed boundary-layer airflow in the lake basin during clear anticyclonic summertime conditions. The results show that the lake can modify (perturb) both the local and regional wind systems. Following sunrise, local thermal circulations dominate, including a lake breeze component that becomes embedded within the background valley wind system. This results in a more divergent flow in the basin extending across the lake shoreline. However, a closed lake breeze circulation is neither observed nor modelled. Modelling results indicate that in the latter part of the day, when the mesoscale (coast-to-basin) inflow occurs, the relatively cold pool of lake air in the basin can cause the intrusion to decouple from the surface. Measured data provide qualitative and quantitative support for the model results.
Abstract:
In modern magnetic resonance imaging (MRI), patients are exposed to strong, nonuniform static magnetic fields outside the central imaging region, where movement of the body may induce electric currents in tissues that could possibly be harmful. This paper presents theoretical investigations into the spatial distribution of induced electric fields and currents in the patient when moving into the MRI scanner, and also for head motion at various positions in the magnet. The numerical calculations are based on an efficient, quasi-static, finite-difference scheme and an anatomically realistic, full-body, male model. 3D field profiles from an actively shielded 4 T magnet system are used, and the body model is projected through the field profile at a range of velocities. The simulations show that it is possible to induce electric fields/currents near the level of physiological significance under some circumstances, and they provide insight into the spatial characteristics of the induced fields. The results are extrapolated to very high field strengths, and tabulated data show the expected induced currents and fields as functions of both movement velocity and field strength. (C) 2003 Elsevier Science (USA). All rights reserved.
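The mechanism the paper simulates numerically admits a back-of-the-envelope check: moving at velocity v through an axial fringe gradient dB/dz produces a time-varying field dB/dt = v·dB/dz, and Faraday's law for a circular tissue loop of radius r gives an induced field of roughly (r/2)·dB/dt. The numbers below are illustrative, not taken from the paper:

```python
# Order-of-magnitude estimate of motion-induced E fields near an MRI magnet
# (crude loop model; values are illustrative assumptions).
def induced_E(v_m_s, dBdz_T_m, r_m):
    dBdt = v_m_s * dBdz_T_m      # T/s experienced by the moving body
    return 0.5 * r_m * dBdt      # V/m around a loop of radius r_m

# Walking-pace entry (1 m/s) through a 2 T/m fringe gradient,
# torso-scale loop of 0.15 m radius:
E = induced_E(v_m_s=1.0, dBdz_T_m=2.0, r_m=0.15)   # 0.15 V/m
```

This kind of scaling is why the paper tabulates induced fields against both velocity and field strength.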
Abstract:
This study compared accumulated oxygen deficit data derived using two different exercise protocols, with the aim of producing a less time-consuming test specifically for use with athletes. Six road and four track male endurance cyclists performed two series of cycle ergometer tests. The first series involved five 10 min sub-maximal cycle exercise bouts, a V̇O2peak test and a 115% V̇O2peak test. Data from these tests were used to estimate the accumulated oxygen deficit according to the calculations of Medbo et al. (1988). In the second series of tests, participants performed a 15 min incremental cycle ergometer test followed, 2 min later, by a 2 min variable resistance test in which they completed as much work as possible while pedalling at a constant rate. Analysis revealed that the accumulated oxygen deficit calculated from the first series of tests was higher (P < 0.02) than that calculated from the second series: 52.3 +/- 11.7 and 43.9 +/- 6.4 ml·kg⁻¹, respectively (mean +/- s). Other significant differences between the two protocols were observed for V̇O2peak, total work and maximal heart rate; all were higher during the modified protocol (P
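The Medbo et al. (1988) calculation referenced above regresses steady-state submaximal V̇O2 on power, extrapolates to the supramaximal power to get an oxygen demand, and subtracts the oxygen actually consumed. A minimal sketch, with all numbers invented for illustration:

```python
# Illustrative accumulated-oxygen-deficit (AOD) calculation in the style
# of Medbo et al. (1988); the data points are made up.
def linfit(xs, ys):
    """Least-squares line: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

power = [100, 150, 200, 250, 300]     # W, five submaximal bouts
vo2   = [1.5, 2.1, 2.7, 3.3, 3.9]     # L/min, steady-state uptake
a, b = linfit(power, vo2)

supra_power  = 400                    # W, supramaximal bout
demand       = a + b * supra_power    # L/min, extrapolated O2 demand
duration     = 2.5                    # min to exhaustion
measured_vo2 = 9.0                    # L of O2 actually consumed
aod = demand * duration - measured_vo2   # L of O2 deficit
```

The deficit is then typically normalised to body mass (ml·kg⁻¹), as in the values reported above.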
Abstract:
Undernutrition during early life is known to cause deficits and distortions of brain structure, although it has remained uncertain whether or not this includes a diminution of the total number of neurons. Estimates of numerical density (e.g. number of cells per microscopic field, per unit area of section, or per unit volume of tissue) are extremely difficult to interpret and do not provide estimates of total numbers of cells. However, advances in stereological techniques have made it possible to obtain unbiased estimates of total numbers of cells in well-defined biological structures. These methods have been utilised in studies to determine the effects of varying periods of undernutrition during early life on the numbers of neurons in various regions of the rat brain. The regions examined so far include the cerebellum, the dentate gyrus, the olfactory bulbs and the cerebral cortex. The only region to show, unequivocally, that a period of undernutrition during early life causes a deficit in the number of neurons was the dentate gyrus. These findings are discussed in the context of other morphological and functional deficits present in undernourished animals.
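One standard unbiased stereological design of the kind the abstract refers to (though not necessarily the one used in the studies reviewed) is the optical fractionator, where the total cell number is the raw count corrected by the sampled fractions. A minimal sketch with illustrative parameter values:

```python
# Optical-fractionator estimate of total neuron number:
# N = sum(Q-) * (1/ssf) * (1/asf) * (1/tsf), where ssf, asf and tsf are
# the section, area and thickness sampling fractions. Values are invented.
def fractionator_total(raw_count, ssf, asf, tsf):
    return raw_count / (ssf * asf * tsf)

# 500 cells counted, sampling every 10th section, 1/25 of each section
# area, and half the section thickness:
N = fractionator_total(raw_count=500, ssf=1 / 10, asf=1 / 25, tsf=1 / 2)
```

Because the estimate depends only on counts and known sampling fractions, it avoids the density-based biases criticised in the abstract.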
Abstract:
This work evaluates the performance of MECID (Boundary Element Method with Direct Interpolation) in solving the integral term associated with inertia in the Helmholtz equation, thereby allowing the eigenvalue problem to be modelled and the natural frequencies to be computed, and compares it with the results obtained by the FEM (Finite Element Method) generated by the classical Galerkin formulation. First, some problems governed by the Poisson equation are addressed, enabling an initial performance comparison between the numerical methods considered here. The problems solved apply to different and important areas of engineering, such as heat transfer, electromagnetism and particular elastic problems. In numerical terms, the difficulties of accurately approximating more complex distributions of loads, sources or sinks in the interior of the domain are well known for any boundary technique. Nevertheless, this work shows that, despite such difficulties, the performance of the Boundary Element Method is superior, both in computing the basic variable and in computing its derivative. To this end, two-dimensional problems are solved concerning elastic membranes, stresses in bars under self-weight and the determination of natural frequencies in acoustic problems in closed domains, among others, using meshes with different degrees of refinement, linear elements with radial basis functions for MECID, and degree-one polynomial interpolation basis functions for the FEM. Performance curves are generated by computing the mean percentage error for each mesh, demonstrating the convergence and accuracy of each method. The results are also compared with the analytical solutions, when available, for each example solved in this work.
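The performance curves described above rest on a mean-percentage-error measure per mesh. A generic sketch of such a measure (the exact formula used in the work may differ):

```python
# Mean percentage error of a numerical solution against an analytical one,
# as used for mesh-convergence curves (generic definition; illustrative data).
def mean_pct_error(numerical, analytical):
    errs = [abs((n - a) / a) * 100.0
            for n, a in zip(numerical, analytical) if a != 0]
    return sum(errs) / len(errs)

exact  = [1.0, 2.0, 3.0]
coarse = mean_pct_error([0.95, 1.90, 3.10], exact)   # coarse-mesh solution
fine   = mean_pct_error([0.99, 1.99, 3.01], exact)   # refined-mesh solution
```

Plotting this error against mesh refinement for each method gives the convergence curves the abstract mentions.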
Abstract:
Methods for analysing reinforced soil retaining structures under working conditions generally neglect the contribution of the facing to the equilibrium of the structure. To study the influence of the facing's unit weight and of its stiffness-related properties on the performance of reinforced soil structures, numerical simulations of several structures are carried out using the double-precision version of the CRISP92-SC program. The use of different element types to represent the facing is also evaluated. A rigid facing is found to impose a significant reduction in the maximum tensile loads in the reinforcements and in the displacements of reinforced soil structures. The influence of the facing's unit weight on the internal stability of the reinforced mass proves negligible, while the bending stiffness and the axial stiffness of the facing, functions of its geometry and Young's modulus, are influential parameters in the behaviour of reinforced soil retaining structures. The variations in reinforcement tension and in the resultant shear force at the facing caused by facing stiffening are analysed, and a relationship between them is proposed. Regarding how to represent a facing of significant stiffness when simulating a reinforced soil structure with CRISP92-SC, it is observed that representing the facing with either beam elements or quadrilateral elements does not change the results of the analysis.
Abstract:
Graphical user interfaces (GUIs) make software easy to use by providing the user with visual controls. Correctness of the GUI's code is therefore essential to the correct execution of the overall software. Models can help in the evaluation of interactive applications by allowing designers to concentrate on their more important aspects. This paper describes our approach to reverse engineering an abstract model of a user interface directly from the GUI's legacy code. We also present results from a case study. These results are encouraging and give evidence that the goal of reverse engineering user interfaces can be met with further work on this technique.
Abstract:
In plant breeding programs that aim to obtain cultivars with nitrogen (N) use efficiency, the focus is on selection methods and experimental procedures that present low cost, fast response and high repeatability, and that can be applied to a large number of cultivars. Thus, the objectives of this study were to classify maize cultivars regarding their N use efficiency and response in a breeding program, and to validate the methodology with contrasting doses of the nutrient. The experimental design was a randomized block with the treatments arranged in a split-plot scheme with three replicates, five N doses (0, 30, 60, 120 and 200 kg ha-1) in the plots, and six cultivars in the subplots. We compared a method examining efficiency and response (ER) with two contrasting doses of N. After that, analysis of variance, mean comparison and regression analysis were performed. In conclusion, the use efficiency and response method based on two N levels classifies the cultivars in the same way as the regression analysis, and it is appropriate for the plant breeding routine. It is therefore necessary to identify the N levels required to discriminate maize cultivars under conditions of low and high N availability in plant breeding programs that aim to obtain efficient and responsive cultivars. Moreover, analysis of the genotype × environment interaction in experiments with contrasting doses is always required, even when the interaction is not significant.
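A two-dose efficiency/response classification of the kind described above is commonly implemented by taking efficiency as yield under low N, response as the yield gain per unit of extra N, and splitting cultivars at the respective means. The sketch below follows that common convention with invented yields; the paper's exact criteria may differ:

```python
# Efficiency/response (ER) classification from two contrasting N doses.
# cultivars maps name -> (yield at low N, yield at high N), in t/ha.
def classify(cultivars, low_dose, high_dose):
    eff = {c: y_low for c, (y_low, _) in cultivars.items()}
    resp = {c: (y_high - y_low) / (high_dose - low_dose)
            for c, (y_low, y_high) in cultivars.items()}
    e_mean = sum(eff.values()) / len(eff)
    r_mean = sum(resp.values()) / len(resp)
    return {c: ('efficient' if eff[c] >= e_mean else 'inefficient',
                'responsive' if resp[c] >= r_mean else 'non-responsive')
            for c in cultivars}

# Hypothetical yields at 30 and 200 kg N/ha:
labels = classify({'A': (6.0, 9.0), 'B': (4.0, 8.5), 'C': (5.5, 6.0)},
                  low_dose=30, high_dose=200)
```

Each cultivar lands in one of four quadrants (efficient/inefficient × responsive/non-responsive), which is the classification the study validates against regression analysis.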
Abstract:
Aiming to compare three different methods for the determination of organic carbon (OC) in soil and in fractions of humic substances, seventeen Brazilian soil samples of different classes and textures were evaluated. Amounts of OC in the soil samples and the humic fractions were measured by the dichromate-oxidation method, with and without external heating in a digestion block at 130 °C for 30 min; by the loss-on-ignition method at 450 °C for 5 h and at 600 °C for 6 h; and by the dry combustion method. Dry combustion was used as the reference to measure the efficiency of the other methods. Soil OC measured by the dichromate-oxidation method with external heating had the highest efficiency and the best agreement with the reference method. When external heating was not used, the mean recovery efficiency dropped to 71%. The amount of OC was overestimated by the loss-on-ignition methods. Regression equations between the total OC contents of the reference method and those of the other methods showed relatively good adjustment, but all intercepts differed from zero (p < 0.01), which suggests that better accuracy can be obtained by considering the intercept as well, rather than a single correction factor. The Walkley-Black method underestimated the OC contents of the humic fractions, which was associated with partial oxidation of the humin fraction. Better results were obtained when external heating was used. For the organic matter fractions, the OC in the humic and fulvic acid fractions can be determined without external heating if the reference method is not available, but the humin fraction requires external heating.
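The abstract's recommendation, correcting a rapid method against the dry-combustion reference with both a slope and an intercept rather than a single factor, amounts to a simple linear calibration. A sketch with invented coefficients and data:

```python
# Two-parameter calibration of a rapid OC method against a reference
# method (slope AND intercept, as the abstract recommends over a single
# correction factor). All values below are invented for illustration.
def linfit(xs, ys):
    """Least-squares line: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

rapid     = [5.0, 10.0, 20.0, 40.0]   # g/kg, rapid method readings
reference = [7.5, 14.0, 27.0, 53.0]   # g/kg, dry-combustion reference
a, b = linfit(rapid, reference)
corrected = [a + b * x for x in rapid]   # calibrated estimates
```

With a non-zero intercept, a single multiplicative factor would over- or under-correct at one end of the OC range, which is exactly the distortion the two-parameter fit removes.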
Abstract:
Soils of tropical regions are more weathered and require conservation management to maintain and improve the quality of their components. The objective of this study was to evaluate the availability of K, the organic matter content and the total carbon stock of an Argisol after vinasse application and manual and mechanized harvesting of burnt and raw sugarcane in western São Paulo. Data collection was done in the 2012/2013 harvest, in a bioenergy company in Presidente Prudente/SP. The research was arranged in a split-plot scheme in a 5x5 factorial design, characterized by four management systems: without vinasse application and harvest without burning; with vinasse application and harvest without burning; with vinasse application and harvest after burning; without vinasse application and harvest after burning; plus native forest, and five soil sampling depths (0-10, 10-20, 20-30, 30-40, 40-50 cm), with four replications. In each treatment, the K content in the soil and in the remaining dry biomass accumulated in the area, and the levels of organic matter, organic carbon and soil carbon stock were determined. Mean values were compared by the Tukey test. Vinasse application associated with harvest without burning increased the K content in soil layers down to 40 cm deep. The managements without vinasse application and with manual harvest after burning, and without vinasse application and with mechanical harvesting without burning, did not increase the levels of organic matter, organic carbon and total soil organic carbon stock, while vinasse application with harvest after burning and without burning increased these attributes at the 0-10 cm depth.
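The soil carbon stock evaluated per depth layer is conventionally computed from the OC content, bulk density and layer thickness; the sketch below uses the standard unit conversion with invented values (the paper's exact procedure may differ):

```python
# Soil carbon stock per layer:
# stock (Mg/ha) = OC (g/kg) * bulk density (Mg/m^3) * depth (cm) / 10
# (standard conversion; the input values below are illustrative).
def carbon_stock_Mg_ha(oc_g_kg, bulk_density_Mg_m3, depth_cm):
    return oc_g_kg * bulk_density_Mg_m3 * depth_cm / 10.0

# Hypothetical 0-10 cm and 10-20 cm layers: (OC, bulk density, thickness)
layers = [(12.0, 1.4, 10), (9.0, 1.5, 10)]
total = sum(carbon_stock_Mg_ha(*lay) for lay in layers)
```

Summing the per-layer stocks over the sampled profile (here down to 20 cm, in the study down to 50 cm) gives the total stock compared across managements.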
Abstract:
The objective of this work was to study the distribution of values of the coefficient of variation (CV) in experiments with the papaya crop (Carica papaya L.), proposing ranges to guide researchers in their evaluation of different characters in the field. The data used in this study were obtained by bibliographical review of Brazilian journals, dissertations and theses. This study considered the following characters: diameter of the stalk, insertion height of the first fruit, plant height, number of fruits per plant, fruit biomass, fruit length, equatorial diameter of the fruit, pulp thickness, fruit firmness, soluble solids and internal cavity diameter, for which ranges of CV values were obtained for each character based on the methodologies proposed by Garcia and by Costa, and on the standard classification of Pimentel-Gomes. The results indicated that the ranges of CV values differed among the various characters, presenting large variation, which justifies the need for a specific evaluation range for each character. In addition, the use of the classification ranges obtained from the methodology of Costa is recommended.
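The CV itself and the fixed Pimentel-Gomes classification it is being compared against are easy to state in code. The sketch below uses the commonly cited Pimentel-Gomes cut-offs (low < 10%, medium 10-20%, high 20-30%, very high > 30%); the abstract's point is precisely that character-specific ranges fit better than such fixed ones:

```python
# Coefficient of variation and the standard Pimentel-Gomes classification
# (fixed cut-offs as commonly cited; data values are illustrative).
def cv_percent(values):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return (var ** 0.5) / mean * 100.0

def pimentel_gomes(cv):
    if cv < 10: return 'low'
    if cv < 20: return 'medium'
    if cv < 30: return 'high'
    return 'very high'

# Hypothetical plot yields for one character:
cv = cv_percent([10.0, 12.0, 11.0, 9.0])
label = pimentel_gomes(cv)
```

Character-specific ranges replace the fixed thresholds in `pimentel_gomes` with percentiles of the CV distribution observed for each character.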
Abstract:
Minimally invasive cardiovascular interventions guided by multiple imaging modalities are rapidly gaining clinical acceptance for the treatment of several cardiovascular diseases. These images are typically fused with richly detailed pre-operative scans through registration techniques, enhancing the intra-operative clinical data and easing the image-guided procedures. Nonetheless, rigid models have been used to align the different modalities, not taking into account the anatomical variations of the cardiac muscle throughout the cardiac cycle. In the current study, we present a novel strategy to compensate for the beat-to-beat physiological adaptation of the myocardium. To this end, we intend to prove that a complete myocardial motion field can be quickly recovered from the displacement field at the myocardial boundaries, making this an efficient strategy to locally deform the cardiac muscle. We address this hypothesis by comparing three different strategies to recover a dense myocardial motion field from a sparse one, namely a diffusion-based approach, thin-plate splines, and multiquadric radial basis functions. Two experimental setups were used to validate the proposed strategy. First, an in silico validation was carried out on synthetic motion fields obtained from two realistic simulated ultrasound sequences. Then, 45 mid-ventricular 2D sequences of cine magnetic resonance imaging were processed to further evaluate the different approaches. The results showed that accurate boundary tracking combined with dense myocardial recovery via interpolation/diffusion is a potentially viable solution to speed up dense myocardial motion field estimation and, consequently, to deform/compensate the myocardial wall throughout the cardiac cycle. Copyright © 2015 John Wiley & Sons, Ltd.
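One of the three sparse-to-dense strategies compared above, thin-plate-spline interpolation of boundary displacements, can be sketched with a synthetic contour and a known smooth motion field (the setup below is illustrative, not the authors' pipeline):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Sparse "boundary" displacements on a circular contour standing in for
# the myocardial border; the true field is a simple radial expansion.
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
boundary = np.c_[np.cos(theta), np.sin(theta)]
disp = 0.1 * boundary            # known displacement u(x) = 0.1 * x

# Thin-plate-spline interpolant from sparse boundary data.
interp = RBFInterpolator(boundary, disp, kernel='thin_plate_spline')

# Recover the dense motion field at interior ("myocardial") points.
interior = np.array([[0.5, 0.0], [0.0, -0.3]])
dense = interp(interior)
```

Because the thin-plate-spline interpolant includes a linear polynomial term, it reproduces this linear test field exactly at the interior points, which is the kind of in silico check the abstract's validation performs on more realistic fields.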