944 results for computational fluid dynamic
Abstract:
Quantitative laser ablation (LA)-ICP-MS analyses of fluid inclusions, trace element chemistry of sulfides, stable isotopes (S), and Pb isotopes have been used to discriminate the formation of two contrasting mineralization styles and to evaluate the origin of the Cu and Au at Mt Morgan. The Mt Morgan Au-Cu deposit is hosted by Devonian felsic volcanic rocks that have been intruded by multiple phases of the Mt Morgan Tonalite, a low-K, low-Al2O3 tonalite-trondhjemite-dacite (TTD) complex. An early, barren massive sulfide mineralization with stringer veins conforms to VHMS sub-seafloor replacement processes, whereas the high-grade Au-Cu ore is associated with a later quartz-chalcopyrite-pyrite stockwork mineralization that is related to intrusive phases of the Tonalite complex. LA-ICP-MS fluid inclusion analyses reveal high As (avg. 8850 ppm) and Sb (avg. 140 ppm) for the Au-Cu mineralization and 5 to 10 times higher Cu concentration than in the fluids associated with the massive pyrite mineralization. Overall, the hydrothermal system of Mt Morgan is characterized by low average fluid salinities in both mineralization styles (45-80% seawater salinity) and temperatures of 210 to 270 °C estimated from fluid inclusions. Laser Raman spectroscopic analysis indicates a consistent and uniform array of CO2-bearing fluids. Comparison with active submarine hydrothermal vents shows an enrichment of the Mt Morgan fluids in base metals. Therefore, a seawater-dominated fluid is assumed for the barren massive sulfide mineralization, whereas magmatic volatile contributions are implied for the intrusion-related mineralization. Condensation of magmatic vapor into a seawater-dominated environment explains the CO2 occurrence, the low salinities, and the enriched base and precious metal fluid composition that is associated with the Au-Cu mineralization. The sulfur isotope signature of pyrite and chalcopyrite is composed of fractionated Devonian seawater and oxidized magmatic fluids or remobilized sulfur from existing sulfides. Pb isotopes indicate that Au and Cu originated from the Mt Morgan intrusions and particular volcanic strata that show an elevated Cu background. (C) 2002 Elsevier Science B.V. All rights reserved.
Abstract:
We use published and new trace element data to identify element ratios which discriminate between arc magmas from the supra-subduction zone mantle wedge and those formed by direct melting of subducted crust (i.e. adakites). The clearest distinction is obtained with those element ratios which are strongly fractionated during refertilisation of the depleted mantle wedge, ultimately reflecting slab dehydration. Hence, adakites have significantly lower Pb/Nd and B/Be but higher Nb/Ta than typical arc magmas and continental crust as a whole. Although Li and Be are also overenriched in continental crust, the behaviour of Li/Yb and Be/Nd is more complex and these ratios do not provide unique signatures of slab melting. Archaean tonalite-trondhjemite-granodiorites (TTGs) strongly resemble ordinary mantle wedge-derived arc magmas in terms of fluid-mobile trace element content, implying that they did not form by slab melting but that they originated from mantle which was hydrated and enriched in elements lost from slabs during prograde dehydration. We suggest that Archaean TTGs formed by extensive fractional crystallisation from a mafic precursor. It is widely claimed that the time between the creation and subduction of oceanic lithosphere was significantly shorter in the Archaean (i.e. 20 Ma) than it is today. This difference was seen as an attractive explanation for the presumed preponderance of adakitic magmas during the first half of Earth's history. However, when we consider the effects of a higher potential mantle temperature on the thickness of oceanic crust, it follows that the mean age of oceanic lithosphere has remained virtually constant. Formation of adakites has therefore always depended on local plate geometry and not on potential mantle temperature.
Abstract:
The widespread adoption of soil conservation technologies by farmers (notably contour hedgerows) observed in Guba, Cebu City, Philippines, is not often observed elsewhere in the country. Adoption of these technologies resulted from the interaction of site-specific factors, appropriate extension systems, and technologies. However, lack of hedgerow maintenance, decreasing hedgerow quality, and the disappearance of hedgerows raised concerns about sustainability. The dynamic nature of upland farming systems suggests the need for a location-specific farming system development framework, which provides farmers with ongoing extension for the continual promotion of appropriate conservation practices.
Abstract:
In this paper we refer to the gene-to-phenotype modeling challenge as the GP problem. Integrating information across levels of organization within a genotype-environment system is a major challenge in computational biology. However, resolving the GP problem is a fundamental requirement if we are to understand and predict phenotypes given knowledge of the genome and model dynamic properties of biological systems. Organisms are consequences of this integration, and it is a major property of biological systems that underlies the responses we observe. We discuss the E(NK) model as a framework for investigation of the GP problem and the prediction of system properties at different levels of organization. We apply this quantitative framework to an investigation of the processes involved in genetic improvement of plants for agriculture. In our analysis, N genes determine the genetic variation for a set of traits that are responsible for plant adaptation to E environment-types within a target population of environments. The N genes can interact in epistatic NK gene-networks through the way that they influence plant growth and development processes within a dynamic crop growth model. We use a sorghum crop growth model, available within the APSIM agricultural production systems simulation model, to integrate the gene-environment interactions that occur during growth and development and to predict genotype-to-phenotype relationships for a given E(NK) model. Directional selection is then applied to the population of genotypes, based on their predicted phenotypes, to simulate the dynamic aspects of genetic improvement by a plant-breeding program. The outcomes of the simulated breeding are evaluated across cycles of selection in terms of the changes in allele frequencies for the N genes and the genotypic and phenotypic values of the populations of genotypes.
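As a rough illustration of the E(NK) idea described above (genes whose effects depend on K epistatic partners and on the environment type, with directional selection applied to predicted phenotypes), the sketch below simulates a toy biallelic population. All gene effects, the network structure, the environment types, and the selection settings are invented placeholders; it does not use APSIM or the sorghum crop growth model.

```python
# Minimal toy sketch of an E(NK)-style genotype-to-phenotype simulation with
# directional selection. Illustrative only: effect tables, epistatic partners
# and selection intensity are invented, not the APSIM/sorghum model.
import numpy as np

rng = np.random.default_rng(0)
N, K, E, POP = 12, 2, 3, 200   # genes, epistatic partners per gene, environment types, population size

# Random epistatic network: each gene's contribution depends on K other genes,
# with an environment-specific effect table (the "E(NK)" construction).
partners = np.array([rng.choice([j for j in range(N) if j != i], K, replace=False) for i in range(N)])
effects = rng.normal(size=(E, N, 2 ** (K + 1)))   # lookup per environment, gene, local genotype

def phenotype(genome, env):
    """Sum environment-dependent, epistatic gene contributions for one genotype."""
    idx = [int("".join(str(genome[g]) for g in np.r_[i, partners[i]]), 2) for i in range(N)]
    return effects[env, np.arange(N), idx].sum()

pop = rng.integers(0, 2, size=(POP, N))            # biallelic genotypes
for cycle in range(10):                            # cycles of recurrent selection
    perf = np.array([np.mean([phenotype(g, e) for e in range(E)]) for g in pop])
    parents = pop[np.argsort(perf)[-POP // 5:]]    # truncation (directional) selection
    picks = rng.integers(0, len(parents), size=(POP, 2))
    mask = rng.integers(0, 2, size=(POP, N)).astype(bool)
    pop = np.where(mask, parents[picks[:, 0]], parents[picks[:, 1]])   # gene-wise recombination
    print(f"cycle {cycle}: mean phenotype {perf.mean():.2f}, allele-1 frequency {pop.mean():.2f}")
```

Tracking the mean phenotype and allele frequencies across cycles mirrors, in miniature, how the abstract evaluates simulated breeding outcomes.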
Abstract:
It has been argued that power-law time-to-failure fits for cumulative Benioff strain and an evolution in size-frequency statistics in the lead-up to large earthquakes are evidence that the crust behaves as a Critical Point (CP) system. If so, intermediate-term earthquake prediction is possible. However, this hypothesis has not been proven. If the crust does behave as a CP system, stress correlation lengths should grow in the lead-up to large events through the action of small to moderate ruptures and drop sharply once a large event occurs. However, this evolution in stress correlation lengths cannot be observed directly. Here we show, using the lattice solid model to describe discontinuous elasto-dynamic systems subjected to shear and compression, that it is possible for correlation lengths to exhibit CP-type evolution. In the case of a granular system subjected to shear, this evolution occurs in the lead-up to the largest event and is accompanied by an increasing rate of moderate-sized events and power-law acceleration of Benioff strain release. In the case of an intact sample system subjected to compression, the evolution occurs only after a mature fracture system has developed. The results support the existence of a physical mechanism for intermediate-term earthquake forecasting and suggest this mechanism is fault-system dependent. This offers an explanation of why accelerating Benioff strain release is not observed prior to all large earthquakes. The results prove the existence of an underlying evolution in discontinuous elasto-dynamic systems which is capable of providing a basis for forecasting catastrophic failure and earthquakes.
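For readers unfamiliar with the accelerating-moment-release signal mentioned above, the sketch below fits the standard power-law time-to-failure form, eps(t) = A - B*(t_f - t)^m, to a synthetic cumulative Benioff strain curve. The event catalogue is generated artificially with an accelerating rate; it is not output of the lattice solid model or of any real seismicity data.

```python
# Illustrative power-law time-to-failure fit to a synthetic cumulative Benioff
# strain curve (sum of sqrt(event energy)). Catalogue and parameters are invented.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
t_f, m_true, n_events = 10.0, 0.3, 400
u = rng.uniform(size=n_events)
t = np.sort(t_f * (1.0 - (1.0 - u) ** (1.0 / m_true)))   # event rate accelerating toward t_f
t = t[t < t_f - 0.05]                                    # drop events essentially at failure
energies = rng.pareto(1.8, size=t.size) + 1.0            # heavy-tailed synthetic event "energies"
benioff = np.cumsum(np.sqrt(energies))                   # cumulative Benioff strain

def ttf(t, A, B, tf, m):
    """Power-law time-to-failure form eps(t) = A - B*(tf - t)**m."""
    return A - B * np.clip(tf - t, 1e-6, None) ** m

p0 = [benioff[-1], benioff[-1], t_f + 1.0, 0.5]          # rough starting guesses
params, _ = curve_fit(ttf, t, benioff, p0=p0, maxfev=20000)
print("fitted failure time tf = %.2f, exponent m = %.2f" % (params[2], params[3]))
```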
Abstract:
Measurement of exchange of substances between blood and tissue has been a long-standing challenge to physiologists, and considerable theoretical and experimental accomplishments were achieved before the development of positron emission tomography (PET). Today, when modeling data from modern PET scanners, little use is made of earlier microvascular research in the compartmental models, which have become the standard by which the vast majority of dynamic PET data are analysed. However, modern PET scanners provide data with a sufficient temporal resolution and good counting statistics to allow estimation of parameters in models with more physiological realism. We explore the standard compartmental model and find that incorporation of blood flow leads to paradoxes, such as kinetic rate constants being time-dependent, and tracers being cleared from a capillary faster than they can be supplied by blood flow. The inability of the standard model to incorporate blood flow consequently raises a need for models that include more physiology, and we develop microvascular models which remove the inconsistencies. The microvascular models can be regarded as a revision of the input function. Whereas the standard model uses the organ inlet concentration as the concentration throughout the vascular compartment, we consider models that make use of spatial averaging of the concentrations in the capillary volume, which is what the PET scanner actually registers. The microvascular models are developed for both single- and multi-capillary systems and include effects of non-exchanging vessels. They are suitable for analysing dynamic PET data from any capillary bed using either intravascular or diffusible tracers, in terms of physiological parameters which include regional blood flow. (C) 2003 Elsevier Ltd. All rights reserved.
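The standard one-tissue compartmental model the abstract criticises can be written as dCt/dt = K1*Ca(t) - k2*Ct, with the arterial inlet concentration Ca(t) standing in for the whole vascular space. A minimal sketch, with invented rate constants and input function, is given below; it is not the microvascular model developed in the paper.

```python
# Minimal sketch of the standard one-tissue compartment model for dynamic PET:
# dCt/dt = K1*Ca(t) - k2*Ct. Rate constants, blood volume fraction and the
# arterial input function are assumed values for illustration only.
import numpy as np

K1, k2, Vb = 0.1, 0.05, 0.05        # mL/min/mL, 1/min, vascular volume fraction (assumed)
dt = 1.0 / 60.0                      # 1 s step, in minutes
t = np.arange(0.0, 60.0, dt)
Ca = 10.0 * t * np.exp(-t / 1.5)     # gamma-variate-like arterial input function (synthetic)

Ct = np.zeros_like(t)
for i in range(1, t.size):           # forward Euler integration of the tissue compartment
    Ct[i] = Ct[i - 1] + dt * (K1 * Ca[i - 1] - k2 * Ct[i - 1])

pet = (1 - Vb) * Ct + Vb * Ca        # what the scanner registers: tissue plus blood signal
print("peak modelled PET signal: %.2f at t = %.1f min" % (pet.max(), t[np.argmax(pet)]))
```

The microvascular revision discussed in the abstract replaces Ca(t) in the blood term by a spatial average over the capillary volume; that refinement is not reproduced here.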
Abstract:
Water wetting is a crucial issue in carbon dioxide (CO2) corrosion of multiphase flow pipelines made from mild steel. This study demonstrates the use of a novel benchtop apparatus, a horizontal rotating cylinder, to study the effect of water wetting on CO2 corrosion of mild steel in two-phase flow. The setup is similar to a standard rotating cylinder except for its horizontal orientation and the presence of two phases, typically water and oil. The apparatus has been tested by using mass-transfer measurements and CO2 corrosion measurements in single-phase water flow. CO2 corrosion measurements were subsequently performed using a water/hexane mixture with water cuts varying between 5% and 50%. While the metal surface was primarily hydrophilic under stagnant conditions, a variety of dynamic water wetting situations was encountered as the water cut and fluid velocity were altered. Threshold velocities were identified at various water cuts at which the surface became oil-wet and corrosion stopped.
Abstract:
Subcycling, or the use of different timesteps at different nodes, can be an effective way of improving the computational efficiency of explicit transient dynamic structural solutions. The method that has been most widely adopted uses a nodal partition, extending the central difference method, in which small timestep updates are performed by interpolating the displacement at neighbouring large timestep nodes. This approach leads to narrow bands of unstable timesteps, or statistical stability. It can also be in error due to lack of momentum conservation on the timestep interface. The author has previously proposed energy conserving algorithms that avoid the first problem of statistical stability. However, these sacrifice accuracy to achieve stability. An approach to conserve momentum on an element interface by adding partial velocities is considered here. Applied to extend the central difference method, this approach is simple and has accuracy advantages. The method can be programmed by summing impulses of internal forces, evaluated using local element timesteps, in order to predict a velocity change at a node. However, it is still only statistically stable, so an adaptive timestep size is needed to monitor accuracy and to be adjusted if necessary. By replacing the central difference method with the explicit generalized alpha method, it is possible to gain stability by dissipating the high frequency response that leads to stability problems. However, coding the algorithm is less elegant, as the response depends on previous partial accelerations. Extension to implicit integration is shown to be impractical due to the neglect of remote effects of internal forces acting across a timestep interface. (C) 2002 Elsevier Science B.V. All rights reserved.
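A minimal sketch of the displacement-interpolating nodal-partition subcycling the abstract refers to is given below, on a 1-D spring-mass chain integrated with the central difference (leapfrog) method. The partition, stiffnesses, masses, and timesteps are invented for illustration; this is the widely adopted baseline scheme, not the author's momentum- or energy-conserving variants.

```python
# Toy nodal-partition subcycling: "stiff" nodes advance with a small timestep
# while linearly interpolating the displacements of their large-timestep
# neighbours over each major step. All model data are illustrative.
import numpy as np

n_nodes, split, sub = 6, 3, 4            # nodes 0..2 use the small timestep, 3..5 the large one
k = np.array([400.0, 400.0, 400.0, 10.0, 10.0, 10.0])   # spring i joins node i-1 (or the wall) to node i
m = np.ones(n_nodes)
DT = 0.01                                # large (major) timestep
dts = DT / sub                           # small timestep for the stiff partition

def accel(u):
    """Accelerations of a fixed-free spring-mass chain from internal forces."""
    ext = np.diff(np.concatenate(([0.0], u)))   # spring extensions (wall held at u = 0)
    fs = k * ext
    f = -fs
    f[:-1] += fs[1:]
    return f / m

u = np.zeros(n_nodes); u[-1] = 0.1       # pull the free end and release
v = np.zeros(n_nodes)

for step in range(1000):
    u_large_old = u[split:].copy()
    a = accel(u)
    v[split:] += DT * a[split:]          # one central-difference (leapfrog) update over DT
    u[split:] = u_large_old + DT * v[split:]
    for s in range(sub):                 # 'sub' small steps for the stiff nodes
        frac = s / sub                   # interpolate the large-node displacements in time
        u_eval = u.copy()
        u_eval[split:] = (1 - frac) * u_large_old + frac * u[split:]
        a_small = accel(u_eval)[:split]
        v[:split] += dts * a_small
        u[:split] += dts * v[:split]

print("tip displacement after %d major steps: %.4f" % (1000, u[-1]))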
Abstract:
Signal peptides and transmembrane helices both contain a stretch of hydrophobic amino acids. This common feature makes it difficult for signal peptide and transmembrane helix predictors to correctly assign identity to stretches of hydrophobic residues near the N-terminal methionine of a protein sequence. The inability to reliably distinguish between N-terminal transmembrane helix and signal peptide is an error with serious consequences for the prediction of protein secretory status or transmembrane topology. In this study, we report a new method for differentiating protein N-terminal signal peptides and transmembrane helices. Based on the sequence features extracted from hydrophobic regions (amino acid frequency, hydrophobicity, and the start position), we set up discriminant functions and examined them on non-redundant datasets with jackknife tests. This method can incorporate other signal peptide prediction methods and achieve higher prediction accuracy. For Gram-negative bacterial proteins, 95.7% of N-terminal signal peptides and transmembrane helices can be correctly predicted (coefficient 0.90). Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 99% (coefficient 0.92). For eukaryotic proteins, 94.2% of N-terminal signal peptides and transmembrane helices can be correctly predicted with coefficient 0.83. Given a sensitivity of 90%, transmembrane helices can be identified from signal peptides with a precision of 87% (coefficient 0.85). The method can be used to complement current transmembrane protein prediction and signal peptide prediction methods to improve their prediction accuracies. (C) 2003 Elsevier Inc. All rights reserved.
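The kind of hydrophobic-region features and discriminant function the abstract describes can be sketched as follows. The region detector, the feature set, and the weights below are illustrative placeholders tuned by hand, not the trained discriminants or datasets reported in the study.

```python
# Hedged sketch of a feature-based discriminant between N-terminal signal
# peptides and transmembrane helices. Cutoffs and weights are invented.

KD = {  # Kyte-Doolittle hydropathy scale
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5, 'E': -3.5,
    'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8,
    'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}

def hydrophobic_region(seq, window=8, cutoff=1.5):
    """Return (start, end) of the first N-terminal window-averaged hydrophobic stretch."""
    scores = [sum(KD[a] for a in seq[i:i + window]) / window for i in range(len(seq) - window + 1)]
    for i, s in enumerate(scores[:40]):          # only look near the N-terminus
        if s >= cutoff:
            j = i
            while j < len(scores) and scores[j] >= cutoff:
                j += 1
            return i, j + window - 1
    return None

def discriminant(seq):
    """Positive score -> transmembrane helix, negative -> signal peptide (toy weights)."""
    region = hydrophobic_region(seq)
    if region is None:
        return None
    start, end = region
    length = end - start
    mean_h = sum(KD[a] for a in seq[start:end]) / length
    # TM helices tend to be longer, start further from the N-terminus and be more hydrophobic
    return 0.4 * (length - 18) + 0.2 * (start - 12) + 0.5 * (mean_h - 2.5)

# OmpA-like bacterial N-terminus with a cleavable signal peptide: toy score comes out negative
print(discriminant("MKKTAIAIAVALAGFATVAQAAPKDNTWYTGAKLGWSQYHDTGFINNNGPTHENQLGAGAF"))
```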
Abstract:
This paper conducts a dynamic stability analysis of symmetrically laminated FGM rectangular plates with general out-of-plane supporting conditions, subjected to a uniaxial periodic in-plane load and undergoing uniform temperature change. Theoretical formulations are based on Reddy's third-order shear deformation plate theory, and account for the temperature dependence of material properties. A semi-analytical Galerkin-differential quadrature approach is employed to convert the governing equations into a linear system of Mathieu-Hill equations from which the boundary points on the unstable regions are determined by Bolotin's method. Free vibration and bifurcation buckling are also discussed as subset problems. Numerical results are presented in both dimensionless tabular and graphical forms for laminated plates with FGM layers made of silicon nitride and stainless steel. The influences of various parameters such as material composition, layer thickness ratio, temperature change, static load level, and boundary constraints on the dynamic stability, buckling, and vibration frequencies are examined in detail through parametric studies.
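In generic notation (not necessarily the exact formulation used in the paper), the Galerkin-differential quadrature discretization leads to a Mathieu-Hill system

\[
\mathbf{M}\,\ddot{\mathbf{d}} + \left[\mathbf{K} - \big(P_s + P_d\cos\theta t\big)\,\mathbf{K}_G\right]\mathbf{d} = \mathbf{0},
\]

where \(P_s\) and \(P_d\) are the static and pulsating parts of the in-plane load and \(\theta\) its excitation frequency. Bolotin's method seeks periodic solutions of period \(2T = 4\pi/\theta\); retaining the first harmonic, \(\mathbf{d} = \mathbf{a}\sin(\theta t/2) + \mathbf{b}\cos(\theta t/2)\), gives the boundaries of the principal instability region from

\[
\det\!\left[\mathbf{K} - \Big(P_s \pm \tfrac{1}{2}P_d\Big)\mathbf{K}_G - \tfrac{\theta^2}{4}\,\mathbf{M}\right] = 0.
\]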
Abstract:
Borderline hypertension (BH) has been associated with an exaggerated blood pressure (BP) response during laboratory stressors. However, the incidence of target organ damage in this condition and its relation to BP hyperreactivity is an unsettled issue. Thus, we assessed the Doppler echocardiographic profile of a group of BH men (N = 36), classified according to office BP measurements, who showed an exaggerated BP response in the cycloergometric test. A group of normotensive men (NT, N = 36) with a normal BP response during the cycloergometric test was used as control. To assess vascular function and reactivity, all subjects were submitted to the cold pressor test. Before Doppler echocardiography, the BP profile of all subjects was evaluated by 24-h ambulatory BP monitoring. All subjects in the NT group presented normal monitored BP levels. In contrast, 19 subjects from the original BH group presented normal monitored BP levels and 17 presented elevated monitored BP levels. In the NT group all Doppler echocardiographic indexes were normal. All subjects from the original BH group presented normal left ventricular mass and geometrical pattern. However, in the subjects with elevated monitored BP levels, fractional shortening was greater, isovolumetric relaxation time was longer, and the early to late flow velocity ratio was reduced in relation to subjects from the original BH group with normal monitored BP levels (P<0.05). These subjects also presented an exaggerated BP response during the cold pressor test. These results support the notion of an integrated pattern of cardiac and vascular adaptation during the development of hypertension.
Abstract:
It is perhaps no exaggeration to say that there is near consensus among practitioners of thermoeconomics that exergy, rather than enthalpy alone, is the thermodynamic quantity best suited to be combined with the concept of cost in thermoeconomic modeling, since it takes into account aspects of the Second Law of Thermodynamics and allows irreversibilities to be identified. However, thermoeconomic modeling often uses exergy disaggregated into its components (chemical, thermal, and mechanical), or includes negentropy, which is a fictitious flow, thereby allowing the system to be disaggregated into its components (or subsystems) in order to improve and refine the modeling for local optimization, diagnosis, and the allocation of residues and dissipative equipment. Some authors also claim that disaggregating physical exergy into its thermal and mechanical components increases the accuracy of cost allocation results, despite increasing the complexity of the thermoeconomic model and, consequently, the computational cost involved. Recently, some authors have pointed out restrictions and possible inconsistencies in the use of negentropy and of this type of physical exergy disaggregation, and have proposed alternatives for the treatment of residues and dissipative equipment that still allow systems to be disaggregated into their components. These alternatives consist, essentially, of new proposals for disaggregating physical exergy in thermoeconomic modeling. This work therefore aims to evaluate the different physical exergy disaggregation methodologies for thermoeconomic modeling, taking into account aspects such as advantages, restrictions, inconsistencies, improvement in the accuracy of the results, increase in complexity and computational effort, and the treatment of residues and dissipative equipment for the full disaggregation of the thermal system. To this end, the different methodologies and levels of physical exergy disaggregation are applied to cost allocation for the final products (net power and useful heat) in different cogeneration plants, considering both ideal gas and real fluid as the working fluid. These plants include dissipative equipment (a condenser or a valve) or residues (exhaust gases from the heat recovery boiler). However, one of the cogeneration plants had to include neither dissipative equipment nor a heat recovery boiler, in order to isolate the effect of physical exergy disaggregation on the improvement in the accuracy of the cost allocation results for the final products.
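For reference, one common convention for the physical exergy disaggregation discussed above splits the specific physical exergy of a stream into thermal and mechanical parts; for an ideal gas with constant \(c_p\) this reads

\[
e^{PH} = e^{T} + e^{M}, \qquad
e^{T} = c_p\left[(T - T_0) - T_0 \ln\frac{T}{T_0}\right], \qquad
e^{M} = R\,T_0 \ln\frac{P}{P_0},
\]

where \(T_0\) and \(P_0\) are the reference (dead-state) temperature and pressure. This is only one of the disaggregation proposals compared in the work; the alternative splittings and the negentropy treatment are not reproduced here.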
Abstract:
Using autonomous robots capable of planning their own path is a challenge that attracts many researchers in the field of robot navigation. In this context, this work aims to implement a hybrid PSO algorithm for path planning in static environments for holonomic and non-holonomic vehicles. The proposed algorithm has two phases: the first uses the A* algorithm to find a feasible initial trajectory, which the PSO algorithm optimizes in the second phase. Finally, a post-planning phase can be applied to the path in order to adapt it to the kinematic constraints of the non-holonomic vehicle. The Ackerman model was considered for the experiments. The CARMEN (Carnegie Mellon Robot Navigation Toolkit) robotics simulation environment was used to carry out all the computational experiments, considering five artificially generated map instances with obstacles. The performance of the developed algorithm, A*PSO, was compared with the A*, conventional PSO, and Hybrid-State A* algorithms. Analysis of the results indicated that the developed hybrid A*PSO algorithm outperformed conventional PSO in solution quality. Although it found better solutions than A* in 40% of the instances, A*PSO produced trajectories with fewer turning points. Examining the results obtained for the non-holonomic model, A*PSO produced longer but smoother and safer paths.
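As an illustration of the two-phase idea (A* supplies a feasible initial path that PSO then refines), the sketch below runs a standard global-best PSO over the free waypoints of an initial path on a toy map with circular obstacles. The map, the stand-in initial path, and all fitness weights are invented; this is not the CARMEN-based A*PSO implementation evaluated in the work, and the A* stage is replaced here by a straight-line placeholder.

```python
# Sketch of the PSO refinement stage of an A*PSO-style planner: particles
# encode the free intermediate waypoints of an initial path and are moved to
# shorten the path while penalising obstacle crossings. Illustrative only.
import numpy as np

rng = np.random.default_rng(3)
obstacles = [((4.0, 2.0), 1.2), ((6.0, 6.0), 1.5)]        # circular obstacles: (centre, radius)
start, goal = np.array([0.0, 0.0]), np.array([9.0, 9.0])
initial = np.linspace(start, goal, 8)[1:-1]               # stand-in for the A* waypoints

def fitness(waypoints):
    pts = np.vstack([start, waypoints.reshape(-1, 2), goal])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    penalty = 0.0
    for (c, r) in obstacles:                               # penalise waypoints inside obstacles
        d = np.linalg.norm(pts - np.array(c), axis=1)
        penalty += np.sum(np.maximum(0.0, r - d))
    return length + 50.0 * penalty

n_particles, dim = 30, initial.size
x = initial.ravel() + rng.normal(0, 0.3, size=(n_particles, dim))   # swarm seeded around the initial path
v = np.zeros_like(x)
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for it in range(200):                                      # standard global-best PSO update
    r1, r2 = rng.random((2, n_particles, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    f = np.array([fitness(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("refined path cost (length + penalty): %.2f" % fitness(gbest))
```

A post-planning smoothing step for non-holonomic (Ackerman-type) kinematics, as described in the abstract, would follow this refinement but is not sketched here.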