977 results for Mindlin Pseudospectral Plate Element, Chebyshev Polynomial, Integration Scheme
Abstract:
The understanding of complex physiological processes requires information from many different areas of knowledge. To meet this interdisciplinary scenario, the ability to integrate and articulate information is demanded. Such an approach is difficult because, more often than not, information is fragmented across undergraduate education in the Health Sciences. Shifting from a fragmentary, in-depth view of many topics to joining them horizontally in a global view is not a trivial task for teachers to implement. To attain that objective we proposed the course described herein, "Biochemistry of the envenomation response", aimed at integrating previous contents of Health Sciences courses, following international recommendations for the interdisciplinary model. The contents were organized in modules of increasing topic complexity. Full understanding of the envenoming pathophysiology in each module would be attained by integrating knowledge from different disciplines. An active-learning strategy was employed, focused on concept map drawing. Evaluation was obtained from a 30-item Likert-type survey answered by ninety students; 84% of the students considered that the number of relations they were able to establish, as seen in their concept maps, increased throughout the course. Similarly, 98% considered that both the theme and the strategy adopted in the course contributed to developing an interdisciplinary view.
Abstract:
Discrete element method (DEM) modeling is used in parallel with a model for coalescence of deformable surface wet granules. This produces a method capable of predicting both collision rates and coalescence efficiencies for use in derivation of an overall coalescence kernel. These coalescence kernels can then be used in computationally efficient meso-scale models such as population balance equation (PBE) models. A soft-sphere DEM model using periodic boundary conditions and a unique boxing scheme was utilized to simulate particle flow inside a high-shear mixer. Analysis of the simulation results provided collision frequency, aggregation frequency, kinetic energy, coalescence efficiency and compaction rates for the granulation process. This information can be used to bridge the gap in multi-scale modeling of granulation processes between the micro-scale DEM/coalescence modeling approach and a meso-scale PBE modeling approach.
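As an illustration of the soft-sphere DEM approach with periodic boundaries and a boxing (cell-list) neighbour search described above, the following is a minimal sketch in Python. The contact law (linear spring-dashpot) and all parameter values are illustrative assumptions, not the authors' code.

```python
import numpy as np

# Minimal soft-sphere DEM step with a cell-list ("boxing") neighbour search and
# periodic boundaries. Contact law and parameters (k_n, eta, radius) are
# illustrative placeholders, not taken from the paper.
def dem_step(pos, vel, box, radius=0.005, k_n=1e4, eta=0.5, dt=1e-5, mass=1e-3):
    cell = 2 * radius                                   # box edge >= particle diameter
    ncell = np.maximum((box // cell).astype(int), 1)
    idx = (pos / box * ncell).astype(int) % ncell
    boxes = {}                                          # hash particles into boxes
    for i, c in enumerate(map(tuple, idx)):
        boxes.setdefault(c, []).append(i)
    forces = np.zeros_like(pos)
    collisions = 0
    for c, members in boxes.items():
        neigh = []                                      # this box plus its 26 neighbours
        for off in np.ndindex(3, 3, 3):
            key = tuple((np.array(c) + np.array(off) - 1) % ncell)
            neigh.extend(boxes.get(key, []))
        for i in members:
            for j in neigh:
                if j <= i:
                    continue
                rij = pos[i] - pos[j]
                rij -= box * np.round(rij / box)        # minimum image (periodic)
                dist = np.linalg.norm(rij)
                overlap = 2 * radius - dist
                if overlap > 0:                         # contact -> count a collision
                    collisions += 1
                    nvec = rij / dist
                    vn = np.dot(vel[i] - vel[j], nvec)
                    f = (k_n * overlap - eta * vn) * nvec   # linear spring-dashpot force
                    forces[i] += f
                    forces[j] -= f
    vel += forces / mass * dt
    pos = (pos + vel * dt) % box
    return pos, vel, collisions
```

Accumulating the per-step collision counts over a simulation is one way such a model can feed collision frequencies into a coalescence kernel for a PBE model.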
Abstract:
Quantum computers promise to increase greatly the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
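The dual-rail encoding underlying this proposal, one photon shared between two optical modes, can be illustrated with a small numpy sketch; beam splitters and phase shifters act as 2x2 unitaries on the mode amplitudes. The angles below are arbitrary illustrations, not part of the scheme's specification.

```python
import numpy as np

# Dual-rail qubit: |0> = photon in mode a, |1> = photon in mode b.
def beam_splitter(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def phase_shifter(phi):
    return np.diag([1.0, np.exp(1j * phi)])

state = np.array([1.0, 0.0], dtype=complex)      # photon in mode a
state = beam_splitter(np.pi / 4) @ state         # 50:50 splitter -> equal superposition
state = phase_shifter(np.pi / 2) @ state         # relative phase between the rails
print(np.abs(state) ** 2)                        # detection probabilities at the two ports
```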
Abstract:
Power system real-time security assessment is one of the fundamental modules of electricity markets. Typically, when a contingency occurs, the security assessment and enhancement module is required to be ready for action within about 20 minutes to meet the real-time requirement. The recent California blackout again highlighted the importance of system security. This paper proposes an approach for power system security assessment and enhancement based on information provided by a pre-defined system parameter space. The proposed scheme opens up an efficient way for real-time security assessment and enhancement in a competitive electricity market for the single-contingency case.
Abstract:
Modeling volcanic phenomena is complicated by free surfaces that often support large rheological gradients. Analytical solutions and analogue models provide explanations for fundamental characteristics of lava flows, but more sophisticated models are needed, incorporating improved physics and rheology to capture realistic events. To advance our understanding of the flow dynamics of highly viscous lava in Peléean lava dome formation, axisymmetric Finite Element Method (FEM) models of generic endogenous dome growth have been developed. We use a novel technique, the level-set method, which tracks a moving interface while leaving the mesh unaltered. The model equations are formulated in an Eulerian framework. In this paper we test the quality of this technique in our numerical scheme by considering existing analytical and experimental models of lava dome growth that assume a constant Newtonian viscosity. We then compare our model against analytical solutions for real lava domes extruded on Soufrière, St. Vincent, W.I. in 1979 and Mount St. Helens, USA in October 1980, using an effective viscosity. The level-set method is found to be computationally light and robust enough to model the free surface of a growing lava dome. In addition, modeling the extruded lava with a constant pressure head naturally results in a drop in extrusion rate with increasing dome height, which explains lava dome growth observables more appropriately than a fixed extrusion rate. From the modeling point of view, the level-set method will ultimately provide an opportunity to capture more of the physics while benefiting from the numerical robustness of regular grids.
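The core of the level-set idea, advecting a signed-distance-like field whose zero contour marks the free surface on a fixed grid, can be sketched as below. This first-order upwind update (with wrap-around boundaries from np.roll) is a generic illustration under assumed velocity fields and grid spacings, not the paper's scheme.

```python
import numpy as np

# First-order upwind update of a level-set function phi on a fixed grid; the
# zero contour of phi marks the free surface and the mesh is never moved.
def level_set_step(phi, u, v, dx, dy, dt):
    # one-sided differences chosen by the sign of the local velocity (upwinding)
    dpx_m = (phi - np.roll(phi,  1, axis=0)) / dx   # backward difference in x
    dpx_p = (np.roll(phi, -1, axis=0) - phi) / dx   # forward  difference in x
    dpy_m = (phi - np.roll(phi,  1, axis=1)) / dy
    dpy_p = (np.roll(phi, -1, axis=1) - phi) / dy
    dphi_dx = np.where(u > 0, dpx_m, dpx_p)
    dphi_dy = np.where(v > 0, dpy_m, dpy_p)
    return phi - dt * (u * dphi_dx + v * dphi_dy)   # phi_t + u.grad(phi) = 0
```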
Abstract:
In this paper, a progressive asymptotic approach procedure is presented for solving the steady-state Horton-Rogers-Lapwood problem in a fluid-saturated porous medium. The Horton-Rogers-Lapwood problem possesses a bifurcation and therefore makes the direct use of conventional finite element methods difficult. Even if the Rayleigh number is high enough to drive the occurrence of natural convection in a fluid-saturated porous medium, conventional methods will often produce a trivial non-convective solution. This difficulty can be overcome using the progressive asymptotic approach procedure in association with the finite element method. The method considers a series of modified Horton-Rogers-Lapwood problems in which gravity is assumed to tilt by a small angle away from vertical. The main idea behind the progressive asymptotic approach procedure is that, by solving a sequence of such modified problems with decreasing tilt, an accurate non-zero velocity solution to the Horton-Rogers-Lapwood problem can be obtained. This solution provides a very good initial prediction for the solution to the original Horton-Rogers-Lapwood problem, so that the non-zero velocity solution can be successfully obtained when the tilt angle is set to zero. Comparison of numerical solutions with analytical ones for a benchmark problem of any rectangular geometry has demonstrated the usefulness of the present progressive asymptotic approach procedure. Finally, the procedure has been used to investigate the effect of basin shapes on natural convection of pore-fluid in a porous medium. (C) 1997 by John Wiley & Sons, Ltd.
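The outer loop of such a continuation strategy can be written in a few lines. In the sketch below, solve_tilted_hrl is a hypothetical placeholder for the nonlinear FEM solve of the tilted-gravity problem, and the tilt schedule is an assumed example, not the paper's values.

```python
import numpy as np

# Progressive asymptotic approach: solve a sequence of modified problems with
# gravity tilted by a shrinking angle, feeding each converged field in as the
# initial guess for the next, smaller tilt. `solve_tilted_hrl` is a placeholder
# for the nonlinear FEM solve, not a real library call.
def progressive_asymptotic(solve_tilted_hrl, initial_guess, rayleigh,
                           tilt_angles_deg=(5.0, 2.0, 1.0, 0.5, 0.1, 0.0)):
    solution = initial_guess
    for alpha_deg in tilt_angles_deg:          # decreasing tilt; last entry is vertical gravity
        alpha = np.deg2rad(alpha_deg)
        solution = solve_tilted_hrl(rayleigh, tilt=alpha, initial=solution)
    return solution                            # non-trivial convective solution at zero tilt
```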
Abstract:
The study of the mechanisms of mechanical alloying requires knowledge of the impact characteristics between the ball and vial in the presence of milling powders. In this paper, free-falling experiments have been used to investigate the characteristics of impact events involved in mechanical milling. The effects of milling conditions, including impact velocity, ball size and powder thickness, on the coefficient of restitution and impact force are studied. It is found that the powder has a significant influence on the impact process due to its porous structure. This effect can be demonstrated using a modified Kelvin model. This study also confirms that the impact force is a relevant parameter for characterising the impact event due to its sensitivity to the milling conditions. (C) 1998 Elsevier Science S.A.
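A plain (unmodified) Kelvin model, a linear spring in parallel with a dashpot, already illustrates how contact parameters set the rebound velocity and hence the coefficient of restitution. The sketch below uses illustrative parameter values and is not the paper's modified model.

```python
import numpy as np

# Kelvin(-Voigt) contact model: linear spring k in parallel with dashpot c.
# Integrating the contact phase of an impact gives the rebound velocity and
# the coefficient of restitution e = |v_out| / |v_in|. Values are illustrative.
def restitution_kelvin(v_in, m=0.01, k=1e5, c=5.0, dt=1e-7):
    x, v = 0.0, v_in                       # x: overlap, v: velocity (positive inward)
    while True:
        f = -k * x - c * v                 # spring-dashpot force opposing penetration
        v += f / m * dt
        x += v * dt
        if x <= 0.0:                       # contact ends when the overlap returns to zero
            break
    return abs(v) / abs(v_in)

print(restitution_kelvin(v_in=1.0))        # restitution coefficient for a 1 m/s impact
```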
Abstract:
Algorithms for explicit integration of structural dynamics problems with multiple time steps (subcycling) are investigated. Only one such algorithm, due to Smolinski and Sleith, has proved to be stable in a classical sense. A simplified version of this algorithm that retains its stability is presented. However, as with the original version, it can be shown to sacrifice accuracy to achieve stability. Another algorithm in use is shown to be only statistically stable, in that a probability of stability can be assigned if appropriate time step limits are observed. This probability improves rapidly with the number of degrees of freedom in a finite element model. The stability problems are shown to be a property of the central difference method itself, which is modified to give the subcycling algorithm. A related problem is shown to arise when a constraint equation in time is introduced into a time-continuous space-time finite element model. (C) 1998 Elsevier Science S.A.
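For reference, the base method that the subcycling algorithms modify is the explicit central difference scheme for M u'' + K u = f, whose stability limit dt < 2/omega_max is the source of the issues discussed above. The sketch below assumes a lumped (diagonal) mass matrix and a single global time step; it is not a subcycling implementation.

```python
import numpy as np

# Central-difference (explicit) integration of M u'' + K u = f(t) with a lumped
# mass matrix; subcycling variants advance different node groups with different
# time steps, but the update formula per step is the same.
def central_difference(M_diag, K, f, u0, v0, dt, n_steps):
    a0 = (f(0.0) - K @ u0) / M_diag
    u_prev = u0 - dt * v0 + 0.5 * dt ** 2 * a0     # fictitious displacement at t = -dt
    u = u0.copy()
    history = [u0.copy()]
    for k in range(n_steps):
        accel = (f(k * dt) - K @ u) / M_diag       # a_n = M^{-1} (f_n - K u_n)
        u_next = 2 * u - u_prev + dt ** 2 * accel  # central-difference update
        u_prev, u = u, u_next
        history.append(u.copy())
    return np.array(history)
```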
Abstract:
Subcycling algorithms which employ multiple timesteps have been previously proposed for explicit direct integration of first- and second-order systems of equations arising in finite element analysis, as well as for integration using explicit/implicit partitions of a model. The author has recently extended this work to implicit/implicit multi-timestep partitions of both first- and second-order systems. In this paper, improved algorithms for multi-timestep implicit integration are introduced that overcome some weaknesses of those proposed previously. In particular, in the second-order case, improved stability is obtained. Some of the energy conservation properties of the Newmark family of algorithms are shown to be preserved in the new multi-timestep extensions of the Newmark method. In the first-order case, the generalized trapezoidal rule is extended to multiple timesteps in a simple way that permits an implicit/implicit partition. Explicit special cases of the present algorithms exist. These are compared to algorithms proposed previously. (C) 1998 John Wiley & Sons, Ltd.
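The single-timestep Newmark step that these multi-timestep extensions build on can be sketched as follows for the undamped linear case; gamma = 1/2, beta = 1/4 is the unconditionally stable average-acceleration member of the family. This is a generic textbook formulation, not the paper's partitioned algorithm.

```python
import numpy as np

# One implicit Newmark-beta step for M u'' + K u = f (no damping).
def newmark_step(M, K, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
    u_pred = u + dt * v + (0.5 - beta) * dt ** 2 * a      # explicit predictor
    v_pred = v + (1.0 - gamma) * dt * a
    lhs = M + beta * dt ** 2 * K                          # effective system matrix
    a_next = np.linalg.solve(lhs, f_next - K @ u_pred)    # implicit solve for acceleration
    u_next = u_pred + beta * dt ** 2 * a_next             # corrector
    v_next = v_pred + gamma * dt * a_next
    return u_next, v_next, a_next
```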
Abstract:
A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but its sum or product with a scrambling random variable of known distribution is known. The performance of two likelihood-based estimators is investigated: a Bayesian estimator obtained through a Markov chain Monte Carlo (MCMC) sampling scheme, and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and the relative performance of the Bayesian estimator improves as the responses become more scrambled.
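The multiplicative scrambling setup can be illustrated with a short simulation. The sketch below uses a simple moment-based baseline (divide the scrambled response by the known mean of the scrambling variable, then run least squares); the paper's Bayesian MCMC and maximum-likelihood estimators refine this idea. The data-generating choices are illustrative assumptions.

```python
import numpy as np

# Simulated multiplicative scrambling: the respondent reports z = y * s, where s
# has a known distribution and y (the sensitive variable) is never observed.
rng = np.random.default_rng(0)
n, beta_true = 500, np.array([2.0, -1.5])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ beta_true + rng.normal(scale=0.5, size=n)      # unobserved sensitive responses
s = rng.gamma(shape=10.0, scale=0.1, size=n)           # scrambling variable, E[s] = 1
z = y * s                                              # only the scrambled response is seen

E_s = 10.0 * 0.1                                       # known mean of the scrambling distribution
beta_hat = np.linalg.lstsq(X, z / E_s, rcond=None)[0]  # moment-based baseline estimate
print(beta_hat)                                        # roughly recovers beta_true for large n
```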
Abstract:
The problem of extracting pore size distributions from characterization data is solved here with particular reference to adsorption. The technique developed is based on a finite element collocation discretization of the adsorption integral, with the isotherm data fitted by least squares using regularization. A rapid and simple technique for ensuring non-negativity of the solutions is also developed, which modifies an original solution exhibiting some negativity. The technique yields stable and converged solutions and is implemented in the package RIDFEC. The package is demonstrated to be robust, yielding results that are less sensitive to experimental error than conventional methods, with fitting errors matching the known data error. It is shown that the choice of a relative or absolute error norm in the least-squares analysis is best based on the kind of error in the data. (C) 1998 Elsevier Science Ltd. All rights reserved.
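The general shape of such an inversion, a discretized adsorption integral fitted by regularized, non-negative least squares, can be sketched as below. The kernel is a toy placeholder and the non-negativity is enforced by NNLS rather than the paper's modification technique; both are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import nnls

# Regularized, non-negative inversion of a discretized adsorption integral
# N(p_i) = sum_j K(p_i, r_j) f(r_j): Tikhonov regularization via an augmented
# least-squares system, non-negativity of the distribution f via NNLS.
def invert_isotherm(pressures, radii, n_measured, lam=1e-2):
    K = pressures[:, None] / (pressures[:, None] + radii[None, :])  # toy kernel K(p, r)
    A = np.vstack([K, np.sqrt(lam) * np.eye(len(radii))])           # augmented system
    b = np.concatenate([n_measured, np.zeros(len(radii))])
    f, _ = nnls(A, b)                                                # non-negative solution
    return f
```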
Abstract:
We have evaluated T-DNA mediated plant promoter tagging, with a left-border-linked promoterless firefly luciferase (luc) construct, as a strategy for the isolation of novel plant promoters. In a population of approximately 300 transformed tobacco plants, 10 lines showed LUC activity, including novel tissue-specific and developmental patterns of expression. One line, showing LUC activity only in the shoot and root apical meristems, was further characterised. Inverse PCR was used to amplify a 1.5 kb fragment of plant DNA flanking the single-copy T-DNA insertion in this line. With the exception of a 249 bp highly repetitive element, this sequence is present as a single copy in the tobacco genome, and is not homologous to any previously characterised DNA sequences. Sequence analysis revealed the presence of several motifs that may be involved in transcriptional regulation. Transgenic tobacco plants transformed with a transcriptional fusion of this putative promoter sequence to the beta-glucuronidase (uidA) reporter gene showed GUS activity confined to the shoot tip and mature pollen. This promoter may be useful to direct the expression of genes controlling the transition to flowering, or genes to reduce losses due to pests and stresses damaging plant apical meristems.