Abstract:
The main objective of this work is to present an alternative boundary element method (BEM) formulation for the static analysis of three-dimensional non-homogeneous isotropic solids. These problems can be solved with the classical boundary element formulation by analyzing each subregion separately and then joining the subregions through equilibrium and displacement compatibility conditions. By establishing relations between the displacement fundamental solutions of the different domains, the alternative technique proposed in this paper allows all the domains to be analyzed as a single solid, requiring no equilibrium or compatibility equations. This formulation also leads to a smaller system of equations than the usual subregion technique, and the results obtained are more accurate. (C) 2008 Elsevier Ltd. All rights reserved.
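For reference, the classical subregion coupling that the proposed formulation avoids enforces, on each interface between two subregions, displacement compatibility and traction equilibrium. In standard notation (a textbook statement, not specific to this paper):

\[
\mathbf{u}^{(1)} = \mathbf{u}^{(2)}, \qquad \mathbf{t}^{(1)} + \mathbf{t}^{(2)} = \mathbf{0} \quad \text{on } \Gamma_{12} .
\]

These extra interface equations are what enlarge the coupled system in the usual subregion technique and what the single-solid formulation dispenses with.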
Abstract:
We consider a class of two-dimensional problems in classical linear elasticity for which material overlapping occurs in the absence of singularities. Of course, material overlapping is not physically realistic, and one possible way to prevent it is to use a constrained minimization theory. In this theory, a minimization problem consists of minimizing the total potential energy of a linear elastic body subject to the constraint that the deformation field must be locally invertible. Here, we use an interior and an exterior penalty formulation of the minimization problem, together with a standard finite element method and classical nonlinear programming techniques, to compute the minimizers. We compare both formulations by solving a plane problem numerically in the context of the constrained minimization theory. The problem has a closed-form solution, which is used to validate the numerical results. This solution is regular everywhere, including the boundary. In particular, we present numerical results indicating that, for a fixed finite element mesh, the sequences of numerical solutions obtained with both the interior and the exterior penalty formulations converge to the same limit function as the penalization is enforced. This limit function yields an approximate deformation field for the plane problem that is locally invertible at all points in the domain. As the mesh is refined, this field converges to the exact solution of the plane problem.
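As an illustration of the two penalty strategies, written in generic textbook form (not necessarily the exact functionals used by the authors), let Pi(u) be the total potential energy and J = det(I + grad u) the local Jacobian that must remain positive. One may then minimize

\[
\Pi^{\mathrm{ext}}_{\varepsilon}(\mathbf{u}) \;=\; \Pi(\mathbf{u}) + \frac{1}{2\varepsilon}\int_{\Omega}\big[\min(J,0)\big]^{2}\,d\Omega
\qquad \text{or} \qquad
\Pi^{\mathrm{int}}_{\varepsilon}(\mathbf{u}) \;=\; \Pi(\mathbf{u}) - \varepsilon\int_{\Omega}\ln J \, d\Omega ,
\]

and let the penalization parameter tend to its limit: the exterior form penalizes violations of local invertibility after they occur, whereas the interior (barrier) form keeps the iterates strictly inside the admissible set.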
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm. Instead of using the same set of features throughout training, the AME approach tries to insert or to remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. Preliminary experiments gave promising results, showing improvements in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper. Several important research directions are proposed as future work.
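The single-feature insert-or-remove loop described above can be sketched as follows. This is a minimal illustration only: it uses scikit-learn's LogisticRegression and log-loss as stand-ins for the MaxEnt fit and its scoring criterion, and the function names, stopping rule, and tolerance are assumptions rather than the authors' implementation.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def fit_score(X, y, cols):
    """Fit a model on the selected feature columns and return its log-loss."""
    model = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
    return log_loss(y, model.predict_proba(X[:, cols]))

def adaptive_feature_selection(X, y, max_iter=50, tol=1e-4):
    """Greedily insert or remove a single feature per iteration."""
    n_features = X.shape[1]
    active, best = [], float("inf")
    for _ in range(max_iter):
        candidates = []
        for f in range(n_features):
            # Each candidate move adds an unused feature or drops an active one.
            trial = active + [f] if f not in active else [g for g in active if g != f]
            if trial:                                  # skip the empty feature set
                candidates.append((fit_score(X, y, trial), trial))
        score, trial = min(candidates, key=lambda c: c[0])
        if best - score < tol:                         # no single change helps enough
            break
        active, best = trial, score
    return active
```

A real species-distribution setting would fit presence/background data and score on held-out samples, but the greedy add-or-remove structure, which is what speeds up convergence relative to retraining on a fixed feature set, is the same.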
Abstract:
A geometrical approach to finite-element analysis applied to electrostatic fields is presented. This approach is particularly well suited to teaching finite elements in undergraduate Electrical Engineering courses. The procedure leads to the same system of algebraic equations as that derived by classical approaches, such as the variational principle or weighted residuals, for nodal elements with plane symmetry. It is shown that the extension of the original procedure to three dimensions is straightforward, provided the domain is meshed with first-order tetrahedral elements. The element matrices are derived by applying Maxwell's equations in integral form to suitably chosen surfaces in the finite-element mesh.
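For context, both the geometrical and the classical routes arrive, for the electrostatic case, at the familiar element matrix

\[
K^{e}_{ij} \;=\; \int_{\Omega^{e}} \varepsilon \,\nabla N_{i} \cdot \nabla N_{j} \; d\Omega ,
\]

which, for a first-order tetrahedron of volume V_e with constant shape-function gradients, reduces to K^e_ij = ε V_e (∇N_i · ∇N_j); the geometrical procedure obtains these same entries by applying Maxwell's equations in integral form to surfaces chosen inside the mesh.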
Abstract:
Most post-processors for boundary element (BE) analysis use an auxiliary domain mesh to display domain results, working against the main modelling advantage of a pure boundary discretization. This paper introduces a novel visualization technique which preserves the basic properties of boundary element methods. The proposed algorithm does not require any domain discretization and is based on the direct and automatic identification of isolines. Another critical aspect of the visualization of domain results in BE analysis is the effort required to evaluate results at interior points. To tackle this issue, the present article also compares the performance of two different BE formulations (conventional and hybrid). In addition, this paper presents an overview of the most common post-processing and visualization techniques in BE analysis, such as the classical scan-line algorithms and interpolation over a domain discretization. The results presented herein show that the proposed algorithm offers very high performance compared with other visualization procedures.
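For background, the interior-point evaluation whose cost is being compared here is the standard boundary integral identity (written for a potential problem; the elasticity case is analogous)

\[
u(\boldsymbol{\xi}) \;=\; \int_{\Gamma} u^{*}(\boldsymbol{\xi},\mathbf{x})\, t(\mathbf{x}) \, d\Gamma
\;-\; \int_{\Gamma} t^{*}(\boldsymbol{\xi},\mathbf{x})\, u(\mathbf{x}) \, d\Gamma ,
\qquad \boldsymbol{\xi} \in \Omega ,
\]

which has to be integrated over the whole boundary for every interior point at which a result is displayed, hence the interest in formulations and visualization algorithms that keep the number of such evaluations small.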
Abstract:
An alternative approach for the analysis of arbitrarily curved shells is developed in this paper based on the idea of initial deformations. By "alternative" we mean that neither differential geometry nor the concept of degeneration is invoked here to describe the shell surface. We begin with a flat reference configuration for the shell mid-surface, after which the initial (curved) geometry is mapped as a stress-free deformation from the plane position. The actual motion of the shell takes place only after this initial mapping. In contrast to classical works in the literature, this strategy enables the use of only orthogonal frames within the theory, so that objects such as Christoffel symbols, the second fundamental form or three-dimensional degenerated solids do not enter the formulation. Furthermore, the issue of physical components of tensors does not appear. Another important aspect (though not exclusive to our scheme) is the possibility of describing the initial geometry exactly. The model is kinematically exact, encompasses finite strains in a totally consistent manner and is discretized here within the framework of the finite element method (although implementation via mesh-free techniques is also possible). Assessment is made by means of several numerical simulations. Copyright (C) 2009 John Wiley & Sons, Ltd.
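A compact way of stating the kinematic idea, in our own notation rather than the paper's: if F° denotes the gradient of the stress-free map from the flat reference plane to the initial curved mid-surface, and F the gradient of the map from the same plane to the current configuration, then by the chain rule the deformation that actually produces stresses is measured by

\[
\mathbf{F}^{e} \;=\; \mathbf{F}\,(\mathbf{F}^{o})^{-1} ,
\]

so every kinematic quantity can be computed with respect to the flat, orthogonally framed reference plane, with the curved initial geometry entering only through F°.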
Abstract:
An exact non-linear formulation of the equilibrium of elastic prismatic rods subjected to compression and planar bending is presented, electing the cross-section rotation as the primary displacement variable and taking into account the axis extensibility. Such a formulation proves to be sufficiently general to encompass any boundary condition. The evaluation of critical loads for the five classical Euler buckling cases is pursued, allowing the effect of axis extensibility to be assessed. From the quantitative viewpoint, this influence is negligible for very slender bars, but it increases dramatically as the slenderness ratio decreases. From the qualitative viewpoint, its effect is that there is no longer an infinite number of critical loads, as is foreseen by the classical inextensible theory. The method of multiple (spatial) scales is used to survey the post-buckling regime for the five classical Euler buckling cases, with remarkable success, since very small deviations were observed with respect to results obtained via numerical integration of the exact equation of equilibrium, even when loads much higher than the critical ones were considered. Although it is known beforehand that such classical Euler buckling cases are imperfection-insensitive, the effect of load offsets was also examined, thus showing that the formulation is sufficiently general to accommodate this sort of analysis. (c) 2008 Elsevier Ltd. All rights reserved.
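For reference, the inextensible benchmark against which the extensibility effect is measured is the classical Euler critical load

\[
P_{cr} \;=\; \frac{\pi^{2} E I}{(K L)^{2}} ,
\]

with effective-length factors K ranging from 0.5 (fixed-fixed) through 1.0 (pinned-pinned) to 2.0 (fixed-free) over the classical boundary-condition cases; the extensible formulation recovers these values for very slender bars and departs from them increasingly as the slenderness ratio decreases.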
Abstract:
A two-dimensional numerical simulator is developed to predict the nonlinear, convective-reactive oxygen mass exchange in a cross-flow hollow-fiber blood oxygenator. The simulator also calculates the carbon dioxide mass exchange, since hemoglobin affinity to oxygen is affected by the local pH value, which depends mostly on the local carbon dioxide content in the blood. Blood pH inside the oxygenator is calculated by the simultaneous solution of an equation that accounts for the blood buffering capacity and the classical Henderson-Hasselbalch equation. The modeling of the mass transfer conductance in the blood comprises a global factor, which is a function of the Reynolds number, and a local factor, which takes into account the amount of oxygen reacted with hemoglobin. The simulator is calibrated against experimental data for an in-line fiber bundle. The results are: (i) the calibration process allows the precise determination of the mass transfer conductance for both oxygen and carbon dioxide; (ii) very alkaline pH values occur in the blood path at the gas inlet side of the fiber bundle; (iii) the parametric analysis of the effect of the blood base excess (BE) shows that V(CO2) is similar for blood metabolic alkalosis, metabolic acidosis, or normal BE at a similar blood inlet P(CO2), although metabolic alkalosis is the worst case, as the pH in the vicinity of the gas inlet is the most alkaline; (iv) the parametric analysis of the effect of the gas flow to blood flow ratio (Q(G)/Q(B)) shows that the variation of V(CO2) with the gas flow is almost linear up to Q(G)/Q(B) = 2.0. V(O2) is not affected by the gas flow: increasing the gas flow up to eight times raises V(O2) by only 1%. The carbon dioxide mass exchange uses the full length of the hollow fibers only if Q(G)/Q(B) > 2.0, as only in this condition does the local variation of pH and blood P(CO2) extend over the whole fiber bundle.
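The pH calculation mentioned above couples the blood buffering relation with the classical Henderson-Hasselbalch equation, whose usual bicarbonate form is

\[
\mathrm{pH} \;=\; pK_{a} + \log_{10}\!\frac{[\mathrm{HCO_{3}^{-}}]}{\alpha\, P_{\mathrm{CO_{2}}}} ,
\]

with pKa ≈ 6.1 and α ≈ 0.03 mmol L⁻¹ mmHg⁻¹ the solubility coefficient of carbon dioxide in plasma; a higher local P(CO2) therefore lowers the local pH and, with it, the hemoglobin affinity for oxygen.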
Abstract:
This article presents a kinetic evaluation of the froth flotation of ultrafine coal contained in the tailings from a Colombian coal preparation plant. The plant utilizes a circuit of dense-medium cyclones and spirals. The tailings contained material that was 63% finer than 14 μm. Flotation tests were performed with and without coal "promoters" (diesel oil or kerosene) to evaluate the flotation kinetics of the coal. It was found that flotation rates were higher when no promoter was added. Different kinetic models were evaluated for the flotation of the coal from the tailings, and the best-fitting model was found to be the classical first-order model.
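The classical first-order model referred to above expresses the cumulative recovery after flotation time t as

\[
R(t) \;=\; R_{\infty}\left(1 - e^{-kt}\right) ,
\]

where R∞ is the ultimate (infinite-time) recovery and k the first-order flotation rate constant, the parameter found to be higher in the tests run without promoter.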
Abstract:
Flow pumps have been developed for classical applications in Engineering and are important instruments in areas such as Biology and Medicine. Applications for this kind of device include blood pumping and chemical reagent dosing in Bioengineering. Furthermore, they have recently emerged as a viable thermal management solution for cooling applications in small-scale electronic devices. This work presents a performance study of a novel piezoelectric flow pump principle based on the use of a bimorph piezoelectric actuator inserted in a fluid (water). Piezoelectric actuators have some advantages over classical devices, such as lower noise generation and ease of miniaturization. The main objective is the characterization of this piezoelectric pump principle through computational simulations (using finite element software) and experimental tests on a manufactured prototype. Computational data, such as flow rate and pressure curves, have also been compared with experimental results for validation purposes. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
This contribution describes the development of a continuous emulsion copolymerization process for vinyl acetate and n-butyl acrylate in a tubular reactor. Special features of this reactor include the use of oscillatory (pulsed) flow and internals (sieve plates) to prevent polymer fouling and promote good radial mixing, along with a controlled amount of axial mixing. The copolymer system studied is strongly prone to composition drift because of the very different reactivity ratios of the monomers. An axially dispersed plug flow model, based on classical free-radical copolymerization kinetics, was developed for this process and used successfully to optimize the lateral feeding profile so as to reduce the composition drift. An energy balance was included in the model equations to predict the effect of temperature variations on the process. The model predictions were validated with experimental data for monomer conversion, copolymer composition, average particle size, and temperature measured along the reactor length.
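The axially dispersed plug flow balance underlying the model has the generic form (written here for a single species concentration C; the full model couples such balances with the free-radical copolymerization kinetics and an energy balance)

\[
\frac{\partial C}{\partial t} + v_{z}\,\frac{\partial C}{\partial z}
\;=\; D_{ax}\,\frac{\partial^{2} C}{\partial z^{2}} + R_{C} ,
\]

where v_z is the mean axial velocity, D_ax the axial dispersion coefficient that lumps the controlled amount of axial mixing, and R_C the net reaction rate of the species.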
Abstract:
The classical approach to acoustic imaging consists of beamforming, which produces the source distribution of interest convolved with the array point spread function. This convolution smears the image of interest, significantly reducing its effective resolution. Deconvolution methods have been proposed to enhance acoustic images and have produced significant improvements. Other proposals involve covariance fitting techniques, which avoid deconvolution altogether. However, in their traditional presentation, these enhanced reconstruction methods have very high computational costs, mostly because they have no means of efficiently transforming back and forth between a hypothetical image and the measured data. In this paper, we propose the Kronecker Array Transform (KAT), a fast separable transform for array imaging applications. Under the assumption of a separable array, it enables the acceleration of imaging techniques by several orders of magnitude with respect to the fastest previously available methods, and enables the use of state-of-the-art regularized least-squares solvers. Using the KAT, one can reconstruct images with higher resolution than was previously possible and use more accurate reconstruction techniques, opening new and exciting possibilities for acoustic imaging.
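The computational bottleneck addressed by the KAT can be seen from the linear model y = A x relating the (vectorized) source image to the measured data: for a separable array the matrix factors as a Kronecker product, A = A1 ⊗ A2, and the standard identity

\[
(\mathbf{A}_{1} \otimes \mathbf{A}_{2})\,\operatorname{vec}(\mathbf{X})
\;=\; \operatorname{vec}\!\left(\mathbf{A}_{2}\,\mathbf{X}\,\mathbf{A}_{1}^{\mathsf{T}}\right)
\]

replaces one very large matrix-vector product by two much smaller matrix-matrix products. This is the generic mechanism, stated here in simplified form rather than as the paper's exact transform, that makes iterative regularized least-squares solvers affordable for imaging problems of this size.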
Abstract:
In Part I ["Fast Transforms for Acoustic Imaging-Part I: Theory," IEEE TRANSACTIONS ON IMAGE PROCESSING], we introduced the Kronecker array transform (KAT), a fast transform for imaging with separable arrays. Given a source distribution, the KAT produces the spectral matrix which would be measured by a separable sensor array. In Part II, we establish connections between the KAT, beamforming and 2-D convolutions, and show how these results can be used to accelerate classical and state-of-the-art array imaging algorithms. We also propose using the KAT to accelerate general-purpose regularized least-squares solvers. Using this approach, we avoid ill-conditioned deconvolution steps and obtain more accurate reconstructions than previously possible, while maintaining low computational costs. We also show how the KAT performs when imaging near-field source distributions, and illustrate the trade-off between accuracy and computational complexity. Finally, we show that separable designs can deliver accuracy competitive with multi-arm logarithmic spiral geometries, while having the computational advantages of the KAT.
Abstract:
The main goal of this paper is to apply the so-called policy iteration algorithm (PIA) to the long-run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space and with compact action space depending on the state variable. In order to do that, we first derive some important properties of a pseudo-Poisson equation associated with the problem. In the sequel, it is shown that the convergence of the PIA to a solution satisfying the optimality equation holds under some classical hypotheses, and that this optimal solution yields an optimal control strategy for the average control problem for the continuous-time PDMP in feedback form.
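For orientation, the two steps of the PIA can be illustrated by their discrete-time, long-run-average analogue (given only as an illustration; the paper works with continuous-time PDMPs and a pseudo-Poisson equation). Policy evaluation solves the Poisson equation

\[
\rho_{n} + h_{n}(x) \;=\; c\big(x,\pi_{n}(x)\big) + \int_{X} h_{n}(y)\, P\big(dy \mid x, \pi_{n}(x)\big)
\]

for the average cost ρ_n and bias h_n of the current policy π_n, and policy improvement then selects

\[
\pi_{n+1}(x) \in \operatorname*{arg\,min}_{a \in A(x)} \Big\{ c(x,a) + \int_{X} h_{n}(y)\, P(dy \mid x, a) \Big\} ,
\]

the iteration stopping when the optimality (average-cost Bellman) equation is satisfied.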
Abstract:
This letter presents the properties of nMOS junctionless nanowire transistors (JNTs) under cryogenic operation. Experimental results of drain current, subthreshold slope, maximum transconductance at low electric field, and threshold voltage, as well as its variation with temperature, are presented. Unlike in classical devices, the drain current of JNTs decreases when temperature is lowered, although the maximum transconductance increases when the temperature is lowered down to 125 K. An analytical model for the threshold voltage is proposed to explain the influence of nanowire width and doping concentration on its variation with temperature. It is shown that the wider the nanowire or the lower the doping concentration, the higher the threshold voltage variation with temperature.
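A commonly cited first-order expression that makes the roles of width and doping explicit is the full-depletion threshold voltage of a symmetric double-gate junctionless device (quoted here only for context; it is not necessarily the model proposed in this letter):

\[
V_{TH} \;=\; V_{FB} \;-\; \frac{q\,N_{D}\,T_{Si}^{2}}{8\,\varepsilon_{Si}} \;-\; \frac{q\,N_{D}\,T_{Si}}{2\,C_{ox}} ,
\]

where N_D is the channel doping concentration, T_Si the silicon thickness (the counterpart of the nanowire width), V_FB the flat-band voltage, and C_ox the gate-oxide capacitance per unit area.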