981 results for Classical philology.
Abstract:
The decomposition of peroxynitrite to nitrite and dioxygen at neutral pH follows complex kinetics, compared to its isomerization to nitrate at low pH. Decomposition may involve radicals or proceed by way of the classical peracid decomposition mechanism. Peroxynitrite (ONOOH/ONOO(-)) decomposition has been proposed to involve formation of peroxynitrate (O(2)NOOH/O(2)NOO(-)) at neutral pH (D. Gupta, B. Harish, R. Kissner and W. H. Koppenol, Dalton Trans., 2009, DOI: 10.1039/b905535e, see accompanying paper in this issue). Peroxynitrate is unstable and decomposes to nitrite and dioxygen. This study aimed to investigate whether O(2)NOO(-) formed upon ONOOH/ONOO(-) decomposition generates singlet molecular oxygen [O(2) ((1)Delta(g))]. As unequivocally revealed by the measurement of monomol light emission in the near infrared region at 1270 nm and by chemical trapping experiments, the decomposition of ONOO(-) or O(2)NOOH at neutral to alkaline pH generates O(2) ((1)Delta(g)) at a yield of ca. 1% and 2-10%, respectively. Characteristic light emission, corresponding to O(2) ((1)Delta(g)) monomolecular decay, was observed for ONOO(-) and for O(2)NOOH prepared by reaction of H(2)O(2) with NO(2)BF(4) and of H(2)O(2) with NO(2)(-) in HClO(4). The generation of O(2) ((1)Delta(g)) from ONOO(-) increased in a concentration-dependent manner in the range of 0.1-2.5 mM and was dependent on pH, giving a sigmoid profile with an apparent pK(a) around pD 8.1 (pH 7.7). Taken together, our results clearly identify the generation of O(2) ((1)Delta(g)) from peroxynitrate [O(2)NOO(-) -> NO(2)(-) + O(2) ((1)Delta(g))] generated from peroxynitrite and also from the reactions of H(2)O(2) with either NO(2)BF(4) or NO(2)(-) in acidic media.
Abstract:
The AdS/CFT duality has established a mapping between quantities in bulk AdS black-hole physics and observables in a boundary finite-temperature field theory. Such a relationship appears to be valid for an arbitrary number of spacetime dimensions, extrapolating the original formulations of Maldacena's correspondence. In the same sense, properties such as the hydrodynamic behavior of AdS black-hole fluctuations have proved to be universal. We investigate in this work the complete quasinormal spectra of gravitational perturbations of d-dimensional plane-symmetric AdS black holes (black branes). Holographically, the frequencies of the quasinormal modes correspond to the poles of two-point correlation functions of the field-theory stress-energy tensor. The important issue of the correct boundary condition to be imposed on the gauge-invariant perturbation fields at the AdS boundary is studied and elucidated in a fully d-dimensional context. We obtain the dispersion relations of the first few modes in the low-, intermediate- and high-wavenumber regimes. The sound-wave (shear-mode) behavior of the scalar-type (vector-type) low-frequency quasinormal modes is confirmed both analytically and numerically. These results are found employing both a power series method and a direct numerical integration scheme.
Abstract:
Age-related changes in running kinematics have been reported in the literature using classical inferential statistics. However, this approach has been hampered by the increased number of biomechanical gait variables reported and, subsequently, the lack of differences presented in these studies. Data mining techniques have been applied in recent biomedical studies to solve this problem using a more general approach. In the present work, we re-analyzed lower extremity running kinematic data of 17 young and 17 elderly male runners using the Support Vector Machine (SVM) classification approach. In total, 31 kinematic variables were extracted to train the classification algorithm and test the generalized performance. The results revealed different accuracy rates across three different kernel methods adopted in the classifier, with the linear kernel performing the best. A subsequent forward feature selection algorithm demonstrated that with only six features, the linear kernel SVM achieved a 100% classification performance rate, showing that these features provided powerful combined information to distinguish age groups. The results of the present work demonstrate potential in applying this approach to improve knowledge about age-related differences in running gait biomechanics and encourage the use of the SVM in other clinical contexts. (C) 2010 Elsevier Ltd. All rights reserved.
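The pipeline this abstract describes can be sketched with scikit-learn. The data below are synthetic placeholders standing in for the 17+17 runner dataset and its 31 kinematic variables, and all names are illustrative, not the authors' code:

```python
# Sketch of SVM kernel comparison plus forward feature selection,
# on synthetic data in place of the 31 running-kinematics variables.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_group, n_features = 17, 31
# Placeholder "young" vs "elderly" feature matrices (invented data).
X = np.vstack([rng.normal(0.0, 1.0, (n_per_group, n_features)),
               rng.normal(0.8, 1.0, (n_per_group, n_features))])
y = np.array([0] * n_per_group + [1] * n_per_group)

# Compare three kernels, as in the study (linear performed best there).
for kernel in ("linear", "rbf", "poly"):
    acc = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    print(f"{kernel:>6}: {acc:.2f}")

# Forward selection of six features with the linear-kernel SVM.
sfs = SequentialFeatureSelector(SVC(kernel="linear"),
                                n_features_to_select=6, direction="forward")
selected = np.flatnonzero(sfs.fit(X, y).get_support())
print("selected feature indices:", selected)
```

On real gait data the selected indices would identify which kinematic variables jointly separate the age groups.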
Abstract:
The kinetics of the solution free-radical polymerization of N-vinylcaprolactam in 1,4-dioxane was studied under various polymerization conditions. Azobisisobutyronitrile and 3-mercaptopropionic acid were used as initiator and chain transfer agent (CTA), respectively. The influence of monomer and initiator concentrations and of polymerization temperature on the rate of polymerization (R(p)) was investigated. In general, high conversions were obtained. The order with respect to initiator was consistent with the classical kinetic rate equation, while the order with respect to monomer was greater than unity. An overall activation energy of 53.6 kJ mol(-1) was obtained in the temperature range 60-80 degrees C. The decrease in absolute molecular weights with increasing CTA concentration was confirmed by GPC/SEC/LALS analyses. UV-visible analyses confirmed the effect of molecular weight on the lower critical solution temperature of the polymers. It was also verified that the addition of the CTA influenced the kinetics of the polymerization. (C) 2010 Wiley Periodicals, Inc. J Appl Polym Sci 118: 229-240, 2010
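For reference, the classical steady-state rate law for free-radical polymerization, against which the initiator order was checked, predicts a square-root dependence on initiator concentration and first-order dependence on monomer:

```latex
R_p = k_p [M] \left( \frac{f\, k_d [I]}{k_t} \right)^{1/2}
```

where k_d, k_p and k_t are the initiator decomposition, propagation and termination rate constants and f is the initiator efficiency. A monomer order above unity, as reported here, signals a deviation from this classical picture.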
Abstract:
The design of supplementary damping controllers to mitigate the effects of electromechanical oscillations in power systems is a highly complex and time-consuming process, which requires a significant amount of knowledge on the part of the designer. In this study, the authors propose an automatic technique that takes the burden of tuning the controller parameters away from the power engineer and places it on the computer. Unlike other approaches that do the same based on robust control theories or evolutionary computing techniques, our proposed procedure uses an optimisation algorithm that works over a formulation of the classical tuning problem in terms of bilinear matrix inequalities. Using this formulation, it is possible to apply linear matrix inequality solvers to find a solution to the tuning problem via an iterative process, with the advantage that these solvers are widely available and have well-known convergence properties. The proposed algorithm is applied to tune the parameters of supplementary controllers for thyristor controlled series capacitors placed in the New England/New York benchmark test system, aiming at the improvement of the damping factor of inter-area modes, under several different operating conditions. The results of the linear analysis are validated by non-linear simulation and demonstrate the effectiveness of the proposed procedure.
Abstract:
In this study, the innovation approach is used to estimate the total measurement error associated with power system state estimation. This is required because the power system equations are highly correlated with each other and, as a consequence, part of the measurement errors is masked. For that purpose, an innovation index (II), which quantifies the amount of new information a measurement contains, is proposed. A critical measurement is the limiting case of a measurement with low II: it has a zero II, and its error is totally masked. In other words, that measurement does not bring any innovation to the gross error test. Using the II of a measurement, the gross error masked by the state estimation is recovered; the total gross error of that measurement is then composed. Instead of the classical normalised measurement residual amplitude, the corresponding normalised composed measurement residual amplitude is used in the gross error detection and identification test, but with m degrees of freedom. The gross error processing turns out to be very simple to implement, requiring only a few adaptations to existing state estimation software. The IEEE 14-bus system is used to validate the proposed gross error detection and identification test.
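The classical largest-normalized-residual test that the composed-residual test replaces can be illustrated on a toy linear weighted-least-squares estimator. The Jacobian, variances and the injected gross error below are all invented for this sketch, not taken from the paper:

```python
# Toy largest-normalized-residual test for gross error detection
# in a linear WLS "state estimation" with an injected bad measurement.
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(8, 3))            # measurement Jacobian (toy model)
x_true = np.array([1.0, -0.5, 0.2])
sigma = 0.01
z = H @ x_true + rng.normal(0.0, sigma, 8)
z[4] += 0.2                            # gross error injected in measurement 4

R = sigma ** 2 * np.eye(8)             # measurement covariance
Rinv = np.linalg.inv(R)
G = H.T @ Rinv @ H                     # gain matrix
x_hat = np.linalg.solve(G, H.T @ Rinv @ z)

r = z - H @ x_hat                                    # residuals
S = np.eye(8) - H @ np.linalg.solve(G, H.T @ Rinv)   # residual sensitivity
Omega = S @ R                                        # residual covariance
r_N = np.abs(r) / np.sqrt(np.diag(Omega))            # normalized residuals
print("largest normalized residual at measurement", int(np.argmax(r_N)))
```

A normalized residual above the usual threshold of 3 flags a suspect measurement; a truly critical measurement would have a vanishing diagonal in Omega, which is exactly the masking effect the innovation index addresses.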
Abstract:
This work extends a previously presented refined sandwich beam finite element (FE) model to vibration analysis, including dynamic piezoelectric actuation and sensing. The mechanical model is a refinement of the classical sandwich theory (CST), in which the core is modelled with a third-order shear deformation theory (TSDT). The FE model is developed considering, through the beam length, electrically, a constant voltage for the piezoelectric layers and a quadratic third-order variable of the electric potential in the core, and, mechanically, a linear axial displacement, a quadratic bending rotation of the core and a cubic transverse displacement of the sandwich beam. Despite the refinement of the mechanical and electric behaviours of the piezoelectric core, the model leads to the same number of degrees of freedom as the previous CST one, owing to a two-step static condensation of the internal dof (bending rotation and the core electric potential third-order variable). The results obtained with the proposed FE model are compared to available numerical, analytical and experimental ones. Results confirm that the TSDT and the induced cubic electric potential yield an extra stiffness to the sandwich beam. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
This work presents a non-linear boundary element formulation applied to the analysis of contact problems. The boundary element method (BEM) is known as a robust and accurate numerical technique for this type of problem, because the contact among the solids occurs along their boundaries. The proposed non-linear formulation is based on the use of singular or hyper-singular integral equations by the BEM, for multi-region contact. When the contact occurs between crack surfaces, the formulation adopted is the dual version of the BEM, in which singular and hyper-singular integral equations are defined along the opposite sides of the contact boundaries. The structural non-linear behaviour on the contact is considered using Coulomb's friction law. The non-linear formulation is based on the tangent operator, in which the derivative of the set of algebraic equations is used to construct the corrections for the non-linear process. This implicit formulation has shown to be as accurate as the classical approach, while computing the solution faster. Examples of simple and multi-region contact problems are shown to illustrate the applicability of the proposed scheme. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
This work deals with the analysis of cracked structures using the BEM. Two formulations to analyse the crack growth process in quasi-brittle materials are discussed. They are based on the dual formulation of the BEM, in which two different integral equations are employed along the opposite sides of the crack surface. The first formulation uses the concept of a constant operator, in which the corrections of the nonlinear process are made only by applying appropriate tractions along the crack surfaces. The second BEM formulation for crack growth problems is an implicit technique based on the use of a consistent tangent operator. This formulation is accurate, stable and always requires far fewer iterations to reach equilibrium within a given load increment in comparison with the classical approach. Comparison examples of classical crack growth problems are shown to illustrate the performance of the two formulations. (C) 2009 Elsevier Ltd. All rights reserved.
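The iteration-count contrast between a constant operator and a consistent tangent operator can be illustrated with a deliberately simple scalar analogue (full Newton versus modified Newton on one equation, not the BEM formulation itself):

```python
# Constant operator (modified Newton, fixed initial slope) vs consistent
# tangent operator (full Newton, slope re-derived each step) on f(x) = 0.
def solve(f, df, x0, constant_operator, tol=1e-10, max_it=200):
    x, d = x0, df(x0)
    for k in range(1, max_it + 1):
        if not constant_operator:
            d = df(x)          # consistent tangent: re-derive each iteration
        x -= f(x) / d          # constant operator keeps the initial slope
        if abs(f(x)) < tol:
            return x, k
    return x, max_it

f = lambda x: x ** 3 - 2.0     # root: 2**(1/3)
df = lambda x: 3.0 * x ** 2

x_t, it_t = solve(f, df, x0=1.5, constant_operator=False)
x_c, it_c = solve(f, df, x0=1.5, constant_operator=True)
print(f"tangent operator: {it_t} iterations; constant operator: {it_c}")
```

Both schemes reach the same root, but the constant-operator iteration converges only linearly, mirroring the abstract's observation that the consistent tangent needs far fewer iterations per load increment.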
Abstract:
The main objective of this work is to present an alternative boundary element method (BEM) formulation for the static analysis of three-dimensional non-homogeneous isotropic solids. These problems can be solved using the classical boundary element formulation, analyzing each subregion separately and then joining them together by imposing equilibrium and displacement compatibility. By establishing relations between the displacement fundamental solutions of the different domains, the alternative technique proposed in this paper allows all the domains to be analyzed as a single solid, requiring no equilibrium or compatibility equations. This formulation also leads to a smaller system of equations than the usual subregion technique, and the results obtained are even more accurate. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
We consider a class of two-dimensional problems in classical linear elasticity for which material overlapping occurs in the absence of singularities. Of course, material overlapping is not physically realistic, and one possible way to prevent it uses a constrained minimization theory. In this theory, a minimization problem consists of minimizing the total potential energy of a linear elastic body subject to the constraint that the deformation field must be locally invertible. Here, we use an interior and an exterior penalty formulation of the minimization problem together with both a standard finite element method and classical nonlinear programming techniques to compute the minimizers. We compare both formulations by solving a plane problem numerically in the context of the constrained minimization theory. The problem has a closed-form solution, which is used to validate the numerical results. This solution is regular everywhere, including the boundary. In particular, we show numerical results which indicate that, for a fixed finite element mesh, the sequences of numerical solutions obtained with both the interior and the exterior penalty formulations converge to the same limit function as the penalization is enforced. This limit function yields an approximate deformation field to the plane problem that is locally invertible at all points in the domain. As the mesh is refined, this field converges to the exact solution of the plane problem.
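A one-dimensional analogue of the interior/exterior penalty comparison can be sketched with SciPy. The objective and constraint below are illustrative stand-ins for the total potential energy and the local invertibility constraint, chosen so the constrained minimizer is known exactly:

```python
# Interior (log-barrier) vs exterior (quadratic) penalty formulations of
# minimizing f(x) = (x - 2)^2 subject to x <= 1; both limits -> x* = 1.
import math
from scipy.optimize import minimize_scalar

f = lambda x: (x - 2.0) ** 2           # objective; unconstrained min at x = 2

def exterior(mu):
    # Penalize constraint violation from outside the feasible set.
    F = lambda x: f(x) + mu * max(0.0, x - 1.0) ** 2
    return minimize_scalar(F, bounds=(-5.0, 5.0), method="bounded").x

def interior(mu):
    # Barrier blows up at the constraint boundary, keeping iterates feasible.
    F = lambda x: f(x) - (1.0 / mu) * math.log(1.0 - x)
    return minimize_scalar(F, bounds=(-5.0, 1.0 - 1e-12), method="bounded").x

for mu in (1e0, 1e2, 1e4):
    print(f"mu={mu:>8.0f}: exterior -> {exterior(mu):.6f}, "
          f"interior -> {interior(mu):.6f}")
```

As the penalization is enforced (mu grows), the exterior sequence approaches x* = 1 from the infeasible side and the interior sequence from the feasible side, mirroring the convergence of both formulations to the same limit observed in the paper.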
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm. Instead of using the same set of features throughout training, the AME approach tries to insert or remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. Preliminary experiments performed well, showing improvements in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper. Several important research directions are proposed as future work.
Abstract:
A geometrical approach to the finite-element analysis of electrostatic fields is presented. This approach is particularly well suited to teaching finite elements in undergraduate Electrical Engineering courses. The procedure leads to the same system of algebraic equations as that derived by classical approaches, such as the variational principle or weighted residuals, for nodal elements with plane symmetry. It is shown that the extension of the original procedure to three dimensions is straightforward, provided the domain is meshed in first-order tetrahedral elements. The element matrices are derived by applying Maxwell's equations in integral form to suitably chosen surfaces in the finite-element mesh.
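For comparison, the standard first-order tetrahedral element matrix for the electrostatic (Laplace) problem, which the geometric procedure reproduces, can be assembled as follows. This is the conventional shape-function derivation, not the paper's integral-form construction, and the function name is illustrative:

```python
# Conventional first-order tetrahedral element matrix for the Laplace
# problem: K_ij = eps * V * (grad N_i) . (grad N_j), gradients constant.
import numpy as np

def tet_element_matrix(nodes, eps=1.0):
    """nodes: (4, 3) array of tetrahedron vertex coordinates."""
    M = np.hstack([np.ones((4, 1)), nodes])   # rows: [1, x, y, z] per node
    vol = abs(np.linalg.det(M)) / 6.0         # tetrahedron volume
    C = np.linalg.inv(M)                      # shape-function coefficients
    grads = C[1:4, :]                         # column j: grad N_j (constant)
    return eps * vol * grads.T @ grads

# Unit reference tetrahedron.
K = tet_element_matrix(np.array([[0, 0, 0], [1, 0, 0],
                                 [0, 1, 0], [0, 0, 1]], dtype=float))
print(K)
```

Each row of K sums to zero, since a constant potential over the element produces no flux; this is a useful sanity check on any element-matrix derivation, geometric or classical.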
Abstract:
Most post-processors for boundary element (BE) analysis use an auxiliary domain mesh to display domain results, undermining the benefit of a pure boundary discretization. This paper introduces a novel visualization technique which preserves the basic properties of the boundary element methods. The proposed algorithm does not require any domain discretization and is based on the direct and automatic identification of isolines. Another critical aspect of the visualization of domain results in BE analysis is the effort required to evaluate results at interior points. In order to tackle this issue, the present article also provides a comparison between the performance of two different BE formulations (conventional and hybrid). In addition, this paper presents an overview of the most common post-processing and visualization techniques in BE analysis, such as the classical scan-line algorithm and interpolation over a domain discretization. The results presented herein show that the proposed algorithm offers very high performance compared with other visualization procedures.
Abstract:
An alternative approach for the analysis of arbitrarily curved shells is developed in this paper based on the idea of initial deformations. By 'alternative' we mean that neither differential geometry nor the concept of degeneration is invoked here to describe the shell surface. We begin with a flat reference configuration for the shell mid-surface, after which the initial (curved) geometry is mapped as a stress-free deformation from the plane position. The actual motion of the shell takes place only after this initial mapping. In contrast to classical works in the literature, this strategy enables the use of only orthogonal frames within the theory, and therefore objects such as Christoffel symbols, the second fundamental form or three-dimensional degenerated solids do not enter the formulation. Furthermore, the issue of physical components of tensors does not appear. Another important aspect (but not exclusive to our scheme) is the possibility of describing the initial geometry exactly. The model is kinematically exact, encompasses finite strains in a totally consistent manner and is here discretized within the framework of the finite element method (although implementation via mesh-free techniques is also possible). Assessment is made by means of several numerical simulations. Copyright (C) 2009 John Wiley & Sons, Ltd.