927 results for Gauss, Aplicaciones de
Abstract:
Elasmobranchs are under increasing pressure from targeted fisheries worldwide, but unregulated bycatch is perhaps their greatest threat. This study tested five elasmobranch bycatch species (Sphyrna lewini, Carcharhinus tilstoni, Carcharhinus amblyrhynchos, Rhizoprionodon acutus, Glyphis glyphis) and one targeted teleost species (Lates calcarifer) to determine whether magnetic fields caused a reaction response and/or a change in spatial use of an experimental arena. All elasmobranch species reacted to magnets at distances between 0.26 and 0.58 m at magnetic strengths between 25 and 234 gauss and avoided the area around the magnets. In contrast, the teleosts showed no reaction response and congregated around the magnets. The different reactions of the teleosts and elasmobranchs are presumably driven by the presence of ampullae of Lorenzini in the elasmobranchs; the different reaction distances between elasmobranch species appeared to correlate with their feeding ecology. Elasmobranchs with a higher reliance on the electroreceptive sense to locate prey reacted to the magnets at the greatest distance, with the exception of G. glyphis. Notably, this is the only elasmobranch species tested with both a fresh- and a saltwater phase in its ecology, which may account for its decreased magnetic sensitivity. Based on these results, the worldwide application of magnets to mitigate elasmobranch bycatch appears promising.
Abstract:
We argue that the extraordinary fact that all three known millisecond pulsars are very close to the galactic plane implies that there must be ~100 potentially observable millisecond pulsars within ~4 kpc of the Sun. Our other main conclusion is that the dipole magnetic fields of old neutron stars probably saturate around 5 x 10^8 gauss.
Abstract:
The simultaneous state and parameter estimation problem for a linear discrete-time system with unknown noise statistics is treated as a large-scale optimization problem. The a posteriori probability density function is maximized directly with respect to the states and parameters subject to the constraint of the system dynamics. The resulting optimization problem is too large for any of the standard non-linear programming techniques and hence a hierarchical optimization approach is proposed. It turns out that the states can be computed at the first level for given noise and system parameters. These, in turn, are to be modified at the second level. The states are to be computed from a large system of linear equations, and two solution methods are considered for solving these equations, limiting the horizon to a suitable length. The resulting algorithm is a filter-smoother, suitable for off-line as well as on-line state estimation for given noise and system parameters. The second-level problem is split into two: one for modifying the noise statistics and the other for modifying the system parameters. An adaptive relaxation technique is proposed for modifying the noise statistics and a modified Gauss-Newton technique is used to adjust the system parameters.
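The second-level parameter adjustment relies on a Gauss-Newton iteration; the following is a minimal sketch of a standard damped Gauss-Newton update for a generic least-squares fit. The function names, the fixed damping factor, and the stopping rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

def gauss_newton(residual, jacobian, theta0, n_iter=20, damping=1e-8):
    """Minimal damped Gauss-Newton iteration for least-squares fitting.

    residual(theta) returns the residual vector r(theta);
    jacobian(theta) returns J = dr/dtheta.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = residual(theta)
        J = jacobian(theta)
        # Normal equations with a small damping term for ill-conditioned J^T J
        H = J.T @ J + damping * np.eye(theta.size)
        step = np.linalg.solve(H, -J.T @ r)
        theta = theta + step
        if np.linalg.norm(step) < 1e-10:   # converged
            break
    return theta
```

In the hierarchical scheme described above, such an update would run at the second level, with the residual evaluated using the states computed at the first level.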
A Legendre spectral element model for sloshing and acoustic analysis in nearly incompressible fluids
Abstract:
A new spectral finite element formulation is presented for modeling the sloshing and the acoustic waves in nearly incompressible fluids. The formulation makes use of the Legendre polynomials in deriving the finite element interpolation shape functions in the Lagrangian frame of reference. The formulated element uses the Gauss-Lobatto-Legendre quadrature scheme for integrating the volumetric stiffness and the mass matrices, while the conventional Gauss-Legendre quadrature scheme is used on the rotational stiffness matrix to completely eliminate the zero-energy modes that are normally associated with the Lagrangian FE formulation. The numerical performance of the spectral element formulated here is examined by performing the inf-sup test on a standard rectangular rigid tank partially filled with liquid. The eigenvalues obtained from the formulated spectral element are compared with those of the conventional equally spaced node locations of the h-type Lagrangian finite element, and the predicted results show that these spectral elements are more accurate and give superior convergence. The efficiency and robustness of the formulated elements are demonstrated by solving a few standard problems involving free vibration and dynamic response analysis with undistorted and distorted spectral elements, and the obtained results are compared with available results in the published literature. (C) 2009 Elsevier Inc. All rights reserved.
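As a point of reference for the two quadrature schemes named above, the sketch below computes Gauss-Lobatto-Legendre nodes and weights (endpoints included, interior nodes at the roots of P_n') alongside the conventional Gauss-Legendre rule from numpy. This is a generic illustration of the rules, not the element integration code from the paper.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def gauss_lobatto_legendre(n):
    """Nodes and weights of the (n+1)-point Gauss-Lobatto-Legendre rule on [-1, 1]."""
    c = np.zeros(n + 1)
    c[n] = 1.0                                # Legendre-basis coefficients of P_n
    interior = leg.legroots(leg.legder(c))    # roots of P_n'
    x = np.concatenate(([-1.0], np.sort(interior), [1.0]))
    w = 2.0 / (n * (n + 1) * leg.legval(x, c) ** 2)
    return x, w

# Conventional 5-point Gauss-Legendre rule (exact for polynomials up to degree 9)
x_gl, w_gl = leg.leggauss(5)
# 5-point Gauss-Lobatto-Legendre rule (exact only up to degree 7), i.e. the slight
# under-integration of the same node count that spectral elements exploit
x_gll, w_gll = gauss_lobatto_legendre(4)
```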
Abstract:
In(1-x)Mn(x)Sb films have been grown with different Mn doping concentrations (x = 0.0085, 0.018, 0.029 and 0.04), beyond the equilibrium solubility limit, by liquid phase epitaxy. We have studied the temperature-dependent resistivity, the Hall effect, magnetoresistance and magnetization for all compositions. Saturation in magnetization observed even at room temperature suggests the existence of ferromagnetic clusters in the film, which has been verified by scanning electron microscopy studies. The anomalous Hall coefficient is found to be negative. The remnant field present on the surface of the clusters seems to affect the anomalous Hall effect at very low fields (below 350 gauss). In the zero-field resistivity, a variable-range hopping conduction mechanism dominates below 3.5 K for all samples; above this temperature, activated behavior is predominant. The temperature dependence of the magnetization measurement shows a magnetic ordering below 10 K, which is consistent with the electrical measurements. (c) 2007 Elsevier Ltd. All rights reserved.
Abstract:
In this thesis we examine multi-field inflationary models of the early Universe. Since non-Gaussianities may allow for the possibility to discriminate between models of inflation, we compute deviations from a Gaussian spectrum of primordial perturbations by extending the delta-N formalism. We use N-flation as a concrete model; our findings show that these models are generically indistinguishable as long as the slow-roll approximation is still valid. Besides computing non-Gaussianities, we also investigate preheating after multi-field inflation. Within the framework of N-flation, we find that preheating via parametric resonance is suppressed, an indication that it is the old theory of preheating that is applicable. In addition to studying non-Gaussianities and preheating in multi-field inflationary models, we study magnetogenesis in the early Universe. To this aim, we propose a mechanism to generate primordial magnetic fields via rotating cosmic string loops. Magnetic fields in the micro-gauss range have been observed in galaxies and clusters, but their origin has remained elusive. We consider a network of strings and find that rotating cosmic string loops, which are continuously produced in such networks, are viable candidates for magnetogenesis with relevant strength and length scales, provided we use a high string tension and an efficient dynamo.
Abstract:
The magnetic field of the Earth is 99 % of internal origin and is generated in the outer liquid core by the dynamo principle. In the 19th century, Carl Friedrich Gauss proved that the field can be described by a sum of spherical harmonic terms. Presently, this theory is the basis of, e.g., the IGRF models (International Geomagnetic Reference Field), which are the most accurate description available for the geomagnetic field. On average, the dipole forms 3/4 and the non-dipolar terms 1/4 of the instantaneous field, but the temporal mean of the field is assumed to be a pure geocentric axial dipolar field. The validity of this GAD (Geocentric Axial Dipole) hypothesis has been estimated using several methods. In this work, the testing rests on the frequency dependence of inclination with respect to latitude. Each combination of dipole (GAD), quadrupole (G2) and octupole (G3) produces a distinct inclination distribution. These theoretical distributions have been compared with those calculated from empirical observations from different continents and, finally, from the entire globe. Only data from Precambrian rocks (over 542 million years old) have been used in this work. The basic assumption is that, during the long-term course of drifting continents, the globe is sampled adequately. There were 2823 observations altogether in the paleomagnetic database of the University of Helsinki. The effects of the quality of the observations, as well as of the age and rock type, have been tested. For the comparison between theoretical and empirical distributions, chi-square testing has been applied. In addition, spatiotemporal binning has been used effectively to remove the errors caused by multiple observations. The modelling from igneous rock data shows that the average magnetic field of the Earth is best described by a combination of a geocentric dipole and a very weak octupole (less than 10 % of GAD). Filtering and binning gave the distributions a more GAD-like appearance, but the deviation from GAD increased as a function of the age of the rocks. The distribution calculated from the so-called key poles, the most reliable determinations, behaves almost like GAD, having a zero quadrupole and an octupole of 1 % of GAD. In no earlier study have rocks older than 400 Ma given a result so close to GAD, but low inclinations have been prominent especially in the sedimentary data. Despite these results, a greater amount of high-quality data and a proof of the long-term randomness of the Earth's continental motions are needed to be sure that the dipole model holds true.
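The inclination-versus-latitude test described above starts from the geocentric axial dipole relation tan(I) = 2 tan(latitude); quadrupole (G2) and octupole (G3) contributions distort this curve, which is what the chi-square comparison detects. A minimal sketch of the baseline GAD prediction follows (a generic illustration, not the thesis code).

```python
import numpy as np

def gad_inclination(latitude_deg):
    """Magnetic inclination predicted by a geocentric axial dipole (GAD):
    tan(I) = 2 * tan(latitude)."""
    lam = np.radians(latitude_deg)
    return np.degrees(np.arctan2(2.0 * np.sin(lam), np.cos(lam)))

# Example: at 30 degrees latitude a pure GAD field dips at about 49 degrees.
print(gad_inclination(30.0))
```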
Abstract:
We explore the application of pseudo time marching schemes, involving either deterministic integration or stochastic filtering, to solve the inverse problem of parameter identification of large dimensional structural systems from partial and noisy measurements of strictly static response. Solutions of such non-linear inverse problems could provide useful local stiffness variations and do not have to confront modeling uncertainties in damping, an important, yet inadequately understood, aspect in dynamic system identification problems. The usual method of least-squares solution is through a regularized Gauss-Newton method (GNM), whose results are known to be sensitively dependent on the regularization parameter and the data noise intensity. Finite-time, recursive integration of the pseudo-dynamical GNM (PD-GNM) update equation addresses the major numerical difficulty associated with the near-zero singular values of the linearized operator and gives results that are not sensitive to the time step of integration. Therefore, we also propose a pseudo-dynamic stochastic filtering approach for the same problem using a parsimonious representation of states and specifically solve the linearized filtering equations through a pseudo-dynamic ensemble Kalman filter (PD-EnKF). For multiple sets of measurements involving various load cases, we expedite the speed of the PD-EnKF by proposing an inner iteration within every time step. Results using the pseudo-dynamic strategy obtained through the PD-EnKF and recursive integration are compared with those from the conventional GNM, which prove that the PD-EnKF is the best performer, showing little sensitivity to process noise covariance and yielding reconstructions with fewer artifacts even when the ensemble size is small.
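For readers unfamiliar with the filtering side, the sketch below shows one analysis step of a standard stochastic ensemble Kalman filter applied to a parameter ensemble. It is a generic EnKF update under assumed names and shapes, not the PD-EnKF of the paper, which adds the pseudo-time marching and inner iterations described above.

```python
import numpy as np

def enkf_analysis(ensemble, observations, obs_operator, obs_cov, rng=None):
    """One stochastic ensemble Kalman filter analysis step.

    ensemble     : (n_members, n_params) array of parameter samples
    observations : (n_obs,) measured data
    obs_operator : maps one parameter vector to predicted observations
    obs_cov      : (n_obs, n_obs) measurement noise covariance
    """
    rng = np.random.default_rng() if rng is None else rng
    n = ensemble.shape[0]
    predicted = np.array([obs_operator(m) for m in ensemble])   # (n, n_obs)
    X = ensemble - ensemble.mean(axis=0)                        # parameter anomalies
    Y = predicted - predicted.mean(axis=0)                      # predicted-data anomalies
    Pxy = X.T @ Y / (n - 1)                                     # cross-covariance
    Pyy = Y.T @ Y / (n - 1) + obs_cov                           # innovation covariance
    K = np.linalg.solve(Pyy.T, Pxy.T).T                         # Kalman gain Pxy Pyy^-1
    perturbed = observations + rng.multivariate_normal(
        np.zeros(len(observations)), obs_cov, size=n)           # perturbed observations
    return ensemble + (perturbed - predicted) @ K.T             # updated ensemble
```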
Abstract:
A new elasto-plastic cracking constitutive model for reinforced concrete is presented. The nonlinear effects considered cover almost all the nonlinearities exhibited by reinforced concrete under short-term monotonic loading. They include concrete cracking in tension, plasticity in compression, aggregate interlock, tension softening, elasto-plastic behavior of steel, bond-slip between concrete and steel reinforcement, and tension stiffening. A new procedure for incorporating bond-slip in smeared steel elements is described. A modified Huber-Hencky-Mises failure criterion for plastic deformation of concrete, which fits the experimental results under biaxial stresses better, is proposed. Multiple cracks at Gauss points, and their opening and closing, are considered. Matrix expressions are developed and are incorporated in a nonlinear finite element program. After the objectivity of the model is demonstrated, the model is used to analyze two different types of problems: one, a set of four shear panels, and the other, a reinforced concrete beam without shear reinforcement. The results of the analysis agree favorably with the experimental results.
Abstract:
This paper may be considered a sequel to one of our earlier works pertaining to the development of an upwind algorithm for meshless solvers. While the earlier work dealt with the development of an inviscid solution procedure, the present work focuses on its extension to viscous flows. A robust viscous discretization strategy is chosen based on the positivity of a discrete Laplacian. This work projects the meshless solver as a viable Cartesian grid methodology. The point distribution required for the meshless solver is obtained from a hybrid Cartesian gridding strategy. Particularly considering the importance of a hybrid Cartesian mesh for RANS computations, the difficulties encountered in a conventional least-squares based discretization strategy are highlighted. In this context, the importance of discretization strategies that exploit the local structure in the grid is presented, along with a suitable point sorting strategy. Of particular interest are the proposed discretization strategies (both inviscid and viscous) within the structured grid block: a rotated update for the inviscid part and a Green-Gauss procedure based positive update for the viscous part. Both these procedures conveniently avoid the ill-conditioning associated with a conventional least-squares procedure in the critical region of the structured grid block. The robustness and accuracy of such a strategy are demonstrated on a number of standard test cases, including a case of a multi-element airfoil. The computational efficiency of the proposed meshless solver is also demonstrated. (C) 2010 Elsevier Ltd. All rights reserved.
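The Green-Gauss procedure mentioned for the viscous update is, at its core, the divergence-theorem gradient reconstruction sketched below for a single finite-volume cell. The function and argument names are illustrative; the paper's actual positive viscous update operates on the meshless/structured-block point distribution rather than on generic cell faces.

```python
import numpy as np

def green_gauss_gradient(phi_face, face_normals, face_areas, cell_volume):
    """Green-Gauss gradient of a scalar in one cell:
    grad(phi) ~ (1/V) * sum_f phi_f * n_f * A_f, with outward unit normals n_f."""
    grad = np.zeros(3)
    for phi_f, n_f, a_f in zip(phi_face, face_normals, face_areas):
        grad += phi_f * np.asarray(n_f, dtype=float) * a_f
    return grad / cell_volume
```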
Abstract:
Gauss and Fourier have together provided us with the essential techniques for symbolic computation with linear arithmetic constraints over the reals and the rationals. These variable elimination techniques for linear constraints have particular significance in the context of the constraint logic programming languages that have been developed in recent years. Variable elimination in linear equations (Gaussian elimination) is a fundamental technique in computational linear algebra and is therefore quite familiar to most of us. Elimination in linear inequalities (Fourier elimination), on the other hand, is intimately related to polyhedral theory and to aspects of linear programming that are not quite as familiar. In addition, the high complexity of elimination in inequalities has forced the consideration of intricate specializations of Fourier's original method. The intent of this survey article is to acquaint the reader with these connections and developments. The latter part of the article dwells on the thesis that variable elimination in linear constraints over the reals extends quite naturally to constraints in certain discrete domains.
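As a concrete illustration of the basic Fourier elimination step the survey discusses (not one of the intricate specializations it goes on to treat), the sketch below eliminates a single variable from a system A x <= b by pairing every upper bound on that variable with every lower bound; the quadratic growth in constraints is where the high complexity comes from.

```python
import numpy as np

def fourier_motzkin_eliminate(A, b, j):
    """Eliminate variable j from A x <= b by Fourier-Motzkin elimination.
    Returns (A2, b2) describing the projection onto the remaining variables
    (column j of A2 is identically zero)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    pos  = [i for i in range(len(b)) if A[i, j] > 0]    # upper bounds on x_j
    neg  = [i for i in range(len(b)) if A[i, j] < 0]    # lower bounds on x_j
    zero = [i for i in range(len(b)) if A[i, j] == 0]   # constraints without x_j
    rows = [A[i] for i in zero]
    rhs  = [b[i] for i in zero]
    # Pair every upper bound with every lower bound: the quadratic blow-up
    # that motivates the specializations discussed in the survey.
    for p in pos:
        for n in neg:
            rows.append(A[p] / A[p, j] - A[n] / A[n, j])
            rhs.append(b[p] / A[p, j] - b[n] / A[n, j])
    return np.array(rows), np.array(rhs)

# Example: eliminate x (column 0) from {x - y <= 1, -x <= 0, y <= 3}
A2, b2 = fourier_motzkin_eliminate([[1, -1], [-1, 0], [0, 1]], [1, 0, 3], 0)
# A2, b2 now encode {-y <= 1, y <= 3}, i.e. the projection onto y.
```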
Abstract:
The weighted-least-squares method based on the Gauss-Newton minimization technique is used for parameter estimation in water distribution networks. The parameters considered are: element resistances (single and/or group resistances, Hazen-Williams coefficients, pump specifications) and consumptions (for single or multiple loading conditions). The measurements considered are: nodal pressure heads, pipe flows, head losses in pipes, and consumptions/inflows. An important feature of the study is a detailed consideration of the influence of different choices of weights on parameter estimation, for error-free data, noisy data, and noisy data that includes bad data. The method is applied to three different networks, including a real-life problem.
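Below is a minimal sketch of one weighted-least-squares Gauss-Newton step of the kind described here, where per-measurement weights (e.g., reflecting the relative accuracy of pressure-head versus flow measurements) enter the normal equations. The names and the diagonal weighting are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def weighted_gauss_newton_step(residual, jacobian, theta, weights):
    """One Gauss-Newton step minimizing the weighted sum of squares r^T W r.

    residual(theta) : observed minus simulated measurements
    jacobian(theta) : sensitivity matrix J = d(residual)/d(theta)
    weights         : per-measurement weights, the diagonal of W
    """
    r = residual(theta)
    J = jacobian(theta)
    W = np.diag(weights)
    step = np.linalg.solve(J.T @ W @ J, -J.T @ W @ r)
    return theta + step
```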
Abstract:
The enthalpy method is primarily developed for studying phase change in a multicomponent material, characterized by a continuous liquid volume fraction (phi_l) vs temperature (T) relationship. Using the Galerkin finite element method, we obtain solutions to the enthalpy formulation for phase change in 1D slabs of pure material by assuming a superficial phase change region (linear phi_l vs T) around the discontinuity at the melting point. Errors between the computed and analytical solutions are evaluated for the fluxes at, and positions of, the freezing front, for different widths of the superficial phase change region and spatial discretizations with linear and quadratic basis functions. For Stefan numbers (St) varying between 0.1 and 10, the method is relatively insensitive to the spatial discretization and the width of the superficial phase change region. Greater sensitivity is observed at St = 0.01, where the variation in the enthalpy is large. In general, the width of the superficial phase change region should span at least 2-3 Gauss quadrature points for the enthalpy to be computed accurately. The method is applied to study conventional melting of slabs of frozen brine and ice. Regardless of the forms of the phi_l vs T relationships, the thawing times were found to scale as the square of the slab thickness. The ability of the method to efficiently capture multiple thawing fronts, which may originate at any spatial location within the sample, is illustrated with the microwave thawing of slabs and 2D cylinders. (C) 2002 Elsevier Science Ltd. All rights reserved.
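For a pure material, the superficial phase change region amounts to a linear smearing of the liquid fraction over a small temperature interval around the melting point, as in the sketch below (a generic illustration under assumed names; the actual phi_l vs T relationships for the brine cases come from the multicomponent phase behavior).

```python
def liquid_fraction(T, T_melt, delta_T):
    """Piecewise-linear liquid volume fraction phi_l(T) for a pure material,
    smearing the step at T_melt over a superficial region of width delta_T."""
    lo, hi = T_melt - 0.5 * delta_T, T_melt + 0.5 * delta_T
    if T <= lo:
        return 0.0          # fully solid
    if T >= hi:
        return 1.0          # fully liquid
    return (T - lo) / delta_T

# Example: melting at 0 C with a 1 K superficial region
print(liquid_fraction(-0.25, 0.0, 1.0))   # 0.25
```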
Abstract:
The maintenance of a chlorine residual is needed at all points in a distribution system supplied with chlorine as a disinfectant. The propagation and level of chlorine in a distribution system are affected by both bulk and pipe-wall reactions. It is well known that the field determination of the wall reaction parameter is difficult. The source strength of chlorine required to maintain a specified chlorine residual at a target node is also an important parameter. The inverse model presented in the paper determines these water quality parameters, which are associated with different reaction kinetics, either in single pipes or in groups of pipes. The weighted-least-squares method based on the Gauss-Newton minimization technique is used for the estimation of these parameters. The validation and application of the inverse model are illustrated with an example pipe distribution system under steady state. A generalized procedure to handle noisy and bad (abnormal) data is suggested, which can be used to estimate these parameters more accurately. The developed inverse model is useful for water supply agencies to calibrate their water distribution systems and to improve their operational strategies to maintain water quality.