894 results for Numerical approximation and analysis
Abstract:
In this thesis the bifurcation behaviour of the solutions of the Langford system is analysed. The equilibria of the Langford system are found and their stability is discussed, and the conditions for loss of stability are established. The periodic solution of the system is approximated. For the spatially distributed Langford system we consider three types of boundary conditions: Neumann conditions, Dirichlet conditions, and Neumann conditions with an additional zero-average requirement. We apply the Lyapunov-Schmidt method to the spatially distributed Langford system to obtain an asymptotic approximation of the periodic mode, and we analyse the influence of diffusion on the behaviour of the self-oscillations. We also perform numerical experiments and compare them with the analytical results.
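Where a quick feel for the self-oscillations is useful, a minimal sketch follows. It assumes one commonly cited form of the Langford system with illustrative parameter values; the exact equations, coefficients and boundary conditions studied in the thesis may differ. The sketch integrates the ODEs past the loss of stability so the periodic mode can be inspected.

```python
# Sketch only: assumed Langford-type equations and illustrative parameters,
# not necessarily the thesis' exact system.
import numpy as np
from scipy.integrate import solve_ivp

ALPHA, BETA, LAM, OMEGA, EPS, DELTA = 0.95, 0.7, 0.6, 3.5, 0.25, 0.1

def langford(t, state):
    x, y, z = state
    dx = (z - BETA) * x - OMEGA * y
    dy = OMEGA * x + (z - BETA) * y
    dz = LAM + ALPHA * z - z**3 / 3 - (x**2 + y**2) * (1 + EPS * z) + DELTA * z * x**3
    return [dx, dy, dz]

# Integrate past the transient; after the equilibrium loses stability the trajectory
# settles onto a self-oscillation (a closed orbit in phase space).
sol = solve_ivp(langford, (0.0, 200.0), [0.1, 0.0, 0.0], rtol=1e-8, atol=1e-10,
                t_eval=np.linspace(150.0, 200.0, 2000))
amplitude = sol.y[0].max() - sol.y[0].min()
print(f"peak-to-peak x amplitude of the oscillation: {amplitude:.3f}")
```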
Abstract:
An experimental study was conducted on a pump-turbine model operating in pumping mode in order to characterize the flow field structure in the region between the stay and guide vanes, mainly using laser-Doppler anemometry in a two-color, back-scattered-light configuration. The structure of the steady and unsteady flow was analyzed. The measurements were performed at three operating points. The data obtained provide appropriate boundary conditions and a good validation basis for numerical codes, as well as for understanding the main loss mechanisms of this complex flow.
Abstract:
Recent interest in the validation of general circulation models (GCMs) has been devoted to objective methods. A small number of authors have used the direct synoptic identification of phenomena together with a statistical analysis to perform objective comparisons between various datasets. This paper describes a general method for performing the synoptic identification of phenomena that can be used for an objective analysis of atmospheric, or oceanographic, datasets obtained from numerical models and remote sensing. Methods usually associated with image processing have been used to segment the scene and to identify suitable feature points to represent the phenomena of interest. This is performed for each time level. A technique from dynamic scene analysis is then used to link the feature points to form trajectories. The method is fully automatic and should be applicable to a wide range of geophysical fields. An example is shown of results obtained with this method using data from a run of the Universities Global Atmospheric Modelling Project GCM.
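As an illustration of the trajectory-linking step described above, here is a minimal sketch: a greedy nearest-neighbour linker with a hypothetical displacement threshold, not the authors' dynamic scene analysis technique. It joins feature points detected at successive time levels into trajectories.

```python
# Sketch only: greedy nearest-neighbour linking of feature points into trajectories.
import numpy as np

def link_trajectories(frames, max_dist=2.0):
    """frames: list of (n_i, 2) arrays of feature-point coordinates, one per time level."""
    tracks = [[tuple(p)] for p in frames[0]]
    active = list(range(len(tracks)))            # tracks still being extended
    for pts in frames[1:]:
        unused = list(range(len(pts)))
        next_active = []
        for ti in active:
            if not unused:
                continue
            last = np.asarray(tracks[ti][-1])
            d = np.linalg.norm(pts[unused] - last, axis=1)
            j = int(np.argmin(d))
            if d[j] <= max_dist:                 # accept the match, extend the trajectory
                tracks[ti].append(tuple(pts[unused[j]]))
                next_active.append(ti)
                unused.pop(j)
        for j in unused:                         # unmatched points start new trajectories
            tracks.append([tuple(pts[j])])
            next_active.append(len(tracks) - 1)
        active = next_active
    return tracks

frames = [np.array([[0.0, 0.0], [5.0, 5.0]]),
          np.array([[0.5, 0.2], [5.4, 5.1]]),
          np.array([[1.0, 0.4]])]
print(link_trajectories(frames))
```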
Abstract:
We consider the numerical treatment of second kind integral equations on the real line of the form ϕ(s) = ψ(s) + ∫_{−∞}^{+∞} κ(s − t) z(t) ϕ(t) dt, s ∈ R (abbreviated ϕ = ψ + K_z ϕ), in which κ ∈ L_1(R), z ∈ L_∞(R) and ψ ∈ BC(R), the space of bounded continuous functions on R, are assumed known and ϕ ∈ BC(R) is to be determined. We first derive sharp error estimates for the finite section approximation (reducing the range of integration to [−A, A]) via bounds on (1 − K_z)^{−1} as an operator on spaces of weighted continuous functions. Numerical solution by a simple discrete collocation method on a uniform grid on R is then analysed: in the case when z is compactly supported this leads to a coefficient matrix which allows a rapid matrix-vector multiply via the FFT. To utilise this possibility we propose a modified two-grid iteration, a feature of which is that the coarse grid matrix is approximated by a banded matrix, and analyse convergence and computational cost. In cases where z is not compactly supported a combined finite section and two-grid algorithm can be applied and we extend the analysis to this case. As an application we consider acoustic scattering in the half-plane with a Robin or impedance boundary condition, which we formulate as a boundary integral equation of the class studied. Our final result is that if z (related to the boundary impedance in the application) takes values in an appropriate compact subset Q of the complex plane, then the difference between ϕ(s) and its finite section approximation computed numerically using the proposed iterative scheme is ≤ C_1 [kh log(1/(kh)) + (1 − Θ)^{−1/2} (kA)^{−1/2}] in the interval [−ΘA, ΘA] (Θ < 1) for kh sufficiently small, where k is the wavenumber and h the grid spacing. Moreover, this numerical approximation can be computed in ≤ C_2 N log N operations, where N = 2A/h is the number of degrees of freedom. The values of the constants C_1 and C_2 depend only on the set Q and not on the wavenumber k or the support of z.
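The "rapid matrix-vector multiply via the FFT" relies on the convolution structure of κ(s − t) on a uniform grid, which makes the collocation matrix Toeplitz. A minimal sketch of that single ingredient (generic Toeplitz data, not the paper's discretisation or its two-grid iteration) is:

```python
# Sketch only: O(N log N) Toeplitz matrix-vector product via circulant embedding.
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(col, row, x):
    """y = T @ x where T[i, j] = col[i - j] for i >= j and row[j - i] for j > i."""
    n = len(x)
    # First column of the 2n x 2n circulant matrix that embeds T.
    c = np.concatenate([col, [0.0], row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Check against a dense Toeplitz matrix built with SciPy.
col = np.random.rand(200)
row = np.concatenate([[col[0]], np.random.rand(199)])
x = np.random.rand(200)
assert np.allclose(toeplitz_matvec(col, row, x), toeplitz(col, row) @ x)
print("FFT-based Toeplitz matvec matches the dense product.")
```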
Abstract:
Rural electrification is characterized by the geographical dispersion of the population, low consumption, high investment per consumer and high cost. Solar radiation constitutes an inexhaustible source of energy, and photovoltaic panels are used to convert it into electricity. In this study, the manufacturer's equations for the current and power of small photovoltaic systems were adjusted to field conditions. The mathematical analysis was performed on the ISOFOTON I-100 rural photovoltaic system, with 300 Wp of power, located at the Lageado Experimental Farm of FCA/UNESP. To develop these equations, the equivalent circuit of the photovoltaic cell was studied, and iterative numerical methods were applied to determine the electrical parameters and to assess possible errors when adapting the equations found in the literature to real conditions. A simulation of the photovoltaic panel was then proposed through mathematical equations adjusted according to local radiation data. The results provided equations that give realistic answers to the user and may assist in the design of these systems, since the calculated maximum power limit ensures the supply of the energy generated. This realistic sizing helps establish the possible applications of solar energy for the rural producer and inform the real possibilities of generating electricity from the sun.
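As an example of the kind of iterative numerical method mentioned above, the sketch below solves the implicit current equation of a single-diode photovoltaic model by Newton's method. The model form and all parameter values are generic assumptions for illustration, not the ISOFOTON I-100 datasheet.

```python
# Sketch only: Newton iteration for the single-diode PV module equation
# I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh, with illustrative values.
import math

def pv_current(v, i_ph=8.0, i_0=1e-9, r_s=0.3, r_sh=300.0, n=1.3, cells=36, t=298.15):
    """Module current at terminal voltage v, solved by Newton's method."""
    vt = 1.380649e-23 * t / 1.602176634e-19      # thermal voltage kT/q
    a = n * cells * vt                           # modified ideality factor of the module
    i = i_ph                                     # initial guess: photogenerated current
    for _ in range(50):
        e = math.exp((v + i * r_s) / a)
        f = i_ph - i_0 * (e - 1.0) - (v + i * r_s) / r_sh - i
        df = -i_0 * (r_s / a) * e - r_s / r_sh - 1.0
        step = f / df
        i -= step
        if abs(step) < 1e-12:                    # converged
            break
    return i

for v in (0.0, 10.0, 17.0):                      # short circuit, mid range, near max power
    print(f"V = {v:5.1f} V  ->  I = {pv_current(v):.3f} A")
```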
Abstract:
In this work, two formulations of the boundary element method (BEM) for the linear bending analysis of plates reinforced by beams are discussed. Both formulations are based on Kirchhoff's hypothesis and are obtained from the reciprocity theorem applied to zoned plates, where each sub-region defines a beam or a slab. In the first model, the problem values are defined along the interfaces and the external boundary. Then, in order to reduce the number of degrees of freedom, kinematic hypotheses are assumed along the beam cross-section, leading to a second formulation in which the collocation points are defined along the beam skeleton instead of on the interfaces. In these formulations no approximation of the generalized forces along the interface is required. Moreover, compatibility and equilibrium conditions along the interface are automatically imposed by the integral equation. Thus, these formulations require fewer approximations and the total number of degrees of freedom is reduced. The numerical examples discuss the differences between these two BEM formulations and also compare the results with a well-known finite element code.
Abstract:
The goal of the present work is to analyze space missions that use the terrestrial atmosphere to accomplish orbital maneuvers involving a plane change. A set of analytical solutions is presented for the variation of the orbital elements due to a single passage through the atmosphere, assuming that the interval traveled by the spacecraft through the atmosphere is not too large. The study considers the influence of both lift and drag on the spacecraft orbit. The final equations are tested against numerical integration and agree with the numerical results whenever the perigee height is larger than a critical value. Next, a numerical study is presented of the ratio between the velocity increment required to correct the semimajor-axis decay due to the atmospheric passage and the velocity variation required to obtain the change in inclination. This analysis can be used to decide whether a maneuver passing through the atmosphere can decrease the fuel consumption of the mission and, in the cases where this technique can be used, whether a multiple passage is more efficient than a single passage.
Abstract:
The Numerical INJection Analysis (NINJA) project is a collaborative effort between members of the numerical relativity and gravitational-wave (GW) astrophysics communities. The purpose of NINJA is to study the ability to detect GWs emitted from merging binary black holes (BBH) and recover their parameters with next-generation GW observatories. We report here on the results of the second NINJA project, NINJA-2, which employs 60 complete BBH hybrid waveforms consisting of a numerical portion modelling the late inspiral, merger, and ringdown stitched to a post-Newtonian portion modelling the early inspiral. In a 'blind injection challenge' similar to that conducted in recent Laser Interferometer Gravitational Wave Observatory (LIGO) and Virgo science runs, we added seven hybrid waveforms to two months of data recoloured to predictions of Advanced LIGO (aLIGO) and Advanced Virgo (AdV) sensitivity curves during their first observing runs. The resulting data were analysed by GW detection algorithms, and six of the waveforms were recovered with false alarm rates smaller than one in a thousand years. Parameter-estimation algorithms were run on each of these waveforms to explore the ability to constrain the masses, component angular momenta and sky position of these waveforms. We find that the strong degeneracy between the mass ratio and the BHs' angular momenta will make it difficult to precisely estimate these parameters with aLIGO and AdV. We also perform a large-scale Monte Carlo study to assess the ability to recover each of the 60 hybrid waveforms with early aLIGO and AdV sensitivity curves. Our results predict that early aLIGO and AdV will have a volume-weighted average sensitive distance of 300 Mpc (1 Gpc) for 10 M⊙ + 10 M⊙ (50 M⊙ + 50 M⊙) BBH coalescences. We demonstrate that neglecting the component angular momenta in the waveform models used in matched-filtering will result in a reduction in sensitivity for systems with large component angular momenta. This reduction is estimated to be up to ~15% for 50 M⊙ + 50 M⊙ BBH coalescences with nearly maximal angular momenta aligned with the orbit when using early aLIGO and AdV sensitivity curves.
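For orientation, a minimal sketch of the noise-weighted inner product that underlies matched filtering follows. It is a toy white-noise example with a stand-in sinusoidal template, not the LIGO/Virgo analysis pipelines or the NINJA-2 data.

```python
# Sketch only: toy matched-filter SNR with a flat (white-noise) one-sided PSD.
import numpy as np

def inner(a, b, psd_onesided, dt):
    """Noise-weighted inner product <a|b> = 4 Re sum( a~(f) conj(b~(f)) / S(f) ) df."""
    n = len(a)
    df = 1.0 / (n * dt)
    af = np.fft.rfft(a) * dt                     # approximate continuous Fourier transforms
    bf = np.fft.rfft(b) * dt
    return 4.0 * np.real(np.sum(af * np.conj(bf) / psd_onesided)) * df

def snr(data, template, psd_onesided, dt):
    """Matched-filter SNR of `data` against `template` at zero time offset."""
    return inner(data, template, psd_onesided, dt) / np.sqrt(inner(template, template, psd_onesided, dt))

rng = np.random.default_rng(0)
dt, n = 1.0 / 1024, 4096
t = np.arange(n) * dt
template = np.sin(2 * np.pi * 100.0 * t) * np.hanning(n)   # stand-in "waveform"
psd = np.full(n // 2 + 1, 2.0 * dt)              # one-sided PSD of unit-variance white noise
data = 0.3 * template + rng.normal(0.0, 1.0, n)  # weak injected signal plus noise
print(f"recovered SNR ~ {snr(data, template, psd, dt):.1f}")
```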
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This work focuses on the design and efficiency analysis of simple, low-cost hardware systems for temperature measurement in agricultural settings. The main objective is to show quantitatively, through statistical data analysis, to what extent simple hardware designed with inexpensive components can be used safely for indoor temperature measurement in farm buildings such as greenhouses, warehouses or silos. To verify the efficiency of the simple hardware, its data were compared with measurements from a high-performance LabVIEW platform. This work showed that simple hardware based on a microcontroller and the LM35 sensor can perform well. It presented good accuracy but relatively low precision, which can be improved by taking several consecutive signal samples and using their average value. Although many papers describe these components, this work is distinguished by presenting the data analysis in numerical form and by using high-performance systems to ensure a critical data comparison.
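A minimal sketch of the averaging idea (simulated readings with an assumed noise level, not the authors' firmware or dataset) shows why the mean of several consecutive samples is more precise than a single reading:

```python
# Sketch only: averaging N consecutive noisy readings narrows the spread roughly as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(42)
true_temp_c = 25.0
noise_sd = 0.5                                   # assumed sensor + ADC noise, in deg C

def read_temperature(n_samples):
    """Return the average of n_samples simulated single readings."""
    return rng.normal(true_temp_c, noise_sd, n_samples).mean()

for n in (1, 4, 16, 64):
    trials = [read_temperature(n) for _ in range(1000)]
    print(f"N = {n:3d}  spread of the averaged reading = {np.std(trials):.3f} deg C")
```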
Abstract:
In this work, an experimental and numerical analysis and characterization of functionally graded structures (FGSs) is developed. Nickel (Ni) and copper (Cu) are used as the basic materials in the numerical modeling and experimental characterization. For the modeling, a MATLAB finite element code is developed which allows harmonic and modal analyses considering the graded finite element formulation. For the experimental characterization, Ni-Cu FGSs are manufactured using the spark plasma sintering technique. Hardness and Young's modulus are obtained by microindentation and ultrasonic measurements, respectively. The effective gradation of the Ni/Cu FGS is addressed by means of optical microscopy, energy dispersive spectrometry, scanning electron microscopy and hardness testing. For the purpose of comparing modeling and experimental results, the hardness curve along the gradation direction is used to identify the gradation profile; accordingly, the experimental hardness curve is used to approximate the Young's modulus variation and the graded finite element model is used for verification. For the first two resonance frequencies, a difference smaller than 1% between simulated and experimental results is obtained.
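To illustrate the graded finite element idea, the sketch below performs a modal analysis of a one-dimensional bar whose Young's modulus and density vary element by element from nominal Ni to Cu values. The linear gradation profile, geometry and material properties are assumptions for illustration; the paper's MATLAB code models the actual specimen and its measured gradation.

```python
# Sketch only: 1D graded-bar modal analysis with element-wise material properties.
import numpy as np
from scipy.linalg import eigh

L, n_el = 0.05, 40                               # bar length [m], number of elements
A = 1e-4                                         # cross-section [m^2]; cancels in the eigenproblem
rho_ni, rho_cu = 8900.0, 8960.0                  # nominal densities [kg/m^3]
e_ni, e_cu = 200e9, 120e9                        # nominal Young's moduli [Pa]
le = L / n_el

n_nodes = n_el + 1
K = np.zeros((n_nodes, n_nodes))
M = np.zeros((n_nodes, n_nodes))
for e in range(n_el):
    xi = (e + 0.5) / n_el                        # gradation coordinate of the element centre
    E = e_ni + (e_cu - e_ni) * xi                # assumed linear gradation of the modulus
    rho = rho_ni + (rho_cu - rho_ni) * xi
    ke = (E * A / le) * np.array([[1.0, -1.0], [-1.0, 1.0]])
    me = (rho * A * le / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])   # consistent mass
    K[e:e + 2, e:e + 2] += ke
    M[e:e + 2, e:e + 2] += me

# Free-free bar: solve the generalized eigenproblem K v = w^2 M v.
w2, _ = eigh(K, M)
freqs = np.sqrt(np.clip(w2, 0.0, None)) / (2.0 * np.pi)
print("first elastic resonance frequencies [Hz]:", np.round(freqs[1:4], 1))
```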
Abstract:
Analytical and numerical analyses of the nonlinear response of a three-degree-of-freedom nonlinear aeroelastic system are performed. Particularly, the effects of concentrated structural nonlinearities on the different motions are determined. The concentrated nonlinearities are introduced in the pitch, plunge, and flap springs by adding cubic stiffness in each of them. Quasi-steady approximation and the Duhamel formulation are used to model the aerodynamic loads. Using the quasi-steady approach, we derive the normal form of the Hopf bifurcation associated with the system's instability. Using the nonlinear form, three configurations including supercritical and subcritical aeroelastic systems are defined and analyzed numerically. The characteristics of these different configurations in terms of stability and motions are evaluated. The usefulness of the two aerodynamic formulations in the prediction of the different motions beyond the bifurcation is discussed.
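For reference, the standard amplitude equation behind the supercritical/subcritical distinction drawn above (the textbook Hopf normal form, not the specific coefficients derived in the paper) is ṙ = μr + a r³, θ̇ = ω + b r², where μ is the bifurcation parameter and a the cubic coefficient produced by the concentrated nonlinearities. A nontrivial limit cycle of amplitude r* = √(−μ/a) exists when μa < 0: a < 0 gives a supercritical Hopf (stable limit-cycle oscillations just past the bifurcation), while a > 0 gives a subcritical one (an unstable cycle, with possible jumps and hysteresis).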
Abstract:
The application of Isogeometric Analysis (IA) with T-splines [1] demands a partition of the parametric space, C, into a tiling containing T-junctions, called a T-mesh. The T-splines are used both for the geometric modelling of the physical domain, D, and as the basis of the numerical approximation. They have the advantage over NURBS of allowing local refinement. In this work we propose a procedure to construct T-spline representations of complex domains in order to apply them to the resolution of elliptic PDEs with IA. In previous works [2, 3] we accomplished this task by using a tetrahedral parametrization…
Abstract:
We introduce and analyze hp-version discontinuous Galerkin (dG) finite element methods for the numerical approximation of linear second-order elliptic boundary-value problems in three-dimensional polyhedral domains. To resolve possible corner-, edge- and corner-edge singularities, we consider hexahedral meshes that are geometrically and anisotropically refined toward the corresponding neighborhoods. Similarly, the local polynomial degrees are increased linearly and possibly anisotropically away from singularities. We design interior penalty hp-dG methods and prove that they are well-defined for problems with singular solutions and stable under the proposed hp-refinements. We establish (abstract) error bounds that will allow us to prove exponential rates of convergence in the second part of this work.
Abstract:
The purpose of this thesis was to investigate the offensive performance of elite handball teams when handball is considered as a complex non-linear dynamical system. A time-dependent dynamic approach was adopted to assess teams' performance during the game. The overall sample comprised the 240 games played in the 2011-2012 season of the men's Spanish Professional Handball League (ASOBAL League). In the subsequent analyses, only close games (final goal difference ≤ 5; n = 142) were considered. Match status, game location, quality of opposition, and game period were incorporated into the analysis as situational variables. Three studies composed the core of the thesis. In the first study, we analyzed the game-scoring coordination between the time series representing the scoring processes of the two opposing teams throughout the game. Autocorrelation, cross-correlation, double moving average, and the Hilbert transform were used for the analysis (see the sketch following this abstract). The scoring processes of the teams presented a high consistency across all games, as well as strong in-phase modes of coordination in all game contexts. The only differences were found when controlling for the game period: the coordination of the teams' scoring processes was significantly lower in the 1st and 2nd periods (0–10 min and 10–20 min), showing a clearly increasing coordination as the game progressed. This suggests that the first 20 minutes are those that break the game-scoring. In the second study, we analyzed the temporal effects (immediate, short-term and medium-term) of team timeouts on teams' scoring performance. Multiple linear regression models were used for the analysis. The results showed increments of 0.59, 1.40 and 1.85 goals for the periods comprising the first, third and fifth ball possessions after the timeout for the teams that requested it. Conversely, significant negative effects on goals scored were found for the opponent teams, with decrements of 0.59, 1.43 and 2.04 goals for the same periods, respectively. The influence of situational variables on scoring performance was only registered in certain game periods. Finally, in the third study, we analyzed the temporal effects of player exclusions on teams' scoring performance, both for the teams that suffer the exclusion (numerical inferiority) and for the opponents (numerical superiority). Multiple linear regression models were again used for the analysis. The results showed significant negative effects on the number of goals scored by the teams with one player fewer, with decrements of 0.25, 0.40, 0.61, 0.62, and 0.57 goals for the periods comprising the first, second, third, fourth and fifth minutes before and after the exclusion. For the opponent teams, the results showed significant positive effects, with increments of the same magnitude in the same game periods. This trend was not affected by match status, game location, quality of opposition, or game period. The scoring increments were smaller than might be expected from a 2-minute numerical playing superiority; psychological theories, such as choking under pressure in situations where good performance is expected, could help explain this finding. The final chapters of the thesis enumerate the main conclusions and present the practical applications that arise from the three studies. 
Lastly, limitations and future research directions are described.
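A minimal sketch of the Hilbert-transform step used in the first study (toy scoring series and a simple linear detrend, not the thesis' actual data or preprocessing) is:

```python
# Sketch only: instantaneous relative phase between two cumulative goal-scoring series.
import numpy as np
from scipy.signal import hilbert, detrend

def relative_phase(series_a, series_b):
    """Phase difference (radians) between two equal-length series after removing the linear trend."""
    pa = np.angle(hilbert(detrend(series_a.astype(float))))
    pb = np.angle(hilbert(detrend(series_b.astype(float))))
    return pa - pb

rng = np.random.default_rng(7)
minutes = 60
team_a = np.cumsum(rng.poisson(0.45, minutes))   # toy cumulative goals, team A
team_b = np.cumsum(rng.poisson(0.45, minutes))   # toy cumulative goals, team B
phase = relative_phase(team_a, team_b)
print(f"mean |relative phase| = {np.mean(np.abs(phase)):.2f} rad (near 0 -> in-phase coordination)")
```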