974 results for Mindlin Pseudospectral Plate Element, Chebyshev Polynomial, Integration Scheme


Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

Micro-electromechanical systems (MEMS) are microscale devices able to convert electrical energy into mechanical energy and vice versa. In this paper, the mathematical model of the electronic circuit of a resonant MEMS mass sensor with time-periodic parametric excitation was analyzed and controlled by two strategies: a Chebyshev polynomial expansion of the Picard iteration combined with the Lyapunov-Floquet transformation, and Optimal Linear Feedback Control (OLFC). Both controls combine feedback and feedforward actions: the feedback control obtained via the Picard iteration and Lyapunov-Floquet transformation constitutes the first strategy, and optimal control theory the second. Numerical simulations show the efficiency of the two control methods, as well as the sensitivity of each control strategy to parametric errors. Without parametric errors, both control strategies were effective in maintaining the system in the desired orbit; in the presence of parametric errors, the OLFC technique proved more robust.
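
The record above only names the ingredients of the control design; as a rough, hedged illustration of one of them, the sketch below fits a truncated Chebyshev series to a time-periodic coefficient over one excitation period with NumPy. The stiffness law, its parameters, and the period T are hypothetical placeholders, and this is not the authors' Picard/Lyapunov-Floquet implementation.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical time-periodic stiffness coefficient of a parametrically
# excited resonator: k(t) = k0 * (1 + eps * cos(2*pi*t/T)).
k0, eps, T = 1.0, 0.3, 2.0 * np.pi

def parametric_stiffness(t):
    return k0 * (1.0 + eps * np.cos(2.0 * np.pi * t / T))

# Sample one period and fit a truncated Chebyshev series (degree 10).
# chebfit works on [-1, 1], so map t in [0, T] to x in [-1, 1].
t = np.linspace(0.0, T, 201)
x = 2.0 * t / T - 1.0
coeffs = C.chebfit(x, parametric_stiffness(t), deg=10)

# Evaluate the expansion and check the approximation error.
approx = C.chebval(x, coeffs)
print("max abs error:", np.max(np.abs(approx - parametric_stiffness(t))))
```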

Relevance:

100.00%

Publisher:

Abstract:

The intent of the work presented in this thesis is to show that relativistic perturbations should be considered in the same manner as the well-known perturbations currently taken into account in planet-satellite systems. It is also the aim of this research to show that relativistic perturbations are comparable to standard perturbations in specific force magnitude and effects. This work would have been regarded as little more than a curiosity by most engineers until recent advancements in space propulsion methods, e.g., artificial neutron stars, light sails, and continuous-thrust propulsion techniques. These cutting-edge technologies have the potential to thrust the human race into interstellar, and hopefully intergalactic, travel in the not-so-distant future. The relativistic perturbations were simulated for two orbit cases: (1) a general orbit and (2) a Molniya-type orbit. The simulations were completed using MATLAB's ODE45 integration scheme. The methods used to organize, execute, and analyze these simulations are explained in detail, and the results are presented in graphical and statistical form. The simulation data reveal that the specific forces arising from the relativistic perturbations do manifest as variations in the classical orbital elements. It is also apparent from the simulated data that these specific forces exhibit magnitudes and effects similar to those of the commonly considered perturbations used in trajectory design, optimization, and maintenance. Due to the similarities in behavior between relativistic and non-relativistic perturbations, a case is made for the development of a fully relativistic formulation of the trajectory design and trajectory optimization problems. This new framework would afford the possibility of illuminating new, more optimal solutions to the aforementioned problems that do not arise in current formulations. This type of reformulation has already shown promise: the previously unknown Space Superhighways arose as an optimal solution when classical astrodynamics was reformulated using geometric mechanics.
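
As a minimal sketch of the kind of simulation described, the snippet below propagates a two-body orbit with SciPy's RK45 integrator (the counterpart of MATLAB's ODE45) and adds the commonly used first post-Newtonian (Schwarzschild) correction as the relativistic perturbation. The initial state and tolerances are illustrative and are not the thesis's orbit cases.

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 3.986004418e14     # Earth's gravitational parameter [m^3/s^2]
C_LIGHT = 299792458.0   # speed of light [m/s]

def accel(t, state):
    """Two-body acceleration plus a first post-Newtonian (Schwarzschild) term."""
    r = state[:3]
    v = state[3:]
    rn = np.linalg.norm(r)
    a_newton = -MU * r / rn**3
    # Standard 1PN correction for a Schwarzschild central body (illustrative use).
    a_rel = (MU / (C_LIGHT**2 * rn**3)) * (
        (4.0 * MU / rn - np.dot(v, v)) * r + 4.0 * np.dot(r, v) * v
    )
    return np.concatenate((v, a_newton + a_rel))

# Illustrative near-circular orbit at 7000 km radius.
r0 = np.array([7000e3, 0.0, 0.0])
v0 = np.array([0.0, np.sqrt(MU / 7000e3), 0.0])
sol = solve_ivp(accel, (0.0, 6000.0), np.concatenate((r0, v0)),
                method="RK45", rtol=1e-10, atol=1e-9)
print("final position [km]:", sol.y[:3, -1] / 1e3)
```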

Relevance:

100.00%

Publisher:

Abstract:

In this work, the robustness and stability of continuum damage models applied to material failure in soft tissues are addressed. In implicit damage models equipped with softening, the presence of negative eigenvalues in the tangent element matrix degrades the condition number of the global matrix, reducing the computational performance of the numerical model. Two strategies have been adapted from the literature to mitigate this degradation: the IMPL-EX integration scheme [Oliver, 2006], which renders the element matrix contribution positive definite, and arc-length-type continuation methods [Carrera, 1994], which make it possible to capture the unstable softening branch in brittle ruptures. The major drawback of the IMPL-EX integration scheme is the need to use small time steps to keep the numerical error below an acceptable value. A convergence study, limiting the maximum allowed increment of the internal variables of the damage model, is presented. Finally, the numerical simulation of failure problems with fibre-reinforced materials illustrates the performance of the adopted methodology.
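
A minimal one-dimensional sketch of the extrapolation idea behind IMPL-EX, as described by Oliver and co-workers, is given below: the strain-like internal variable for the current step is extrapolated from previously converged values instead of being solved for implicitly, so the stress update is linear in the strain and the algorithmic tangent remains non-negative. The exponential softening law, parameters, and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Illustrative 1D isotropic damage model with exponential softening.
E = 200.0e9      # Young's modulus [Pa]
r0 = 1.0e-3      # damage threshold (strain-like internal variable)
Hs = 200.0       # softening parameter (illustrative)

def damage(r):
    """Exponential softening law d(r); purely illustrative."""
    if r <= r0:
        return 0.0
    return 1.0 - (r0 / r) * np.exp(-Hs * (r - r0) / r0)

def implex_step(eps_new, r_hist, dt_new, dt_old):
    """One IMPL-EX-style update of a 1D damage model (sketch).

    r_hist = (r_{n-1}, r_n) are previously, implicitly converged values.
    The current internal variable is extrapolated, so the stress is linear
    in eps_new and the tangent E*(1 - d_tilde) stays non-negative.
    """
    r_prev, r_curr = r_hist
    r_tilde = r_curr + (dt_new / dt_old) * (r_curr - r_prev)   # explicit extrapolation
    d_tilde = damage(max(r_tilde, r0))
    stress = (1.0 - d_tilde) * E * eps_new
    tangent = (1.0 - d_tilde) * E
    # The implicit update r_{n+1} = max(r_n, |eps_new|) is stored afterwards
    # for use in the next extrapolation.
    r_next = max(r_curr, abs(eps_new))
    return stress, tangent, r_next

stress, tangent, r_next = implex_step(1.5e-3, (1.0e-3, 1.2e-3), 1.0, 1.0)
print(stress, tangent, r_next)
```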

Relevance:

100.00%

Publisher:

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power, or throughput are heavily constrained. In order to produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25% and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them.

Techniques based on interval extensions have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art Statistical Modified Affine Arithmetic (MAA) methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that exercise them, and extracts the statistical moments of the system from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% from the simulation-based reference values.

A known drawback of techniques based on interval extensions is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals in the system, introduces the noise sources for each group independently, and then combines the results. In this way, the number of noise sources in the system at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We present two novel techniques that reduce the execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization. Second, the incremental method revolves around the fact that, although a given confidence level must be guaranteed for the final results of the search, more relaxed confidence levels, and therefore considerably fewer samples per simulation, can be used in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be reduced by factors of up to ×240 for small and medium-sized problems.

Finally, this thesis introduces HOPLITE, an automated, flexible and modular quantization framework that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions can be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
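
In outline, the classical greedy word-length search that the thesis accelerates repeatedly trims the fractional word-length of a signal while a Monte-Carlo estimate of the output error stays within tolerance. The sketch below shows that baseline loop on a toy two-signal datapath; the error metric, signal names, and tolerance are hypothetical, and HOPLITE's actual API is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(x, frac_bits):
    """Round x onto a fixed-point grid with the given number of fractional bits."""
    step = 2.0 ** (-frac_bits)
    return np.round(x / step) * step

def output_error(wordlengths, n_samples=10_000):
    """Monte-Carlo estimate of the output error of a toy datapath y = a*b + a."""
    a = rng.uniform(-1.0, 1.0, n_samples)
    b = rng.uniform(-1.0, 1.0, n_samples)
    ref = a * b + a
    aq = quantize(a, wordlengths["a"])
    bq = quantize(b, wordlengths["b"])
    out = quantize(aq * bq + aq, wordlengths["y"])
    return np.sqrt(np.mean((out - ref) ** 2))

def greedy_wordlength_search(start_bits=16, tol=1e-3):
    """Classical greedy descent: shave one bit at a time while the error stays below tol."""
    wl = {"a": start_bits, "b": start_bits, "y": start_bits}
    improved = True
    while improved:
        improved = False
        for sig in wl:
            wl[sig] -= 1
            if wl[sig] > 0 and output_error(wl) <= tol:
                improved = True           # keep the cheaper assignment
            else:
                wl[sig] += 1              # roll back
    return wl

print(greedy_wordlength_search())
```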

Relevance:

100.00%

Publisher:

Abstract:

A new integration scheme is developed for nonequilibrium molecular dynamics simulations in which the temperature is constrained by a Gaussian thermostat. The utility of the scheme is demonstrated by its application to the SLLOD algorithm, the standard nonequilibrium molecular dynamics algorithm for studying shear flow. Unlike conventional integrators, the new integrators are constructed using operator-splitting techniques to ensure stability and little or no drift in the kinetic energy. Moreover, they require minimal computer memory and are straightforward to program. Numerical experiments show that the efficiency and stability of the new integrators compare favorably with those of conventional integrators such as the Runge-Kutta and Gear predictor-corrector methods. (C) 1999 American Institute of Physics. [S0021-9606(99)50125-6].
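
Operator splitting in its simplest form can be illustrated on a much simpler problem than SLLOD: a harmonic oscillator whose flow is split into exactly solvable drift and kick parts and composed symmetrically (Strang splitting). The sketch below shows only this generic construction, not the Gaussian-thermostatted shear-flow integrators of the paper.

```python
import numpy as np

def drift(q, p, h):
    """Exact flow of q' = p, p' = 0 for time h."""
    return q + h * p, p

def kick(q, p, h, omega=1.0):
    """Exact flow of q' = 0, p' = -omega^2 q for time h."""
    return q, p - h * omega**2 * q

def strang_step(q, p, h):
    """Symmetric (Strang) composition: half kick, full drift, half kick."""
    q, p = kick(q, p, 0.5 * h)
    q, p = drift(q, p, h)
    q, p = kick(q, p, 0.5 * h)
    return q, p

q, p, h = 1.0, 0.0, 0.01
energy0 = 0.5 * (p**2 + q**2)
for _ in range(100_000):
    q, p = strang_step(q, p, h)
print("relative energy drift:", abs(0.5 * (p**2 + q**2) - energy0) / energy0)
```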

Relevance:

100.00%

Publisher:

Abstract:

The particle-based lattice solid model developed to study the physics of rocks and the nonlinear dynamics of earthquakes is refined by incorporating intrinsic friction between particles. The model provides a means for studying the causes of seismic wave attenuation, as well as frictional heat generation, fault zone evolution, and localisation phenomena. A modified velocity-Verlet scheme that allows friction to be precisely modelled is developed. This is a difficult computational problem given that a discontinuity must be accurately simulated by the numerical approach (i.e., the transition from static to dynamical frictional behaviour). This is achieved using a half time step integration scheme. At each half time step, a nonlinear system is solved to compute the static frictional forces and states of touching particle pairs. Improved efficiency is achieved by adaptively adjusting the time step increment, depending on the particle velocities in the system. The total energy is calculated and verified to remain constant to a high precision during simulations. Numerical experiments show that the model can be applied to the study of earthquake dynamics, the stick-slip instability, heat generation, and fault zone evolution. Such experiments may lead to a conclusive resolution of the heat flow paradox and improved understanding of earthquake precursory phenomena and dynamics. (C) 1999 Academic Press.
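
Two generic ingredients mentioned in the abstract, a velocity-Verlet update and a time step chosen adaptively from the fastest particle, can be sketched for a toy system of particles in a harmonic trap; the frictional half-time-step solve that the lattice solid model requires is not reproduced here, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, dx_max = 16, 10.0, 1e-3          # particles, spring stiffness, max displacement per step

pos = rng.uniform(0.0, 1.0, (n, 2))
vel = rng.normal(0.0, 0.1, (n, 2))

def forces(pos):
    """Toy force: each particle tethered to the origin by a linear spring."""
    return -k * pos

def adaptive_dt(vel, dt_max=1e-2):
    """Limit the step so no particle moves farther than dx_max."""
    vmax = np.max(np.linalg.norm(vel, axis=1))
    return min(dt_max, dx_max / vmax) if vmax > 0 else dt_max

f = forces(pos)
for _ in range(1000):
    dt = adaptive_dt(vel)
    vel += 0.5 * dt * f                 # half kick
    pos += dt * vel                     # drift
    f = forces(pos)
    vel += 0.5 * dt * f                 # half kick

# Total energy (unit masses): kinetic + harmonic potential.
print("total energy:", 0.5 * np.sum(vel**2) + 0.5 * k * np.sum(pos**2))
```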

Relevance:

100.00%

Publisher:

Abstract:

Since ancient times, humanity's quest to understand and attain happiness, quality of life, well-being, and health in their full bio-psycho-social sense has been evident. Accordingly, the general objective of this study was to analyze the relationships between perceptions of support (social support, social support at work, and organizational support) and well-being at work (job satisfaction, job involvement, and affective organizational commitment) among workers with disabilities, since research on people with disabilities (PWD) is still scarce. Work was chosen as the setting because it is an important element of social integration and a symbol of social recognition, valuing the ability to build contacts and establish social relationships. Forty-four workers with some type of disability, employed in operational, technical, and administrative positions, participated in the study. All were selected by convenience sampling; 24 (54.5%) were male and 20 (45.5%) female, aged between 18 and 65. The participants' disabilities could be classified into four categories: upper-limb disability, 9 workers (20.5%); lower-limb disability, 11 workers (25%); hearing impairment, 21 workers (47.7%); and visual impairment, 3 workers (6.8%). Data were collected with a self-administered questionnaire composed of six scales assessing job satisfaction, job involvement, and organizational commitment, as well as social support, social support at work, and organizational support. Descriptive statistical analyses were carried out, differences between means were tested, and correlation coefficients between variables were calculated. In terms of job satisfaction, the results reveal no discrepancies with respect to studies of workers without disabilities. It was also observed that the PWD declare pride in the company where they work and report being content, enthusiastic, interested, and excited about the employing organization. The study revealed that the PWD obtain from their social network emotional help that gives them a feeling of support in the face of difficulties or emotional needs, probably because they understand that they can count on this network to celebrate achievements and successes, as well as to receive affection and consolation when frustrated or going through sad moments. It can also be stated that the PWD perceive that this same network would be able to provide them with practical support, such as information about their health and possibly rehabilitation, information for professional development, or even follow-up of their progress, including the search for new opportunities and challenges for personal and professional growth. The results of this research indicate that the PWD tend to hold a strong conviction that the company they work for cares about their well-being and is willing to offer help when needed. Further results indicate that the PWD tend to strengthen their bond with work, experiencing greater satisfaction, as the support offered by the organization and by their social network inside and outside the work context increases. The analysis of all this material is the main contribution of this study, which is considered a pioneer in this discussion; future studies may confirm these results and add further information.

Relevance:

100.00%

Publisher:

Abstract:

The finite element method is now well established among engineers as an extremely useful tool in the analysis of problems with complicated boundary conditions. One aim of this thesis has been to produce a set of computer algorithms capable of efficiently analysing complex three-dimensional structures. This set of algorithms has been designed for versatility: provisions such as using only those parts of the system relevant to a given analysis, and the facility to extend the system by adding new elements, are incorporated. Five element types have been programmed, including prismatic members, rectangular plates, triangular plates and curved plates. The in-plane and out-of-plane stiffness matrices for a curved plate element are derived using the finite element technique. The performance of this type of element is compared with two other theoretical solutions as well as with a set of independent experimental observations, and additional experimental work was carried out by the author to further evaluate its acceptability. Finally, the analysis of two large civil engineering structures, the shell of an electrical precipitator and a concrete bridge, is presented to investigate the performance of the algorithms. Comparisons are made between the computer time, core store requirements and accuracy of the proposed system and those of another program.
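
The core bookkeeping such a program performs, scattering element stiffness matrices into a global matrix through a connectivity table and solving the constrained system, can be sketched in a few lines. The bar-element stiffness and the tiny two-element mesh below are illustrative and are not part of the thesis's element library.

```python
import numpy as np

def bar_stiffness(E, A, L):
    """2x2 stiffness matrix of a 1D axial bar element."""
    k = E * A / L
    return k * np.array([[1.0, -1.0],
                         [-1.0, 1.0]])

# Tiny illustrative mesh: 3 nodes, 2 bar elements in series.
connectivity = [(0, 1), (1, 2)]            # element -> global node numbers
n_dof = 3
K = np.zeros((n_dof, n_dof))

for nodes in connectivity:
    ke = bar_stiffness(E=210e9, A=1e-4, L=1.0)
    for a, i in enumerate(nodes):          # scatter-add element matrix into global matrix
        for b, j in enumerate(nodes):
            K[i, j] += ke[a, b]

# Fixed support at node 0, unit load at node 2; solve K u = f on the free DOFs.
f = np.array([0.0, 0.0, 1.0])
free = [1, 2]
u = np.zeros(n_dof)
u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
print("nodal displacements:", u)
```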

Relevance:

100.00%

Publisher:

Abstract:

δ37Cl values were determined for Izu Bonin arc magmas erupted 0-44 Ma in order to better understand the time-dependent processing of volatiles in subduction zones. Pristine ash-sized particles (glass, pumice, scoria, and rock fragments) were handpicked from tephra drilled at ODP Site 782. δ37Cl values for these particles span a large range from -2.1 to +1.7 per mil (error = ± 0.3 per mil) vs. SMOC (Standard Mean Ocean Chloride, defined as 0 per mil). The temporal data extend the previously reported range of δ37Cl values of -2.6 to 0.4 per mil (bulk ash) and -5.4 to -0.1 per mil (volcanic gases) from the Quaternary Izu Bonin-Mariana volcanic front to more positive values. Overall, the temporal data indicate a time-progressive evolution, from isotopically negative Eocene and Oligocene magmas (-0.7 ± 1.1 per mil, n = 10) to Neogene magmas that have higher δ37Cl values on average (+0.3 ± 1.1 per mil; n = 13). The increase is due to the emergence of positive δ37Cl values in the Neogene, while minimum δ37Cl values are similar through time. The range in δ37Cl values cannot be attributed to fractionation during melt formation and differentiation, and must reflect the diversity of Cl present in the arc magma sources. Cl clearly derives from the slab (> 96% Cl in arc magmas), but δ37Cl values do not correlate with isotope tracers (e.g. 207Pb/204Pb and 87Sr/86Sr) that are indicative of the flux from subducting sedimentary and igneous crust. Given the steady, high Cl flux since at least 42 Ma, the temporal variability of δ37Cl values is best explained by a flux from subducting isotopically positive and negative serpentinite formed in the ocean basins that mingles with and possibly overprints the isotopically negative flux from sediment and igneous crust at arc front depths. The change in the δ37Cl values before and after backarc spreading may reflect either a tectonically induced change in the mechanism of serpentinite formation on the oceanic plate, or possibly the integration of isotopically positive wedge serpentinite as arc fluid source during the Neogene. Our study suggests that serpentinites are important fluid sources at arc front depth, and implies the return of isotopically positive and negative Cl from the Earth surface to the mantle.

Relevance:

100.00%

Publisher:

Abstract:

We prove that a random Hilbert scheme that parametrizes the closed subschemes with a fixed Hilbert polynomial in some projective space is irreducible and nonsingular with probability greater than 0.5. To consider the set of nonempty Hilbert schemes as a probability space, we transform this set into a disjoint union of infinite binary trees, reinterpreting Macaulay's classification of admissible Hilbert polynomials. Choosing discrete probability distributions with infinite support on the trees establishes our notion of random Hilbert schemes. To bound the probability that random Hilbert schemes are irreducible and nonsingular, we show that at least half of the vertices in the binary trees correspond to Hilbert schemes with unique Borel-fixed points.

Relevance:

100.00%

Publisher:

Abstract:

Composition methods are useful when solving Ordinary Differential Equations (ODEs) because they increase the order of accuracy of a given basic numerical integration scheme. We focus on symmetric composition methods involving a basic second-order symmetric integrator applied with different step sizes [17]. The introduction of symmetries into these methods simplifies the order conditions and reduces the number of unknowns. Several authors have searched for the coefficients of this type of method: the best method of order 8 has 17 stages [24], methods of order 8 with 15 stages were given in [29, 39, 40], and methods of order 10 with 31, 33 and 35 stages have also been found [24, 34]. In this work we explore some techniques that we have developed to obtain order-10 symmetric composition methods of symmetric integrators with s = 31 stages (16 order conditions). Given starting coefficients that satisfy the simplest five order conditions, we describe the process followed to obtain coefficients that satisfy all sixteen order conditions.
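
The construction can be seen at low order: given a basic symmetric second-order method S(h), here Strang splitting for a harmonic oscillator, the classical triple-jump composition S(g1 h) S(g2 h) S(g1 h) with g1 = 1/(2 - 2^(1/3)) and g2 = 1 - 2 g1 raises the order to four. The order-10, 31-stage methods discussed above follow the same idea with many more stages; the coefficients below are the standard triple-jump values, not those sought in the paper.

```python
import numpy as np

def strang_step(q, p, h, omega=1.0):
    """Basic symmetric second-order method S(h): kick-drift-kick for q'' = -omega^2 q."""
    p -= 0.5 * h * omega**2 * q
    q += h * p
    p -= 0.5 * h * omega**2 * q
    return q, p

# Classical triple-jump coefficients: composing S(g1*h) S(g2*h) S(g1*h) gives order 4.
g1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
g2 = 1.0 - 2.0 * g1

def composed_step(q, p, h):
    for g in (g1, g2, g1):
        q, p = strang_step(q, p, g * h)
    return q, p

# Crude order check: halving h should reduce the error by roughly 2^4 = 16.
def final_error(h, t_end=10.0):
    q, p = 1.0, 0.0
    for _ in range(int(round(t_end / h))):
        q, p = composed_step(q, p, h)
    return abs(q - np.cos(t_end))

print(final_error(0.01) / final_error(0.005))   # expect a value close to 16
```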

Relevance:

100.00%

Publisher:

Abstract:

The idea of meta-cognitive learning has enriched the landscape of evolving systems, because it emulates three fundamental aspects of human learning: what-to-learn, how-to-learn, and when-to-learn. However, existing meta-cognitive algorithms still exclude Scaffolding theory, which can realize a plug-and-play classifier; consequently, these algorithms require laborious pre- and/or post-training processes in addition to the main training process. This paper introduces a novel meta-cognitive algorithm termed GENERIC-Classifier (gClass), where the how-to-learn part constitutes a synergy of Scaffolding Theory, a tutoring theory that fosters the ability to sort out complex learning tasks, and Schema Theory, a learning theory of knowledge acquisition by humans. The what-to-learn aspect adopts an online active learning concept by virtue of an extended conflict-and-ignorance method, making gClass an incremental semi-supervised classifier, whereas the when-to-learn component makes use of the standard sample-reserved strategy. A generalized version of the Takagi-Sugeno-Kang (TSK) fuzzy system is devised to serve as the cognitive constituent: the rule premise is underpinned by multivariate Gaussian functions, while the rule consequent employs a subset of the non-linear Chebyshev polynomials. Thorough empirical studies, confirmed by the corresponding statistical tests, have numerically validated the efficacy of gClass, which delivers better classification rates than state-of-the-art classifiers while having lower complexity.
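
The cognitive constituent described, a TSK rule with a multivariate Gaussian premise and a Chebyshev-polynomial consequent, can be illustrated for a single rule and a two-dimensional input as below; the rule centre, covariance, and consequent weights are made up, and none of gClass's learning machinery (scaffolding, active learning, rule growing and pruning) is shown.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def gaussian_firing(x, centre, cov):
    """Multivariate Gaussian membership (rule premise) for input x."""
    d = x - centre
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d))

def chebyshev_consequent(x, weights, degree=2):
    """Rule consequent: weighted sum of Chebyshev terms T_0..T_degree per input dimension."""
    feats = np.concatenate([C.chebvander(xi, degree) for xi in np.atleast_1d(x)]).ravel()
    return float(weights @ feats)

# One made-up rule on a 2-D input.
centre = np.array([0.0, 0.5])
cov = np.diag([0.2, 0.3])
weights = np.array([0.1, -0.4, 0.7, 0.2, 0.5, -0.3])   # (degree+1) terms per dimension

x = np.array([0.1, 0.4])
w = gaussian_firing(x, centre, cov)
y = chebyshev_consequent(x, weights)
print("firing strength:", w, "rule output:", y, "weighted output:", w * y)
```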