14 results for Fully-Implicit Method

at Universidad Politécnica de Madrid


Relevance:

80.00%

Publisher:

Abstract:

We aim at understanding the multislip behaviour of metals subject to irreversible deformations at small scales. By focusing on the simple shear of a constrained single-crystal strip, we show that discrete Dislocation Dynamics (DD) simulations predict a strong latent hardening size effect, with smaller being stronger in the range [1.5 µm, 6 µm] for the strip height. We attempt to represent the DD pseudo-experimental results by developing a flow theory of Strain Gradient Crystal Plasticity (SGCP) involving both energetic and dissipative higher-order terms and, as a main novelty, a strain gradient extension of conventional latent hardening. To assess the capability of the proposed SGCP theory, we implement it in a Finite Element (FE) code and set its material parameters on the basis of the DD results. The SGCP FE code is developed specifically for the boundary value problem under study, so that a fully implicit (Backward Euler) consistent algorithm can be implemented. Special emphasis is placed on the role of the material length scales involved in the SGCP model, from both the mechanical and the numerical points of view.
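
The abstract names a fully implicit (Backward Euler) consistent algorithm without giving details; below is a minimal sketch of the general idea for a single scalar internal variable, solved with a Newton iteration. The power-law flow rule and all numbers are invented placeholders, not the paper's SGCP constitutive equations.

```python
def flow_rate(gamma, tau=50.0, g0=20.0, n=5.0):
    """Hypothetical power-law flow rule dgamma/dt = (tau / g)^n with
    linear hardening g(gamma) = g0 + 10*gamma (all values invented)."""
    return (tau / (g0 + 10.0 * gamma)) ** n

def backward_euler_step(gamma_n, dt, tol=1e-10, max_iter=25):
    """One fully implicit step: solve r(g) = g - gamma_n - dt*flow_rate(g) = 0."""
    gamma = gamma_n                       # initial guess: last converged state
    for _ in range(max_iter):
        r = gamma - gamma_n - dt * flow_rate(gamma)
        if abs(r) < tol:
            return gamma
        eps = 1e-8                        # numerical tangent; a consistent
        dr = 1.0 - dt * (flow_rate(gamma + eps) - flow_rate(gamma)) / eps
        gamma -= r / dr                   # algorithm would use the exact one
    raise RuntimeError("Newton iteration did not converge")

gamma, dt = 0.0, 0.01
for _ in range(100):
    gamma = backward_euler_step(gamma, dt)
print(f"accumulated slip after 100 implicit steps: {gamma:.4f}")
```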


Relevance:

30.00%

Publisher:

Abstract:

Researchers in ecology commonly use multivariate analyses (e.g. redundancy analysis, canonical correspondence analysis, Mantel correlation, multivariate analysis of variance) to interpret patterns in biological data and relate these patterns to environmental predictors. There has been, however, little recognition of the errors associated with biological data and the influence these may have on predictions derived from ecological hypotheses. We present a permutational method that assesses the effects of taxonomic uncertainty on the multivariate analyses typically used in the analysis of ecological data. The procedure is based on iterative randomizations that randomly re-assign unidentified species in each site to any of the other species found in the remaining sites. After each re-assignment of species identities, the multivariate method of interest is run and a parameter of interest is calculated. Consequently, one can estimate a range of plausible values for the parameter of interest under different scenarios of re-assigned species identities. We demonstrate the use of our approach in the calculation of two parameters with an example involving tropical tree species from western Amazonia: 1) the Mantel correlation between compositional similarity and environmental distances between pairs of sites; and 2) the variance explained by environmental predictors in redundancy analysis (RDA). We also investigated the effects of increasing taxonomic uncertainty (i.e. the number of unidentified species), and of the taxonomic resolution at which morphospecies are determined (genus resolution, family resolution, or fully undetermined species), on the uncertainty range of these parameters. To achieve this, we performed simulations on a tree dataset from southern Mexico by randomly selecting a portion of the species contained in the dataset and classifying them as unidentified at each level of decreasing taxonomic resolution. An analysis of covariance showed that both taxonomic uncertainty and resolution significantly influence the uncertainty range of the resulting parameters. Increasing taxonomic uncertainty widens the uncertainty range of the parameters estimated in both the Mantel test and the RDA; the effects of increasing taxonomic resolution, however, are not as evident. The method presented in this study improves on traditional approaches to the study of compositional change in ecological communities by accounting for some of the uncertainty inherent in biological data. We hope that this approach can be routinely used to estimate any parameter of interest obtained from compositional data tables when faced with taxonomic uncertainty.
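
As a rough sketch of the permutational idea (not the authors' implementation), the code below repeatedly merges each unidentified column of a toy site-by-species table into a randomly chosen identified species and recomputes a Mantel-type correlation between compositional and environmental distances, yielding a range of plausible values. All data, the merge rule, and the iteration count are invented for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

rng = np.random.default_rng(42)

# Toy data: 10 sites x 25 species abundance table and one environmental variable.
abund = rng.poisson(2.0, size=(10, 25)).astype(float)
env = rng.normal(size=(10, 1))
unidentified = [3, 7, 12]          # column indices treated as morphospecies
identified = [j for j in range(25) if j not in unidentified]

def mantel_r(comm):
    """Pearson correlation between compositional and environmental distances."""
    d_comm = pdist(comm, metric="braycurtis")
    d_env = pdist(env, metric="euclidean")
    return pearsonr(d_comm, d_env)[0]

estimates = []
for _ in range(999):
    comm = abund.copy()
    for j in unidentified:
        # Re-assign each unidentified column to a random identified species.
        target = rng.choice(identified)
        comm[:, target] += comm[:, j]
        comm[:, j] = 0.0
    estimates.append(mantel_r(comm))

lo, hi = np.percentile(estimates, [2.5, 97.5])
print(f"Mantel r plausible range: [{lo:.3f}, {hi:.3f}]")
```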

Relevance:

30.00%

Publisher:

Abstract:

A joint research project to develop an efficient method for the automated identification and quantification of ores [1], based on Reflected Light Microscopy (RLM) in the VNIR realm (Fig. 1), provides an alternative to the modern SEM-based equipment used by geometallurgists at roughly one tenth of the price.

Relevance:

30.00%

Publisher:

Abstract:

A novel time integration scheme is presented for the numerical solution of the dynamics of discrete systems consisting of point masses and thermo-visco-elastic springs. Even when fully coupled constitutive laws are considered for the elements, the obtained solutions strictly preserve the two laws of thermodynamics and the symmetries of the continuum evolution equations. Moreover, the unconditional control over the energy and the entropy growth has the effect of stabilizing the numerical solution, allowing the use of larger time steps than those suitable for comparable implicit algorithms. Proofs of these claims are provided in the article, as well as numerical examples that illustrate the performance of the method.
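
The scheme itself is not given in the abstract; as a taste of structure-preserving time integration, here is a minimal sketch of the implicit midpoint rule for a frictionless mass-spring system, which preserves the quadratic energy of this linear problem to machine precision. The authors' thermo-visco-elastic scheme is considerably more general, and the parameters below are arbitrary.

```python
import numpy as np

m, k, dt = 1.0, 4.0, 0.1              # mass, stiffness, time step (arbitrary)
q, p = 1.0, 0.0                       # initial position and momentum

def energy(q, p):
    return 0.5 * p**2 / m + 0.5 * k * q**2

E0 = energy(q, p)
for _ in range(1000):
    # Implicit midpoint: evaluate forces at the averaged state. For this
    # linear system the implicit update reduces to a 2x2 linear solve.
    A = np.array([[1.0, -dt / (2 * m)],
                  [dt * k / 2, 1.0]])
    b = np.array([q + dt * p / (2 * m), p - dt * k * q / 2])
    q, p = np.linalg.solve(A, b)

print(f"relative energy drift after 1000 steps: {abs(energy(q, p) - E0) / E0:.2e}")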

Relevance:

30.00%

Publisher:

Abstract:

This paper analyses the noise and gain measurement of microwave differential amplifiers using two passive baluns. A model of the baluns that includes potential losses and unbalances has been considered. This analysis makes it possible to de-embed the actual performance of the differential device from the single-ended measurements of the two-port cascaded system and the baluns. The method has been validated with measured results from a fully-differential amplifier prototype.
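
The de-embedding algebra is not spelled out in the abstract; a common pattern, sketched below under the simplifying assumption that each balun path can be modeled as an effective two-port, represents all blocks as ABCD matrices, so the device matrix is recovered by multiplying the measured cascade by the inverses of the balun models. The S-parameters and the toy DUT are invented.

```python
import numpy as np

def s_to_abcd(s, z0=50.0):
    """Convert a 2x2 S-parameter matrix to an ABCD matrix (standard formulas)."""
    s11, s12, s21, s22 = s[0, 0], s[0, 1], s[1, 0], s[1, 1]
    den = 2.0 * s21
    a = ((1 + s11) * (1 - s22) + s12 * s21) / den
    b = z0 * ((1 + s11) * (1 + s22) - s12 * s21) / den
    c = ((1 - s11) * (1 - s22) - s12 * s21) / (z0 * den)
    d = ((1 - s11) * (1 + s22) + s12 * s21) / den
    return np.array([[a, b], [c, d]])

# Invented single-frequency balun model (real baluns are three-ports; here each
# balanced path plus combiner is lumped into one effective lossy two-port).
A_balun_in = s_to_abcd(np.array([[0.05, 0.95], [0.95, 0.05]], dtype=complex))
A_balun_out = A_balun_in              # assume identical baluns for this toy

# Toy DUT represented by an arbitrary (invertible) ABCD matrix.
A_dut_true = np.array([[0.5, 10.0], [0.001, 0.5]], dtype=complex)
A_meas = A_balun_in @ A_dut_true @ A_balun_out   # the measured cascade

# De-embedding: strip the balun models off both sides of the measurement.
A_dut = np.linalg.inv(A_balun_in) @ A_meas @ np.linalg.inv(A_balun_out)
print(np.allclose(A_dut, A_dut_true))            # True
```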

Relevance:

30.00%

Publisher:

Abstract:

In this work, a new two-dimensional optics design method is proposed that enables the coupling of three ray sets with two lens surfaces. The method is especially important for optical systems designed for a wide field of view and with clearly separated optical surfaces. Fermat's principle is used to deduce a set of functional differential equations fully describing the entire optical system. The presented general analytic solution makes it possible to calculate the lens profiles. Ray tracing results for lens profiles described by calculated 15th-order Taylor polynomials demonstrate excellent imaging performance and the versatility of this new analytic design method.
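
The functional-differential-equation derivation cannot be reproduced here; as a small companion sketch, the following traces a single ray through one refracting profile given by an even polynomial (a stand-in for the paper's 15th-order Taylor expansions), using vector refraction (Snell's law). Surface coefficients, refractive indices, and the bracketing interval are invented.

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical lens profile x = f(y): a truncated even polynomial.
c0, c2, c4 = 5.0, 0.05, -0.001

def profile(y):
    return c0 + c2 * y**2 + c4 * y**4

def profile_slope(y):
    return 2 * c2 * y + 4 * c4 * y**3

def refract(d, n, eta):
    """Vector Snell's law: d, n unit vectors; eta = n_incident / n_transmitted."""
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        raise ValueError("total internal reflection")
    return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

def trace(p0, d, eta=1.0 / 1.5):
    # Intersect the ray p(t) = p0 + t*d with the surface x = profile(y);
    # the bracket [0, 100] assumes the ray crosses the surface in that range.
    g = lambda t: p0[0] + t * d[0] - profile(p0[1] + t * d[1])
    t_hit = brentq(g, 0.0, 100.0)
    p = p0 + t_hit * d
    # Surface normal from the gradient of F(x, y) = x - profile(y),
    # oriented against the incoming ray.
    n = np.array([1.0, -profile_slope(p[1])])
    n /= np.linalg.norm(n)
    if np.dot(n, d) > 0:
        n = -n
    return p, refract(d, n, eta)

p, d_out = trace(np.array([0.0, 2.0]), np.array([1.0, 0.0]))
print("hit point:", p, "refracted direction:", d_out)
```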

Relevance:

30.00%

Publisher:

Abstract:

We propose the use of a highly accurate three-dimensional (3D) fully automatic hp-adaptive finite element method (FEM) for the characterization of rectangular waveguide discontinuities. These discontinuities are either the unavoidable result of mechanical/electrical transitions or are deliberately introduced to perform certain electrical functions in modern communication systems. The proposed numerical method combines the geometrical flexibility of finite elements with an accuracy that is often superior to that provided by semi-analytical methods. It supports anisotropic refinements on irregular meshes with hanging nodes, as well as isoparametric elements, and makes use of hexahedral elements compatible with high-order H(curl) discretizations. The 3D hp-adaptive FEM is applied for the first time to solve a wide range of 3D waveguide discontinuity problems of microwave communication systems, in which exponential convergence of the error is observed.

Relevance:

30.00%

Publisher:

Abstract:

The boundary element method (BEM) has been applied successfully to many engineering problems during the last decades. Compared with domain-type methods like the finite element method (FEM) or the finite difference method (FDM), the BEM can handle problems where the medium extends to infinity much more easily, as there is no need to develop special boundary conditions (quiet or absorbing boundaries) or infinite elements at the boundaries introduced to limit the domain studied. The determination of the dynamic stiffness of arbitrarily shaped footings is just one of the fields where the BEM has been the method of choice, especially in the 1980s. With the continuous development of computer technology and the available hardware, the size of the problems under study grew and, as the flop count for solving the resulting linear system of equations grows with the third power of the number of equations, there was a need for iterative methods with better performance. The GMRES algorithm, presented in [1], is now widely used for implementations of the collocation BEM. While the FEM results in sparsely populated coefficient matrices, the BEM leads, in general, to fully or densely populated ones, depending on the number of subregions, posing a serious memory problem even for today's computers. If the geometry of the problem permits the surface of the domain to be meshed with equally shaped elements, many of the resulting coefficients will be calculated and stored repeatedly. The present paper shows how these unnecessary operations can be avoided, reducing the calculation time as well as the storage requirement. To this end, a similar coefficient identification algorithm (SCIA) has been developed and implemented in a program written in Fortran 90. The vertical dynamic stiffness of a single pile in layered soil has been chosen to test the performance of the implementation. The results obtained with the 3D model may be compared with those obtained with an axisymmetric formulation, which are considered to be the reference values since the mesh quality is much better. The entire 3D model comprises more than 35000 dofs, the biggest single region being a soil region with 21168 dofs. Note that the memory needed to store all coefficients of this single region is about 6.8 GB, an amount which is usually not available on personal computers. In the problem under study, the interface zone between the two adjacent soil regions as well as the surface of the top layer may be meshed with equally sized elements; in this case the application of the SCIA leads to an important reduction in memory requirements. The maximum memory used during the calculation has been reduced to 1.2 GB. The application of the SCIA thus permits problems to be solved on personal computers which would otherwise require much more powerful hardware.
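
The abstract only names the SCIA; the sketch below illustrates the underlying idea under loose assumptions: for a translation-invariant kernel and identically shaped elements, two element pairs with the same relative placement share the same influence coefficient, so a cache keyed on the quantized relative geometry avoids recomputing and re-storing it. The kernel and the keying rule are invented stand-ins, not the Fortran 90 implementation.

```python
import numpy as np

COEFF_CACHE = {}

def pair_key(src_center, fld_center, size, decimals=9):
    """Key on relative position and element size: equal keys imply equal
    coefficients for translation-invariant kernels and identical elements."""
    rel = np.round(np.asarray(fld_center) - np.asarray(src_center), decimals)
    return (*rel, round(size, decimals))

def influence_coefficient(src_center, fld_center, size):
    key = pair_key(src_center, fld_center, size)
    if key not in COEFF_CACHE:
        # Stand-in 1/r kernel with a one-point rule. Singular self-terms need
        # special integration in a real BEM; guarded here only to stay finite.
        r = np.linalg.norm(np.asarray(fld_center) - np.asarray(src_center))
        COEFF_CACHE[key] = size**2 / (4.0 * np.pi * max(r, 1e-12))
    return COEFF_CACHE[key]

# Regular grid of equally sized elements: many pairs share relative geometry.
centers = [(i * 1.0, j * 1.0, 0.0) for i in range(20) for j in range(20)]
n = len(centers)
H = np.empty((n, n))
for a, ca in enumerate(centers):
    for b, cb in enumerate(centers):
        H[a, b] = influence_coefficient(ca, cb, size=1.0)

print(f"{n * n} matrix entries, only {len(COEFF_CACHE)} unique coefficients stored")
```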

Relevance:

30.00%

Publisher:

Abstract:

In this Master's Thesis, a new Distributed Award Protocol (DAP) for robot communication and cooperation is presented. Task assignment (contract awarding) is done dynamically, with contracts assigned to robots based upon the best bid received. Instead of having a manager and a contractor, a fully distributed bidding/awarding mechanism without a distinguished master is proposed. The best-bidding robots are awarded contracts for execution, and the contractors make decisions locally. This brings the following benefits: no communication bottleneck, low computational power requirements, and increased robustness. DAP can handle multitasking: tasks can be injected into the system during the execution of already allocated tasks, and, as tasks have priorities, tasks can be re-allocated in the next cycle after taking into account the actual bid parameters of all robots. The aim is to minimize a global cost function that is a compromise between the cost of task execution and the cost of resource usage. Information about tasks and bid values is spread among robots with the use of a Round Robin Route, a novel solution proposed in this work. This method also allows failed robots to be identified: a failed robot is eliminated from the list of awarded robots and a replacement is found, so the task is still executed by a team. If the failure of a robot was temporary (e.g. communication noise) and the robot recovers, it can again participate in the next bidding/awarding process. Using a bidding/awarding mechanism allows robots to dynamically relocate among tasks, which also contributes to system robustness. DAP was evaluated through multiple experiments in a multi-robot simulation system. Various scenarios were tested to check the idea of the main algorithm. Different robot failures (communication failures, partial hardware malfunctions) were simulated, and observations were made regarding how DAP recovers from them. The flexibility of DAP with respect to environmental changes was also examined. The experiments in the simulated environment confirmed the above features of DAP.
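
Only the outline of DAP is given in the abstract; the following is a rough single-round sketch, with invented cost functions, of a masterless bid/award cycle: every robot evaluates its own bid for each open task, the bid table is shared (standing in for the Round Robin Route), and each robot applies the same deterministic award rule, so all robots reach the same assignment without a central manager.

```python
import math

robots = {"r1": (0, 0), "r2": (5, 5), "r3": (9, 1)}   # robot positions (toy)
tasks = {"t1": (1, 1), "t2": (8, 2)}                   # task positions (toy)

def bid(robot_pos, task_pos, load=0.0):
    """Invented cost: travel distance plus a penalty for already-assigned work."""
    return math.dist(robot_pos, task_pos) + 3.0 * load

def award_round(robots, tasks):
    # Every robot broadcasts its bids; here a shared table stands in for
    # the Round Robin Route message passing.
    assignment, load = {}, {r: 0.0 for r in robots}
    for task, tpos in sorted(tasks.items()):           # deterministic order
        best = min(robots, key=lambda r: bid(robots[r], tpos, load[r]))
        assignment[task] = best
        load[best] += 1.0
    return assignment

print(award_round(robots, tasks))   # {'t1': 'r1', 't2': 'r3'}
```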

Relevance:

30.00%

Publisher:

Abstract:

All interconnected regulated systems are prone to impedance-based interactions, making them sensitive to instability and transient-performance degradation. The applied control method significantly affects the characteristics of the converter in terms of its sensitivity to different impedance interactions. This paper provides, for the first time, the whole set of impedance-type internal parameters and the formulas according to which the interaction sensitivity can be fully explained and analyzed. The formulation given in this paper can be utilized equally well based either on measured frequency responses or on predicted analytic transfer functions. Usually, distributed dc-dc systems are constructed using ready-made power modules, without thorough knowledge of the actual power-stage and control-system designs; as a consequence, the interaction characterization has to be based on the frequency responses measurable via the input and output terminals. A buck converter with four different control methods is experimentally characterized in the frequency domain to demonstrate the effect of the control method on the interaction sensitivity. The presented analytical models are used to explain the phenomena behind the changes in the interaction sensitivity.
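
As a minimal illustration of impedance-based interaction analysis (a generic minor-loop-gain check, not this paper's full set of internal parameters), the sketch below forms the ratio of a source converter's output impedance to a load converter's input impedance over frequency and reports the worst-case magnitude. Both impedance models are invented.

```python
import numpy as np

f = np.logspace(1, 5, 400)                 # 10 Hz .. 100 kHz
s = 2j * np.pi * f

# Invented models: a damped inductive source output impedance and a
# constant-power-load-like input impedance (negative incremental resistance).
Zo_source = (0.05 + s * 10e-6) / (1 + s * 10e-6 / 0.5)
Zin_load = -2.0 + s * 50e-6 + 1.0 / (s * 200e-6)

# Minor-loop gain: the interaction is benign if |L| stays well below 1
# (Middlebrook-style criterion); otherwise stability margins must be checked.
L = Zo_source / Zin_load
i_worst = np.argmax(np.abs(L))
print(f"max |Zo/Zin| = {np.abs(L[i_worst]):.3f} at {f[i_worst]:.1f} Hz")
```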

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, the τ-estimation method for estimating the truncation error is extended from low-order to spectral methods. While most works in the literature rely on fully time-converged solutions on grids with different spacing to perform the estimation, only one grid with different polynomial orders is used in this work. Furthermore, a non-time-converged solution is used, resulting in the quasi-a priori τ-estimation method, which estimates the error while the residual of the time-iterative method is not yet negligible. It is shown in this work that some of the fundamental assumptions about the error behaviour, well established for low-order methods, are no longer valid in high-order schemes, making a complete revision of the error behaviour necessary before redefining the algorithm. To facilitate this task, the Chebyshev Collocation Method is considered as a first step, limiting its application to simple geometries. The extension to the Discontinuous Galerkin Spectral Element Method introduces additional difficulties for the accurate definition and estimation of the error, due to the weak formulation, the multidomain discretization and the discontinuous formulation. First, the analysis focuses on scalar conservation laws to examine the accuracy of the estimation of the truncation error. Then, the validity of the analysis is shown for the incompressible and compressible Euler and Navier-Stokes equations. The developed quasi-a priori τ-estimation method makes it possible to decouple the interfacial and the interior contributions of the truncation error in the Discontinuous Galerkin Spectral Element Method, and provides information about the anisotropy of the solution as well as its rate of convergence in polynomial order. It is demonstrated that this quasi-a priori approach yields a spectrally accurate estimate of the truncation error.
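
To make the idea concrete, here is a toy mesh-based analogue of τ-estimation for a 1D Poisson problem: the fine-grid solution is injected into the coarse-grid operator, and the resulting residual, corrected for the grid ratio, estimates the coarse-grid truncation error. The thesis instead uses a single mesh with different polynomial orders and non-time-converged (quasi-a priori) solutions; this sketch, with its invented model problem, only shows the mechanism.

```python
import numpy as np

def laplacian(n, h):
    """Standard 3-point finite-difference operator for u'' with Dirichlet BCs."""
    return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / h**2

# Model problem: u'' = f on (0,1), u(0)=u(1)=0, manufactured so u = sin(pi x).
f = lambda x: -np.pi**2 * np.sin(np.pi * x)
u_exact = lambda x: np.sin(np.pi * x)

N = 32                                   # coarse interior points
x_c = np.linspace(0.0, 1.0, N + 2)[1:-1]
h_c = x_c[1] - x_c[0]

M = 2 * N + 1                            # fine grid; coarse nodes are a subset
x_f = np.linspace(0.0, 1.0, M + 2)[1:-1]
u_f = np.linalg.solve(laplacian(M, x_f[1] - x_f[0]), f(x_f))

# tau-estimation: inject the (better) fine solution into the coarse operator.
u_inj = u_f[1::2]                        # fine nodes coinciding with coarse ones
tau_est = laplacian(N, h_c) @ u_inj - f(x_c)
tau_est /= 1.0 - 0.25                    # grid-ratio correction: 1 - (1/2)^2

tau_true = laplacian(N, h_c) @ u_exact(x_c) - f(x_c)
print(f"max|tau_true| = {np.max(np.abs(tau_true)):.3e}, "
      f"max|tau_est| = {np.max(np.abs(tau_est)):.3e}")
```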

Relevance:

30.00%

Publisher:

Abstract:

Reliability analyses provide an adequate tool to account for the inherent uncertainties in geotechnical parameters. This dissertation develops a simple linearization-based approach, using first- or second-order approximations, to efficiently evaluate the system reliability of geotechnical problems. First, reliability methods are employed to analyze two tunnel design aspects: face stability and the performance of support systems. Several reliability approaches are employed: the First Order Reliability Method (FORM), the Second Order Reliability Method (SORM), the Response Surface Method (RSM) and Importance Sampling (IS). The results show that the assumed distribution types and correlation structures of the random variables have a significant effect on the reliability results, which emphasizes the importance of an adequate characterization of geotechnical uncertainties in practical applications. Results also show that both FORM and SORM can be used to estimate the reliability of tunnel support systems, and that SORM can outperform FORM at an acceptable additional computational cost. A linearization approach is then developed to evaluate the system reliability of series geotechnical problems. The approach only needs the information provided by FORM: the vector of reliability indices of the limit state functions (LSFs) composing the system, and their correlation matrix. Two common geotechnical problems, the stability of a slope in layered soil and a circular tunnel excavated in rock, are employed to demonstrate the simplicity, accuracy and efficiency of the suggested procedure, and its advantages with respect to alternative computational tools are discussed. It is also shown that, if necessary, SORM, which approximates the true LSF better than FORM, can be employed to compute better estimates of the system reliability. Finally, a new approach using Genetic Algorithms (GAs) is presented to accurately identify the representative slip surfaces (RSSs) of layered soil slopes; these RSSs are then employed to estimate the system reliability of the slopes using the proposed linearization approach. Three typical benchmark slopes with layered soils are adopted to demonstrate the efficiency, accuracy and robustness of the suggested procedure, and its advantages with respect to alternative methods are discussed. The results show that the proposed approach provides reliability estimates that improve on previously published results, emphasizing the importance of finding good RSSs, and especially good (probabilistic) critical slip surfaces, which might be non-circular, to obtain good estimates of the reliability of soil slope systems.
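
A hedged sketch of the kind of computation the linearization approach enables: given the FORM reliability indices of the limit state functions and their correlation matrix, the failure probability of a series system can be approximated with a multivariate normal CDF. The β values and correlation matrix below are invented.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

# FORM output for a hypothetical three-component series system:
beta = np.array([2.5, 3.0, 2.8])            # reliability indices of the LSFs
R = np.array([[1.0, 0.6, 0.3],              # correlation matrix of the LSFs
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# Series system: failure if ANY limit state is violated. Under the linearized
# (FORM) model, P_f = 1 - P(all safe) = 1 - Phi_n(beta; R).
p_safe = multivariate_normal(mean=np.zeros(3), cov=R).cdf(beta)
p_f_system = 1.0 - p_safe

# Simple bounds for a sanity check: the largest single-mode probability
# (lower bound) and the sum of component probabilities (upper bound).
p_i = norm.cdf(-beta)
print(f"system P_f ~ {p_f_system:.3e} "
      f"(bounds: [{p_i.max():.3e}, {p_i.sum():.3e}])")
```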

Relevance:

30.00%

Publisher:

Abstract:

A comparison of explicit and implicit time-integration schemes for the simulation of blood flow and its interaction with the arterial wall. There are two major strategies in FSI coupling techniques: implicit and explicit. The general difference between these methodologies is how many times data are exchanged between the fluid and solid domains at each FSI time step. In both coupling strategies, the pressure values coming from the fluid domain calculations at each time step are exported to the solid domain, and the solid domain is then analyzed with these imported forces. In contrast to explicit coupling, in the implicit approach the fluid and solid domain data are exchanged several times until convergence is achieved. Although this method may improve numerical stability, it increases the computational cost due to the extra data exchanges. In cardiovascular simulations, depending on the analysis objectives, one may choose an explicit or an implicit approach. In the current work, the advantage of an explicit coupling strategy is highlighted for the simulation of pulsatile blood flow in elastic arteries.
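
To make the explicit/implicit distinction concrete, here is a schematic sketch in which `solve_fluid` and `solve_solid` are invented placeholder maps, not calls to any particular FSI code: the explicit scheme exchanges interface data once per time step, while the implicit scheme sub-iterates the exchange until the interface displacement stops changing.

```python
def solve_fluid(wall_disp):
    # Placeholder: returns an interface pressure given the wall displacement.
    return 1.0 + 0.3 * wall_disp

def solve_solid(pressure):
    # Placeholder: returns the wall displacement produced by that pressure.
    return 0.5 * pressure

def explicit_step(wall_disp):
    """One exchange per time step: fluid -> solid, then advance."""
    return solve_solid(solve_fluid(wall_disp))

def implicit_step(wall_disp, tol=1e-10, max_iter=50):
    """Sub-iterate the fluid-solid exchange until the interface converges."""
    for _ in range(max_iter):
        new_disp = solve_solid(solve_fluid(wall_disp))
        if abs(new_disp - wall_disp) < tol:
            return new_disp
        wall_disp = new_disp
    raise RuntimeError("FSI sub-iterations did not converge")

d = 0.0
for _ in range(5):
    d = implicit_step(d)
print(f"converged interface displacement: {d:.6f}")
```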