920 results for Finite-Difference Method


Relevance: 90.00%

Abstract:

This paper presents a numerical implementation of the cohesive crack model for the analysis of quasibrittle materials, based on the strong discontinuity approach in the framework of the finite element method. A simple central force model is used for the stress versus crack opening curve. The additional degrees of freedom defining the crack opening are determined at the crack level, thus avoiding the need to perform a static condensation at the element level. The need for a tracking algorithm is avoided by using a consistent procedure for the selection of the separated nodes. The model, which takes into account the anisotropy of the material, is implemented in a commercial program by means of a user subroutine and contrasted with experimental results. Numerical simulations of well-known experiments are presented to show the ability of the proposed model to simulate the fracture of quasibrittle materials such as mortar, concrete and masonry.

Relevance: 90.00%

Abstract:

As is well known, the boundary element method (BEM) combines the integral representation formula of classical elasticity with the discretization philosophy of the finite element method (FEM). The paper presents the application of BEM to elastodynamic problems. Both transient and steady-state solutions are presented, as well as some techniques to simplify problems with a stress-free boundary.

Relevance: 90.00%

Abstract:

In hostile environments such as scientific facilities where ionising radiation is a dominant hazard, reducing human interventions by increasing robotic operations is desirable. CERN, the European Organization for Nuclear Research, has around 50 km of underground scientific facilities, where wireless mobile robots could help in the operation of the accelerator complex, e.g. in conducting remote inspections and radiation surveys in different areas. The main challenges to be considered are not only that the robots should be able to cover long distances and operate for relatively long periods, but also the underground tunnel environment, the possible presence of electromagnetic fields, radiation effects, and the fact that the robots must in no way interrupt the operation of the accelerators. A reliable and robust wireless communication system is essential for the successful execution of such robotic missions, and to avoid situations in which a robot must be recovered manually because it has run out of energy or lost its communication link.

The goal of this thesis is to provide the means to reduce the risk of mission failure and maximise the mission capabilities of wireless mobile robots with finite energy storage working in radiation environments with non-line-of-sight (NLOS) communications, by employing enhanced wireless communication methods. Towards this goal, the following research objectives are addressed: predict the communication range before and during robotic missions; and optimise and enhance the wireless communication quality of mobile robots by exploiting robot mobility and employing multi-robot networks. The thesis provides introductory information on the infrastructures in which mobile robots will need to operate, the tasks to be carried out, and the problems encountered in these environments. The research on improving wireless communication comprises an introduction to the relevant radio signal propagation theory and technology, followed by: an analysis of the wireless communication requirements of mobile robots for different tasks in a selection of CERN facilities; predictions of energy and communication autonomies (in terms of distance and time) to reduce the risk of energy- and communication-related failures during missions; autonomous navigation of a mobile robot to find zones of maximum radio signal strength to improve the communication coverage area; and autonomous navigation of one or more mobile robots acting as mobile wireless relay (repeater) points, in order to provide a tethered wireless connection to a teleoperated mobile robot carrying out inspection or radiation monitoring activities in a challenging radio environment.

The specific contributions of this thesis are outlined below. The first set of contributions are novel methods for predicting the energy autonomy and communication range before and after deployment of the mobile robots in the intended environments. This is important in order to provide situational awareness and avoid mission failures. The energy consumption is predicted using power consumption models of the different components in a mobile robot; this energy prediction model paves the way for choosing energy-efficient wireless communication strategies. The communication range prediction uses radio signal propagation models and applies radio signal strength (RSS) filtering and estimation techniques based on Kalman filters and Gaussian process models. The second set of contributions are methods to optimise the wireless communication quality using novel spatial-sampling-based techniques that are robust to sensing and radio field noise and provide redundancy. Central finite difference (CFD) methods are employed to determine the 2-D RSS gradients, and robot mobility is used to optimise the communication quality and the network throughput. This method is also validated with a case study involving haptic teleoperation of wireless mobile robots, in which an operator at a remote location can smoothly navigate a mobile robot in an environment with weak wireless signals. The third contribution is a robust stochastic position optimisation algorithm for multiple autonomous relay robots, which are used for wireless tethering of radio signals and thereby enhance the wireless communication quality.

All the proposed methods and algorithms are verified and validated using simulations and field experiments with a variety of mobile robots available at CERN. In summary, this thesis offers novel methods, and demonstrates their use, to predict energy autonomy and wireless communication range, optimise robot positions to improve communication quality, and enhance the communication range and wireless network qualities of mobile robots for applications in hostile environments such as scientific facilities emitting ionising radiation. In simpler terms, a set of tools is developed for improving, easing and making safer robotic missions in hostile environments. The thesis validates, both in theory and in experiments, that mobile robots can improve wireless communication quality by exploiting their mobility to dynamically optimise their positions and maintain connectivity even when the radio environment has non-line-of-sight characteristics. The methods developed are well suited for easy integration in mobile robots and can be applied directly at the application layer of the wireless network. The proposed methods have outperformed other comparable state-of-the-art methods.
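The central-difference RSS gradient idea in the second set of contributions can be sketched as follows. This is an illustration only: the quadratic RSS field, step size and sampling spacing are hypothetical stand-ins, not values from the thesis.

```python
import numpy as np

def rss_field(p):
    # Hypothetical smooth RSS field (dBm) peaking at (3, 1); stands in
    # for real signal-strength samples taken by the robot.
    return -60.0 - 0.5 * ((p[0] - 3.0) ** 2 + (p[1] - 1.0) ** 2)

def cfd_gradient(f, p, h=0.25):
    """2-D gradient of f at p estimated by central finite differences."""
    gx = (f(p + np.array([h, 0.0])) - f(p - np.array([h, 0.0]))) / (2 * h)
    gy = (f(p + np.array([0.0, h])) - f(p - np.array([0.0, h]))) / (2 * h)
    return np.array([gx, gy])

def climb(f, p0, step=0.5, iters=50):
    """Move along the estimated RSS gradient (gradient ascent on signal)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        p = p + step * cfd_gradient(f, p)
    return p

pos = climb(rss_field, [0.0, 0.0])   # converges towards the RSS maximum
```

In a real deployment the four samples per gradient estimate come from the robot physically visiting nearby points, and the field is noisy, which is why the thesis emphasises noise-robust spatial sampling.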

Relevance: 90.00%

Abstract:

Two mathematical models are used to simulate pollution in the Bay of Santander. The first is a hydrodynamic model that provides the velocity field and the height of the water. The second provides the resulting pollutant concentration field. Both models are formulated as two-dimensional equations. Linear triangular finite elements are used in the Galerkin procedure for the spatial discretization, and a finite difference scheme is used for the time integration. At each time step, the calculated results of the first model are input to the second model as field data. The efficiency and accuracy of the models are tested by applying them to a simple illustrative example. Finally, a case study simulating the evolution of pollution in the Bay of Santander is presented.
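The coupling of a spatial discretization with explicit finite difference time stepping can be illustrated with a 1-D analogue. This sketch uses simple finite differences in space rather than the paper's triangular Galerkin elements, and every parameter is made up: a pollutant plume advected by a fixed velocity while diffusing.

```python
import numpy as np

# Illustrative 1-D analogue of the transport model: c_t + u c_x = D c_xx,
# advanced with an explicit scheme (upwind advection, central diffusion).
nx, L = 101, 10.0
dx = L / (nx - 1)
u, D = 0.5, 0.05                             # velocity and diffusivity
dt = 0.4 * min(dx / u, dx * dx / (2 * D))    # respect CFL/diffusive limits

x = np.linspace(0.0, L, nx)
c = np.exp(-((x - 2.0) ** 2))                # initial pollutant plume

for _ in range(200):
    adv = -u * (c[1:-1] - c[:-2]) / dx                  # upwind (u > 0)
    dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx ** 2  # central diffusion
    c[1:-1] += dt * (adv + dif)
    c[0] = c[-1] = 0.0                                  # absorbing boundaries

peak = x[np.argmax(c)]   # the plume centre has been carried downstream
```

As in the paper's scheme, the velocity field is treated as known input data at each time step; here it is simply a constant.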

Relevance: 90.00%

Abstract:

A consistent finite element formulation was developed for four classical 1-D beam models. The formulation is based on the solution of the homogeneous differential equation (or equations) associated with each model. Results such as the shape functions, stiffness matrices and consistent force vectors for the constant-section beam were obtained. Some of these results were compared with the corresponding ones obtained by the standard finite element method (i.e., using polynomial expansions for the field variables). Some of the difficulties reported in the literature concerning some of these models may be avoided by this technique, and some numerical sensitivity analyses on this subject are presented.
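For the simplest of the classical models, the Euler-Bernoulli beam, the homogeneous solution of EI w'''' = 0 is itself a cubic, so the shape functions derived from it coincide with the standard Hermite cubics and the technique reproduces the classical stiffness matrix exactly. A minimal sketch of this special case (an illustration, not code from the paper):

```python
import numpy as np

def eb_stiffness(EI, L):
    """Exact stiffness matrix of a prismatic Euler-Bernoulli beam element.

    The homogeneous solution of EI w'''' = 0 is a cubic, so the shape
    functions built from it are the Hermite cubics and the stiffness
    matrix is the classical closed-form result.
    DOFs: [w1, theta1, w2, theta2].
    """
    k = EI / L ** 3
    return k * np.array([
        [ 12.0,   6 * L,     -12.0,   6 * L],
        [ 6 * L,  4 * L ** 2, -6 * L, 2 * L ** 2],
        [-12.0,  -6 * L,      12.0,  -6 * L],
        [ 6 * L,  2 * L ** 2, -6 * L, 4 * L ** 2],
    ])

K = eb_stiffness(EI=1.0, L=2.0)
```

As a check, clamping the first node and applying a unit tip load reproduces the exact cantilever tip deflection PL^3/(3EI); for shear-deformable or elastically supported models the homogeneous solutions are no longer polynomial and the two formulations differ, which is where the paper's approach pays off.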

Relevance: 90.00%

Abstract:

There are some 1,300 large dams in Spain, 20% of which were built before 1960. The fact that many old dams are still in operation has produced a growing interest in re-evaluating their safety using new or updated tools that incorporate state-of-the-art failure modes, geotechnical concepts and safety assessment techniques. For gravity dams, for instance, one common design approach considers sliding through the dam-foundation interface, using a simple linear Mohr-Coulomb failure criterion in which constant friction angle and cohesion parameters define the shear strength of the contact surface. But the influence of aspects such as the persistence of joint sets in the rock mass below the dam foundation, of other failure criteria proposed for rock joints and rock masses (e.g. the Hoek-Brown criterion), or of the volumetric strains that occur during plastic failure of rock masses (i.e., the influence of dilatancy) is often not considered during the original dam design.

In this context, an analytical methodology is proposed herein to assess the sliding stability of concrete dams, considering an extended failure mechanism in the rock foundation characterized by the presence of a joint set. In particular, the possibility of a pre-existing, impersistent, sub-horizontal joint in the foundation rock mass is considered, with a potential failure surface that could extend through the rock mass. The safety factor is then computed using a combination of strengths along the failure planes: the nonlinear Barton and Choubey (1977) and Barton and Bandis (1990) failure criteria along the sliding joint, and the nonlinear Hoek and Brown (1980) criterion in its generalized form (Hoek et al. 2002) through the rock mass. The proposed methodology also considers the influence of a non-associative flow rule, incorporated using a constant dilation angle (Hoek and Brown 1997).

The new analytical methodology is used to assess dam stability conditions with two models, one deterministic and one probabilistic, whose results are the sliding safety factor and the probability of failure, respectively. The deterministic model, implemented in MATLAB, is validated against numerical solutions computed with the finite difference code FLAC 6.0. It provides results very similar to those computed with FLAC, but at a significantly smaller computational cost, which makes it easy to conduct parametric analyses of the influence of the different input parameters on the dam's safety. From these analyses, the parameters with the greatest influence on the sliding stability of the structure are identified, and the influence of the flow rule on the failure of the rock mass is assessed. The probability of failure is obtained with the First Order Reliability Method (FORM), and the FORM results are then validated using Monte Carlo simulations.

The results obtained with both methodologies show good agreement for the non-associative case. The agreement is not as good for the associative case, with errors between 0.7% and 66%, although a good match is obtained for cases at, or close to, limit equilibrium conditions. Computational efficiency is the main advantage of FORM for sliding stability analyses of concrete dams: whereas the Monte Carlo simulations require at least 4 hours per run, FORM requires only 1 to 3 minutes.
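The FORM versus Monte Carlo comparison can be illustrated with a toy reliability problem. The resistance and load statistics below are invented, and the linear limit state, for which FORM is exact, stands in for the dam's far more involved nonlinear sliding model:

```python
import numpy as np
from math import erf, sqrt

# Illustrative reliability computation (not the dam model itself): limit
# state g = R - S with independent normal resistance R and load effect S.
# For this linear case FORM is exact: beta = (mu_R - mu_S)/sqrt(sR^2+sS^2),
# so a Monte Carlo estimate of P(g < 0) should match Phi(-beta).
rng = np.random.default_rng(0)
mu_R, s_R = 10.0, 2.0      # hypothetical resistance statistics
mu_S, s_S = 6.0, 1.5       # hypothetical load-effect statistics

beta = (mu_R - mu_S) / sqrt(s_R ** 2 + s_S ** 2)   # reliability index
pf_form = 0.5 * (1.0 - erf(beta / sqrt(2.0)))      # Phi(-beta)

n = 1_000_000
g = rng.normal(mu_R, s_R, n) - rng.normal(mu_S, s_S, n)
pf_mc = float(np.mean(g < 0.0))                    # Monte Carlo estimate
```

The sample count drives the cost contrast reported above: a Monte Carlo estimate of a small failure probability needs very many limit-state evaluations, while FORM needs only a search for the design point.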

Relevance: 90.00%

Abstract:

With the boom in mobile communications, terminals keep getting thinner and, at the same time, larger, since users want thin handsets with big screens. The main objective of this project is therefore to study and analyse the antennas used in mobile phones, specifically printed antennas. In recent years, with the growth of the services offered by mobile terminals, new frequency bands in which these terminals operate have been added. It has therefore become necessary to design antennas that work not in a single frequency band but in several, i.e., multiband antennas. The simulations and tests in this project were performed with the FEKO software suite, using both CAD FEKO and POST FEKO: CAD FEKO was used for the antenna design, while POST FEKO was used to analyse the simulations. Finally, although FEKO is based on the Method of Moments (MoM), it can employ several numerical methods. Besides the MoM, it can use other techniques, separately or hybridized, such as the Finite Element Method (FEM), Physical Optics (PO), Ray-Launching Geometrical Optics (RL-GO), the Uniform Theory of Diffraction (UTD) and the Finite Difference Time Domain (FDTD) method, among others.

Relevance: 90.00%

Abstract:

In the thin-film photovoltaic industry, achieving high light scattering at one or more of the cell interfaces is one of the strategies that enhance light absorption inside the cell and, therefore, improve device behaviour and efficiency. Although chemical etching is the standard method of texturing surfaces to improve scattering, laser texturing has emerged as a new way of processing different materials, maintaining good control of the final topography with a unique, clean and quite precise process. In this work, AZO films with different texture parameters are fabricated. The parameters typically used to characterize them, such as the root mean square roughness or the haze factor, are discussed and, for a deeper understanding of the scattering mechanisms, the behaviour of light in the films is simulated using a finite element method code. This method gives information about the light intensity at each point of the system, allowing a precise characterization of the scattering behaviour near the film surface, and it can also be used to calculate a simulated haze factor that can be compared with experimental measurements. A discussion of the validation of the numerical code, based on a comprehensive comparison with experimental data, is included.
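The haze factor mentioned above is commonly defined, wavelength by wavelength, as the ratio of diffusely transmitted to total transmitted light. A minimal sketch with made-up spectra (real measurements come from an integrating sphere, and the paper's simulated haze comes from the FEM intensity field):

```python
import numpy as np

# Haze factor for a textured transparent film: diffuse transmittance
# divided by total transmittance. The spectra below are placeholders
# standing in for measured or simulated data.
wavelength = np.array([400.0, 500.0, 600.0, 700.0])   # nm
T_total   = np.array([0.80, 0.85, 0.86, 0.87])        # total transmittance
T_diffuse = np.array([0.40, 0.30, 0.20, 0.12])        # diffuse part only

haze = T_diffuse / T_total    # dimensionless, 0 (specular) .. 1 (diffuse)
```

The typical decrease of haze towards long wavelengths, visible in these placeholder numbers, is why a single scalar haze value is an incomplete texture descriptor, motivating the full simulation in the paper.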

Relevance: 90.00%

Abstract:

Using the 3-D equations of linear elasticity and asymptotic expansion methods, with the beam cross-section area as the small parameter, different beam theories can be obtained according to the last term kept in the expansion. If only the first two terms of the asymptotic expansion are used, the classical beam theories can be recovered without resorting to any a priori additional hypotheses. Moreover, some small corrections and extensions of the classical beam theories can be found, and the asymptotic general beam theory can also be used as a basis for a straightforward derivation of the stiffness matrix and the equivalent nodal forces of the beam. In order to obtain the above results, a set of functions and constants depending only on the cross-section of the beam has to be computed as solutions of different 2-D Laplacian boundary value problems over the beam cross-section domain. In this paper two main numerical procedures to solve these boundary value problems are discussed, namely the Boundary Element Method (BEM) and the Finite Element Method (FEM). Results for some regular and geometrically simple cross-sections are presented and compared with those computed analytically. Extensions to other arbitrary cross-sections are illustrated.

Relevance: 90.00%

Abstract:

The thermal buckling behavior of automotive clutch and brake discs is studied using the finite element method. It is found that the temperature distribution along the radius and through the thickness affects the critical buckling load considerably. The results indicate that a monotonic temperature profile leads to a coning mode, with the highest temperature located at the inner radius, whereas a profile with the maximum temperature located in the middle leads to a dominant non-axisymmetric buckling mode, which results in a much higher buckling temperature. A periodic variation of temperature cannot lead to buckling. The temperature through the thickness can be simplified by the mean temperature method in the single-material model. The thermal buckling of friction discs with a friction material layer, cone angle geometry and fixed-teeth boundary conditions is also studied in detail. The cone angle geometry and the fixed teeth can raise the buckling temperature significantly. Young's modulus has no effect when a single material is used in the free or restrained conditions; several equations are derived to validate this result. The ratio of Young's moduli is a useful factor when the clutch has several material layers. The findings of this paper are useful for designing automotive clutch and brake discs against structural instability induced by thermal buckling.

Relevance: 90.00%

Abstract:

Context. The X-ray spectra observed in the persistent emission of magnetars are evidence for the existence of a magnetosphere. The high-energy part of the spectra is explained by resonant cyclotron upscattering of soft thermal photons in a twisted magnetosphere, which has motivated an increasing number of efforts to improve and generalize existing magnetosphere models. Aims. We want to build more general configurations of twisted, force-free magnetospheres as a first step towards understanding the role played by the magnetic field geometry in the observed spectra. Methods. First, we reviewed and extended previous analytical works to assess the viability and limitations of semi-analytical approaches. Second, we built a numerical code able to relax an initial configuration of a nonrotating magnetosphere to a force-free geometry, given an arbitrary form of the magnetic field at the stellar surface. The code relies on a finite-difference time-domain, divergence-free, conservative scheme based on the magneto-frictional method used in other scenarios. Results. We obtain new numerical configurations of twisted magnetospheres, with distributions of twist and currents that differ from previous analytical solutions. The range of global twist of the new family of solutions is similar to that of the existing semi-analytical models (up to a few radians), but the resulting geometry may be quite different. Conclusions. The geometry of twisted, force-free magnetospheres shows a wider variety of possibilities than previously considered. This has implications for the observed spectra and opens the possibility of implementing alternative models in radiative transfer simulations aimed at providing spectra to be compared with observations.

Relevance: 90.00%

Abstract:

Different non-Fourier models of heat conduction, which incorporate time lags in the heat flux and/or the temperature gradient, have been increasingly considered in recent years to model microscale heat transfer problems in engineering. Numerical schemes for obtaining approximate solutions of constant-coefficient lagging models of heat conduction have already been proposed. In this work, an explicit finite difference scheme for a model whose coefficients vary in time is developed, and its convergence and stability properties are studied. Numerical computations showing examples of applications of the scheme are presented.
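As a simplified illustration of an explicit finite difference scheme with a time-dependent coefficient, consider classical Fourier conduction rather than the lagging model analysed in the paper; all parameters here are hypothetical. The key practical point carries over: the stability restriction on the time step must hold for the largest coefficient value encountered.

```python
import numpy as np

# Explicit finite differences for u_t = k(t) u_xx with a time-dependent
# coefficient. Stability requires k(t)*dt/dx^2 <= 1/2 at every step, so
# dt is chosen against the largest k encountered.
nx = 51
dx = 1.0 / (nx - 1)
k = lambda t: 1.0 + 0.5 * np.sin(t)    # hypothetical variable coefficient
k_max = 1.5
dt = 0.4 * dx * dx / k_max             # safely within the stability limit

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)                  # initial temperature profile
t = 0.0
for _ in range(500):
    u[1:-1] += k(t) * dt / dx ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    t += dt                            # Dirichlet ends stay at zero
```

The sine profile decays while keeping its shape, which provides an easy convergence check against the exact solution of the variable-coefficient problem.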

Relevance: 90.00%

Abstract:

This paper shows the results obtained from more than 200 finite element method (FEM) models used to calculate the settlement of a foundation resting on two soils of differing deformability. The analysis considers parameters such as the foundation geometry, the percentage of each soil in contact with the foundation base and the ratio of the soils' elastic moduli. From this analysis, it is concluded that the maximum settlement of the foundation, calculated by assuming that the foundation rests entirely on the more deformable soil, can be correlated with the settlement calculated by the FEM models through a correction coefficient named the "settlement reduction factor" (α). As a consequence, a novel expression is proposed for calculating the real settlement of a foundation resting on two soils of differing deformability, with maximum errors lower than 1.57%, as demonstrated by the statistical analysis carried out. A guide for the application of the proposed simple method is also given in the paper. Finally, the methodology has been validated using settlement data from an instrumented foundation, showing that this is a simple, reliable and quick method that allows the computation of the maximum elastic settlement of a raft foundation, the evaluation of its suitability and the optimisation of its selection process.

Relevância:

90.00%

Publicador:

Resumo:

Numerical modelling methodologies are important for their application to engineering and scientific problems, because there are processes for which analytical mathematical expressions cannot be obtained. When the only available information is a set of experimental values for the variables that determine the state of the system, the modelling problem is equivalent to determining the hyper-surface that best fits the data. This paper presents a methodology based on the Galerkin formulation of the finite element method to obtain representations of relationships, defined a priori, between a set of variables: y = z(x1, x2, ..., xd). These representations are generated from the values of the variables in the experimental data. The piecewise approximation is an element of a Sobolev space and has derivatives defined in a generalized sense within this space. Using this approach leads to a linear system whose structure allows a fast solver algorithm. The algorithm can be used in a variety of fields, making it a multidisciplinary tool. The validity of the methodology is studied with two real applications: a problem in hydrodynamics and an engineering problem involving fluids, heat and transport in an energy generation plant. The predictive capacity of the methodology is also tested using a cross-validation method.
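A minimal sketch of the idea in one dimension, assuming a uniform mesh and piecewise-linear "hat" basis functions (a simplification of the Galerkin construction described above; the resulting normal matrix is tridiagonal, consistent with the fast-solver structure mentioned, although a dense solve is used here for brevity):

```python
import numpy as np

def hat_basis_matrix(xdata, nodes):
    """Evaluate the piecewise-linear 'hat' basis functions at the data points."""
    A = np.zeros((len(xdata), len(nodes)))
    h = nodes[1] - nodes[0]                       # uniform mesh assumed
    for i, xi in enumerate(xdata):
        j = min(int((xi - nodes[0]) / h), len(nodes) - 2)
        t = (xi - nodes[j]) / h                   # local coordinate in [0, 1]
        A[i, j] = 1.0 - t
        A[i, j + 1] = t
    return A

def fe_fit(xdata, ydata, nodes):
    """Least-squares fit of the nodal values to scattered data."""
    A = hat_basis_matrix(np.asarray(xdata), np.asarray(nodes))
    return np.linalg.solve(A.T @ A, A.T @ np.asarray(ydata))

# noisy samples of y = x**2 fitted on a coarse 6-node mesh
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, 200)
ys = xs**2 + rng.normal(0.0, 0.01, 200)
nodes = np.linspace(0.0, 1.0, 6)
coeffs = fe_fit(xs, ys, nodes)
```

The fitted nodal values approximate the underlying surface at the mesh nodes; in higher dimensions the same construction yields a sparse, banded system.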

Relevância:

90.00%

Publicador:

Resumo:

Far-field stresses are those present in a volume of rock prior to any excavations being created. Estimates of the orientation and magnitude of far-field stresses, often used in mine design, are generally obtained from single-point measurements of stress or from large-scale regional trends. Point measurements can be a poor representation of the far-field stresses as a result of excavation-induced stresses and geological structures. For these reasons, far-field stress estimates can be associated with high levels of uncertainty. The purpose of this thesis is to investigate the practical feasibility, applications, and limitations of calibrating far-field stress estimates through tunnel deformation measurements captured using LiDAR imaging. A method that estimates the orientation and magnitude of excavation-induced principal stress changes through back-analysis of deformation measurements from LiDAR-imaged tunnels was developed and tested using synthetic data. If excavation-induced stress change orientations and magnitudes can be accurately estimated, they can be used in the calibration of far-field stress input to numerical models. LiDAR point clouds have been proven to have a number of underground applications, so it is desirable to explore their use in numerical model calibration. The back-analysis method is founded on the superposition of stresses and requires a two-dimensional numerical model of the deforming tunnel. Principal stress changes of known orientation and magnitude are applied to the model to create calibration curves. Estimation can then be performed by minimizing the squared differences between the measured tunnel deformations and the sets of calibration curve deformations. In addition to the back-analysis estimation method, a procedure consisting of previously existing techniques to measure tunnel deformation using LiDAR imaging was documented. Under ideal conditions, the back-analysis method estimated principal stress change orientations within ±5° and magnitudes within ±2 MPa.
Results were comparable for four different tunnel profile shapes. Preliminary testing with plastic deformation, a rough tunnel profile, and profile occlusions suggests that the method can also work under more realistic conditions. The results from this thesis lay the groundwork for the continued development of a new, inexpensive, and efficient method for calibrating far-field stress estimates.
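A toy version of the back-analysis loop can be sketched as follows, with a hypothetical closed-form "calibration curve" standing in for the two-dimensional numerical model; all function names, the deformation formula, and the parameter grids below are illustrative, not the thesis method itself:

```python
import numpy as np

def synthetic_profile(theta_deg, magnitude, n=90):
    """Hypothetical radial convergence of a tunnel wall under a principal
    stress change (a stand-in for the 2D numerical model's output)."""
    phi = np.linspace(0.0, np.pi, n)
    theta = np.radians(theta_deg)
    return magnitude * (1.0 + 0.5 * np.cos(2.0 * (phi - theta)))

def back_analyse(measured, thetas, magnitudes):
    """Grid search for the (orientation, magnitude) pair whose calibration
    curve minimises the sum of squared differences with the measurement."""
    best, best_err = None, np.inf
    for th in thetas:
        for m in magnitudes:
            err = np.sum((synthetic_profile(th, m) - measured) ** 2)
            if err < best_err:
                best, best_err = (th, m), err
    return best

# 'measured' deformation: true change at 40 deg, 3 MPa, plus LiDAR-like noise
measured = synthetic_profile(40.0, 3.0) \
    + 0.01 * np.random.default_rng(1).normal(size=90)
theta_hat, mag_hat = back_analyse(measured,
                                  thetas=np.arange(0.0, 180.0, 5.0),
                                  magnitudes=np.arange(0.5, 6.1, 0.5))
```

Because the noise is small relative to the deformation signal, the grid search recovers the true orientation and magnitude; in practice each calibration curve would come from a run of the numerical model rather than a closed-form expression.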