67 results for Numerical Algorithms and Problems
Abstract:
Thermodynamically consistent (TC) time integration methods were originally formulated through a general procedure based on the GENERIC form of the evolution equations for thermo-mechanical problems. The entropy was reported to be the most convenient choice of thermodynamical variable for deriving TC integrators, and the internal energy was also shown to involve no excessive complications. Attempts to use the temperature in the design of GENERIC-based TC schemes, however, have so far been unfruitful. This paper complements that procedure by presenting a TC scheme that adopts the temperature as the thermodynamical state variable. As a result, the difficulties that arise from the use of the entropy, chiefly the definition of boundary conditions, are overcome. Moreover, the newly proposed method retains the enhanced numerical stability and robustness of the entropy formulation.
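For reference, the GENERIC structure the abstract invokes can be written in its standard form (the notation below is the usual one from the literature, not necessarily the paper's own):

```latex
\dot{z} \;=\; L(z)\,\nabla E(z) \;+\; M(z)\,\nabla S(z),
\qquad L\,\nabla S = 0, \qquad M\,\nabla E = 0,
```

where $z$ collects the state variables, $E$ is the total energy and $S$ the total entropy; $L$ is skew-symmetric and $M$ is symmetric positive semi-definite. The two degeneracy conditions guarantee energy conservation and non-negative entropy production, and TC integrators are built by discretizing the gradients so that both properties carry over exactly to the discrete setting.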
Abstract:
Clinicians demand fast and reliable numerical results from cardiovascular biomechanics simulations to support urgent pre-surgery decisions. For many years researchers have worked on different numerical methods in an effort to gain clinicians' confidence in their simulations. Precise but expensive and time-consuming methodologies create a gap between numerical biomechanics and hospital personnel; on the other hand, simplifications aimed at reducing computational time may produce unrealistic outcomes. The main objective of this investigation is to combine autoregulation, impedance, fluid-structure interaction and idealized three-dimensional arterial geometries in order to propose a computationally cheap methodology without excessive or unrealistic simplifications. Pressure boundary conditions are critical and contentious in numerical simulations of the cardiovascular system, in which a specific arterial site is of interest and the rest of the network is truncated and represented by a boundary condition; prescribed pressure histories are difficult to know in detail, and the results are very sensitive to small variations in them. The proposed methodology is a pressure boundary condition that retains the numerical simplicity of an imposed pressure at the outlets while incorporating more sophisticated concepts to obtain more realistic results: autoregulation, which enforces the downstream flow demand over the cardiac cycle, and impedance, which represents the effect of the remainder of the circulatory system on the modelled arteries. Autoregulation and impedance turn the pressure boundary condition into an active, dynamic one: it receives feedback from the results during the computation and compares them with physiological requirements, so the outlet pressure histories are obtained iteratively as part of the solution, while the impedance condition defines the shapes of the pressure history curves applied at the outlets. The proposed method is applied to an idealized geometry of a healthy aortic arch and to an idealized Stanford type A dissection, considering the interaction of the arterial walls with the pulsatile blood flow. The effect of the surrounding tissues is also incorporated and studied in the models. FSI analyses are then carried out on a patient-specific geometry of an elderly individual obtained from a computed tomography scan. Finally, motivated by the statistics on mortality rates in Stanford type B dissection, three models of a fenestrated dissection sac are studied and discussed; applying the developed boundary condition, the author proposes an alternative hypothesis for the decrease in mortality rates observed in patients with fenestrations.
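As a rough illustration of the impedance concept described above, an outlet pressure can be obtained by convolving the flow history with an impedance kernel. The sketch below is a minimal, assumption-laden example: the function name, kernel sampling and the purely resistive test case are illustrative and not the thesis implementation.

```python
import numpy as np

def outlet_pressure(flow_history, impedance_kernel, dt):
    """Discrete impedance convolution p(t) = sum_k Z(k*dt) * q(t - k*dt) * dt.

    flow_history: flow samples, most recent last.
    impedance_kernel: Z sampled at 0, dt, 2*dt, ... (illustrative assumption).
    """
    n = min(len(flow_history), len(impedance_kernel))
    q = np.asarray(flow_history[-n:][::-1])   # q(t), q(t-dt), q(t-2dt), ...
    z = np.asarray(impedance_kernel[:n])
    return float(np.dot(z, q) * dt)

# Sanity check: a purely resistive impedance Z(t) = R * delta(t)/dt
# should reduce the convolution to p = R * q for constant flow.
R, dt = 2.0, 0.01
kernel = np.zeros(100)
kernel[0] = R / dt
p = outlet_pressure([1.5] * 200, kernel, dt)  # constant flow q = 1.5 -> p = 3.0
```

In the thesis methodology this kernel would encode the downstream network, and the autoregulation loop would iteratively rescale the resulting pressure histories until the prescribed flow demand is met.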
Abstract:
Determining the position of a mobile terminal with good accuracy when it is immersed in an indoor environment (shopping centres, office buildings, airports, stations, tunnels, etc.) is the cornerstone on which a large number of applications and services rest. Many of these services are already available outdoors, although indoor environments lend themselves to further services specific to them. Their number, however, could be significantly larger than it currently is if an expensive infrastructure were not required to perform positioning with the precision appropriate to each hypothetical service; or, equally, if that infrastructure could serve purposes other than positioning. Usability of the same infrastructure for other purposes would mean it may already be present at the various locations, having been deployed for those other uses, or would ease its deployment, since the cost of that operation would offer a higher return on usability to whoever undertakes it. Wireless radio-frequency communication technologies already in use for voice and data (mobile networks, WLAN, etc.) meet this requirement and could therefore foster the growth of positioning-based applications and services, provided they can be used for that purpose. However, determining position with an adequate degree of accuracy using these technologies remains a major challenge today, and this work aims to provide significant advances in the field. 
It first surveys the main positioning algorithms and auxiliary techniques applicable to indoor environments, focusing on those suitable both for last-generation mobile technologies and for WLAN environments. The review highlights the advantages and disadvantages of each algorithm, with their applicability to 3G and 4G mobile networks (especially LTE femtocells and small cells) and to WLAN as the final motivation, and with indoor use always as the ultimate goal. Its main conclusion is that the triangulation techniques commonly used for outdoor localization are useless indoors, owing to effects characteristic of such environments like the loss of line of sight and multipath propagation. Fingerprinting methods, based on comparing the received signal strength (RSS) values measured by the mobile terminal at positioning time against the values recorded in a radio map built during an initial calibration phase, emerge as the best candidates for indoor scenarios. These systems, however, are also affected by other problems, such as the considerable effort required to put them into operation and the variability of the radio channel. Against this background, this work presents two original contributions to improve fingerprinting-based systems. 
The first contribution describes a simple method to determine the basic dimensioning of the system, namely the number of samples needed to build the reference radio map and the minimum number of radio-frequency emitters to be deployed, starting from initial requirements on the target positioning error and precision together with the dimensions and physical layout of the environment. This establishes initial guidelines for dimensioning the system and counters the negative effects on cost and on overall performance caused by an inefficient deployment of the emitters and of the fingerprint capture points. The second contribution increases the real-time accuracy of the system through a technique for automatic recalibration of the radio map. The technique uses measurements continuously reported by a few static reference points, strategically distributed in the environment, to recompute and update the signal strengths recorded in the radio map. An additional operational benefit is the extension of the period over which the system remains reliable, reducing the frequency with which the full radio map must be recaptured. 
The improvements above are directly applicable to indoor positioning mechanisms based on the wireless voice and data communications infrastructure. From there, they extend to location services (knowing where one is), monitoring (knowledge of that location by third parties) and tracking (monitoring prolonged over time), since all of them rely on correct positioning for proper performance.
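The fingerprinting comparison step discussed above is commonly realized as a weighted k-nearest-neighbour search in signal space. The sketch below is a minimal illustration under that standard formulation; the function name, the tiny radio map and the coordinates are made up for the example and are not the thesis system.

```python
import numpy as np

def knn_position(rss_live, radio_map, positions, k=3):
    """Weighted k-nearest-neighbour fingerprinting estimate (illustrative).

    rss_live:  measured RSS vector (dBm), one entry per emitter.
    radio_map: (n_points, n_emitters) RSS values from the calibration phase.
    positions: (n_points, 2) coordinates of the capture points.
    """
    d = np.linalg.norm(radio_map - rss_live, axis=1)  # signal-space distances
    idx = np.argsort(d)[:k]                           # k closest fingerprints
    w = 1.0 / (d[idx] + 1e-9)                         # closer points weigh more
    return (positions[idx] * w[:, None]).sum(axis=0) / w.sum()

# Tiny illustrative map: three capture points along a corridor
radio_map = np.array([[-40., -70.], [-50., -60.], [-70., -40.]])
positions = np.array([[0., 0.], [5., 0.], [10., 0.]])
est = knn_position(np.array([-50., -60.]), radio_map, positions, k=1)
```

The recalibration contribution described above would, in this picture, update the rows of `radio_map` from the static reference points rather than recapturing the whole map.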
Abstract:
Hazard and risk assessment of landslides with potentially long run-out is becoming more and more important. Numerical tools exploiting different constitutive models, initial data and numerical solution techniques help make the expert's assessment more objective, even though they cannot substitute for the expert's understanding of the site-specific conditions and the processes involved. This paper presents a depth-integrated model accounting for pore water pressure dissipation, with applications both to real events and to problems for which analytical solutions exist. The main ingredients are: (i) the mathematical model, which includes pore pressure dissipation as an additional equation; this makes it possible to model flowslide problems that are highly mobile at the beginning, the landslide mass coming to rest once pore water pressures dissipate; (ii) the rheological models describing basal friction: Bingham, frictional, Voellmy and cohesive-frictional viscous models; (iii) simple erosion laws, providing a comparison between the approaches of Egashira, Hungr and Blanc; and (iv) a Lagrangian SPH model to discretize the equations, with pore water pressure information associated with the moving SPH nodes.
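The coupling between basal friction and pore pressure dissipation described above can be sketched in a few lines. The sketch below assumes a Voellmy-type resistance with a frictional term reduced by pore pressure plus a turbulent term, and a single-mode exponential decay for the pore pressure; the function names and all numbers are illustrative, not the paper's implementation.

```python
import math

def voellmy_basal_friction(sigma, p_w, rho, g, v, mu, xi):
    """Voellmy-type basal resistance with pore-pressure reduction (sketch):
    tau_b = mu * (sigma - p_w) + rho * g * v**2 / xi,
    where sigma is the total basal normal stress and p_w the pore pressure.
    """
    return mu * max(sigma - p_w, 0.0) + rho * g * v * v / xi

def pore_pressure(p0, t, T):
    """Single-mode, consolidation-like dissipation p_w(t) = p0 * exp(-t / T)."""
    return p0 * math.exp(-t / T)

# Early in the flowslide the pore pressure is high and resistance is low;
# as it dissipates, frictional resistance grows and the mass comes to rest.
tau_early = voellmy_basal_friction(100e3, pore_pressure(60e3, 0.0, 5.0),
                                   2000.0, 9.81, 10.0, 0.4, 500.0)
tau_late = voellmy_basal_friction(100e3, pore_pressure(60e3, 50.0, 5.0),
                                  2000.0, 9.81, 10.0, 0.4, 500.0)
```

This reproduces, in miniature, the behaviour the abstract highlights: high mobility at the start, arrest once pore water pressures dissipate.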
Abstract:
The optimal design of a vertical cantilever beam is presented in this paper. The beam is assumed to be immersed in an elastic Winkler soil and subjected to several loads: a point force at the tip section, its self-weight and a uniformly distributed load along its length. The optimal design problem is to find the beam of a given length and minimum volume such that the resulting compressive stresses are admissible. This problem is analyzed within linear elasticity theory and under different alternative structural models: column, Navier-Bernoulli beam-column and Timoshenko beam-column (i.e. including shear strain), under conservative loads, typically constant-direction loads. The results obtained in each case are compared in order to evaluate the sensitivity of the numerical results to the choice of model. The optimal design is described by the distribution of section properties (area, second moment of area, shear area, etc.) along the beam span and the corresponding total beam volume. Other situations, some of them very interesting from a theoretical point of view, involving follower loads (the Beck and Leipholz problems) are also discussed, leaving numerical details and results for future work.
Abstract:
The French CEA, together with EDF and the IAEA, recently organised an international benchmark to evaluate the ability to model the mechanical behaviour of a typical nuclear reinforced concrete structure subjected to seismic demands. The participants were provided with descriptions of the structure and the testing campaign; they had to propose the numerical model and the material laws for the concrete (stage #1). A mesh of beam and shell elements was generated; a damaged plasticity model was used for the concrete, although a smeared crack model was also investigated. Some of the initial experimental results, with the mock-up remaining in the elastic range, were provided to the participants for calibrating their models (stage #2). Predictions had to be produced in terms of eigenfrequencies and motion time histories. The calculated frequencies reproduced the experimental ones reasonably well; the time histories, calculated by modal response analysis, also reproduced the observed amplifications adequately. The participants were then expected to predict the structural response under strong ground motions (stage #3), which increased progressively up to a history recorded during the 1994 Northridge earthquake, followed by an aftershock. These results were produced using an explicit solver and a damaged plasticity model for the concrete, although an implicit solver with a smeared crack model was also investigated. The paper presents the conclusions of the pre-test exercise, as well as some observations from additional simulations conducted after the experimental results were made available.
Abstract:
This paper presents a high-accuracy, fully analytical formulation to compute the miss distance and collision probability of two approaching objects following an impulsive collision avoidance maneuver. The formulation hinges on a linear relation between the applied impulse and the objects' relative motion in the b-plane, which allows one to formulate the maneuver optimization problem as an eigenvalue problem coupled with a simple nonlinear algebraic equation. The optimization criterion consists of minimizing the maneuver cost in terms of delta-V magnitude to either maximize the collision miss distance or minimize the Gaussian collision probability. The algorithm, whose accuracy is verified in representative mission scenarios, can be employed for collision avoidance maneuver planning at reduced computational cost compared with fully numerical algorithms.
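The eigenvalue structure mentioned in the abstract can be illustrated for the miss-distance case: if the b-plane deviation is linear in the impulse, delta_b = M @ dv, then the impulse of fixed magnitude that maximizes the deviation is the eigenvector of M.T @ M with the largest eigenvalue (the leading right singular vector of M). The sketch below states only that generic fact; the matrix M and all numbers are illustrative, not taken from the paper.

```python
import numpy as np

def max_deviation_impulse(M, dv_mag):
    """Impulse of magnitude dv_mag maximizing the b-plane deviation |M @ dv|.

    Assumes the linear map delta_b = M @ dv; the optimum direction is the
    eigenvector of M.T @ M associated with its largest eigenvalue.
    """
    w, V = np.linalg.eigh(M.T @ M)   # eigenvalues in ascending order
    dv = V[:, np.argmax(w)]          # unit eigenvector (sign is arbitrary)
    return dv_mag * dv

# Illustrative 2x3 b-plane sensitivity matrix: the first velocity
# component is three times more effective than the second.
M = np.array([[3.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
dv = max_deviation_impulse(M, 0.1)   # optimal impulse, along the first axis
```

The probability-minimizing variant in the paper couples this eigenvalue problem with a nonlinear algebraic equation for the Gaussian collision probability; the sketch covers only the miss-distance objective.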