35 results for simulation methods
at Universidad Politécnica de Madrid
Abstract:
The need to simulate spectrum-compatible earthquake time histories has existed since earthquake engineering for complicated structures began. Beyond the safety of the main structure, the behaviour of equipment (piping, racks, etc.) can only be assessed on the basis of the time history of the floor on which it is supported. This paper presents several methods for calculating simulated spectrum-compatible earthquakes, together with a comparison between them. As a result of this comparison, the use of the phase content of real earthquakes, as proposed by Ohsaki, emerges as an effective alternative to the classical methods. With this method it is possible to establish an approach without the arbitrary modulation commonly used in other methods. The different procedures are described, as is the influence of the various parameters that appear in the analysis. Several numerical examples are presented, and the effectiveness of Ohsaki's method is confirmed.
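As a rough illustration of the iterative spectral-matching loop such methods share, the following numpy sketch adjusts the Fourier amplitudes of a synthetic record until its response spectrum approaches a target. The flat target spectrum and the random phases are placeholders; in Ohsaki's approach the phase array would be taken from the FFT of a recorded accelerogram.

```python
import numpy as np

def psa(acc, dt, periods, zeta=0.05):
    """Pseudo-acceleration response spectrum via a frequency-domain SDOF solution."""
    n = len(acc)
    Ag = np.fft.rfft(acc)
    w = 2*np.pi*np.fft.rfftfreq(n, dt)
    out = []
    for T in periods:
        w0 = 2*np.pi/T
        H = -1.0/(w0**2 - w**2 + 2j*zeta*w0*w)    # relative displacement per unit input
        x = np.fft.irfft(Ag*H, n)
        out.append(w0**2*np.max(np.abs(x)))
    return np.array(out)

dt, n = 0.01, 4096
periods = np.linspace(0.05, 3.0, 30)
target = np.full(30, 3.0)                   # toy flat target spectrum, 3 m/s^2
rng = np.random.default_rng(0)
phase = rng.uniform(0, 2*np.pi, n//2 + 1)   # stand-in; Ohsaki: phases of a real record
amp = np.ones(n//2 + 1); amp[0] = 0.0       # zero-mean accelerogram
f = np.fft.rfftfreq(n, dt)
Tf = np.where(f > 0, 1.0/np.maximum(f, 1e-12), np.inf)
for _ in range(10):                         # iterative spectral matching
    acc = np.fft.irfft(amp*np.exp(1j*phase), n)
    ratio = target/psa(acc, dt, periods)
    amp *= np.interp(Tf, periods, ratio)    # scale each harmonic toward the target
acc = np.fft.irfft(amp*np.exp(1j*phase), n) # spectrum-compatible time history
```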
Abstract:
Recent research has shown large differences between the expected and the actual energy consumption of buildings. These differences have been attributed, in part, to the assumptions made during the design phase when simulation methods are employed. More accurate occupancy profiles of building operation could help to carry out more precise building performance calculations. This study focuses on the post-occupancy evaluation of two apartments, one renovated and one non-renovated, within the same building complex in Madrid. The aim of this paper is to present an application of the mixed-methods methodology (Creswell, 2007) to assess thermal comfort and occupancy practices in the case studies, and to discuss the shortcomings and opportunities associated with it. The mixed-methods methodology offers strategies for integrating qualitative and quantitative methods to investigate complex phenomena. This approach is expected to contribute to the growing knowledge of occupants’ behaviour and building performance by explaining the differences observed between energy consumption and thermal comfort in relation to people’s saving and comfort practices and the related experiences, preferences and values.
Abstract:
The vulnerability of grazing livestock systems highlights the need for tools to assess and mitigate the adverse impacts of drought. Recent, rapid progress in remote sensing has awakened interest in tapping potential applications, triggering intensive efforts to develop innovations in a number of spheres. One of these areas is climate risk management, where vegetation indices facilitate the assessment of drought. This research analyzes drought impacts and evaluates the potential of new technologies such as remote sensing to manage drought risk in extensive livestock systems. Three essays in drought risk management are developed to: (i) assess the economic impact of drought on a livestock farm in the Andalusian dehesa, (ii) build drought vulnerability maps for Chilean grazing lands, and (iii) design and evaluate the potential of an index insurance policy to address the risk of drought in grazing lands in Coquimbo, Chile.

In the first essay, a dynamic and stochastic farm model combining climate, agronomic, socio-economic and ecological aspects is designed to assess drought risk. The model simulates a representative livestock farm in the dehesa of Andalusia for the period 1999-2010. Burn (historical) analysis and Monte Carlo simulation methods are used to identify the significance of the various risk sources on the farm; most notably, early summer and early winter are identified as periods of peak risk. Moreover, there is a significant time lag between climate risk and economic risk, and the latter lasts longer than the former. It is shown that intensity, frequency and duration are three crucial attributes that shape the economic impact of drought. Sensitivity analysis of farm management strategies demonstrates that lowering the stocking rate reduces the farmer's exposure to drought risk but entails a reduction in the expected gross margin.

The second essay, mapping drought vulnerability in Chilean grazing lands, proposes and builds an index of economic risk (IRESP) that is replicable and simple to interpret. The methodology integrates risk factors and adaptation strategies to deliver a measure of Value at Risk, that is, the maximum expected loss in a year at the 5% significance level. Mapping IRESP provides evidence of spatial patterns and significant differences in drought vulnerability across Chilean grazing lands. Spatial autocorrelation measures reveal that systemic risk is considerably larger in the South than in the Northern and Central regions. Furthermore, it is shown that vulnerability is not always correlated with climate risk and that adaptation strategies do matter. These results show that IRESP conveys relevant information and that vulnerability maps may be useful tools for policy design and decision-making in drought risk management.

The third essay develops a stochastic model to estimate the actuarially fair premium of an index insurance policy for drought in grazing lands in Coquimbo, a relevant livestock farming region of Chile, and proposes and evaluates alternative guidelines to improve the contract design. It addresses basis risk, identified in the literature as the main limitation of index insurance, which refers to the imperfect correlation between the index and the farmer's losses. A Bayesian approach is followed to assess the impact on basis risk of three design guidelines: (i) a cluster zoning that considers space-time aspects, (ii) a guarantee period bounded to the phenological cycles of the pasture, and (iii) the triggering index threshold. Results show that both the proposed zoning and the guarantee period considerably reduce basis risk, whereas the triggering threshold has an ambiguous effect on it. Cluster zoning also contributes to ameliorating the systemic risk faced by the insurer. These results highlight that an adequate contract design may yield a double dividend: increasing the farmers' utility while reducing the cost of the insurance. An efficient contract design, coupled with advances in remote sensing and an appropriate institutional framework, is the basis for the efficient operation of an insurance program. The new technologies offer significant potential for innovation in climate risk management. Progress in this field may provide important social gains in developing countries and vulnerable regions, where tools to efficiently manage systemic risks such as drought may be a means to foster development.
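To make the premium calculation concrete, here is a minimal Monte Carlo sketch of an actuarially fair premium for an index contract. The lognormal index model, strike and tick values are illustrative assumptions, not the calibrated model of the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
n_years = 100_000
# Simulated vegetation index (e.g. NDVI); distribution parameters are invented.
ndvi = rng.lognormal(mean=np.log(0.35), sigma=0.25, size=n_years)
strike = 0.30          # payouts trigger when the index falls below this threshold
tick = 10_000.0        # monetary value per unit of index shortfall
payout = tick * np.maximum(strike - ndvi, 0.0)
fair_premium = payout.mean()       # E[payout] = actuarially fair premium
loading = 1.2                      # insurer loading factor (illustrative)
print(f"fair premium: {fair_premium:,.0f}  commercial: {loading*fair_premium:,.0f}")
```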
Abstract:
This paper is concerned with the study of the berth system in port terminals. The main objective is to present the management methodologies, which include empirical methods, analytical methods and simulation methods. The comparison shows that these three methods are not independent but complementary: each has advantages and limitations that depend on the type of study performed.
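The complementarity of analytical and simulation methods is easy to illustrate on a toy berth system. The sketch below, with invented arrival and service rates, computes the mean waiting time of an M/M/c queue both with the Erlang-C formula and with a discrete-event simulation; the two estimates should agree, while only the simulation generalizes to realistic non-Markovian terminals.

```python
import math, random

def erlang_c_wq(lam, mu, c):
    """Analytical mean waiting time of an M/M/c queue (ships/berths)."""
    a = lam/mu                        # offered load
    rho = a/c
    s = sum(a**k/math.factorial(k) for k in range(c))
    tail = a**c/(math.factorial(c)*(1 - rho))
    p_wait = tail/(s + tail)          # Erlang-C probability of waiting
    return p_wait/(c*mu - lam)

def simulate_wq(lam, mu, c, n=200_000, seed=1):
    """Discrete-event estimate of the same quantity, for comparison."""
    rng = random.Random(seed)
    t, total_wait = 0.0, 0.0
    free = [0.0]*c                    # time at which each berth becomes free
    for _ in range(n):
        t += rng.expovariate(lam)     # next ship arrival
        b = min(range(c), key=lambda i: free[i])
        start = max(t, free[b])       # FCFS: wait for the earliest-free berth
        total_wait += start - t
        free[b] = start + rng.expovariate(mu)
    return total_wait/n

lam, mu, c = 4.0, 1.0, 6              # arrivals/day, services/day per berth, berths
print(erlang_c_wq(lam, mu, c), simulate_wq(lam, mu, c))
```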
Abstract:
Nanoinformatics has recently emerged to address the need of computing applications at the nano level. In this regard, the authors have participated in various initiatives to identify its concepts, foundations and challenges. While nanomaterials open up the possibility for developing new devices in many industrial and scientific areas, they also offer breakthrough perspectives for the prevention, diagnosis and treatment of diseases. In this paper, we analyze the different aspects of nanoinformatics and suggest five research topics to help catalyze new research and development in the area, particularly focused on nanomedicine. We also encompass the use of informatics to further the biological and clinical applications of basic research in nanoscience and nanotechnology, and the related concept of an extended “nanotype” to coalesce information related to nanoparticles. We suggest how nanoinformatics could accelerate developments in nanomedicine, similarly to what happened with the Human Genome and other -omics projects, on issues like exchanging modeling and simulation methods and tools, linking toxicity information to clinical and personal databases or developing new approaches for scientific ontologies, among many others.
Abstract:
In recent years the missing fourth circuit element, the memristor, was successfully synthesized. However, the mathematical complexity and variety of the models behind this component, in addition to convergence problems in the simulations, make the design of memristor-based applications long and difficult. In this work we present a memristor model characterization framework that supports the automated generation of subcircuit files. The proposed environment allows the designer to choose and parameterize the memristor model that best suits a given application. The framework carries out characterizing simulations in order to study possible non-convergence problems, resolving the dependence on the simulation conditions and guaranteeing the functionality and performance of the design. Additionally, the occurrence of undesirable effects related to PVT variations is taken into account: by performing a Monte Carlo or a corner analysis, the designer is made aware of the safety margins that assure correct device operation.
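A minimal sketch of the kind of characterizing Monte Carlo run such a framework automates, using the classic linear ion-drift memristor model with a Joglekar window (nominal parameters taken from the HP literature). The ±10% variation sigmas and the pass criterion are illustrative assumptions, not the framework's actual checks.

```python
import numpy as np

def memristor_response(Ron, Roff, D, mu_v=1e-14, f=1.0, cycles=2, npts=4000):
    """Integrate the linear ion-drift model under a 1 V sinusoid and return the
    min/max memristance reached (a simple functionality metric)."""
    t = np.linspace(0, cycles/f, npts)
    dt = t[1] - t[0]
    v = np.sin(2*np.pi*f*t)
    x = 0.5                              # normalized doped-region width
    mmin, mmax = np.inf, -np.inf
    for vk in v:
        M = Ron*x + Roff*(1 - x)         # instantaneous memristance
        i = vk/M
        w = 1 - (2*x - 1)**2             # Joglekar window, p = 1
        x = np.clip(x + dt*mu_v*Ron/D**2*i*w, 0.0, 1.0)
        mmin, mmax = min(mmin, M), max(mmax, M)
    return mmin, mmax

# Monte Carlo over +-10% process variations (illustrative sigmas)
rng = np.random.default_rng(7)
trials, ok = 100, 0
for _ in range(trials):
    Ron = rng.normal(100.0, 10.0)        # ohms
    Roff = rng.normal(16e3, 1.6e3)       # ohms
    D = rng.normal(10e-9, 1e-9)          # device thickness, m
    mmin, mmax = memristor_response(Ron, Roff, D)
    ok += (mmax/mmin > 2.0)              # pass criterion: usable resistance swing
print(f"yield under variations: {ok/trials:.1%}")
```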
Abstract:
Dynamic thermal management techniques require a collection of on-chip thermal sensors that imply a significant area and power overhead. Finding the optimum number of temperature monitors and their locations on the chip surface to optimize accuracy is an NP-hard problem. In this work we improve the modeling of the problem by including area, power and networking constraints, along with three inaccuracy terms: spatial errors, sampling-rate errors and monitor-inherent errors. The problem is solved with a simulated annealing algorithm. We apply the algorithm to a test case employing three different types of monitors to highlight the importance of the different metrics. Finally, we present a case study of the Alpha 21364 processor under two different constraint scenarios.
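A minimal sketch of simulated annealing applied to monitor placement: it minimizes a simple spatial-error proxy (mean distance from hotspots to the nearest monitor) on a toy floorplan, omitting the area, power and networking constraints of the actual formulation.

```python
import random, math

def spatial_error(monitors, hotspots):
    """Mean distance from each hotspot to its nearest monitor (proxy accuracy cost)."""
    return sum(min(math.dist(h, m) for m in monitors) for h in hotspots)/len(hotspots)

def anneal(hotspots, k=4, grid=32, steps=20_000, T0=1.0, alpha=0.9995, seed=3):
    rng = random.Random(seed)
    mon = [(rng.randrange(grid), rng.randrange(grid)) for _ in range(k)]
    cost, T = spatial_error(mon, hotspots), T0
    for _ in range(steps):
        cand = list(mon)
        i = rng.randrange(k)                       # perturb one monitor position
        cand[i] = (rng.randrange(grid), rng.randrange(grid))
        c = spatial_error(cand, hotspots)
        if c < cost or rng.random() < math.exp((cost - c)/T):
            mon, cost = cand, c                    # accept improving or uphill move
        T *= alpha                                 # geometric cooling schedule
    return mon, cost

# toy thermal map: hotspot coordinates on a 32x32 floorplan (illustrative)
hotspots = [(4, 5), (6, 4), (25, 26), (27, 24), (15, 16), (30, 2)]
print(anneal(hotspots))
```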
Abstract:
As a contribution to the study of heterogeneous media, this thesis covers the work carried out on theoretical modelling and simulation of the optical properties of the skin and of seawater, as paradigmatic examples of heterogeneous media. The starting point is the study of the propagation of optical radiation, specifically laser radiation, in biological tissue. The optical characterization of a tissue is fundamental for managing the radiation-tissue interaction that enables both the diagnosis and the therapy of diseases and dysfunctions in the health sciences. A further aim is to offer a study methodology, with an "engineering approach", for the optical properties of a heterogeneous medium that need not be exclusively biological tissue. As a consequence, and given the importance of water within biological tissues, a separate chapter studies the optical properties of water in a heterogeneous environment, namely seawater. Seawater was selected as an additional object of study mainly because it is a heterogeneous system whose individual components are easy to describe and for which an extensive literature can be drawn upon; moreover, the advances of recent years in photonic technologies are expected to allow their use in experimental methods of water analysis. Knowledge of its optical properties makes it possible to characterize different types of water according to their constituents and to identify their presence, which opens up a wide range of applications.

In general terms, this doctoral thesis has achieved the following:
• a state-of-the-art review of the optical properties of the skin and the identification of its light-scattering elements;
• a study methodology for obtaining data on the possible effects of radiation on biological tissues;
• the use of different software tools to simulate the transport of laser radiation in biological tissues;
• simulation experiments involving lasers, biological tissues and detectors;
• a comparison of known experimental results with the simulated ones;
• a study of the instruments that measure the response to laser radiation propagating in anisotropic tissues;
• original results for the diagnosis and treatment of skin, considering different ethnic skin types and, as a possible skin alteration, the presence of basal cell carcinoma;
• the application of the methodology developed for the skin to the simulation of seawater;
• original simulation results for the analysis of the amount of phytoplankton in water, with the aim of facilitating the characterization of different types of water.

The dissertation is organized into six chapters and three annexes, clearly differentiated and each with its own bibliography. The first chapter focuses on the difficulty of studying and characterizing heterogeneous media, which behave inhomogeneously and anisotropically under optical radiation; it briefly introduces the behaviour of both tissues and the ocean under optical radiation and defines the main optical properties: absorption, scattering, anisotropy and the reflection coefficients. The second chapter approaches the problem of how to characterize the optical properties described in the first chapter: it first introduces the theoretical models, then the most widely used simulation methods and, finally, the main techniques for measuring the propagation of light in living tissue. The third chapter, centred on the skin and its properties, synthesizes what is known about the behaviour of the skin under propagating optical radiation; its constituent elements and the different skin types are studied, and an immediate application that benefits from this knowledge is described. Since the percentage of water in the human body is very high (about 70% in the skin), knowing how water affects the propagation of optical radiation would provide useful reference patterns; to this end, the study of seawater is undertaken. The fourth chapter studies the properties of seawater as a heterogeneous medium of particles, presenting a synthesis of the most significant scattering elements in the ocean, a study of their individual behaviour under optical radiation and their contribution to the ocean as a whole. The fifth chapter describes the results of the different simulations. The same simulation tools were used for the skin and for seawater, so both sets of results are presented in the same chapter. In the first case, different types of ocean water are analysed by varying the phytoplankton concentrations; the method makes it possible to verify the differences that can be found in the characterization and diagnosis of waters. The second case is the skin: the behaviour of different skin types is studied to validate the method, and the results are shown to be compatible with current commercial applications such as laser hair removal. As a significant result, a possible methodology for the diagnosis of the skin cancer known as basal cell carcinoma is presented. A final chapter is devoted to future work based on real experimentation and to the cost that carrying it out would entail. The annexes deal, on the one hand, with the common thread of the whole thesis, the laser, its applications and its safe operation and, on the other, with the absorption and scattering coefficients used in the simulations: the first condenses the main characteristics of laser radiation from the point of view of its generation, the second covers safety in its use, and the third contains the author's own tables, whose parameters are those used in the experimental sections. Although this thesis does not follow the canonical doctoral-thesis model, the reader will find interwoven in it the structure common to all theses and research projects: a state-of-the-art section with pedagogical examples to aid understanding and a statement of objectives (chapters 1-4), a chapter subdivided into materials and methods, results and discussion (chapter 5 and its subsections), and a closing look at the future work arising from the thesis (chapter 6).
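A minimal sketch of the Monte Carlo photon-transport technique such simulations rely on (MCML-style): exponential free paths, absorption by weight reduction and Henyey-Greenstein scattering in a semi-infinite medium. The optical coefficients are illustrative values only (absorption exaggerated to keep the demo fast), not the tabulated parameters of the annexes.

```python
import numpy as np

def mc_photons(mu_a, mu_s, g, n_photons=5000, seed=0):
    """Monte Carlo photon transport in a semi-infinite turbid medium:
    returns diffuse reflectance and mean maximum penetration depth."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    refl, depth = 0.0, 0.0
    for _ in range(n_photons):
        p = np.zeros(3); u = np.array([0.0, 0.0, 1.0])   # position, direction
        w, zmax = 1.0, 0.0
        while w > 1e-3:                        # weight cutoff (roulette omitted)
            p = p + u*(-np.log(rng.random())/mu_t)       # exponential free path
            if p[2] < 0.0:                     # photon escapes through the surface
                refl += w
                break
            zmax = max(zmax, p[2])
            w *= mu_s/mu_t                     # deposit the absorbed fraction
            # Henyey-Greenstein polar angle (g != 0) and uniform azimuth
            ct = (1 + g*g - ((1 - g*g)/(1 - g + 2*g*rng.random()))**2)/(2*g)
            st = np.sqrt(1 - ct*ct); phi = 2*np.pi*rng.random()
            if abs(u[2]) > 0.9999:             # near-vertical: simpler update
                u = np.array([st*np.cos(phi), st*np.sin(phi), ct*np.sign(u[2])])
            else:
                d = np.sqrt(1 - u[2]*u[2])
                u = np.array([
                    st*(u[0]*u[2]*np.cos(phi) - u[1]*np.sin(phi))/d + u[0]*ct,
                    st*(u[1]*u[2]*np.cos(phi) + u[0]*np.sin(phi))/d + u[1]*ct,
                    -st*np.cos(phi)*d + u[2]*ct])
        depth += zmax
    return refl/n_photons, depth/n_photons

# illustrative tissue-like coefficients in mm^-1, plus anisotropy factor
print(mc_photons(mu_a=0.25, mu_s=10.0, g=0.9))
```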
Abstract:
Pumping tests are, without doubt, among the most reliable and most informative tests performed in the physical medium. They are not strictly point measurements: since pumping draws flow from distances far from the well, the test has excellent spatial representativeness. Interpretation methods for pumping tests began to be formulated in the first half of the last century. Pumping tests yield the transmissivity and storage coefficient of aquifer formations and provide information on the type of aquifer, the construction quality of the extraction well, and the existence of nearby impermeable barriers or recharge boundaries; in some circumstances they even allow the area of the underground reservoir to be calculated. Since the mid-20th century an effective and abundant range of analytical methods for interpreting pumping tests has existed, in both steady and transient regimes. These methods are widely known and have been extensively tested in many countries; nowadays, however, flow models could be used for the interpretation, achieving the same reliability and even better analysis possibilities. Many tests that cannot be interpreted because the configuration of the medium is too complex, and for which analytical methods are unavailable or impossible to develop, adapt well, and are sometimes very easily solved, using numerical flow-simulation methods. This thesis seeks a way to interpret pumping tests using flow simulation models. The universal MODFLOW model of the United States Geological Survey is used, in which a simulation cell and mesh particularly suited to the problem are configured and validated against the existing analytical methods. With the cell duly validated, other cases are simulated for which no analytical methods have been developed, given the complexity of the physical medium, and the appropriate conclusions are drawn. Finally, a specific model and a corresponding general-purpose application are developed for the numerical interpretation of pumping tests with both normal and complex configurations of the physical medium.
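For reference, the kind of analytical benchmark used in such validations is the Theis solution for transient drawdown around a fully penetrating well in a confined aquifer. The sketch below evaluates it with scipy; the aquifer parameters are illustrative, and a MODFLOW cell configured for the same problem should reproduce the curve.

```python
import numpy as np
from scipy.special import exp1

def theis_drawdown(r, t, Q, T, S):
    """Theis (1935) drawdown s = Q/(4*pi*T) * W(u), with W the well function
    (exponential integral E1) and u = r^2 * S / (4*T*t)."""
    u = r**2*S/(4.0*T*t)
    return Q/(4.0*np.pi*T)*exp1(u)

# Illustrative parameters (not from the thesis): Q in m^3/d, T in m^2/d,
# S dimensionless, r in m, t in days.
t = np.logspace(-3, 1, 50)
s = theis_drawdown(r=50.0, t=t, Q=500.0, T=200.0, S=1e-4)
print(s[:5])   # early-time drawdowns at the observation radius
```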
Abstract:
Connectivity analysis of whole-brain diffusion MRI data suffers from distortions caused by standard echo-planar imaging acquisition strategies. These images show characteristic geometric deformations and signal dropout, an important drawback limiting the success of tractography algorithms. Several retrospective correction techniques are readily available. In this work, we use a digital phantom designed for the evaluation of connectivity pipelines. We subject the phantom to a “theoretically correct” and plausible deformation that resembles the artifact under investigation. We then correct the data back with three standard methodologies (fieldmap-based, reversed-encoding-based, and registration-based). Finally, we rank the methods based on their geometrical accuracy, their dropout compensation, and their impact on the resulting connectivity matrices.
Abstract:
A new, simple and quick-calculation methodology to obtain a solar panel model from the manufacturer's datasheet, in order to perform MPPT simulations, is described. The method takes into account variations in the ambient conditions (sun irradiation and solar-cell temperature) and allows fast comparison of MPPT methods, or prediction of their performance, when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel over one day under realistic ambient conditions.
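A minimal sketch of the two ingredients the paper combines: a simplified single-diode panel model parameterized from typical datasheet values (Isc, Voc, number of series cells) and a perturb-and-observe MPPT loop. The numbers and the bare-bones model (series and shunt resistances neglected) are illustrative assumptions, not the paper's methodology.

```python
import numpy as np

k, q = 1.380649e-23, 1.602176634e-19   # Boltzmann constant, electron charge

def panel_current(v, G=1000.0, Tc=298.15, Isc=8.21, Voc=32.9, Ns=54, n=1.3):
    """Ideal single-diode panel model built from datasheet-style values."""
    vt = n*Ns*k*Tc/q                   # thermal voltage of the series string
    iph = Isc*G/1000.0                 # photocurrent scales with irradiance
    i0 = Isc/(np.exp(Voc/vt) - 1.0)    # saturation current from Voc at STC
    return iph - i0*(np.exp(v/vt) - 1.0)

def po_mppt(v=20.0, dv=0.2, steps=200, **amb):
    """Perturb & observe: keep moving in the direction that increased power."""
    p_prev, direction = 0.0, 1.0
    for _ in range(steps):
        p = v*panel_current(v, **amb)
        if p < p_prev:
            direction = -direction     # power dropped: reverse the perturbation
        p_prev = p
        v += direction*dv
    return v, p_prev

print(po_mppt(G=1000.0, Tc=298.15))    # settles near the MPP of this toy panel
print(po_mppt(G=600.0,  Tc=310.15))    # MPP shifts with the ambient conditions
```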
Abstract:
In this thesis we present a theory suited to the simulation of slow transport phenomena in atomistic systems. We first develop the theoretical framework for modelling equilibrium statistical ensembles and then adapt it to construct models of statistical ensembles away from equilibrium. The theory rests on the principles of statistical mechanics, in particular Jaynes' maximum entropy principle, valid for the treatment of systems both in and out of equilibrium, and on mean-field approximation theory. The problem is expressed mathematically as a variational principle in which a free entropy, rather than a free energy, is maximized. The proposed formulation allows atomistic equivalents of macroscopic variables such as the temperature and the molar fractions to be defined; these are not required to be uniform but may vary from particle to particle, so non-uniform macroscopic fields can be considered. The theoretical framework is completed with Monte Carlo quadrature rules, which yield computable models. We then develop the full set of equations governing transport processes. We derive the entropic dissipation inequality from discrete thermodynamic forces and fluxes; this inequality identifies the structure that the discrete kinetic potentials must satisfy. These kinetic potentials couple the time rates of the microscopic variables to the corresponding driving forces and must be completed with a phenomenological relation of the Onsager type. Finally, we provide numerical validations illustrating the ability of the presented theory to simulate equilibrium properties and surface segregation in metallic alloys. We first simulate equilibrium thermodynamic properties of the atomistic system and then evaluate the ability of the model to reproduce transport processes in complex systems over times that are long compared with the characteristic atomic-scale times.
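For reference, the maximum-entropy construction invoked here has the standard Jaynes form: the trial distribution is obtained by maximizing the information entropy subject to the known constraints,

```latex
\max_{p}\; S[p] = -k_B \sum_i p_i \ln p_i
\quad \text{subject to} \quad \sum_i p_i = 1, \qquad \sum_i p_i E_i = \langle E \rangle ,
```

whose solution, by Lagrange multipliers, is the exponential family

```latex
p_i = \frac{e^{-\beta E_i}}{Z}, \qquad Z = \sum_i e^{-\beta E_i},
```

with the multiplier \beta playing the role of the inverse temperature; in the non-uniform setting described above, such multipliers may vary from particle to particle.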
Abstract:
Background: Several meta-analysis methods can be used to quantitatively combine the results of a group of experiments, including the weighted mean difference (WMD), statistical vote counting (SVC), the parametric response ratio (RR) and the non-parametric response ratio (NPRR). The software engineering community has focused on the weighted mean difference method. However, other meta-analysis methods have distinct strengths, such as being usable when variances are not reported. There are as yet no guidelines to indicate which method is best in each case. Aim: Compile a set of rules that SE researchers can use to ascertain which aggregation method is best for use in the synthesis phase of a systematic review. Method: Monte Carlo simulation varying the number of experiments in the meta-analyses, the number of subjects they include, their variance and their effect size. We empirically calculated the reliability and statistical power in each case. Results: WMD is generally reliable if the variance is low, whereas its power depends on the effect size and the number of subjects per meta-analysis; the reliability of RR is generally unaffected by changes in variance, but it requires more subjects than WMD to be powerful; NPRR is the most reliable method, but it is not very powerful; SVC behaves well when the effect size is moderate, but is less reliable with other effect sizes. Detailed tables of results are annexed. Conclusions: Before undertaking statistical aggregation in software engineering, it is worthwhile checking whether there is any appreciable difference in the reliability and power of the candidate methods. If there is, software engineers should select the method that optimizes both parameters.
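As a concrete, simplified instance of this simulation design, the sketch below estimates the power and Type I error of a fixed-effect WMD meta-analysis by Monte Carlo; the normal data model and parameter values are illustrative, not the study's actual grid.

```python
import numpy as np
from scipy import stats

def wmd_meta(n_exp=5, n_subj=20, effect=0.5, sigma=1.0, reps=5_000, seed=0):
    """Monte Carlo estimate of the rejection rate of a fixed-effect weighted
    mean difference meta-analysis: power when effect > 0, Type I error when
    effect == 0."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        d, w = 0.0, 0.0
        for _ in range(n_exp):
            treat = rng.normal(effect, sigma, n_subj)
            ctrl = rng.normal(0.0, sigma, n_subj)
            var_d = treat.var(ddof=1)/n_subj + ctrl.var(ddof=1)/n_subj
            wi = 1.0/var_d                        # inverse-variance weighting
            d += wi*(treat.mean() - ctrl.mean()); w += wi
        z = (d/w)/np.sqrt(1.0/w)                  # pooled estimate / its SE
        hits += abs(z) > stats.norm.ppf(0.975)
    return hits/reps

print("power:", wmd_meta(effect=0.5))             # should be high
print("type I error:", wmd_meta(effect=0.0))      # should be near 0.05
```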
Abstract:
This paper describes new approaches to improve the local and global approximation (matching) and modeling capability of the Takagi–Sugeno (T-S) fuzzy model. The main aim is to obtain high function-approximation accuracy and fast convergence. The main problem encountered is that the T-S identification method cannot be applied when the membership functions are overlapped by pairs. This restricts the application of the T-S method, because this type of membership function has been widely used over the last two decades in the stability analysis and controller design of fuzzy systems, and is popular in industrial control applications. The approach developed here can be considered a generalized version of the T-S identification method with optimized performance in approximating nonlinear functions. We propose a non-iterative method based on weighting of parameters and an iterative algorithm that applies the extended Kalman filter, based on the same parameter-weighting idea. We show that the Kalman filter is an effective tool in the identification of the T-S fuzzy model. A fuzzy-controller-based linear quadratic regulator is proposed in order to show the effectiveness of the estimation method in control applications. An illustrative example of an inverted pendulum is chosen to evaluate the robustness and performance of the proposed method, locally and globally, in comparison with the original T-S model. Simulation results indicate the potential, simplicity and generality of the algorithm, and we show that these algorithms converge very fast, making them very practical to use.
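A minimal sketch of the generic T-S estimation idea the paper builds on: once the (possibly overlapping) memberships are fixed, the model is linear in the consequent parameters, so they can be fitted by global least squares. The Gaussian memberships and the target function are illustrative; this is not the authors' exact weighting or Kalman-filter algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 400)
y = np.sin(x) + 0.02*rng.standard_normal(x.size)      # nonlinear target function

centers, width = np.array([-2.0, 0.0, 2.0]), 1.2      # three overlapping rules
w = np.exp(-0.5*((x[:, None] - centers)/width)**2)    # rule firing strengths
wbar = w/w.sum(axis=1, keepdims=True)                 # normalized memberships

# T-S output: y = sum_i wbar_i(x) * (a_i*x + b_i). Regressor columns are
# wbar_i*x and wbar_i, so the consequents (a_i, b_i) come from least squares.
Phi = np.hstack([wbar*x[:, None], wbar])
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ theta
print("RMSE:", np.sqrt(np.mean((y - y_hat)**2)))
```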
Abstract:
This paper presents the architecture and the methods used to dynamically simulate the sea backscatter seen by an airborne radar operating in a medium pulse repetition frequency (MPRF) mode. It offers a method of generating a sea backscatter signal that reproduces the intensity statistics of real clutter in the time domain, as well as the spatial correlation and local Doppler spectrum of real data. Three antenna channels (sum, guard and difference) and their cross-correlation properties are simulated. The objective of this clutter generator is to serve as the signal source for the simulation of complex airborne pulsed radar signal processors.
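A single-channel sketch of the standard compound (K-distribution) recipe behind such generators: a gamma texture modulates complex Gaussian speckle whose Doppler spectrum is shaped in the frequency domain. The PRF, Doppler parameters and shape value are illustrative, and a full generator would additionally correlate the texture and produce the sum, guard and difference channels jointly.

```python
import numpy as np

def k_clutter(n=4096, prf=10e3, f_d=500.0, bw=120.0, shape=1.5, seed=0):
    """Compound sea-clutter sketch: gamma texture times complex Gaussian
    speckle shaped to a Gaussian Doppler spectrum (K-distributed envelope)."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n, 1.0/prf)
    psd = np.exp(-0.5*((f - f_d)/bw)**2)             # Gaussian Doppler spectrum
    noise = rng.standard_normal(n) + 1j*rng.standard_normal(n)
    speckle = np.fft.ifft(np.fft.fft(noise)*np.sqrt(psd))
    speckle /= np.sqrt(np.mean(np.abs(speckle)**2))  # unit-power speckle
    texture = rng.gamma(shape, 1.0/shape, n)         # slowly varying in practice
    return np.sqrt(texture)*speckle

c = k_clutter()
print(np.mean(np.abs(c)**2))   # average clutter power (about 1 by construction)
```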