988 results for Simulation tools
Abstract:
As a contribution to the study of heterogeneous media, this thesis covers work on the theoretical modelling and simulation of the optical properties of skin and seawater, two paradigmatic examples of heterogeneous media. The starting point is the propagation of optical radiation, specifically laser radiation, through biological tissue. The optical characterization of a tissue is essential for managing the radiation-tissue interaction on which both the diagnosis and the therapy of diseases and dysfunctions in the health sciences depend. A further aim is to offer a study methodology, with an "engineering approach", for the optical properties of a heterogeneous medium in general, not exclusively biological tissue. Given the above, and the large proportion of water in biological tissues, a separate chapter studies the optical properties of water within a heterogeneous environment: seawater. Seawater was selected as an additional object of study mainly because it is a heterogeneous system whose components can be described individually and for which an extensive literature is available for evaluation. Moreover, the advances made in photonic technologies in recent years are expected to enable their use in experimental methods of water analysis. Knowledge of its optical properties makes it possible to characterize the different types of water according to their constituents and to identify their presence, which opens up a wide range of applications. In general terms, this doctoral thesis has achieved the following:
• A state-of-the-art review of the optical properties of the skin and the identification of its light-scattering elements.
• A study methodology for obtaining data on the possible effects of radiation on biological tissues.
• The use of different software tools to simulate the transport of laser radiation in biological tissues.
• Simulation experiments combining lasers, biological tissues and detectors.
• Comparison of known experimental results with the simulated ones.
• A study of the instruments that measure the response to laser radiation propagating in anisotropic tissues.
• Original results for the diagnosis and treatment of skin, considering different skin types and, as a possible skin alteration, the presence of basal cell carcinoma (basalioma).
• Application of the methodology developed for skin to the simulation of seawater.
• Original simulation results on the analysis of phytoplankton concentration in water, aimed at facilitating the characterization of different types of water.
The thesis is organized into six clearly differentiated chapters and three annexes, each with its own bibliography. The first chapter focuses on the difficulty of studying and characterizing heterogeneous media, which stems from their inhomogeneous and anisotropic behaviour under optical radiation. It gives a brief introduction to the behaviour of both tissues and the ocean under optical radiation and defines their main optical properties: absorption, scattering, anisotropy and the reflection coefficients. The second chapter addresses how to characterize the optical properties described in the first: it introduces the theoretical models, then the most widely used simulation methods and, finally, the main techniques for measuring the propagation of light in living tissue. The third chapter, centred on the skin and its properties, synthesizes what is known about the behaviour of the skin under propagating optical radiation, studies its constituent elements and the different skin types, and closes with an example of an immediate application that benefits from this knowledge. The percentage of water in the human body is very high (about 70% in the skin), so knowing how water affects the propagation of optical radiation would provide reference patterns; to that end, seawater is studied. The fourth chapter examines the properties of seawater as a heterogeneous medium of particles: a synthesis of the most significant scattering elements in the ocean, a study of their individual response to optical radiation, and their contribution to the ocean as a whole. The fifth chapter presents the results of the different simulations. The simulation tools were the same for skin and for seawater, so both sets of results appear in the same chapter. The first case analyses different types of ocean water by varying the phytoplankton concentration; the method reveals the differences that can be found in the characterization and diagnosis of waters. The second case is the skin: the behaviour of different skin types is studied to validate the method, and the results are shown to be compatible with currently commercial applications such as laser hair removal. As a significant result, a possible methodology is presented for the diagnosis of the skin cancer known as basal cell carcinoma. Finally, a chapter is devoted to future work based on real experiments and the cost that carrying them out would entail. The annexes deal, on the one hand, with the common thread of the whole thesis, the laser, covering its applications and its safe use and, on the other, with the absorption and scattering coefficients used in our simulations: the first condenses the main characteristics of laser radiation from the point of view of its generation, the second presents safety in its use, and the third contains our own tables, whose parameters are those used in the simulation chapter. Although this thesis does not fit the canonical model of a doctoral thesis, the reader will find interwoven in it the structure common to all theses and research projects: a state-of-the-art section with pedagogical examples to aid understanding, together with the objectives (chapters 1 to 4); a chapter subdivided into materials and methods, and results and discussion (chapter 5 and its subsections); and a closing outlook on the future work arising from the thesis (chapter 6).
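Although the abstract does not name the specific codes, laser transport in turbid media of this kind is normally simulated with Monte Carlo photon-transport methods (MCML-style random walks). The sketch below illustrates that family of techniques for a single homogeneous slab; the coefficients mu_a, mu_s and the Henyey-Greenstein anisotropy g are illustrative values, not parameters from the thesis.

```python
import math
import random

def sample_hg(g):
    """Sample the cosine of the polar scattering angle from the
    Henyey-Greenstein phase function with anisotropy factor g."""
    if abs(g) < 1e-6:
        return 2.0 * random.random() - 1.0          # isotropic limit
    s = (1.0 - g * g) / (1.0 - g + 2.0 * g * random.random())
    return (1.0 + g * g - s * s) / (2.0 * g)

def simulate_slab(n_photons, mu_a, mu_s, g, thickness):
    """Random walk of photon packets through a homogeneous slab.
    mu_a, mu_s: absorption / scattering coefficients (1/cm), thickness in cm.
    Returns the fractions (absorbed, reflected, transmitted)."""
    mu_t = mu_a + mu_s
    absorbed = reflected = transmitted = 0.0
    for _ in range(n_photons):
        ux, uy, uz = 0.0, 0.0, 1.0                  # launched along +z
        z, weight = 0.0, 1.0
        while weight > 1e-3:
            step = -math.log(random.random() or 1e-12) / mu_t
            z += step * uz
            if z < 0.0:                             # escaped back through the surface
                reflected += weight
                break
            if z > thickness:                       # escaped through the far side
                transmitted += weight
                break
            absorbed += weight * mu_a / mu_t        # deposit the absorbed fraction
            weight *= mu_s / mu_t
            ct = sample_hg(g)                       # new direction: HG polar angle,
            st = math.sqrt(max(0.0, 1.0 - ct * ct)) # uniform azimuth
            phi = 2.0 * math.pi * random.random()
            if abs(uz) > 0.99999:
                ux, uy, uz = st * math.cos(phi), st * math.sin(phi), ct * (1.0 if uz > 0 else -1.0)
            else:
                d = math.sqrt(1.0 - uz * uz)
                ux, uy, uz = (st * (ux * uz * math.cos(phi) - uy * math.sin(phi)) / d + ux * ct,
                              st * (uy * uz * math.cos(phi) + ux * math.sin(phi)) / d + uy * ct,
                              -d * st * math.cos(phi) + uz * ct)
        else:
            absorbed += weight                      # residual weight of a terminated packet
    return absorbed / n_photons, reflected / n_photons, transmitted / n_photons

random.seed(1)
a, r, t = simulate_slab(5000, mu_a=0.1, mu_s=10.0, g=0.9, thickness=1.0)
print(f"absorbed={a:.3f} reflected={r:.3f} transmitted={t:.3f}")
```

Real tissue models add layers, refractive-index mismatches at the boundaries and Russian-roulette termination, but the weight-deposition scheme is the same.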
Abstract:
Biofilm treatments were among the first biological treatments applied to wastewater. They offer important advantages over suspended-growth activated sludge; however, controlling biofilm processes is difficult, and so is modelling them. The theoretical basis of biofilm behaviour began to be developed mainly from the 1980s onwards. Because the process is complex and its governing equations are hard to solve, these conceptualizations were regarded for years as mathematical exercises rather than as design and simulation tools. Reactor designs were based on pilot-plant experience or on the empirical behaviour of particular plants; the design equations were regressions of empirical data, and their applicability was confined to the particular conditions of the plant the data came from. The result was a wide variety of empirical equations for each type of reactor. During the 1990s, medical research focused on how biofilms form, with the ultimate goal of eliminating them. Thanks to new laboratory techniques that made it possible to study the interior of biofilms, and to the growing capacity of computers, the simulation of biofilm behaviour gained new momentum in that decade. The development of one type of biofilm, granular sludge, under aerobic conditions while simultaneously carrying out nutrient-removal processes has recently been patented; this patent has received numerous international awards, such as the European Invention Award (2012). In 1995 it was discovered that certain bacteria can carry out a new nitrogen-removal process, named Anammox. This process has the potential to bring substantial improvements in removal performance and in energy consumption. Over the last ten years, a family of so-called "innovative" nutrient-removal treatments has been developed. Since Anammox bacteria cannot establish themselves in conventional activated sludge, biofilm cultures are normally used. Research has concentrated on developing these innovative processes in biofilm systems, in particular granular sludge, MBBR and IFAS, in order to establish the conditions under which they can operate stably. Many companies and organizations are seeking a second patent. A central question in the development of these processes is the correct selection of the environmental and operating conditions under which certain bacteria displace others inside the biofilm. The design of biofilm plants running conventional processes has normally relied on empirical and semi-empirical methods. However, the advanced selection criteria applied in the innovative nitrogen-removal treatments, together with the complexity of substrate transport and biomass growth within biofilms, make modelling tools necessary in order to draw conclusions that are not self-evident.
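The complexity of substrate transport and biomass growth mentioned above can be made concrete with the classic one-dimensional biofilm model: Fickian diffusion of substrate balanced against Monod uptake, D·S'' = q_max·X·S/(Ks+S). The finite-difference sketch below solves it for an assumed set of round-number coefficients (not values from any thesis) and shows the diffusion limitation that makes biofilm reactors hard to design from bulk concentrations alone.

```python
# 1-D steady-state substrate balance in a biofilm (illustrative parameters):
#   D * d2S/dz2 = q_max * X * S / (Ks + S)
# with S = S_bulk at the biofilm/liquid interface and zero flux at the substratum.
D      = 1.0e-9    # substrate diffusivity inside the biofilm, m^2/s (assumed)
q_max  = 1.0e-4    # maximum specific uptake rate, gS/gX/s (assumed)
X      = 1.0e4     # biomass density, g/m^3 (assumed)
Ks     = 2.0       # Monod half-saturation constant, g/m^3 (assumed)
S_bulk = 8.0       # bulk-liquid substrate concentration, g/m^3 (assumed)
L      = 300e-6    # biofilm thickness, m
n      = 100
dz     = L / (n - 1)

S = [S_bulk] * n
for _ in range(20000):                 # Gauss-Seidel relaxation to steady state
    for i in range(1, n - 1):
        rate = q_max * X * S[i] / (Ks + S[i])        # Monod uptake, g/m^3/s
        S[i] = max(0.0, 0.5 * (S[i - 1] + S[i + 1] - dz * dz * rate / D))
    S[-1] = S[-2]                      # zero-flux condition at the substratum

print(f"substrate at the substratum: {S[-1]:.3f} g/m^3 (bulk: {S_bulk} g/m^3)")
```

With these parameters the substrate is almost fully depleted before reaching the substratum, which is exactly the kind of non-obvious gradient that bulk design equations cannot capture.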
Abstract:
The present research is framed within the project MODIFICA (MODelo predictivo - edIFIcios - Isla de Calor Urbana), aimed at developing a predictive model of dwelling energy performance under the urban heat island effect, to be applied both to the evaluation of the real energy demand and consumption of dwellings and to the selection of energy retrofitting strategies. It is funded by the Programa de I+D+i orientada a los retos de la sociedad 'Retos Investigación' 2013. The scope of our predictive model is defined by the urban heat island (UHI) effect of the urban structures that make up the city of Madrid. In particular, we focus on homogeneous areas: urban structures sharing the same urban and building characteristics. The data sources for the definition of these homogeneous areas were provided by previous research on the UHI of Madrid. The objective is a critical analysis of the climate records used by energy simulation tools, whose data come from weather stations located in areas decontextualized from the actual urban environment, where the thermal conditions can differ by up to 6 °C. On this basis, we intend to develop a new predictive model of building consumption and demand as a function of location, urban structure and the associated UHI, improving future energy retrofitting interventions.
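As a minimal illustration of why the choice of climate record matters, the hypothetical example below applies an assumed hourly UHI offset (of the order of the up-to-6 °C difference cited above) to station temperatures and compares a crude cooling-demand proxy. All numbers are invented for illustration; they are not outputs of the MODIFICA model.

```python
def cooling_degree_hours(temps, base=26.0):
    """Crude cooling-demand proxy: sum of hourly excesses over a comfort base (degC*h)."""
    return sum(max(0.0, t - base) for t in temps)

# Hourly dry-bulb temperatures from a rural/airport weather station (degC, illustrative)
station = [24.0, 23.5, 23.0, 22.8, 23.4, 26.0, 29.0, 31.0]

# Assumed UHI intensity for one homogeneous urban area (degC): strongest at night,
# close to zero around midday; peaks of up to ~6 degC are reported for Madrid.
uhi = [4.5, 4.8, 5.1, 5.3, 5.0, 3.2, 1.0, 0.4]

urban = [t + d for t, d in zip(station, uhi)]
print(cooling_degree_hours(station), cooling_degree_hours(urban))
```

Even this toy correction roughly triples the demand proxy, which is the kind of bias a UHI-aware predictive model is meant to remove.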
Abstract:
The load calculation of floating offshore wind turbines requires time-domain simulation tools that take into account all the phenomena affecting the system: aerodynamics, structural dynamics, hydrodynamics, control strategies and mooring-line dynamics. All these effects are coupled and influence one another. Integrated tools are used to compute the ultimate and fatigue loads from which the structural dimensions of the different turbine components are derived; an accurate load calculation therefore has a major influence on component optimization and on the final cost of the floating wind turbine. The mooring system in particular has a strong impact on the global dynamics of the system. Many integrated codes for the simulation of floating wind turbines use simplified models that neglect the dynamic effects of the mooring lines. An accurate simulation of the mooring lines within the integrated model can be essential for obtaining reliable results for the system dynamics and for the load levels in the different components; however, the impact of including mooring dynamics in the integrated simulation and on the loads had not yet been rigorously quantified. The main objective of this research is to develop an accurate dynamic model for the simulation of mooring lines, to validate it against wave-tank measurements, and to integrate it into a simulation code for floating wind turbines. This experimentally validated tool is then used to quantify the impact that a dynamic mooring-line model has on the computation of the fatigue and ultimate loads of floating wind turbines, in comparison with a quasi-static model. This information will be very useful to future designers when deciding which mooring-line model is adequate, depending on the platform type and the expected results. The dynamic mooring-line code developed in this research is based on the Finite Element Method, adopting a lumped-mass formulation to increase its computational efficiency. The validation experiments were carried out in the wave tank of the École Centrale de Nantes (ECN), France, and consisted of submerging a chain with one end anchored to the bottom of the tank and exciting the suspended end with harmonic motions of different periods. The code proved able to predict the tension and the motions at several positions along the length of the line with high accuracy. The results showed the importance of capturing mooring-line dynamics for the prediction of tension, especially under high-frequency motions. Finally, the code was used in an exhaustive assessment of the effect of mooring-line dynamics on the ultimate and fatigue loads of different floating wind turbine concepts. The loads were computed for three platform typologies (semisubmersible, spar-buoy and tension leg platform) and compared with the loads obtained using a quasi-static mooring model. More than 20,000 load cases defined by the IEC 61400-3 standard were launched and post-processed, fulfilling all the requirements that a certification entity would impose on an industrial designer of floating wind turbines. The results showed that the impact of mooring-line dynamics on both fatigue and ultimate loads increases for elements located closer to the platform: the blade and shaft loads are only slightly modified by the line dynamics; the tower-base loads can change significantly depending on the platform type; and the tension in the mooring lines depends strongly on the line dynamics, in both fatigue and ultimate terms, for all the platform concepts evaluated.
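The quasi-static baseline used in the comparison treats each mooring line, at every time step, as a catenary in static equilibrium, so line inertia and hydrodynamic drag are ignored. A minimal sketch of that baseline for a fully suspended, inextensible line (illustrative parameters, not the thesis test cases):

```python
import math

def catenary_fairlead_tension(x_span, z_span, w):
    """Quasi-static mooring line: solve for the horizontal tension H of an
    inextensible catenary whose tangent is horizontal at the anchor, i.e.
        z_span = (H/w) * (cosh(w * x_span / H) - 1).
    x_span, z_span in m, submerged weight w in N/m.
    Returns (H, fairlead tension, suspended length)."""
    lo, hi = 1.0e4, 1.0e9                   # bracket for H in newtons (assumed to contain the root)
    for _ in range(200):                    # bisection: z(H) decreases monotonically with H
        H = 0.5 * (lo + hi)
        a = H / w
        z = a * (math.cosh(x_span / a) - 1.0)
        if z > z_span:
            lo = H                          # too much sag -> more tension needed
        else:
            hi = H
    a = H / w
    suspended = a * math.sinh(x_span / a)   # arc length of the hanging line
    t_top = H + w * z_span                  # total tension at the fairlead
    return H, t_top, suspended

# Illustrative line: 600 m horizontal span, fairlead 150 m above the anchor,
# submerged weight 1500 N/m (round numbers chosen for the example).
H, t_top, length = catenary_fairlead_tension(600.0, 150.0, 1500.0)
print(f"H = {H/1e6:.2f} MN, fairlead tension = {t_top/1e6:.2f} MN, "
      f"suspended length = {length:.1f} m")
```

A lumped-mass dynamic model instead discretizes the line into point masses connected by springs and integrates their equations of motion in time, which is what allows it to capture the tension peaks under high-frequency fairlead motion reported above.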
Abstract:
La presente Tesis Doctoral evalúa la contribución de una fachada activa, constituida por acristalamientos con circulación de agua, al rendimiento energético del edificio. Con especial énfasis en la baja afección sobre su imagen, su integración ha de favorecer la calificación del edificio con el futuro estándar de Edificio de Consumo de Energía Casi Nulo (EECN). El propósito consiste en cuantificar su aportación a limitar la demanda de climatización, como solución de fachada transparente acorde a las normas de la energía de 2020. En el primer capítulo se introduce el planteamiento del problema. En el segundo capítulo se desarrollan la hipótesis y el objetivo fundamental de la investigación. Para tal fin, en el tercer capítulo se revisa el estado del arte de la tecnología y de la investigación científica, mediante el análisis de la literatura de referencia. Se comparan patentes, prototipos, sistemas comerciales asimilables, investigaciones en curso en universidades, y proyectos de investigación y desarrollo, sobre envolventes que incorporan acristalamientos con circulación de agua. El método experimental, expuesto en el cuarto capítulo, acomete el diseño, la fabricación y la monitorización de un prototipo expuesto, durante ciclos de ensayos, a las condiciones climáticas de Madrid. Esta fase ha permitido adquirir información precisa sobre el rendimiento del acristalamiento en cada orientación de incidencia solar, en las distintas estaciones del año. En paralelo, se aborda el desarrollo de modelos teóricos que, mediante su asimilación a soluciones multicapa caracterizadas en las herramientas de simulación EnergyPlus e IDA-ICE (IDA Indoor Climate and Energy), reproducen el efecto experimental. En el quinto capítulo se discuten los resultados experimentales y teóricos, y se analiza la respuesta del acristalamiento asociado a un determinado volumen y temperatura del agua. Se calcula la eficiencia en la captación de la radiación y, mediante la comparativa con un acristalamiento convencional, se determina la reducción de las ganancias solares y las pérdidas de energía. Se compara el rendimiento del acristalamiento, obtenido experimentalmente, con el ofrecido por paneles solares fototérmicos disponibles en el mercado. Mediante la traslación de los resultados experimentales a casos de células de tamaño habitable, se cuantifica la afección del acristalamiento sobre el consumo en refrigeración y calefacción. Diferenciando cada caso por su composición constructiva y orientación, se extraen conclusiones sobre la reducción del gasto en climatización, en condiciones de bienestar. Posteriormente, se evalúa el ahorro de su incorporación en un recinto existente, de construcción ligera, localizado en la Escuela de Arquitectura de la Universidad Politécnica de Madrid (UPM). Mediante el planteamiento de escenarios de rehabilitación energética, se estima su compatibilidad con un sistema de climatización mediante bomba de calor y extracción geotérmica. Se describe el funcionamiento del sistema, desde la perspectiva de la operación conjunta de los acristalamientos activos y el intercambio geotérmico, en nuestro clima. Mediante la parametrización de sus funciones, se estima el beneficio adicional de su integración, a partir de la mejora del rendimiento de la bomba de calor COP (Coefficient of Performance) en calefacción, y de la eficiencia EER (Energy Efficiency Ratio) en refrigeración. En el recinto de la ETSAM se ha analizado la contribución de la fachada activa a su calificación como Edificio de Consumo de Energía Casi Nulo, y se ha estudiado la rentabilidad económica del sistema. En el sexto capítulo se exponen las conclusiones de la investigación.
A la fecha, el sistema supone alta inversión inicial, no obstante, genera elevada eficiencia con bajo impacto arquitectónico, reduciéndose los costes operativos, y el dimensionado de los sistemas de producción, de mayor afección sobre el edificio. Mediante la envolvente activa con suministro geotérmico no se condena la superficie de cubierta, no se ocupa volumen útil por la presencia de equipos emisores, y no se reduce la superficie o altura útil a base de reforzar los aislamientos. Tras su discusión, se considera una alternativa de valor en procesos de diseño y construcción de Edificios de Energía Casi Nulo. Se proponen líneas de futuras investigación cuyo propósito sea el conocimiento de la tecnología de los acristalamientos activos. En el último capítulo se presentan las actividades de difusión de la investigación. Adicionalmente se ha proporcionado una mejora tecnológica a las fachadas activas existentes, que ha derivado en la solicitud de una patente, actualmente en tramitación. ABSTRACT This Thesis evaluates the contribution of an active water flow glazing façade on the energy performance of buildings. Special emphasis is made on the low visual impact on its image, and the active glazing implementation has to encourage the qualification of the building with the future standard of Nearly Zero Energy Building (nZEB). The purpose is to quantify the façade system contribution to limit air conditioning demand, resulting in a transparent façade solution according to the 2020 energy legislation. An initial approach to the problem is presented in first chapter. The second chapter develops the hypothesis and the main objective of the research. To achieve this purpose, the third chapter reviews the state of the art of the technology and scientific research, through the analysis of reference literature. 
Patents, prototypes, assimilable commercial systems, ongoing research in other universities, and finally research and development projects incorporating active fluid flow glazing are compared. The experimental method, presented in fourth chapter, undertakes the design, manufacture and monitoring of a water flow glazing prototype exposed during test cycles to weather conditions in Madrid. This phase allowed the acquisition of accurate information on the performance of water flow glazing on each orientation of solar incidence, during different seasons. In parallel, the development of theoretical models is addressed which, through the assimilation to multilayer solutions characterized in the simulation tools EnergyPlus and IDA-Indoor Climate and Energy, reproduce the experimental effect. Fifth chapter discusses experimental and theoretical results focused to the analysis of the active glazing behavior, associated with a specific volume and water flow temperature. The efficiency on harvesting incident solar radiation is calculated, and, by comparison with a conventional glazing, the reduction of solar gains and energy losses are determined. The experimental performance of fluid flow glazing against the one offered by photothermal solar panels available on the market are compared. By translating the experimental and theoretical results to cases of full-size cells, the reduction in cooling and heating consumption achieved by active fluid glazing is quantified. The reduction of energy costs to achieve comfort conditions is calculated, differentiating each case by its whole construction composition and orientation. Subsequently, the saving of the implementation of the system on an existing lightweight construction enclosure, located in the School of Architecture at the Polytechnic University of Madrid (UPM), is then calculated. 
The compatibility between the active fluid flow glazing and a heat pump with geothermal heat supply system is estimated through the approach of different energy renovation scenarios. The overall system operation is described, from the perspective of active glazing and geothermal heat exchange combined operation, in our climate. By parameterization of its functions, the added benefit of its integration it is discussed, particularly from the improvement of the heat pump performance COP (Coefficient of Performance) in heating and efficiency EER (Energy Efficiency Ratio) in cooling. In the case study of the enclosure in the School of Architecture, the contribution of the active glazing façade in qualifying the enclosure as nearly Zero Energy Building has been analyzed, and the feasibility and profitability of the system are studied. The sixth chapter sets the conclusions of the investigation. To date, the system may require high initial investment; however, high efficiency with low architectural impact is generated. Operational costs are highly reduced as well as the size and complexity of the energy production systems, which normally have huge visual impact on buildings. By the active façade with geothermal supply, the deck area it is not condemned. Useful volume is not consumed by the presence of air-conditioning equipment. Useful surface and room height are not reduced by insulation reinforcement. After discussion, water flow glazing is considered a potential value alternative in nZEB design and construction processes. Finally, this chapter proposes future research lines aiming to increase the knowledge of active water flow glazing technology. The last chapter presents research dissemination activities. Additionally, a technological improvement to existing active facades has been developed, which has resulted in a patent application, currently in handling process.
Resumo:
In recent decades, accumulated clinical evidence has proven that intra-operative radiation therapy (IORT) is a very valuable technique. In spite of that, planning technology has not evolved since its conception, remaining outdated in comparison with the current state of the art in other radiotherapy techniques and thereby slowing the adoption of IORT. RADIANCE is an IORT planning system, CE and FDA certified, developed by a consortium of companies, hospitals and universities to overcome this technological backwardness. RADIANCE provides all basic radiotherapy planning tools, specifically adapted to IORT. These include, but are not limited to, image visualization, contouring, dose calculation algorithms (Pencil Beam (PB) and Monte Carlo (MC)), DVH calculation and reporting. Other new tools, such as surgical simulation tools, have been developed to deal with the specific conditions of the technique. Planning with preoperative images (pre-planning) has been evaluated, and the validity of the system has been proven in terms of documentation, treatment preparation and learning, as well as improvement of the communication process between surgeons and radiation oncologists (ROs). Preliminary studies on navigation systems suggest benefits in helping the specialist apply the pre-plan accurately and safely during treatment, updating the plan as needed; improvements in the usability and workflow of this kind of system are needed to make it more practical. Preliminary studies on intraoperative imaging indicate that it could provide an improved anatomy for the dose computation, to be compared with the previous pre-plan, although not all devices on the market offer characteristics suitable for this purpose. The DICOM RT standard for radiotherapy information exchange has been updated to cover the particularities of IORT, enabling dose summation with external radiotherapy.
The effect of this planning technology on the global risk of the IORT technique has been assessed and documented as part of a failure mode and effects analysis (FMEA). Given these technological innovations and their clinical evaluation (including risk analysis), we consider RADIANCE a very valuable tool for the specialist, one that meets the demands of professional societies (AAPM, ICRU, EURATOM) for current radiotherapy procedures.
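An FMEA like the one referenced above ranks each failure mode by a risk priority number. The sketch below is a minimal generic illustration, not a detail of the RADIANCE study; the conventional 1-10 severity/occurrence/detection scales are an assumption.

```python
def rpn(severity, occurrence, detection):
    """FMEA risk priority number on the conventional 1-10 scales:
    RPN = severity x occurrence x detection (range 1 to 1000)."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores are rated on a 1-10 scale")
    return severity * occurrence * detection
```

Failure modes are then prioritised for mitigation in decreasing order of RPN.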
Resumo:
Microscopic traffic-simulation tools are increasingly being applied to evaluate the impacts of a wide variety of intelligent transport systems (ITS) applications and other dynamic problems that are difficult to solve using traditional analytical models. The accuracy of a traffic-simulation system depends highly on the quality of the traffic-flow model at its core, the two most critical components being the car-following and lane-changing models. This paper presents findings from a comparative evaluation of car-following behavior in a number of traffic simulators: the Advanced Interactive Microscopic Simulator for Urban and Non-urban Networks (AIMSUN), Parallel Microscopic Simulation (PARAMICS), and Verkehr In Städten – SIMulation (VISSIM). The car-following algorithms used in these simulators were developed from a variety of theoretical backgrounds and are reported to have been calibrated on a number of different data sets, yet very few independent studies have attempted to evaluate the performance of the underlying algorithms on the same data set. The results reported in this study are based on a car-following experiment that used instrumented vehicles to record the speed of, and relative distance between, follower and leader vehicles on a one-lane road. The experiment was replicated in each tool, and the simulated car-following behavior was compared to the field data using a number of error tests. The results showed lower error values for the Gipps-based model implemented in AIMSUN and similar error values for the psychophysical spacing models used in VISSIM and PARAMICS. A qualitative drift and goal-seeking behavior test, which essentially shows how the distance headway between leader and follower vehicles should oscillate around a stable distance, also confirmed these findings.
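As context for the Gipps-based models mentioned above, the update below is a minimal sketch of Gipps' car-following rule: the follower's next speed is the minimum of a free-flow acceleration term and a safe-braking term. All parameter values are illustrative assumptions, not calibrated values from the study.

```python
import math

def gipps_speed(v, v_lead, gap, T=1.0, V=20.0, a=1.7, b=-3.0, b_hat=-3.0):
    """One step of Gipps' car-following rule.

    v, v_lead : current follower / leader speeds (m/s)
    gap       : net spacing to the leader (m)
    T         : reaction time (s); V: desired speed; a: max acceleration;
    b, b_hat  : follower's braking rate and its estimate of the leader's
                (both negative). All values here are illustrative."""
    # Free-flow branch: accelerate towards the desired speed V
    v_acc = v + 2.5 * a * T * (1 - v / V) * math.sqrt(0.025 + v / V)
    # Safe-braking branch: the fastest speed from which the follower can
    # still stop safely if the leader brakes at the estimated rate b_hat
    term = b * b * T * T - b * (2 * gap - v * T - v_lead ** 2 / b_hat)
    v_safe = b * T + math.sqrt(max(term, 0.0))
    return max(0.0, min(v_acc, v_safe))
```

With a wide gap the free-flow branch governs; with a stopped leader close ahead, the safe-braking branch forces the follower to a halt.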
Resumo:
Today, portable devices have become the driving force of the consumer market, and new challenges are emerging to increase their performance while maintaining a reasonable battery life. The digital domain is the best solution for implementing signal-processing functions, thanks to the scalability of CMOS technology, which pushes integration to the sub-micrometre level. Indeed, the reduction of the supply voltage introduces severe limitations on achieving an acceptable dynamic range in the analog domain. Lower cost, lower power consumption, higher yield and greater reconfigurability are the main advantages of signal processing in the digital domain. For more than a decade, several purely analog functions have been moved into the digital domain. This means that analog-to-digital converters (ADCs) are becoming the key components in many electronic systems. They are, in fact, the bridge between the analog and digital worlds, and consequently their efficiency and accuracy often determine the overall performance of the system. Sigma-Delta converters are the key building block for interfaces in high-resolution, low-power mixed-signal circuits. Modeling and simulation tools are effective and essential instruments in the design flow. Although transistor-level simulations give more precise and accurate results, this method is extremely time-consuming because of the oversampling nature of this kind of converter. For this reason, high-level behavioral models of the modulator are essential for the designer to run fast simulations that identify the specifications the converter must meet to achieve the required performance.
The goal of this thesis is to model the behavior of the Sigma-Delta modulator, taking into account several non-idealities such as the integrator dynamics and its thermal noise. Transistor-level simulation results and experimental data show that the proposed behavioral model is precise and accurate.
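A behavioral model of this kind can be sketched in a few lines. The fragment below is a generic illustration, not the thesis's actual model: a first-order discrete-time Sigma-Delta modulator with two of the non-idealities mentioned, integrator leakage (finite DC gain) and input-referred thermal noise.

```python
import random

def sigma_delta(x, leak=1.0, noise_rms=0.0, seed=0):
    """First-order discrete-time Sigma-Delta modulator (behavioral model).

    leak < 1.0 models the finite DC gain of a real integrator;
    noise_rms models input-referred thermal (kT/C) noise.
    Returns the +/-1 bitstream; its running average approximates the input."""
    rng = random.Random(seed)
    integ, fb, bits = 0.0, 0.0, []
    for sample in x:
        noisy = sample + rng.gauss(0.0, noise_rms)  # thermal-noise non-ideality
        integ = leak * integ + (noisy - fb)         # (leaky) integrator
        bit = 1.0 if integ >= 0.0 else -1.0         # 1-bit quantizer
        bits.append(bit)
        fb = bit                                    # 1-bit DAC feedback
    return bits
```

Averaging the bitstream for a DC input of 0.5 returns roughly 0.5; lowering `leak` or raising `noise_rms` degrades the resolution, which is exactly the kind of trade-off a fast behavioral model lets the designer explore before committing to transistor-level simulation.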
Resumo:
In analysing manufacturing systems, for either design or operational reasons, failure to account for the potentially significant dynamics could produce invalid results. There are many analysis techniques that can be used; however, simulation is unique in its ability to assess detailed, dynamic behaviour. The use of simulation to analyse manufacturing systems would therefore seem appropriate, if not essential. Many simulation software products are available, but their ease of use and scope of application vary greatly. This is illustrated at one extreme by simulators, which offer rapid but limited application, and at the other by simulation languages, which are extremely flexible but tedious to code. Given that a typical manufacturing engineer does not possess in-depth programming and simulation skills, the use of simulators rather than simulation languages would seem the more appropriate choice. Whilst simulators offer ease of use, their limited functionality may preclude their use in many applications. The construction of current simulators makes it difficult to amend or extend the functionality of the system to meet new challenges. Some simulators could even become obsolete as users demand modelling functionality that reflects the latest manufacturing system design and operation concepts. This thesis examines the deficiencies in current simulation tools and considers whether they can be overcome by the application of object-oriented principles. Object-oriented techniques have gained in popularity in recent years and are seen as having the potential to overcome many of the problems traditionally associated with software construction. There are a number of key concepts that are exploited in the work described in this thesis: the use of object-oriented techniques as a framework for abstracting engineering concepts into a simulation tool, and the ability to reuse and extend object-oriented software.
It is argued that current object-oriented simulation tools are deficient and that, in designing such tools, object-oriented techniques should be used not just for the creation of individual simulation objects but for the creation of the complete software. This results in the ability to construct an easy-to-use simulator that is not limited by its initial functionality. The thesis presents the design of an object-oriented, data-driven simulator which can be freely extended. Discussion and work are focused on discrete parts manufacture. The system developed retains the ease of use typical of data-driven simulators whilst removing any limitation on its potential range of applications. Reference is given to additions made to the simulator by other developers not involved in the original software development. Particular emphasis is put on the requirements of the manufacturing engineer and the need for the engineer to carry out dynamic evaluations.
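The extension principle described above, an event core that knows nothing about the resources built on top of it, can be sketched as follows. This is a generic illustration of the pattern, not the thesis's simulator; all class and parameter names are invented for the example.

```python
import heapq

class Simulator:
    """Minimal discrete-event core: a clock and a time-ordered event queue."""
    def __init__(self):
        self.now, self._queue, self._seq = 0.0, [], 0

    def schedule(self, delay, action):
        # The sequence number breaks ties so actions are never compared.
        heapq.heappush(self._queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, action = heapq.heappop(self._queue)
            action()

class Machine:
    """Base resource class: subclass and override process() to add new
    behaviour (breakdowns, batching, ...) without touching the core --
    the object-oriented extension point."""
    def __init__(self, sim, cycle_time):
        self.sim, self.cycle_time, self.completed = sim, cycle_time, 0

    def start(self):
        self.sim.schedule(self.cycle_time, self.process)

    def process(self):
        self.completed += 1                            # finish one part
        self.sim.schedule(self.cycle_time, self.process)  # start the next
```

Because `Simulator` never inspects what a `Machine` does, a later developer can add new resource types by subclassing, which is the reuse-and-extend property the thesis argues for.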
Resumo:
Web-based distributed modelling architectures are gaining increasing recognition as potentially useful tools to build holistic environmental models, combining individual components in complex workflows. However, existing web-based modelling frameworks currently offer no support for managing uncertainty. On the other hand, the rich array of modelling frameworks and simulation tools which support uncertainty propagation in complex and chained models typically lack the benefits of web-based solutions such as ready publication, discoverability and easy access. In this article we describe the developments within the UncertWeb project which are designed to provide uncertainty support in the context of the proposed ‘Model Web’. We give an overview of uncertainty in modelling, review uncertainty management in existing modelling frameworks and consider the semantic and interoperability issues raised by integrated modelling. We describe the scope and architecture required to support uncertainty management as developed in UncertWeb. This includes tools which support elicitation, aggregation/disaggregation, visualisation and uncertainty/sensitivity analysis. We conclude by highlighting areas that require further research and development in UncertWeb, such as model calibration and inference within complex environmental models.
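The uncertainty propagation that such frameworks support can be illustrated with a toy Monte Carlo chain: sample the uncertain input, run each realisation through the chained models, and summarise the output distribution. This is a generic sketch, not UncertWeb's actual API; the model chain and input distribution below are invented for the example.

```python
import random
import statistics

def propagate(models, input_sampler, n=10000, seed=42):
    """Monte Carlo uncertainty propagation through a chain of models.

    models        : callables applied in sequence (the 'workflow')
    input_sampler : draws one realisation of the uncertain input
    Returns the mean and standard deviation of the output distribution."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n):
        value = input_sampler(rng)
        for model in models:       # push the realisation through the chain
            value = model(value)
        outputs.append(value)
    return statistics.mean(outputs), statistics.stdev(outputs)
```

For a Gaussian input pushed through two linear models, the output mean and spread match the analytical result, which is a convenient sanity check before chaining genuinely nonlinear environmental models.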
Resumo:
Risk capital allocation in finance is important both theoretically and in practical applications. How should the risk of a given portfolio be shared among its sub-portfolios? How should capital reserves be set to cover outstanding risks, and how should those reserves be assigned to the business units? We use an axiomatic approach to analyse risk capital allocation, that is, we work by requiring basic properties. The starting point is the result of Csóka and Pintér [2010] that the axioms of coherent measures of risk are incompatible with certain fairness, incentive-compatibility and stability requirements of capital allocation. In this paper we examine these requirements using analytical and simulation tools, analysing both allocation methods used in practical applications and methods that are interesting from a theoretical point of view. The main conclusion is that the problem raised by Csóka and Pintér [2010] is also relevant in practice: it arises not only in theoretical analyses but as a frequently occurring practical problem. A further contribution of the paper is that its characterisation of the capital allocation methods examined helps practitioners choose among the different methods available.
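One allocation rule commonly examined in this literature is the Shapley value, which charges each unit its average marginal risk contribution over all orderings of the units. The sketch below is a generic illustration with a toy worst-case risk measure, not a method taken from the paper.

```python
from itertools import permutations

def shapley_allocation(units, risk):
    """Shapley-value capital allocation: each unit is charged its marginal
    risk contribution averaged over all orderings of the units.
    `risk` maps a list of units (a sub-portfolio) to its capital requirement."""
    units = list(units)
    alloc = {u: 0.0 for u in units}
    orders = list(permutations(units))
    for order in orders:
        coalition = []
        for u in order:
            before = risk(coalition)
            coalition.append(u)
            alloc[u] += risk(coalition) - before   # marginal contribution
    return {u: total / len(orders) for u, total in alloc.items()}
```

In the toy test, two units whose worst-case losses occur in different scenarios each receive half the total capital, and the allocations sum exactly to the risk of the full portfolio (the efficiency property).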
Resumo:
This work aims at modeling power consumption at the nodes of a Wireless Sensor Network (WSN). To do so, a finite state machine was implemented by means of the SystemC-AMS and Stateflow modeling and simulation tools. Communication data in a WSN were collected, and based on the collected data a simulation environment for power-consumption characterization, describing the operation of the network, was developed. Besides simulating power consumption, this environment also incorporates a discharge model for analyzing the battery charge level at any given moment. This analysis results in a graph illustrating the battery voltage variations as well as its state of charge (SOC). Finally, a case study of WSN power consumption analyzes the acquisition mode and network data communication. With this analysis, it is possible to make adjustments to the sensor nodes in order to reduce the total power consumption of the network.
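The coupling between a node's state machine and a battery discharge model can be sketched as below. This is a deliberately simplified illustration (coulomb counting plus a linear voltage-vs-SOC curve), not the SystemC-AMS/Stateflow model from the work; the state names and currents are invented for the example.

```python
def simulate_battery(schedule, capacity_mah=1000.0, v_full=4.2, v_empty=3.0):
    """Drain a battery according to a node's state schedule.

    schedule : (state_name, current_mA, duration_h) entries produced by the
               node's finite state machine (sleep, acquire, transmit, ...)
    The voltage is approximated as linear in the state of charge, a
    simplification of a real discharge curve.
    Returns (state_of_charge, battery_voltage)."""
    charge = capacity_mah
    for _state, current_ma, hours in schedule:
        charge = max(0.0, charge - current_ma * hours)  # coulomb counting
    soc = charge / capacity_mah
    voltage = v_empty + (v_full - v_empty) * soc
    return soc, voltage
```

Evaluating the same schedule with different per-state currents is precisely the kind of what-if adjustment the abstract describes for reducing the network's total power consumption.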
Resumo:
This report is a review of additive and subtractive manufacturing techniques. Additive manufacturing has resided largely in the prototyping realm, where it offers methods of producing complex freeform solid objects directly from a computer model without part-specific tooling or knowledge. These technologies are evolving steadily, however, and are beginning to encompass related systems of material addition, subtraction, assembly, and insertion of components made by other processes. Furthermore, the various additive processes are starting to evolve from narrowly defined rapid prototyping into rapid manufacturing techniques for mass-customized products. Taking this idea far enough, several years hence a radical restructuring of manufacturing could take place: manufacturing itself would move from a resource base to a knowledge base, and from mass production of single-use products to mass-customized, high-value, life-cycle products. To date, the majority of research and development has focused on advanced development of existing technologies, improving processing performance, materials, modelling and simulation tools, and design tools to enable the transition from prototyping to the manufacturing of end-use parts.
Resumo:
A RET network consists of a network of photo-active molecules called chromophores that can participate in inter-molecular energy transfer called resonance energy transfer (RET). RET networks are used in a variety of applications including cryptographic devices, storage systems, light-harvesting complexes, biological sensors, and molecular rulers. In this dissertation, we focus on creating a RET device called the closed-diffusive exciton valve (C-DEV), in which the input-to-output transfer function is controlled by an external energy source, much as in a semiconductor transistor such as the MOSFET. Due to their biocompatibility, molecular devices like the C-DEV can be used to introduce computing power into biological, organic, and aqueous environments such as living cells. Furthermore, the underlying physics of RET devices is stochastic in nature, making them suitable for stochastic computing, in which true random distribution generation is critical.
In order to determine a valid configuration of chromophores for the C-DEV, we developed a systematic process based on user-guided design space pruning techniques and built-in simulation tools. We show that our C-DEV is 15x better than C-DEVs designed using ad hoc methods that rely on limited data from prior experiments. We also show ways in which the C-DEV can be improved further and how different varieties of C-DEVs can be combined to form more complex logic circuits. Moreover, the systematic design process can be used to search for valid chromophore network configurations for a variety of RET applications.
We also describe a feasibility study for a technique used to control the orientation of chromophores attached to DNA. Being able to control the orientation can expand the design space for RET networks because it provides another parameter to tune their collective behavior. While results showed limited control over orientation, the analysis required the development of a mathematical model that can be used to determine the distribution of dipoles in a given sample of chromophore constructs. The model can be used to evaluate the feasibility of other potential orientation control techniques.
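The orientation dependence that makes dipole control attractive enters the RET rate through the orientation factor κ² of standard Förster theory. The function below is a generic illustration of that geometry, not the dissertation's specific model of the dipole distribution.

```python
import math

def kappa_squared(mu_donor, mu_acceptor, r_vec):
    """Orientation factor kappa^2 of the Foerster (RET) transfer rate:
    kappa = mu_D.mu_A - 3 (mu_D.r)(mu_A.r), with all vectors normalised."""
    def unit(v):
        norm = math.sqrt(sum(c * c for c in v))
        return tuple(c / norm for c in v)

    def dot(p, q):
        return sum(x * y for x, y in zip(p, q))

    d, a, r = unit(mu_donor), unit(mu_acceptor), unit(r_vec)
    kappa = dot(d, a) - 3.0 * dot(d, r) * dot(a, r)
    return kappa * kappa
```

Collinear head-to-tail dipoles give κ² = 4, parallel side-by-side dipoles give 1, and perpendicular dipoles give 0; an isotropic orientation distribution averages to the familiar 2/3. This spread is why controlling chromophore orientation adds a genuine tuning parameter for RET networks.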
Resumo:
Structural Health Monitoring (SHM) is an emerging area of research concerned with improving the maintainability and safety of aerospace, civil and mechanical infrastructures by means of monitoring and damage detection. Guided-wave structural testing is an approach to the health monitoring of plate-like structures that uses smart-material piezoelectric transducers. Among the many kinds of transducers, those with a beam-steering feature can perform more accurate surface interrogation. A frequency-steerable acoustic transducer (FSAT) is capable of beam steering by varying the input frequency, and can consequently detect and localize damage in structures. Guided-wave inspection is typically performed with phased arrays, which involve a large number of piezoelectric transducers, with attendant complexity and limitations. To overcome the weight penalty, complex circuitry and maintenance concerns associated with wiring a large number of transducers, new FSATs are proposed that present inherent directional capabilities when generating and sensing elastic waves. The first generation of spiral FSAT has two main limitations: first, waves are excited or sensed both in one direction and in the opposite one (180° ambiguity); second, only a relatively crude approximation of the desired directivity is attained. A second generation of spiral FSAT is proposed to overcome these limitations. Simulation tools become all the more important when a new idea is proposed and begins to be developed. The shaped-transducer concept, and especially the second generation of spiral FSAT, is a novel idea in guided-wave-based Structural Health Monitoring systems, so finding a suitable simulation tool is a necessity for developing the various design aspects of this innovative transducer. In this work, numerical simulation of the first and second generations of spiral FSAT has been conducted to demonstrate the directional capability of the excited guided waves in a plate-like structure.