10 results for Real and nominal effective exchange rates
at Universidad Politécnica de Madrid
Abstract:
Computed tomography (CT) is the reference imaging modality for the study of lung diseases and the pulmonary vasculature. General lung vessel segmentation has been explored in depth by the biomedical image processing community in recent years; however, the differentiation of arterial from venous irrigation is still an open problem. Indeed, the automatic separation of arterial and venous trees is considered one of the main remaining challenges in biomedical image processing. Artery-vein (AV) segmentation would allow the two irrigations to be studied separately, which would be valuable in many medical scenarios and in multiple pulmonary diseases or pathological states. Features such as the density, geometry, topology and size of blood vessels could be analyzed in diseases that involve remodeling of the pulmonary vasculature, potentially enabling the discovery of new specific biomarkers that remain hidden today. Differentiating arteries from veins could also help improve methods that process other pulmonary structures. Nevertheless, despite its evident usefulness, studying the effect of disease on the arterial and venous trees has so far been unfeasible: the extreme complexity of the pulmonary vascular trees makes a manual separation of both structures impractical in realistic time, further motivating the design of automatic or semi-automatic tools for the task. At the same time, the lack of properly segmented and labeled cases seriously limits the development of AV separation systems, which need reference images for both algorithm training and validation. The design of synthetic lung CT images could overcome these difficulties by providing a database of pseudo-realistic cases in a constrained and controlled scenario where every part of the image (including arteries and veins) is unequivocally differentiated. In this Ph.D. thesis we address both of these interrelated problems. First, we describe a complete framework to automatically generate computational CT phantoms of the human lung. Starting from biological and image-based a priori knowledge about the topology of, and relationships between, the pulmonary structures, the system generates synthetic airways, pulmonary arteries and veins using iterative growth methods, which are then merged into a simulated lung with realistic features. These synthetic cases, together with labeled real non-contrast CT datasets, have been used as the reference for the development of a fully automatic pulmonary AV segmentation/separation method. The approach comprises a generic vessel extraction stage using scale-space particles, followed by the AV classification of those particles with graph cuts (GC), based on artery/vein similarity scores obtained with a machine learning (ML) pre-classification step and on particle connectivity information.
The pulmonary phantoms were validated through visual inspection and quantitative measurements of intensity distributions, dispersion of structures and the relationship between arteries and airways, which show good correspondence between real and synthetic lungs. The evaluation of the AV segmentation algorithm, based on different strategies for assessing the accuracy of vessel particle classification, reveals accurate differentiation between arteries and veins in both real and synthetic cases, opening a wide range of possibilities for the clinical study of cardiopulmonary diseases and for the development of new methodologies and algorithms for pulmonary image analysis.
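The graph-cut step described above admits a compact illustration. The sketch below casts the particle labeling as a minimum s-t cut, with terminal capacities taken from hypothetical ML artery-likeness scores and pairwise capacities from particle connectivity; the function and data names are invented for the example, and this is not the authors' implementation.

```python
# Hypothetical sketch: artery/vein labeling of vessel particles as a min-cut
# problem. 'ml_score' (artery-likeness, 0..1) and 'edges' (connectivity)
# are assumed example inputs, not the thesis' data structures.
import networkx as nx

def label_particles(ml_score, edges, lam=2.0):
    """ml_score: {particle_id: P(artery)}; edges: [(i, j, strength)]."""
    G = nx.DiGraph()
    eps = 1e-6
    for p, s in ml_score.items():
        # Terminal capacities: cost of assigning the *other* label.
        G.add_edge("A", p, capacity=max(s, eps))        # cut if labeled vein
        G.add_edge(p, "V", capacity=max(1.0 - s, eps))  # cut if labeled artery
    for i, j, w in edges:
        # Smoothness term: connected particles prefer the same label.
        G.add_edge(i, j, capacity=lam * w)
        G.add_edge(j, i, capacity=lam * w)
    _, (artery_side, _) = nx.minimum_cut(G, "A", "V")
    return {p: ("artery" if p in artery_side else "vein") for p in ml_score}

# Toy usage: two chains of particles, each with one ambiguous member that
# the connectivity term pulls toward the label of its neighbors.
scores = {1: 0.9, 2: 0.8, 3: 0.55, 4: 0.1, 5: 0.2, 6: 0.45}
links = [(1, 2, 1.0), (2, 3, 1.0), (4, 5, 1.0), (5, 6, 1.0)]
print(label_particles(scores, links))
```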
Abstract:
The main objective of this paper is to review the state of the art of residential PV systems in Belgium through the analysis of operational data from 993 installations. Three main questions are posed: how much energy do these systems produce? What level of performance is associated with their production? Which key parameters most influence their quality? This work provides answers to these questions. An average commercial PV system, optimally oriented, produces a mean annual energy of 892 kWh/kWp. Taken as a whole, the orientation of the PV generators makes energy production about 6% lower than that of optimally oriented PV systems. The mean performance ratio is 78% and the mean performance index is 85%; that is to say, the energy produced by a typical PV system in Belgium is 15% lower than that produced by a very high quality PV system. Finally, on average, the real power of the PV modules falls 5% below the nominal power announced on the manufacturer's datasheet; differences between real and nominal power of up to 16% have been detected.
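The figures quoted above follow from the standard yield definitions (final yield over reference yield). As a hedged illustration, the snippet below recomputes a performance ratio close to the reported Belgian mean from made-up inputs; the irradiation value is an assumption for the example, not data from the paper.

```python
# Illustrative calculation of the figures discussed above; all inputs are
# invented examples, not the paper's dataset.

def performance_ratio(e_ac_kwh, p_nom_kwp, h_poa_kwh_m2, g_stc=1.0):
    """PR = final yield / reference yield.
    e_ac_kwh: annual AC energy; p_nom_kwp: nameplate power;
    h_poa_kwh_m2: annual in-plane irradiation (kWh/m^2); g_stc in kW/m^2."""
    y_final = e_ac_kwh / p_nom_kwp          # specific yield, kWh/kWp
    y_ref = h_poa_kwh_m2 / g_stc            # equivalent STC hours
    return y_final / y_ref

# A system near the reported Belgian mean: 892 kWh/kWp of specific yield and
# an assumed 1150 kWh/m^2 of annual in-plane irradiation.
pr = performance_ratio(e_ac_kwh=892 * 3.0, p_nom_kwp=3.0, h_poa_kwh_m2=1150)
print(f"PR = {pr:.2f}")                     # ~0.78, matching the reported mean

# Real vs. nominal module power: the reported 5% mean shortfall.
p_nominal = 250.0                            # W, hypothetical datasheet value
p_real = p_nominal * (1 - 0.05)
print(f"real power = {p_real:.0f} W ({(p_real/p_nominal - 1)*100:+.0f}%)")
```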
Abstract:
NPV is a static measure of project value that does not discriminate between levels of internal and external risk in project valuation. Given the characteristics of current investment projects, a much more complex model is needed: one that includes the value of flexibility and the different risk levels associated with variables subject to uncertainty (price, costs, exchange rates, grade and tonnage of the deposits, and cut-off grade, among many others). Few of these variables are correlated or can be treated uniformly. In this context, Real Options Valuation (ROV) arose more than a decade ago as a mainly theoretical model with the potential to account simultaneously for the risk associated with such variables. This paper reviews the literature on the application of Real Options Valuation in mining, noting the prior focus on external risks, and presents a case study in which ROV is applied to quantify the risk associated with mine planning.
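As a point of reference for how ROV captures the value of flexibility that NPV misses, the sketch below values a simple option to defer investment on a binomial lattice, a standard textbook building block rather than the model used in the paper; all numbers are illustrative.

```python
# A generic binomial-lattice valuation of a deferral option on a project:
# a standard textbook ROV building block, not the paper's model.
import math

def deferral_option(v0, invest, sigma, r, years, steps):
    """Value of the option to invest at any node (American call on V)."""
    dt = years / steps
    u = math.exp(sigma * math.sqrt(dt))      # up factor from volatility
    d = 1 / u
    p = (math.exp(r * dt) - d) / (u - d)     # risk-neutral probability
    disc = math.exp(-r * dt)
    # Option values at maturity: invest only if project value exceeds cost.
    vals = [max(v0 * u**j * d**(steps - j) - invest, 0.0)
            for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):       # backward induction
        vals = [max(disc * (p * vals[j + 1] + (1 - p) * vals[j]),
                    v0 * u**j * d**(i - j) - invest)   # continue vs. exercise
                for j in range(i + 1)]
    return vals[0]

# Illustrative numbers: static NPV = 100 - 105 < 0, so the NPV rule rejects
# the project, yet the option to wait has strictly positive value.
print(deferral_option(v0=100, invest=105, sigma=0.35, r=0.05,
                      years=3, steps=60))
```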
Abstract:
In recent decades, full electric and hybrid electric vehicles have emerged as an alternative to conventional cars, driven by environmental and economic factors. These vehicles are the result of considerable efforts to reduce the use of fossil fuels for vehicle propulsion. Sophisticated technologies such as hybrid and electric powertrains require careful study and optimization, and mathematical models play a key role at this point. Currently, many advanced mathematical analysis tools, as well as computer applications, have been built for vehicle simulation purposes. Given the great interest in hybrid and electric powertrains, along with the increasing importance of reliable computer-based models, the author decided to integrate both aspects in the research purpose of this work. Furthermore, this is one of the first final degree projects carried out at the ETSII (Higher Technical School of Industrial Engineers) covering the study of hybrid and electric propulsion systems. The present project is based on MBS3D 2.0, a specialized software package for the dynamic simulation of multibody systems developed at the UPM Institute of Automobile Research (INSIA). Automobiles are a clear example of the complex multibody systems present in nearly every field of engineering. The work presented here benefits from the availability of MBS3D, a program that has proven to be a very efficient tool with a highly developed underlying mathematical formulation. On this basis, the focus of this project is the extension of MBS3D so that it can perform dynamic simulations of hybrid and electric vehicle models. This requires the joint simulation of the mechanical model of the vehicle together with the model of the hybrid or electric powertrain. These sub-models belong to completely different physical domains: the powertrain consists of energy storage systems, electrical machines and power electronics connected to purely mechanical components (wheels, suspension, transmission, clutch…). The challenge is to create a global vehicle model that is valid for computer simulation. Therefore, the main goal of this project is to apply co-simulation methodologies to a comprehensive model of an electric vehicle, in which sub-models from different areas of engineering are coupled. The electric vehicle (EV) model created consists of a separately excited DC electric motor, a Li-ion battery pack, a DC/DC chopper converter and a multibody vehicle model. Co-simulation techniques allow car designers to simulate complex vehicle architectures and behaviors that are usually difficult to reproduce in a real environment for safety and/or economic reasons. In addition, multi-domain computational models help to detect the effects of different driving patterns and parameters and to improve the models quickly and effectively. Automotive designers can greatly benefit from a multidisciplinary approach to new hybrid and electric vehicles. In this case, the global electric vehicle model includes an electrical subsystem and a mechanical subsystem. The electrical subsystem consists of three basic components: the electric motor, the battery pack and the power converter. A modular representation is used to build the dynamic model of the vehicle drivetrain: every component of the drivetrain (submodule) is modeled separately and has its own general dynamic model, with clearly defined inputs and outputs.
All the submodules are then assembled according to the drivetrain configuration and, in this way, the power flow across the components is completely determined. Dynamic models of electrical components are often based on equivalent circuits, where Kirchhoff's voltage and current laws are applied to derive the algebraic and differential equations. Here, a Randles circuit is used for the dynamic model of the battery, and the electric motor is modeled through the analysis of the equivalent circuit of a separately excited DC motor, in which the power converter is included. The mechanical subsystem is defined by the MBS3D equations, which consider the position, velocity and acceleration of all the bodies comprising the vehicle multibody system. MBS3D 2.0 is entirely written in MATLAB, and the structure of the program has been thoroughly studied and understood by the author. The MBS3D software is adapted to the requirements of the applied co-simulation method: some of the core functions, such as the integrator and graphics, are modified, and several auxiliary functions are added to compute the mathematical model of the electrical components. By coupling and co-simulating both subsystems, it is possible to evaluate the dynamic interaction among all the components of the drivetrain. A 'tight-coupling' method is used to co-simulate the sub-models. This approach integrates all subsystems simultaneously, and the results of the integration are exchanged by function call. The integration is thus done jointly for the mechanical and the electrical subsystems under a single integrator, so the speed of integration is determined by the slowest subsystem. Simulations are then used to show the performance of the developed EV model. However, this project focuses more on the validation of the computational and mathematical tool for electric and hybrid vehicle simulation; for this purpose, a detailed study and comparison of different integrators within the MATLAB environment is carried out. Consequently, the main efforts are directed towards the implementation of co-simulation techniques in the MBS3D software. In this regard, the intention is not to create an extremely precise EV model in terms of real vehicle performance, although an acceptable level of accuracy is achieved. The gap between the EV model and the real system is bridged, in a way, by introducing the gas and brake pedal inputs, which reflect actual driver behavior. This input is included directly in the differential equations of the model and determines the amount of current provided to the electric motor. For a separately excited DC motor, the rotor current is proportional to the traction torque delivered to the car wheels. Therefore, as in real vehicle models, the propulsion torque in the mathematical model is controlled through acceleration and brake pedal commands. The designed transmission system also includes a reduction gear that adapts and transfers the torque coming from the motor drive. The main contribution of this project is, therefore, the implementation of a new calculation path for the wheel torques, based on the performance characteristics and outputs of the electric powertrain model. Originally, the wheel traction and braking torques were input to MBS3D through a vector computed directly by the user in a MATLAB script; now, they are calculated as a function of the motor current which, in turn, depends on the current provided by the battery pack across the DC/DC chopper converter.
The motor and battery currents and voltages are the solutions of the electrical ODE (ordinary differential equation) system coupled to the multibody system. Simultaneously, the outputs of the MBS3D model are the position, velocity and acceleration of the vehicle at all times. The motor shaft speed is computed from the output vehicle speed considering the wheel radius, the gear reduction ratio and the transmission efficiency; this motor shaft speed, available from the MBS3D model, is then introduced into the differential equations of the electrical subsystem. In this way, MBS3D and the electrical powertrain model are interconnected, and both subsystems exchange values as expected with a tight-coupling approach. When programming mathematical models of complex systems, code optimization is a key step in the process. A way to improve the overall performance of the integration, making use of C/C++ as an alternative programming language, is described and implemented. Although this entails additional programming effort, it leads to important advantages regarding co-simulation speed and stability. To do this, it is necessary to integrate MATLAB with another integrated development environment (IDE) where C/C++ code can be generated and executed. In this project, the C/C++ files are programmed in Microsoft Visual Studio, and the interface between both IDEs is created by building C/C++ MEX-file functions. These programs contain functions or subroutines that can be dynamically linked and executed from MATLAB. This process achieves reductions in simulation time of up to two orders of magnitude. The tests performed with different integrators also reveal the stiff character of the differential equations of the electrical subsystem and allow the co-simulation process to be improved: when the integration parameters and/or the initial conditions of the problem are varied, the solutions of the system of equations show better dynamic response and stability, depending on the integrator used. Several integrators, with fixed and variable step sizes, for stiff and non-stiff problems, are applied to the coupled ODE system, and the results are analyzed, compared and discussed. From all the above, the project can be divided into four main parts: 1. creation of the equation-based electric vehicle model; 2. programming, simulation and adjustment of the electric vehicle model; 3. application of co-simulation methodologies to MBS3D and the electric powertrain subsystem; and 4. code optimization and study of different integrators. Additionally, in order to place the project in context, the first chapters include an introduction to basic vehicle dynamics, the current classification of hybrid and electric vehicles, and an explanation of the technologies involved, such as brake energy regeneration, electric and non-electric propulsion systems for EVs and HEVs (hybrid electric vehicles) and their control strategies. The problem of dynamic modeling of hybrid and electric vehicles is then discussed, and the integrated development environment and the simulation tool are briefly described. The core chapters explain the major co-simulation methodologies and how they have been programmed and applied to the electric powertrain model together with the multibody dynamic model. Finally, the last chapters summarize the main results and conclusions of the project and propose further research topics.
In conclusion, co-simulation methodologies can be applied within the integrated development environments MATLAB and Visual Studio, together with the simulation tool MBS3D 2.0, so that equation-based models of multidisciplinary subsystems, consisting of mechanical and electrical components, are coupled and integrated very efficiently.
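To make the tight-coupling idea concrete, here is a minimal sketch (in Python rather than the MATLAB/MBS3D environment of the project) in which a one-degree-of-freedom vehicle model and a DC-motor electrical model share a single state vector and a single stiff integrator, so the step size is dictated by the stiffest subsystem. Every parameter value is illustrative, and none of this is MBS3D code.

```python
# Tight-coupling sketch: mechanical (vehicle) and electrical (DC motor)
# states in ONE state vector under ONE integrator, as described above.
# All parameter values are invented for the example.
import numpy as np
from scipy.integrate import solve_ivp

m, r_wheel, gear = 1200.0, 0.3, 7.0          # mass (kg), wheel radius (m), ratio
R, L, k = 0.05, 1e-3, 0.5                    # armature R (ohm), L (H), motor const
c_drag = 0.4                                  # lumped aerodynamic drag coefficient

def rhs(t, y, v_supply):
    v, i = y                                  # vehicle speed, armature current
    w_motor = gear * v / r_wheel              # motor speed from vehicle speed
    di = (v_supply(t) - R * i - k * w_motor) / L       # electrical ODE (stiff)
    force = gear * k * i / r_wheel - c_drag * v * v    # traction minus drag
    dv = force / m                                     # mechanical ODE
    return [dv, di]

pedal = lambda t: 48.0 if t > 0.5 else 0.0    # step "accelerator" voltage input
sol = solve_ivp(rhs, (0, 20), [0.0, 0.0], args=(pedal,),
                method="BDF", max_step=0.1)   # stiff solver, echoing the findings
print(f"final speed: {sol.y[0, -1] * 3.6:.1f} km/h")
```

Because the electrical time constant (L/R) is orders of magnitude faster than the vehicle dynamics, a non-stiff solver would be forced to tiny steps here, which is the behavior the integrator study in the project investigates.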
Abstract:
A stress-detection system based on physiological signals is proposed. Concretely, galvanic skin response (GSR) and heart rate (HR) are used to provide information on the state of mind of an individual, owing to their non-intrusiveness and non-invasiveness. Furthermore, specific psychological experiments were designed to properly induce stress in individuals in order to acquire a database for training, validating and testing the proposed system. The system is based on fuzzy logic and describes the behavior of an individual under stressful stimuli in terms of HR and GSR. The stress-detection accuracy obtained is 99.5% when HR and GSR are acquired over a period of 10 s and, moreover, success rates above 90% are achieved when that acquisition period is reduced to 3-5 s. Finally, this paper proposes that accurate stress detection requires only two physiological signals, namely HR and GSR, and shows that the proposed stress-detection system is suitable for real-time applications.
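To illustrate the kind of fuzzy inference described above, the toy rule base below maps baseline-normalized HR and GSR to a stress score; the membership shapes, rules and output weights are invented for the example and are not the calibrated system from the paper.

```python
# Toy Mamdani-style fuzzy rules over normalized HR and GSR. Memberships and
# rule weights are made up for illustration, not the paper's template.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def stress_score(hr_norm, gsr_norm):
    """hr_norm, gsr_norm in [0, 1] (baseline-normalized). Returns [0, 1]."""
    hr_high = tri(hr_norm, 0.5, 1.0, 1.5)
    hr_low = tri(hr_norm, -0.5, 0.0, 0.5)
    gsr_high = tri(gsr_norm, 0.5, 1.0, 1.5)
    gsr_low = tri(gsr_norm, -0.5, 0.0, 0.5)
    # Rules: stress fires when both signals are elevated; calm when both low.
    w_stress = min(hr_high, gsr_high)
    w_mixed = max(min(hr_high, gsr_low), min(hr_low, gsr_high))
    w_calm = min(hr_low, gsr_low)
    # Weighted-average defuzzification over singleton outputs (1.0, 0.5, 0.0).
    total = w_stress + w_mixed + w_calm
    return (w_stress * 1.0 + w_mixed * 0.5) / total if total else 0.5

print(stress_score(hr_norm=0.9, gsr_norm=0.8))   # both elevated -> near 1
print(stress_score(hr_norm=0.1, gsr_norm=0.2))   # near baseline -> near 0
```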
Abstract:
Mine soils usually contain high levels of heavy metals and poor fertility conditions, which limit their reclamation and the application of phytoremediation technologies. Two organic waste materials (pine bark compost and sheep and horse manure compost), with different pHs and varying degrees of humification and nutrient contents, were applied as amendments to assess their effects on copper (Cu) and zinc (Zn) bioavailability and on the fertility conditions of mine soils. Soil samples collected from two abandoned mining areas near Madrid (Spain) were mixed with 0, 30 and 60 t ha-1 of the organic amendments. The concentrations of metals among the different mineral and organic fractions of the soil were determined by several extraction procedures in order to study the metal distribution in the solid phase of the soil as affected by the organic amendments. The results showed that the manure amendment increased the soil pH and the cation exchange capacity and enhanced the nutrient levels of these soils. The pine bark amendment decreased the soil pH and did not significantly change the nutrient status of the soil. Soil pH, organic matter content and its degree of humification, which were altered by the amendments, were the main factors affecting Cu fractionation; Zn fractionation was mainly affected by soil pH. The addition of manure not only improved soil fertility but also decreased metal bioavailability, resulting in a reduction of metal toxicity. Conversely, the pine bark amendment increased metal bioavailability. The use of sheep and horse manure could be a cost-effective practice for the restoration of contaminated mine soils.
Abstract:
When we interact with our surroundings in daily life (using a toothbrush, opening doors, using a cell phone, etc.) or in professional situations (medical interventions, manufacturing processes, etc.), we typically perform dexterous manipulations that involve the fingers of both hands. Multi-finger haptic methods can therefore provide a more natural and realistic human-machine interface, enhancing immersion when interacting with simulated or remote environments. However, most commercial haptic devices allow interaction through only one contact point, which may be sufficient for exploration or palpation tasks but does not permit more advanced tasks such as grasping. In this thesis, I investigate the mechanical design, control and applications of a modular haptic device that provides force feedback to the index, middle and thumb fingers of the user. The mechanical design was optimized with multi-objective functions to achieve low inertia, a large workspace, high manipulability and force reflection above 3 N throughout the workspace; the bandwidth and stiffness of the device were assessed through simulation and real experimentation. One of the most important areas when designing haptic devices is the end-effector, since it is the part in contact with the user. This thesis describes the design and evaluation of a thimble-like, lightweight, user-adaptable and cost-effective end-effector that incorporates four contact force sensors, allowing the estimation of the normal and tangential forces applied by the user during the manipulation of virtual and real objects.
For the control architecture, the main requirements of these devices were studied. These include the acquisition, processing and exchange over the network of numerous control and instrumentation signals, and the computation of mathematical models including direct and inverse kinematics, the Jacobian, grasp-detection algorithms, etc. All these components must be computed in real time, guaranteeing a minimum control-loop rate of 1 kHz. The hardware control architecture is modular and consists of an FPGA for the low-level controller and a real-time controller for the complex calculations (Jacobian, kinematics, etc.); this provides a compact and scalable solution with the required computational capability. Set-ups for dexterous virtual and remote precision manipulation are described. Moreover, a new algorithm, termed the "iterative kinematic decoupling" method, was designed to solve the inverse kinematics of robotic manipulators and was compared with other current methods.
To understand the importance of multimodal interaction including haptics, a subject study was carried out, in collaboration with neuroscientists from the Technion Israel Institute of Technology, to identify which sensory stimuli correlate with faster response times and higher accuracy. By comparing grasping response times for unimodal events (auditory, visual and haptic) with those for bimodal and trimodal combinations, it is concluded that in grasping tasks the synchronized motion of the fingers relies mainly on haptic cues. This processing-speed advantage of haptic stimuli suggests that virtual environments including this sensory component generate better motor contingencies and enhance the plausibility of events. Systems that include haptic perception thus give users more time at the cognitive stages to fill in missing information creatively and form a richer experience.
A major application of haptic devices is the design of new simulators for training manual skills in the medical sector. In collaboration with physical therapists from Griffith University in Australia, we developed a simulator for hand rehabilitation exercises. The non-linear stiffness properties of the metacarpophalangeal joint of the index finger were estimated using the designed end-effector; these parameters were implemented in a scenario that simulates the behavior of the human hand and allows haptic interaction through the designed interface. The potential applications of this simulator are related to the training and education of physical therapy students.
Finally, this thesis develops new methods to simultaneously control the position and orientation of a robotic manipulator and the grasp of a robotic hand when interacting with large real environments. The reachable workspace of the haptic device is extended by automatically switching between position and rate control modes. Moreover, the user's hand gesture is recognized from the relative movements of the index, middle and thumb fingers during the early stages of the approach to the object, and is then mapped to the robotic hand actuators. These methods were validated in dexterous object-manipulation experiments with a robotic manipulator and different robotic hands, carried out in collaboration with researchers from the Harvard BioRobotics Laboratory; the experiments show that the overall task time is reduced and that the methods allow precise and correct completion of dexterous manipulations.
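The thesis' "iterative kinematic decoupling" method is not specified in the abstract, so it is not reproduced here; as a generic example of the "other current methods" it is compared with, the sketch below solves the inverse kinematics of a two-link planar arm with damped least squares, with all dimensions illustrative.

```python
# Generic damped least-squares iterative IK on a 2-link planar arm,
# shown only as a point of comparison; NOT the thesis' decoupling method.
import numpy as np

L1, L2 = 0.30, 0.25                     # illustrative link lengths (m)

def fk(q):
    """Forward kinematics: joint angles -> end-effector position."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik_dls(target, q0, lam=0.05, tol=1e-6, max_iter=200):
    """Iterate dq = (J^T J + lam^2 I)^-1 J^T err until the error is small."""
    q = np.array(q0, dtype=float)
    for _ in range(max_iter):
        err = target - fk(q)
        if np.linalg.norm(err) < tol:
            break
        J = jacobian(q)
        q += np.linalg.solve(J.T @ J + lam**2 * np.eye(2), J.T @ err)
    return q

q = ik_dls(target=np.array([0.35, 0.20]), q0=[0.3, 0.3])
print(q, fk(q))   # joint solution and the position it actually reaches
```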
Abstract:
Nitrous oxide (N2O) is a potent greenhouse gas (GHG) directly linked to the application of nitrogen (N) fertilizers to agricultural soils. Identifying fertilizer management strategies that mitigate these emissions without incurring yield penalties is of both economic and environmental concern. With that aim, this thesis evaluated: (i) the use of nitrification and urease inhibitors; and (ii) the interactions of N fertilizers with (1) water management, (2) crop residues and (3) plant species richness/identity. Meta-analysis, laboratory incubations, greenhouse mesocosm and field experiments were carried out in order to understand and develop effective mitigation strategies.
Nitrification and urease inhibitors are proposed as means to reduce N losses, thereby increasing crop nitrogen use efficiency (NUE); however, their effect on crop yield is variable. A meta-analysis was first conducted to evaluate their effectiveness at increasing NUE and crop productivity. The commonly used nitrification inhibitors dicyandiamide (DCD) and 3,4-dimethylpyrazole phosphate (DMPP) and the urease inhibitor N-(n-butyl) thiophosphoric triamide (NBPT) were selected for the analysis, as they are generally considered the best commercially available options. Our results show that their use can be recommended to increase both crop yields and NUE (grand mean increases of 7.5% and 12.9%, respectively). However, their effectiveness depended on the environmental and management factors of the studies evaluated: larger responses were found in coarse-textured soils, irrigated systems and/or crops receiving high N fertilizer rates, and in alkaline soils (pH ≥ 8) the urease inhibitor NBPT produced the largest effect. Given that their use represents an additional cost for farmers, understanding the management practices that maximize their effectiveness is paramount for an effective comparison with other practices that increase crop productivity and NUE.
Based on the meta-analysis results, NBPT was identified as a mitigation option with large potential. Urease inhibitors (UIs) promote high N use efficiency by reducing ammonia (NH3) volatilization. In recent years, however, some field studies have shown that UIs can also mitigate N2O losses from fertilized soils under conditions of low soil moisture. Given the inherent variability of field experiments, where soil moisture changes rapidly, it has been impossible to understand mechanistically the potential of UIs to reduce N2O emissions and its dependence on the soil water-filled pore space (WFPS). An incubation experiment was therefore carried out to assess the main biotic mechanism behind N2O emissions when UIs are applied under different soil moisture conditions (40, 60 and 80% WFPS), and to analyze to what extent WFPS regulates the effect of the inhibitor on N2O emissions. A second UI (PPDA) was also used to compare the effect of NBPT with that of another commercially available urease inhibitor; this allowed us to check whether the effect of NBPT was inhibitor-specific. The N2O emissions at 40% WFPS were almost negligible, significantly lower in all fertilized treatments than those produced at 60 and 80% WFPS. Compared with urea alone, NBPT+U reduced N2O emissions at 60% WFPS but had no effect at 80% WFPS. The application of PPDA significantly increased emissions with respect to urea at 80% WFPS, whereas no significant effect was found at 60% WFPS. At 80% WFPS denitrification was the main source of N2O emissions for all treatments, while both nitrification and denitrification played a determinant role at 60% WFPS. These results suggest that adequate management of NBPT can provide, under certain soil conditions, an opportunity for N2O mitigation.
We then translated these results to realistic field conditions in an experiment with a barley (Hordeum vulgare L.) crop under rainfed Mediterranean conditions, in which we evaluated the effectiveness of NBPT at reducing N losses and increasing crop yields. Crop yield, soil mineral N concentrations, dissolved organic carbon (DOC), denitrification potential and NH3, N2O and nitric oxide (NO) fluxes were measured during the growing season. The inclusion of the inhibitor reduced NH3 emissions in the 30 days following urea application by 58%, and net N2O and NO emissions in the 95 days following urea application by 86% and 88%, respectively. NBPT addition also increased grain yield by 5% and N uptake by 6%, although neither increase was statistically significant. Under the experimental conditions presented here, these results demonstrate the potential of the urease inhibitor NBPT to abate NH3, N2O and NO emissions from arable soils fertilized with urea, by slowing urea hydrolysis and releasing lower concentrations of NH4+ to the upper soil layer.
Drip irrigation combined with split application of N fertilizer dissolved in the irrigation water (i.e. drip fertigation) is commonly considered a best management practice for water and nutrient efficiency. Some of the main factors (WFPS, NH4+ and NO3-) regulating the emissions of GHGs (N2O, carbon dioxide (CO2) and methane (CH4)) and NO can easily be manipulated by drip fertigation without yield penalties. We tested management options to reduce these emissions in a field experiment with a melon (Cucumis melo L.) crop. Treatments included drip irrigation frequency (weekly/daily) and the type of N fertilizer (urea/calcium nitrate) applied by fertigation. Crop yield, environmental parameters, soil mineral N concentrations and N2O, NO, CH4 and CO2 fluxes were measured during the growing season. Fertigation with urea instead of calcium nitrate increased N2O and NO emissions by factors of 2.4 and 2.9, respectively (P < 0.005). Daily irrigation reduced NO emissions by 42% (P < 0.005) but increased CO2 emissions by 21% (P < 0.05) compared with weekly irrigation. Based on yield-scaled Global Warming Potential as well as NO emission factors, we conclude that weekly fertigation with a NO3--based fertilizer is the best option to combine agronomic productivity with environmental sustainability in this type of agroecosystem.
Agricultural soils in semiarid Mediterranean areas are characterized by low organic matter content and low fertility levels. The application of crop residues and/or manures as amendments is a cost-effective and sustainable alternative to overcome this problem; however, these management practices may induce important changes in the nitrogen oxide emissions of these agroecosystems, with additional impacts on CO2 emissions. In this context, a field experiment was carried out with a barley (Hordeum vulgare L.) crop under Mediterranean conditions to evaluate the effect of combining maize (Zea mays L.) residues and N fertilizer inputs (pig slurry and/or urea) on these emissions. Crop yield and N uptake, soil mineral N concentrations, DOC, denitrification capacity and N2O, NO and CO2 fluxes were measured during the growing season. The incorporation of maize stover increased N2O emissions during the experimental period by c. 105%; conversely, NO emissions were significantly reduced in the plots amended with crop residues. The partial substitution of urea by pig slurry reduced net N2O emissions by 46% and 39%, with and without the incorporation of crop residues, respectively; net NO emissions were reduced by 38% and 17% for the same treatments. The molar DOC:NO3- ratio proved to be a robust predictor of N2O and NO fluxes. The main effect of the interaction between crop residues and N fertilizer occurred in the medium term (4-6 months after application), enhancing N2O emissions and decreasing NO emissions as a consequence of residue incorporation. The substitution of urea by pig slurry can be considered a good management strategy, since the use of this organic residue reduced the emissions of N oxides.
Grassland ecosystems worldwide provide many important ecosystem services, but they also act as a major source of N2O, especially in response to N deposition by grazing animals. To explore the role of plants as mediators of these emissions, we tested whether and how N2O emissions depend on grass species richness and/or on specific grass species composition, in the absence and presence of urine deposition. We hypothesized that: 1) N2O emissions relate negatively to plant productivity; 2) four-species mixtures have lower emissions than monocultures (as they are expected to be more productive); 3) emissions are lowest in combinations of species with diverging root morphology and high root biomass; and 4) the identity of the key species that reduce N2O emissions depends on urine deposition. We established monocultures and two- and four-species mixtures of common grass species with diverging functional traits: Lolium perenne L. (Lp), Festuca arundinacea Schreb. (Fa), Phleum pratense L. (Php) and Poa trivialis L. (Pt), and quantified N2O emissions for 42 days. We found no relation between plant species richness and N2O emissions; however, N2O emissions were significantly reduced in specific plant species combinations. In the absence of urine, Fa+Php plant communities acted as a sink for N2O, whereas the monocultures of these species constituted a N2O source. With urine application, Lp+Pt communities reduced (P < 0.001) N2O emissions by 44% compared with Lp monocultures. The reductions in N2O emissions found for specific species mixtures could be explained by higher total biomass productivity and by complementarity in root morphology. Our study shows that plant species composition is a key component underlying N2O emissions from grassland ecosystems. The selection of specific grass species combinations in the context of the expected nitrogen deposition regimes may therefore provide a key management practice for the mitigation of N2O emissions.
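The grand-mean effects reported above come from a meta-analysis. As a hedged illustration of how such an aggregate is typically formed, the snippet below computes a weighted mean log response ratio over made-up study means; it is not the thesis dataset nor its exact statistical model.

```python
# Minimal log-response-ratio aggregation over hypothetical studies,
# illustrating the meta-analytic grand mean; data are invented.
import math

# (treatment_mean, control_mean, weight) per hypothetical study, e.g. yield
studies = [(5.2, 4.8, 12.0), (3.1, 3.0, 8.0), (7.4, 6.6, 15.0)]

num = sum(w * math.log(t / c) for t, c, w in studies)
den = sum(w for _, _, w in studies)
lnrr = num / den                              # weighted mean log response ratio
print(f"grand mean effect: {(math.exp(lnrr) - 1) * 100:+.1f}%")
```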
Abstract:
Un escenario habitualmente considerado para el uso sostenible y prolongado de la energía nuclear contempla un parque de reactores rápidos refrigerados por metales líquidos (LMFR) dedicados al reciclado de Pu y la transmutación de actínidos minoritarios (MA). Otra opción es combinar dichos reactores con algunos sistemas subcríticos asistidos por acelerador (ADS), exclusivamente destinados a la eliminación de MA. El diseño y licenciamiento de estos reactores innovadores requiere herramientas computacionales prácticas y precisas, que incorporen el conocimiento obtenido en la investigación experimental de nuevas configuraciones de reactores, materiales y sistemas. A pesar de que se han construido y operado un cierto número de reactores rápidos a nivel mundial, la experiencia operacional es todavía reducida y no todos los transitorios se han podido entender completamente. Por tanto, los análisis de seguridad de nuevos LMFR están basados fundamentalmente en métodos deterministas, al contrario que las aproximaciones modernas para reactores de agua ligera (LWR), que se benefician también de los métodos probabilistas. La aproximación más usada en los estudios de seguridad de LMFR es utilizar una variedad de códigos, desarrollados a base de distintas teorías, en busca de soluciones integrales para los transitorios e incluyendo incertidumbres. En este marco, los nuevos códigos para cálculos de mejor estimación ("best estimate") que no incluyen aproximaciones conservadoras, son de una importancia primordial para analizar estacionarios y transitorios en reactores rápidos. Esta tesis se centra en el desarrollo de un código acoplado para realizar análisis realistas en reactores rápidos críticos aplicando el método de Monte Carlo. Hoy en día, dado el mayor potencial de recursos computacionales, los códigos de transporte neutrónico por Monte Carlo se pueden usar de manera práctica para realizar cálculos detallados de núcleos completos, incluso de elevada heterogeneidad material. Además, los códigos de Monte Carlo se toman normalmente como referencia para los códigos deterministas de difusión en multigrupos en aplicaciones con reactores rápidos, porque usan secciones eficaces punto a punto, un modelo geométrico exacto y tienen en cuenta intrínsecamente la dependencia angular de flujo. En esta tesis se presenta una metodología de acoplamiento entre el conocido código MCNP, que calcula la generación de potencia en el reactor, y el código de termohidráulica de subcanal COBRA-IV, que obtiene las distribuciones de temperatura y densidad en el sistema. COBRA-IV es un código apropiado para aplicaciones en reactores rápidos ya que ha sido validado con resultados experimentales en haces de barras con sodio, incluyendo las correlaciones más apropiadas para metales líquidos. En una primera fase de la tesis, ambos códigos se han acoplado en estado estacionario utilizando un método iterativo con intercambio de archivos externos. El principal problema en el acoplamiento neutrónico y termohidráulico en estacionario con códigos de Monte Carlo es la manipulación de las secciones eficaces para tener en cuenta el ensanchamiento Doppler cuando la temperatura del combustible aumenta. Entre todas las opciones disponibles, en esta tesis se ha escogido la aproximación de pseudo materiales, y se ha comprobado que proporciona resultados aceptables en su aplicación con reactores rápidos. 
On the other hand, the geometric changes caused by large temperature gradients in the core of fast reactors are important for the neutronics, as a consequence of the long neutron mean free path in these systems. An additional module has therefore been developed that simulates the reactor geometry in the hot state and allows the reactivity due to core expansion during a transient to be estimated. This module automatically computes the fuel length, the cladding radius, the fuel assembly pitch and the support plate (diagrid) radius as functions of temperature. This effect is highly relevant in transients without insertion of shutdown banks. Also related to geometric changes, a tool has been implemented that automates the movement of the control rods in search of reactor criticality, or alternatively computes the axial insertion worth of the control rods. A second phase of the calculation platform developed here is dynamic simulation. Since MCNP only performs steady-state calculations for critical or supercritical systems, the most direct solution proposed, without modifying the MCNP source code, is to use the flux factorization approach, which solves the flux shape and amplitude separately. Two approximations have been studied in depth for this purpose: the adiabatic and the quasistatic methods. The adiabatic method uses a coupling scheme that alternates neutronic and thermal-hydraulic calculations in time. MCNP computes the fundamental mode of the neutron distribution and the reactivity at the end of each time step, while COBRA-IV computes the thermal properties at the midpoint of each time step. The evolution of the flux amplitude is obtained by solving the point kinetics equations. This method computes the static reactivity at each time step, which in general differs from the dynamic reactivity that would be obtained with the exact, time-dependent flux distribution. Nevertheless, for conditions not too far from criticality both reactivities are similar, and the method yields acceptable practical results. Along this line, an improved method was subsequently developed to try to account for the effect of the delayed neutron source on the evolution of the flux shape during the transient. The scheme consists of performing a quasistationary calculation with MCNP for each time step. This quasistationary simulation is based on the constant delayed neutron source approximation, and consists of assigning a certain weight or importance to each computational cycle of the MCNP criticality calculation in the estimation of the final flux. Both methods have been verified against the results of the diffusion code COBAYA3 on a common and sufficiently significant exercise. Finally, in order to demonstrate the practical applicability of the code, a transient has been simulated for the MYRRHA/FASTEF critical reactor concept, currently in the design phase, with 100 MW of thermal power and cooled by lead-bismuth.
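The amplitude evolution referred to above is governed by the standard point kinetics equations, dn/dt = ((rho - beta)/Lambda) n + sum_i lam_i c_i and dc_i/dt = (beta_i/Lambda) n - lam_i c_i. A minimal explicit integrator is sketched below; the six-group kinetic parameters are illustrative textbook-style values, not the MYRRHA/FASTEF data.

```python
import numpy as np

# Illustrative six-group delayed-neutron data (placeholders, not thesis values)
BETA_I = np.array([2.1e-4, 1.4e-3, 1.3e-3, 2.6e-3, 7.5e-4, 2.7e-4])
LAM    = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])   # decay consts, 1/s
LAMBDA = 4.0e-7   # prompt-neutron generation time (s), fast-spectrum order

def point_kinetics(rho_of_t, t_end, dt=1e-5, n0=1.0):
    """Explicit-Euler integration of the point kinetics equations that
    drive the flux amplitude between Monte Carlo shape updates."""
    beta = BETA_I.sum()
    n = n0
    c = BETA_I * n0 / (LAM * LAMBDA)   # start from precursor equilibrium
    t = 0.0
    while t < t_end:
        rho = rho_of_t(t)
        dn = ((rho - beta) / LAMBDA) * n + LAM @ c   # amplitude derivative
        c += dt * (BETA_I / LAMBDA * n - LAM * c)    # precursor update
        n += dt * dn
        t += dt
    return n

# Example: amplitude 0.1 s after a 100 pcm step reactivity insertion
print(point_kinetics(lambda t: 1.0e-3, t_end=0.1))
```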
ABSTRACT Long-term sustainable nuclear energy scenarios envisage a fleet of Liquid Metal Fast Reactors (LMFR) for Pu recycling and minor actinide (MA) transmutation, or combined with some accelerator-driven systems (ADS) dedicated solely to MA elimination. Design and licensing of these innovative reactor concepts require accurate computational tools that implement the knowledge obtained in experimental research for new reactor configurations, materials and associated systems. Although a number of fast reactor systems have already been built, the operational experience is still limited, especially for lead reactors, and not all the transients are fully understood. The safety analysis approach for LMFR is therefore based only on deterministic methods, unlike the modern approach for Light Water Reactors (LWR), which also benefits from probabilistic methods. Usually, the approach adopted in LMFR safety assessments is to employ a variety of codes, somewhat different from each other, to analyze transients, looking for a comprehensive solution and including uncertainties. In this frame, new best-estimate simulation codes are of prime importance for analyzing fast reactor steady states and transients. This thesis is focused on the development of a coupled code system for best-estimate analysis of fast critical reactors. Currently, due to the increase in computational resources, Monte Carlo methods for neutron transport can be used for detailed full-core calculations. Furthermore, Monte Carlo codes are usually taken as the reference for deterministic multigroup diffusion codes in fast reactor applications, because they employ point-wise cross sections in an exact geometry model and intrinsically account for the directional dependence of the flux. The coupling methodology presented here uses MCNP to calculate the power deposition within the reactor, while the subchannel code COBRA-IV calculates the temperature and density distributions. COBRA-IV is suitable for fast reactor applications because it has been validated against experimental results in sodium rod bundles, and the proper correlations for liquid metal applications have been added to the thermal-hydraulics program. Both codes are coupled at steady state using an iterative method and external file exchange. The main issue in the Monte Carlo/thermal-hydraulics steady-state coupling is the cross-section handling needed to take Doppler broadening into account when the temperature rises. Among all the available options, the pseudo-material approach has been chosen in this thesis; it gives reasonable results in fast reactor applications. Furthermore, geometrical changes caused by large temperature gradients in the core are of major importance in fast reactors due to the large neutron mean free path. An additional module has therefore been included in order to simulate the reactor geometry in the hot state or to estimate the reactivity due to core expansion in a transient. The module automatically calculates the fuel length, cladding radius, fuel assembly pitch and diagrid radius as functions of temperature. This effect is crucial in some unprotected transients. Also related to geometrical changes, an automatic control rod movement feature has been implemented in order to reach a just-critical reactor or to calculate control rod worth. A step forward in the coupling platform is the dynamic simulation. Since MCNP performs only steady-state calculations for critical systems, the most straightforward option, without modifying the MCNP source code, is to use the flux factorization approach, solving the flux shape and amplitude separately. In this thesis two options have been studied to tackle time-dependent neutronic simulations with a Monte Carlo code: the adiabatic and quasistatic methods.
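For reference, the flux factorization mentioned above splits the time-dependent flux into an amplitude and a shape; in the standard formulation (written here in the usual notation, not quoted from the thesis):

$$\phi(\mathbf{r},E,\boldsymbol{\Omega},t) = n(t)\,\psi(\mathbf{r},E,\boldsymbol{\Omega},t), \qquad \frac{d}{dt}\left\langle \phi_0^{\dagger},\ \tfrac{1}{v}\,\psi \right\rangle = 0,$$

where the amplitude n(t) carries the fast time dependence and is advanced with the point kinetics equations, while the shape psi is normalized through the adjoint-weighted constraint and only needs to be recomputed (here, with MCNP) on the slower time scale of the thermal-hydraulic feedback.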
The adiabatic method uses a staggered time-coupling scheme for the time advance of the neutronics and thermal-hydraulics calculations. MCNP computes the fundamental mode of the neutron flux distribution and the reactivity at the end of each time step, and COBRA-IV the thermal properties at the midpoint of each time step. The flux amplitude evolution is calculated with a solver of the point kinetics equations. This method yields the static reactivity at each time step, which in general differs from the dynamic reactivity calculated with the exact time-dependent flux distribution. Nevertheless, for near-critical situations both reactivities are similar and the method leads to acceptable practical results. Along this line, an improved method has been developed as an attempt to take into account the effect of the delayed neutron source on the transient flux shape evolution. The scheme performs one quasistationary calculation per time step with MCNP. This quasistationary simulation is based on the constant delayed neutron source approach, taking into account the importance of each criticality cycle in the final flux estimation. Both the adiabatic and quasistatic methods have been verified against the diffusion code COBAYA3 using a theoretical kinetics exercise. Finally, a transient in a critical 100 MWth lead-bismuth-eutectic reactor concept is analyzed with the adiabatic method as an application example in a real system.
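As a schematic of the staggered adiabatic scheme (a sketch under the stated assumptions, not the thesis driver): the MCNP shape/reactivity update, the COBRA-IV midpoint update and a point kinetics amplitude solver such as the one sketched above are abstracted into callables so the example stays self-contained.

```python
def adiabatic_transient(shape_solver, th_solver, pk_amplitude,
                        t_end, dt, n0=1.0):
    """Staggered adiabatic time advance.
    shape_solver(t, th_state) -> static reactivity from the fundamental
        mode at the end of the step (the MCNP role);
    th_solver(t, n, th_state) -> thermal state at the step midpoint,
        driven by the current power level (the COBRA-IV role);
    pk_amplitude(n, rho, dt)  -> amplitude advanced over one step with
        the point kinetics equations."""
    t, n, th_state = 0.0, n0, None
    while t < t_end:
        th_state = th_solver(t + dt / 2.0, n, th_state)  # midpoint T/rho
        rho = shape_solver(t + dt, th_state)             # end-of-step shape
        n = pk_amplitude(n, rho, dt)                     # amplitude update
        t += dt
    return n
```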
Resumo:
A photo-healable rubber composite based on fast and effective thiol-alkyne click chemistry, with the self-healing agents prestored in glass capillaries, is reported. The click reaction and its effect on the mechanical properties of the composite are monitored in real time by dynamic mechanical analysis, showing that the successful bleeding of healing agents into the crack areas and the effective photoinitiated click reaction result in a 30% storage modulus increase after only 5 min of UV light exposure. X-ray tomography confirms capillary-driven bleeding of reactants into the damaged areas. The effect of storing the click chemistry reactants in separate capillaries is also studied, and the results show the importance of stoichiometry in achieving a significant level of repair of the composite. No reactant degradation or premature chemical reaction is observed over time in samples stored in the absence of UV radiation; such samples are able to undergo the self-healing reaction even one month after preparation.