18 results for door-to-needle time

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

The creation of atlases, or digital models where information from different subjects can be combined, is a field of increasing interest in biomedical imaging. When a single image does not contain enough information to appropriately describe the organism under study, it becomes necessary to acquire images of several individuals, each containing complementary data with respect to the rest of the cohort. This approach allows the creation of digital prototypes, ranging from anatomical atlases of human patients and organs, obtained for instance from Magnetic Resonance Imaging, to gene expression cartographies of embryo development, typically achieved from Light Microscopy. Within this context, in this PhD Thesis we propose, develop and validate new dedicated image processing methodologies that, based on image registration techniques, bring information from multiple individuals into alignment within a single digital atlas model. We also develop a dedicated software visualization platform to explore the resulting wealth of multi-dimensional data, and novel analysis algorithms to automatically mine the generated resource in search of biological insights. In particular, this work focuses on gene expression data from developing zebrafish embryos imaged at cellular resolution with Two-Photon Laser Scanning Microscopy. Having quantitative measurements that relate multiple gene expressions to cell position and their evolution in time is a fundamental prerequisite to understand the multi-scale processes of embryogenesis. However, the number of gene expressions that can be simultaneously stained in one acquisition is limited by optical and labeling constraints. These limitations motivate atlasing strategies that can recreate a virtual gene expression multiplex. The developed computational tools have been tested in two different scenarios. The first is early zebrafish embryogenesis, where the resulting atlas constitutes a link between phenotype and genotype at the cellular level. The second is the late zebrafish brain, where the resulting atlas supports studies relating gene expression to brain regionalization and neurogenesis. The proposed computational frameworks have been adapted to the requirements of both scenarios, such as the integration of partial views of the embryo into a whole-embryo model with cellular resolution, or the registration of anatomical traits with deformable transformation models that do not depend on any specific labeling. The software implementation of the atlas generation tool (Match-IT) and the visualization platform (Atlas-IT), together with the gene expression atlas resources developed in this Thesis, are to be made freely available to the scientific community. Lastly, a novel proof-of-concept experiment integrates for the first time 3D gene expression atlas resources with cell lineages extracted from live embryos, opening the door to correlating genetic and cellular spatio-temporal dynamics.
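As an illustration of the registration step on which such atlas construction rests, the following minimal sketch estimates the translation that aligns two images by FFT-based phase correlation. It is a toy stand-in, not the Match-IT pipeline; the synthetic images and the rigid, translation-only model are assumptions made for the example.

```python
import numpy as np

def estimate_translation(fixed, moving):
    """Estimate the integer (row, col) shift that registers `moving` onto
    `fixed` using phase correlation, a toy stand-in for the registration
    step that brings each specimen into the common atlas frame."""
    f = np.fft.fft2(fixed)
    m = np.fft.fft2(moving)
    cross_power = f * np.conj(m)
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase information
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks beyond half the image size correspond to negative shifts (wrap-around).
    shifts = []
    for p, s in zip(peak, corr.shape):
        shifts.append(int(p - s) if p > s // 2 else int(p))
    return tuple(shifts)

# Illustrative use: a synthetic "specimen" displaced by a known amount.
rng = np.random.default_rng(0)
fixed = rng.random((128, 128))
moving = np.roll(fixed, shift=(5, -8), axis=(0, 1))
print(estimate_translation(fixed, moving))   # (-5, 8): the shift that undoes the displacement
```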

Relevance:

100.00%

Publisher:

Abstract:

The analysis of large amounts of data is a field with many years of research behind it, centred on extracting significant values that make data easier to understand and interpret. The analysis of interdependence between time series is an important branch of this field, mainly as a result of advances in the characterization of dynamical systems from the signals they produce. In medicine, many studies try to understand the behaviour of the brain, its mode of operation and its internal connections. The human brain comprises approximately 10^11 neurons, each of which makes about 10^3 synaptic connections. This huge number of connections between individual processing elements provides the fundamental substrate for neuronal ensembles to become transiently synchronized or functionally connected. A similar complex network configuration and dynamics can also be found at the macroscopic scales of systems neuroscience and brain imaging. The emergence of dynamically coupled cell assemblies represents the neurophysiological substrate for cognitive functions such as perception, learning and thinking. Understanding the complex network organization of the brain on the basis of neuroimaging data represents one of the most difficult challenges for systems neuroscience. Brain connectivity is an elusive concept that refers to different interrelated aspects of brain organization: structural connectivity, functional connectivity (FC) and effective connectivity (EC). Structural connectivity refers to a network of physical connections linking sets of neurons; it is the anatomical structure of brain networks. FC, in contrast, refers to the statistical dependence between the signals stemming from two distinct units within a nervous system, while EC refers to the causal interactions between them. This research opens the door to addressing brain-related diseases such as Parkinson's disease, senile dementia and mild cognitive impairment. One of the most important initiatives associated with Alzheimer's research and other diseases is the European project called Blue Brain. The Centre for Biomedical Technology (CTB) of Universidad Politecnica de Madrid (UPM) forms part of the project. CTB researchers have developed a magnetoencephalography (MEG) data processing tool that allows data to be visualised and analysed in an intuitive way. This tool is named HERMES, and it is presented in this document.
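Functional connectivity is introduced above as the statistical dependence between signals from distinct units. A minimal sketch of one such interdependence measure, a Pearson-correlation FC matrix over channel time series, is shown below; HERMES implements many more measures, and the surrogate data here are purely illustrative.

```python
import numpy as np

def functional_connectivity(signals):
    """Pearson-correlation functional connectivity matrix.
    `signals` has shape (n_channels, n_samples); FC here is simply the
    absolute correlation between every pair of channel time series."""
    return np.abs(np.corrcoef(signals))

# Illustrative use with surrogate "MEG" data: two coupled channels, one independent.
rng = np.random.default_rng(1)
common = rng.standard_normal(1000)
x = common + 0.3 * rng.standard_normal(1000)
y = common + 0.3 * rng.standard_normal(1000)
z = rng.standard_normal(1000)
fc = functional_connectivity(np.vstack([x, y, z]))
print(np.round(fc, 2))   # high x-y coupling, low coupling with z
```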

Relevance:

100.00%

Publisher:

Abstract:

Traffic flow time series data are usually high dimensional and very complex. They are also sometimes imprecise and distorted due to malfunctions of the data collection sensors. Additionally, events like congestion caused by traffic accidents add more uncertainty to real-time traffic conditions, making traffic flow forecasting a complicated task. This article presents a new data preprocessing method targeting multidimensional time series with a very high number of dimensions and shows its application to real traffic flow time series from the California Department of Transportation (PEMS web site). The proposed method consists of three main steps. First, based on mTESL, a language for defining events in multidimensional time series, we identify a number of event types in the time series that correspond to either incorrect data or data with interference. Second, each event type is restored using an original method that combines real observations, locally forecasted values and historical data. Third, an exponential smoothing procedure is applied globally to eliminate noise interference and other random errors, so as to provide good-quality source data for future work.
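The third step applies exponential smoothing globally. A minimal sketch of simple exponential smoothing is given below; the smoothing factor and the synthetic flow profile are illustrative assumptions, not values from the article.

```python
import numpy as np

def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: s[t] = alpha * x[t] + (1 - alpha) * s[t-1]."""
    smoothed = np.empty_like(series, dtype=float)
    smoothed[0] = series[0]
    for t in range(1, len(series)):
        smoothed[t] = alpha * series[t] + (1 - alpha) * smoothed[t - 1]
    return smoothed

# Illustrative use on a noisy synthetic flow profile (288 five-minute samples).
rng = np.random.default_rng(2)
flow = 300 + 50 * np.sin(np.linspace(0, 6, 288)) + rng.normal(0, 20, 288)
print(exponential_smoothing(flow, alpha=0.3)[:5])
```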

Relevance:

100.00%

Publisher:

Abstract:

A local proper orthogonal decomposition (POD) plus Galerkin projection method was recently developed to accelerate time-dependent numerical solvers of PDEs. This method is based on the combined use of a numerical code (NC) and a Galerkin system (GS) in a sequence of interspersed time intervals, INC and IGS, respectively. POD is performed on some sets of snapshots calculated by the numerical solver in the INC intervals. The governing equations are Galerkin projected onto the most energetic POD modes and the resulting GS is time integrated in the next IGS interval. The major computational effort is associated with the snapshot calculation in the first INC interval, where the POD manifold needs to be completely constructed (it is only updated in subsequent INC intervals, which can thus be quite small). As the POD manifold depends only weakly on the particular values of the parameters of the problem, a suitable library can be constructed adapting the snapshots calculated in other runs to drastically reduce the size of the first INC interval and thus the involved computational cost. The strategy is successfully tested in (i) the one-dimensional complex Ginzburg-Landau equation, including the case in which it exhibits transient chaos, and (ii) the two-dimensional unsteady lid-driven cavity problem.
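POD modes are commonly extracted from a singular value decomposition of the snapshot matrix. The sketch below illustrates that step only, assuming snapshots are stored column-wise and using an arbitrary energy threshold; the Galerkin projection of the governing equations is problem-specific and not shown.

```python
import numpy as np

def pod_modes(snapshots, energy=0.999):
    """Return the leading POD modes of a snapshot matrix (one snapshot per
    column), keeping enough modes to capture the requested fraction of the
    total 'energy' (sum of squared singular values)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cumulative = np.cumsum(s**2) / np.sum(s**2)
    n_modes = int(np.searchsorted(cumulative, energy)) + 1
    return U[:, :n_modes], s[:n_modes]

# Illustrative use: snapshots of a travelling wave sampled on 200 grid points.
x = np.linspace(0, 2 * np.pi, 200)
times = np.linspace(0, 1, 50)
snaps = np.array([np.cos(x - 2 * np.pi * t) for t in times]).T   # shape (200, 50)
modes, sv = pod_modes(snaps)
print(modes.shape, sv)   # a travelling cosine is captured by two modes
```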

Relevance:

100.00%

Publisher:

Abstract:

Proof-Carrying Code (PCC) is a general approach to mobile code safety in which the code supplier augments the program with a certificate (or proof). The intended benefit is that the program consumer can locally validate the certificate w.r.t. the "untrusted" program by means of a certificate checker, a process which should be much simpler, more efficient, and more automatic than generating the original proof. Abstraction-Carrying Code (ACC) is an enabling technology for PCC in which an abstract model of the program plays the role of certificate. The generation of the certificate, i.e., the abstraction, is automatically carried out by an abstract interpretation-based analysis engine, which is parametric w.r.t. different abstract domains. While the analyzer on the producer side typically has to compute a semantic fixpoint in a complex, iterative process, on the receiver side it is only necessary to check that the certificate is indeed a fixpoint of the abstract semantics equations representing the program. This is done in a single pass in a much more efficient process. ACC addresses the fundamental issues in PCC and opens the door to the applicability of the large body of frameworks and domains based on abstract interpretation as enabling technology for PCC. We present an overview of ACC and describe, in a tutorial fashion, an application to the problem of resource-aware security in mobile code. Essentially, the information computed by a cost analyzer is used to generate cost certificates which attest to a safe and efficient use of mobile code. A receiving side can then reject code whose cost certificates it cannot validate, or whose cost requirements are too large in terms of computing resources (in time and/or space), and accept mobile code which meets the established requirements.
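The key point is that the consumer only needs to verify, in a single pass, that the shipped abstraction is a fixpoint of the abstract semantic equations. The toy sketch below illustrates that check on an invented sign-like domain for a one-line loop; it is not the ACC analysis machinery, and the domain, program and certificate are assumptions made for the example.

```python
# A toy abstract domain for the sign of an integer variable, ordered
#   BOT < ZERO, POS < NONNEG < TOP      (NEG omitted to keep the example tiny)
ORDER = {
    ("BOT", "ZERO"), ("BOT", "POS"), ("BOT", "NONNEG"), ("BOT", "TOP"),
    ("ZERO", "NONNEG"), ("POS", "NONNEG"),
    ("ZERO", "TOP"), ("POS", "TOP"), ("NONNEG", "TOP"),
}

def leq(a, b):
    """Partial order of the toy domain."""
    return a == b or (a, b) in ORDER

def join(a, b):
    """Least upper bound of two abstract values."""
    if leq(a, b):
        return b
    if leq(b, a):
        return a
    return "NONNEG" if {a, b} <= {"ZERO", "POS", "NONNEG"} else "TOP"

def transfer(x):
    """Abstract semantics at the loop head of:  x := 0; while cond: x := x + 1
    The value at the loop head is the join of the entry value (ZERO) and the
    value produced by the increment in the previous iteration."""
    inc = {"BOT": "BOT", "ZERO": "POS", "POS": "POS", "NONNEG": "POS", "TOP": "TOP"}[x]
    return join("ZERO", inc)

certificate = "NONNEG"   # abstraction shipped by the producer: "x is non-negative"

# Consumer-side check, done in a single pass: the certificate must be a
# (post-)fixpoint of the abstract semantic equation, i.e. transfer(cert) <= cert.
print(leq(transfer(certificate), certificate))   # True -> certificate accepted
```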

Relevance:

100.00%

Publisher:

Abstract:

The computational study commented on by Touchette opens the door to a desirable generalization of standard large deviation theory for special, though ubiquitous, correlations. We focus on three interrelated aspects: (i) numerical results strongly suggest that the standard exponential probability law is asymptotically replaced by a power-law dominant term; (ii) a subdominant term appears to reinforce the thermodynamically extensive entropic nature of the q-generalized rate function; (iii) the correlations we discussed correspond to q-Gaussian distributions, differing from Lévy's, except in the case of Cauchy-Lorentz distributions. Touchette has agreeably discussed point (i), but, unfortunately, points (ii) and (iii) escaped his analysis. Claiming the absence of a connection with q-exponentials is unjustified.
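For reference, the q-generalized quantities invoked in points (ii) and (iii) are built on the standard q-exponential of nonextensive statistical mechanics; these are textbook definitions, not equations taken from the commented study.

```latex
e_q^{x} \equiv \bigl[\,1+(1-q)\,x\,\bigr]_{+}^{\frac{1}{1-q}}
  \;\xrightarrow[\;q\to 1\;]{}\; e^{x},
\qquad
P_q(x) \;\propto\; e_q^{-\beta x^{2}}
  = \bigl[\,1-(1-q)\,\beta x^{2}\,\bigr]_{+}^{\frac{1}{1-q}} .
```

For q = 2 the q-Gaussian reduces to a form proportional to 1/(1 + beta x^2), i.e. the Cauchy-Lorentz distribution, consistent with the exception noted in point (iii).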

Relevance:

100.00%

Publisher:

Abstract:

Salamanca, situated in the centre of Mexico, is among the cities that suffer most from air pollution in Mexico. The vehicle fleet and industry, as well as the orography and climatic characteristics, have favoured the increase in the concentration of sulphur dioxide (SO2). In this work, a Multilayer Perceptron Neural Network has been used to predict the pollutant concentration one hour ahead. The database used to train the neural network corresponds to historical time series of meteorological variables and SO2 air pollutant concentrations. Before the prediction, Fuzzy c-Means and K-means clustering algorithms were applied in order to find relationships among pollutant and meteorological variables. Our experiments with the proposed system show the importance of this set of meteorological variables for the prediction of SO2 concentrations, and the efficiency of the neural network. Performance is estimated using the Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). The results show that the information obtained in the clustering step allows a one-hour-ahead prediction using data from the past two hours.
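A minimal sketch of the clustering-plus-MLP pipeline described above, using scikit-learn on surrogate data; the features, lag structure, number of clusters and network size are illustrative assumptions, not the configuration used in the study, and only K-means (not Fuzzy c-Means) is shown.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(3)

# Surrogate hourly data: [SO2(t), SO2(t-1), wind_speed, temperature] -> SO2(t+1)
X = rng.random((500, 4))
y = 0.6 * X[:, 0] + 0.2 * X[:, 1] - 0.1 * X[:, 2] + 0.05 * rng.standard_normal(500)

# Step 1: cluster pollutant/meteorological patterns; the cluster label is then
# used as a crude extra input feature for the predictor.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
X_aug = np.column_stack([X, labels])

# Step 2: one-hour-ahead prediction with a multilayer perceptron.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_aug[:400], y[:400])
pred = model.predict(X_aug[400:])

rmse = mean_squared_error(y[400:], pred) ** 0.5
mae = mean_absolute_error(y[400:], pred)
print(f"RMSE={rmse:.3f}  MAE={mae:.3f}")
```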

Relevance:

100.00%

Publisher:

Abstract:

Airbus has been designing and industrializing aircraft using Concurrent Engineering techniques for decades. The introduction of new PLM methods, procedures and tools, and the need to reduce time-to-market, led Airbus Military to pursue new working methods. Traditional Engineering works sequentially. Concurrent Engineering basically overlaps tasks between teams. Collaborative Engineering promotes teamwork to develop the product, processes and resources from the conceptual phase to the start of serial production. The CALIPSO-neo pilot project was launched to support the industrialization process of a medium-size aerostructure. The aim is to implement the industrial Digital Mock-Up (iDMU) concept and exploit it to create shop floor documentation. Within the framework of a collaborative engineering strategy, the project is part of the efforts to deploy Digital Manufacturing as a key technology for the industrialization of aircraft assembly lines. This paper presents the context, the conceptual approach and the methodology adopted.

Relevance:

100.00%

Publisher:

Abstract:

Air traffic management (ATM) is undergoing a paradigm shift towards trajectory-based operations, where the role of the air traffic controller evolves from continuous intervention towards supervision, as decision making is improved based on increased confidence in the solutions provided by advanced automation. To support this concept, significant investment in the development and acquisition of new equipment is required on the ground as well as in the air, to facilitate the high degree of trajectory synchronisation and information exchange required.
Over the past 30-40 years the airline industry has generated one of the lowest returns on invested capital among all industries. Without tangible benefits realised, the airline industry may find it difficult to attract the required investment capital and may delay acquiring the equipment needed to realise the concept of trajectory-based operations. In response to these challenges facing the modernisation of ATM, this thesis aims to answer the question of whether existing aircraft capabilities can be applied to achieve sufficient trajectory synchronisation and improvements to ground-based trajectory prediction in support of the arrival management process, to realise some of the benefits envisioned under trajectory-based operations, and to provide an incentive for further avionics upgrades. The proposed operational concept aims to permit aircraft to operate in a manner consistent with current optimal aircraft operating techniques. It allows aircraft to descend in the fuel-efficient path-managed mode preferred by a majority of airlines, with arrival time not actively controlled by the airborne automation. The temporal uncertainty is managed through metering at strategically chosen points along the aircraft's trajectory, with primary use of speed advisories. While the focus is on speed advisories to support all aircraft and different levels of equipage, the concept also constitutes a framework in which advanced avionics such as airborne time-of-arrival control can be integrated once this technology is widely available. In addition to managing temporal uncertainty through metering at multiple points, this temporal uncertainty is minimised by improving the supporting trajectory prediction capability. A novel two-stage trajectory prediction process is presented to adequately integrate aircraft trajectory data available through Future Air Navigation Systems (FANS) into the ground-based trajectory predictor. FANS is standard equipment on any wide-body aircraft in production today, and some single-aisle aircraft are easily capable of being fitted with FANS. In addition to automatic position reporting, FANS provides the ability to downlink (part of) the reference trajectory held by the aircraft's Flight Management System (FMS), but this capability has so far been largely overlooked. The two-stage process provides a "best of both worlds" solution to the air-ground synchronisation problem by synchronising with the FMS reference trajectory those dimensions controlled by the guidance mode, and improving the prediction of the remaining open dimensions by exploiting the high-resolution meteorological forecast available to a ground-based system. The two-stage trajectory prediction process was applied to a sample of 438 FANS-equipped Boeing 737-800 flights into Melbourne conducting a continuous descent free from ATC intervention; the methodology can be extrapolated to other aircraft types. Trajectories predicted through the two-stage approach provided estimated times of arrival with a 30% reduction in the standard deviation of the error compared to the estimated time of arrival calculated by the FMS. This improved predicted trajectory can subsequently be used to set the arrival sequence and allocate landing slots. Based on the allocated landing slot, the proposed system calculates a speed schedule for the aircraft to meet this landing slot at minimal flight efficiency impact. A novel algorithm is presented that determines this speed schedule without requiring an iterative process in which multiple calls to a trajectory predictor need to be made. The algorithm is based on a parameterisation of the trajectory prediction process, allowing the estimated time of arrival to be represented by a polynomial function of the speed schedule and providing an analytical solution for the speed schedule required to meet a set arrival time. The arrival management solution proposed in this thesis leverages existing avionics and communications systems, creating new value for industry from current investment. The solution therefore supports a transition from mixed equipage towards the advanced avionics currently under development. Benefits realised under this transition may provide an incentive for ongoing investment in avionics.
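The novel algorithm mentioned above represents the estimated time of arrival as a polynomial of the speed schedule and solves it analytically rather than iterating the trajectory predictor. The one-variable sketch below illustrates the idea; the polynomial coefficients and the single speed-adjustment variable are invented for the example and are not the parameterisation used in the thesis.

```python
import numpy as np

# Illustrative parameterisation: ETA (seconds past a reference time) as a
# quadratic polynomial of one speed-adjustment variable u (e.g. a uniform
# speed offset applied along the descent). Coefficients are made up.
eta_poly = np.polynomial.Polynomial([1900.0, -140.0, 6.0])   # ETA(u) = 1900 - 140 u + 6 u^2

target_eta = 1800.0   # landing slot assigned by the arrival manager (seconds)

# Solve ETA(u) = target_eta analytically instead of calling a trajectory predictor repeatedly.
roots = (eta_poly - target_eta).roots()
feasible = [r.real for r in roots if abs(r.imag) < 1e-9]
u_required = min(feasible, key=abs)   # smallest speed change that meets the slot
print(f"required speed adjustment: {u_required:.3f}")
print(f"check: ETA({u_required:.3f}) = {eta_poly(u_required):.1f} s")
```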

Relevance:

100.00%

Publisher:

Abstract:

Long-term sustainable nuclear energy scenarios envisage a fleet of Liquid Metal Fast Reactors (LMFR) for Pu recycling and minor actinide (MA) transmutation, possibly combined with accelerator driven systems (ADS) dedicated exclusively to MA elimination.
Design and licensing of these innovative reactor concepts require accurate computational tools that implement the knowledge obtained in experimental research on new reactor configurations, materials and associated systems. Although a number of fast reactor systems have already been built, the operational experience is still limited, especially for lead reactors, and not all transients are fully understood. The safety analysis approach for LMFR is therefore based only on deterministic methods, unlike the modern approach for Light Water Reactors (LWR), which also benefits from probabilistic methods. Usually, the approach adopted in LMFR safety assessments is to employ a variety of codes, somewhat different from each other, to analyze transients looking for a comprehensive solution and including uncertainties. In this frame, new best estimate simulation codes are of prime importance in order to analyze fast reactor steady states and transients. This thesis is focused on the development of a coupled code system for best estimate analysis of fast critical reactors. Currently, due to the increase in computational resources, Monte Carlo methods for neutron transport can be used for detailed full-core calculations. Furthermore, Monte Carlo codes are usually taken as the reference for deterministic multigroup diffusion codes in fast reactor applications because they employ point-wise cross sections in an exact geometry model and intrinsically account for the directional dependence of the flux. The coupling methodology presented here uses MCNP to calculate the power deposition within the reactor, while the subchannel code COBRA-IV calculates the temperature and density distribution. COBRA-IV is suitable for fast reactor applications because it has been validated against experimental results in sodium rod bundles, and the proper correlations for liquid metal applications have been added to the thermal-hydraulics program. Both codes are coupled at steady state using an iterative method and external file exchange. The main issue in the Monte Carlo/thermal-hydraulics steady state coupling is the cross section handling needed to take into account Doppler broadening when temperature rises. Among all the available options, the pseudo-materials approach has been chosen in this thesis; it obtains reasonable results in fast reactor applications. Furthermore, geometrical changes caused by large temperature gradients in the core are of major importance in fast reactors due to the large neutron mean free path. An additional module has therefore been included in order to simulate the reactor geometry in the hot state or to estimate the reactivity due to core expansion in a transient. The module automatically calculates the fuel length, cladding radius, fuel assembly pitch and diagrid radius as functions of temperature. This effect is crucial in some unprotected transients. Also related to geometrical changes, an automatic control rod movement feature has been implemented in order to achieve a just-critical reactor or to calculate control rod worth. A step forward in the coupling platform is the dynamic simulation. Since MCNP performs only steady state calculations for critical systems, the most straightforward option that does not modify the MCNP source code is to use the flux factorization approach, solving separately the flux shape and amplitude. In this thesis two options have been studied to tackle time-dependent neutronic simulations using a Monte Carlo code: the adiabatic and quasistatic methods. The adiabatic method uses a staggered time coupling scheme for the time advance of the neutronics and thermal-hydraulics calculations. MCNP computes the fundamental mode of the neutron flux distribution and the reactivity at the end of each time step, and COBRA-IV computes the thermal properties at the midpoint of each time step. To calculate the evolution of the flux amplitude, a solver of the point kinetics equations is used. This method calculates the static reactivity in each time step, which in general differs from the dynamic reactivity calculated with the exact, time-dependent flux distribution. Nevertheless, for situations close to criticality both reactivities are similar and the method leads to acceptable practical results. In this line, an improved method is then developed as an attempt to take into account the effect of the delayed neutron source on the evolution of the flux shape during the transient. The scheme performs a quasistationary calculation per time step with MCNP. This quasistationary simulation is based on the constant delayed neutron source approach, taking into account the importance of each criticality cycle in the final flux estimation. Both the adiabatic and quasistatic methods have been verified against the diffusion code COBAYA3, using a theoretical kinetic exercise. Finally, to demonstrate the practical applicability of the code, a transient in the MYRRHA/FASTEF critical reactor concept, a 100 MWth lead-bismuth-eutectic cooled design, is analyzed using the adiabatic method as an application example in a real system.
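The flux-amplitude part of the adiabatic scheme is governed by the point kinetics equations. Below is a minimal one-delayed-group sketch integrated with SciPy; the kinetic parameters and the step reactivity insertion are illustrative values, not data from the MYRRHA/FASTEF analysis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# One-delayed-group point kinetics, i.e. the amplitude equations used by the
# adiabatic coupling scheme. The parameters below are illustrative only.
beta, lam, Lambda = 0.0035, 0.08, 4.0e-7   # delayed fraction, decay const (1/s), generation time (s)

def reactivity(t):
    """Static reactivity returned by the Monte Carlo solver at each time step;
    here a small step insertion of 50 pcm at t = 1 s stands in for it."""
    return 0.0 if t < 1.0 else 5.0e-4

def point_kinetics(t, y):
    n, c = y
    rho = reactivity(t)
    dn = (rho - beta) / Lambda * n + lam * c
    dc = beta / Lambda * n - lam * c
    return [dn, dc]

# Start from the steady state: n = 1, C = beta * n / (lam * Lambda).
y0 = [1.0, beta / (lam * Lambda)]
sol = solve_ivp(point_kinetics, (0.0, 5.0), y0, method="LSODA", max_step=0.01)
print(f"relative power at t = 5 s: {sol.y[0, -1]:.3f}")
```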

Relevance:

100.00%

Publisher:

Abstract:

Telepresence combines different sensory modalities, including vision and touch, to produce a feeling of being present in a remote location. The key element needed to successfully implement a telepresence system, and thus to allow telemanipulation of a remote environment, is force feedback. In a telemanipulation, mechanical energy must be conveyed from the human operator to the manipulated object in the remote environment. In general, energy is a property of all physical objects, fundamental to their mutual interactions, in which energy can be transferred among the objects and can change form but cannot be created or destroyed.
In this thesis, we exploit this fundamental principle to derive a novel bilateral control mechanism that allows designing stable teleoperation systems with any conceivable communication architecture. The rationale starts from the fact that the mechanical energy injected by a human operator into the system must be conveyed to the remote environment and vice versa. As will be seen, setting energy as the control variable allows a more general treatment of the controlled system, in contrast to the more conventional control of specific system variables. Through the Time Delay Power Network (TDPN) concept, the issue of defining the energy flows involved in a teleoperation system is solved with independence of the communication architecture. In particular, communication time delays are found to be a source of virtual energy; this effect is observed with delays starting from 1 millisecond. Since this energy is added to the system, the resulting teleoperation system can become non-passive and thus unstable. The Time Delay Power Networks are found to be carriers of the desired energy exchanged between master and slave, but also generators of virtual energy due to the time delay. Once these networks are identified, the Time Domain Passivity Control approach for TDPNs is proposed as a control mechanism to ensure system passivity and therefore system stability. The proposed method is based on the simple fact that this intrinsically added energy due to the communication must be transformed into dissipation. The system then becomes closer to the desired one, in which only the energy injected at one end of the system is conveyed to the other. The resulting system presents two benefits: on the one hand, system stability is guaranteed through passivity independently of the chosen control architecture and communication channel; on the other, performance is maximized in terms of the fidelity of the energy transfer. The proposed methods are supported by a set of experimental implementations using different control architectures and communication delays ranging from 2 to 900 milliseconds. An experiment that includes a space communication link based on the geostationary satellite ASTRA concludes this thesis.
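A minimal sketch of a time-domain passivity observer/controller at one port: the net energy flowing through the port is accumulated and, when it would go negative (virtual energy generated by the delay), a variable damper dissipates the deficit. The signals and values below are invented for the example; the thesis formulation over Time Delay Power Networks is more general than this single-port sketch.

```python
import numpy as np

def tdpa_damping(force, velocity, dt):
    """Single-port passivity observer/controller sketch: accumulate the net
    energy through the port and, whenever the observed energy would become
    negative, add just enough damping to dissipate the excess."""
    energy = 0.0
    damped_force = np.empty_like(force)
    for k in range(len(force)):
        energy += force[k] * velocity[k] * dt
        alpha = 0.0
        if energy < 0.0 and abs(velocity[k]) > 1e-9:
            alpha = -energy / (velocity[k] ** 2 * dt)   # dissipates exactly the deficit
            energy = 0.0
        damped_force[k] = force[k] + alpha * velocity[k]
    return damped_force

# Illustrative use: a force/velocity pair that is momentarily active (net energy < 0).
t = np.linspace(0, 1, 1000)
v = np.sin(2 * np.pi * t)
f = -0.5 * v + 0.2 * np.cos(2 * np.pi * t)   # partly "generates" energy
print(tdpa_damping(f, v, t[1] - t[0])[:5])
```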

Relevance:

100.00%

Publisher:

Abstract:

Chinese architecture has gone through great changes over its long history. The most important turning point is the so-called Modern Times of China, when western architecture was introduced into China for the very first time and began to exert a major influence on the character and identity of traditional Chinese architecture. Before that, traditional Chinese architecture, which has its own system, totally different in conception and appearance from the occidental one, was the only architectural style to be found in China, having survived without significant deviation and forming a fairly homogeneous millennia-old tradition in its technical and artistic aspects.
Although, for a number of complex historical reasons, a large part of the ancient Chinese architecture built in feudal China (that is, before 1849) has not been preserved, many buildings from the semi-feudal (or semi-colonial) period, from 1849 to the 1949 Revolution, were well constructed and have been better maintained. The most important architectural type of semi-feudal China is the Christian church. These churches were not only the first western architectural form brought into and developed in China, but also the starting point of the modernization process of Chinese architecture. Because of the deep roots of the two-thousand-year-old traditional Chinese architecture, the Christian churches built in China during the semi-colonial period combine the styles of both traditional Chinese architecture and classic western churches, resulting in original buildings of a singular eclecticism; they are a priceless asset of Chinese architectural history. Recently, more and more attention has been paid to Chinese Modern Times architecture; however, the Christian churches of Shaanxi Province, a province with a unique Christian history but less economic development, have not yet been studied specifically, even though their typology is very representative of this kind of building in other inland regions of China. The present research addresses that gap and opens the door to further study of other Christian churches and related buildings and, by extension, to a deeper analysis of the architectural and cultural hybridization between China and the West. On the basis of documentary research, field survey and drawing, the thesis first clarifies the features and roots of traditional Chinese architecture, and then presents a historical and typological study of the Christian churches of Shaanxi Province, examining their fundamental characteristics, present use and state of conservation. By comparing the Christian churches of the cities of Shaanxi Province with those of other, more developed cities, and by comparing the Christian churches in China with classic western churches, the hybrid architectural character of the Christian churches in China is highlighted. The thesis is fundamental research on which further studies of the architectural history, characteristics and conservation of the Christian churches in China can build. It has been considered essential to add to the work, as an appendix, an elaborate illustrated conceptual glossary of basic architectural and construction terms in Chinese, English and Spanish.

Relevance:

100.00%

Publisher:

Abstract:

At present, the number of mobile devices we use every day is increasing. These devices use the new wireless technologies, whether mobile telephone networks, Wi-Fi or Bluetooth, which entails high power consumption. These devices also have a limitation, which is battery capacity. One clear example is the smartphone: we use it daily and the battery lasts barely a day. Given this problem of high energy consumption, the consumer electronics world is forced to develop applications and operating systems with more efficient power consumption, batteries with other chemistries, and so on. For that purpose it is necessary to have an effective way to measure energy consumption. In the GDEM (Electronic and Microelectronic Design Group) lab there are several lines of work for solving or alleviating this problem. They can be divided into two groups: work dedicated to making the system consume energy more efficiently, and work dedicated to measuring this consumption more precisely so that, in turn, the measurements can be used by the system itself to decide how to act. With these motivations, a board was designed that is able to measure the power consumed by the BeagleBoard using a novel measurement method. The results validate the design, and the total manufacturing cost has been under ten euros. Therefore, the goals have been accomplished by making a board characterized by its simplicity and low cost, and the door is open for future work in which the BeagleBoard, with the necessary software, will be able to know its power consumption in real time.
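The abstract does not detail the measurement method itself; a common low-cost approach is a shunt resistor in series with the supply, from which power follows as P = V_load x I with I = V_shunt / R_shunt. The sketch below shows only that arithmetic, with invented values, and is not a description of the board designed in this project.

```python
def power_from_shunt(v_supply, v_shunt, r_shunt):
    """Instantaneous power drawn by the load from a high-side shunt measurement:
    I = V_shunt / R_shunt, P = (V_supply - V_shunt) * I  (load-side voltage)."""
    current = v_shunt / r_shunt
    return (v_supply - v_shunt) * current

# Illustrative numbers: 5 V supply, 0.1 ohm shunt, 35 mV drop across the shunt.
print(f"{power_from_shunt(5.0, 0.035, 0.1):.3f} W")   # ~1.738 W
```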

Relevance:

100.00%

Publisher:

Abstract:

In current industrial environments there is an increasing need for practical and inexpensive quality control systems to detect foreign food materials in powder food processing lines. This demand is especially important for the detection of product adulteration with traces of highly allergenic products, such as peanuts and tree nuts. Manufacturing industries dealing with the processing of multiple powder food products present a substantial risk for the contamination of powder foods with traces of tree nuts and other adulterants, which might result in unintentional ingestion of nuts by the sensitised population. Hence, an in-line system to detect nut traces at the early stages of food manufacturing is of crucial importance. In the present work, a feasibility study of a spectral index for revealing adulteration of wheat flour samples with tree nut and peanut traces using hyperspectral images is reported. The main nuts responsible for allergenic reactions considered in this work were peanut, hazelnut and walnut. Enhanced contrast between nuts and wheat flour was obtained after the application of the index. Furthermore, the segmentation of these images by selecting different thresholds for different nut and flour mixtures allowed the identification of nut traces in the samples. Pixels identified as nuts were counted and compared with the actual percentage of peanut adulteration. As a result, the multispectral system was able to detect and provide good visualisation of tree nut and peanut trace levels down to 0.01% by weight. In this context, multispectral imaging could operate in conjunction with chemical procedures, such as Real Time Polymerase Chain Reaction and Enzyme-Linked Immunosorbent Assay, to save time, money and skilled labour in product quality control. This approach could enable not only a few selected samples to be assessed, but also quality control surveillance to be incorporated extensively on product processing lines.
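A minimal sketch of the index-then-threshold idea on a hyperspectral cube, using a generic normalised band-difference index with NumPy; the bands, threshold and synthetic cube are illustrative assumptions, not the spectral index developed in the study.

```python
import numpy as np

def band_ratio_index(cube, band_a, band_b):
    """Generic per-pixel spectral index on a hyperspectral cube of shape
    (rows, cols, bands): the normalised difference between two bands."""
    a = cube[:, :, band_a].astype(float)
    b = cube[:, :, band_b].astype(float)
    return (a - b) / (a + b + 1e-9)

def nut_fraction(cube, band_a, band_b, threshold):
    """Segment pixels whose index exceeds the threshold and report the fraction
    of the image flagged as nut material (to be compared with the known
    adulteration percentage)."""
    index = band_ratio_index(cube, band_a, band_b)
    mask = index > threshold
    return mask.mean(), mask

# Illustrative use on a synthetic 50x50 cube with 10 bands.
rng = np.random.default_rng(4)
cube = rng.random((50, 50, 10))
cube[10:15, 10:15, 7] += 0.8   # fake "nut" pixels, brighter in band 7
fraction, mask = nut_fraction(cube, band_a=7, band_b=2, threshold=0.35)
print(f"pixels flagged as nut traces: {fraction:.2%}")
```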

Relevance:

100.00%

Publisher:

Abstract:

In current industrial environments there is an increasing need for practical and inexpensive quality control systems to detect foreign food materials in powder food processing lines. This demand is especially important for the detection of product adulteration with traces of highly allergenic products, such as peanuts and tree nuts. Manufacturing industries dealing with the processing of multiple powder food products present a substantial risk for the contamination of powder foods with traces of tree nuts and other adulterants, which might result in unintentional ingestion of nuts by the sensitised population. Hence, an in-line system to detect nut traces at the early stages of food manufacturing is of crucial importance. In the present work, a feasibility study of a spectral index for revealing adulteration of wheat flour samples with tree nut and peanut traces using hyperspectral images is reported. The main nuts responsible for allergenic reactions considered in this work were peanut, hazelnut and walnut. Enhanced contrast between nuts and wheat flour was obtained after the application of the index. Furthermore, the segmentation of these images by selecting different thresholds for different nut and flour mixtures allowed the identification of nut traces in the samples. Pixels identified as nuts were counted and compared with the actual percentage of peanut adulteration. As a result, the multispectral system was able to detect and provide good visualisation of tree nut and peanut trace levels down to 0.01% by weight. In this context, multispectral imaging could operate in conjunction with chemical procedures, such as Real Time Polymerase Chain Reaction and Enzyme-Linked Immunosorbent Assay, to save time, money and skilled labour in product quality control. This approach could enable not only a few selected samples to be assessed, but also quality control surveillance to be incorporated extensively on product processing lines.